- benji/gary_poster: Different ways to keep Juju from forcing you to repeatedly accept SSH keys
Juju is a great tool, and this week we used it to develop some integration tests.
We focused on external back ends (like EC2 and a local OpenStack) rather than LXC containers, because the tests themselves create LXC containers. Even though we've been told repeatedly that LXC containers-in-containers now work, we haven't tried our code in that configuration, and we didn't want to add another variable to debug while we were working in another direction. With Juju, we can transparently try LXC later.
One annoyance Juju has for this use case is its interactive prompts. A simple one is juju destroy-environment, which requires that you confirm the decision. It's pretty easy to hack around this with echo y | juju destroy-environment, but it would be nice if destroy-environment had a --yes option.
A more prevalent interactive prompt annoyance with Juju and external providers right now is that you have to manually accept SSH keys every time you connect to a new machine.
- When run interactively and manually, this is minor--though it annoys people frequently enough that it is a fairly common complaint.
- When run interactively but as part of a larger, longer-running process, this can be quite annoying if the process stops somewhere in the middle to ask you for input. This scenario is often but not always avoidable: see discussion below.
- When you are running automated integration tests, the annoyance becomes a showstopper.
So how can we prevent the annoyance and/or showstopper? We could work around it, or we could fix Juju.
Workaround: Devil May Care, Most of the Time
There are three Juju commands that can trigger SSH key acceptance prompts: juju bootstrap, juju ssh, and juju scp. The last two are nice in that they will accept the same command line arguments as ssh and scp. Therefore, you can use, for instance, juju ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null and you will squelch all those prompts. These are easy to script.
However, juju bootstrap does not support these options.
The upside is that this makes it much less annoying to run scripts that drive Juju. You only have to interactively accept keys when the script bootstraps Juju. For many scripts, that could be once, at the beginning of the script.
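For example, a script that drives Juju could wrap the two commands in small helpers like these (a rough sketch; the jssh and jscp names are ours, and it relies on juju ssh and juju scp passing the extra options through as described above):

# Hypothetical helpers for scripts that drive Juju.  The options are
# standard OpenSSH options; juju ssh and juju scp hand them on to ssh/scp.
NO_KEY_CHECK="-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"

jssh() {
    # e.g. jssh 1 'uname -a'
    juju ssh $NO_KEY_CHECK "$@"
}

jscp() {
    # e.g. jscp ./payload.tar.gz 1:/tmp/
    juju scp $NO_KEY_CHECK "$@"
}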
What are the downsides?
- You are not verifying the SSH keys when you talk to the Juju nodes (other than the initial ZooKeeper node), opening yourself up to a man in the middle attack.
- This is an incomplete solution for automated scripts using Juju, because bootstrap will still require you to accept keys.
- Moreover, if you must bootstrap more than once within an automated script (because you are using destroy-environment to get a completely clean slate for your next machines), you can still be stopped by an interactive prompt in the middle of a long-running script, which loses much of what this approach gains for scripting.
- This does not help normal (non-scripted) Juju usage, unless you want to remember to always provide these options. You could alias jujussh='juju ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'.
In sum, it might help sometimes, but it doesn't solve much of anything by itself.
Workaround: Devil May Care
If you don't care about verifying your keys, and you don't mind changing your ~/.ssh/config, the most straightforward approach is to add something like this to that file to suppress the many yes/no prompts:
Host *.amazonaws.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
The upside is that this works in all use cases.
What are the downsides?
- You are not verifying the SSH keys, opening yourself up to a man in the middle attack.
- You have to declare that you will never verify keys for any connection to the cloud provider--AWS in this case. That kind of blanket statement is probably only appropriate for an ssh config that is used exclusively for testing.
- You must change a file that is very important and sensitive to many users. Having an automated process change this file for a normal user would be very unattractive.
In sum, this will be acceptable for some dedicated testing scenarios. It is not a broadly good solution. However, as you'll see below, this is pretty much as good as it gets right now, unless Juju changes.
Workaround: Devil May Care But At Least We Don't Modify Your SSH Config
On freenode's #juju, hazmat suggested a variant of the approach above, appropriate for automated scripts that want to use Juju. The scripts could locally override $HOME before running Juju, and then configure a custom .ssh (and a custom .juju, for that matter, to set up a custom environments.yaml) to do what they want.
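A minimal sketch of what such a script might do, assuming the tools involved actually honor the overridden $HOME rather than looking the home directory up elsewhere:

# Sketch only: build a throwaway $HOME with its own .juju and .ssh
# configuration, then run Juju from inside it.
SCRATCH=$(mktemp -d)
mkdir -p "$SCRATCH/.juju" "$SCRATCH/.ssh"
cp ~/.juju/environments.yaml "$SCRATCH/.juju/"  # or write a custom one
cat > "$SCRATCH/.ssh/config" <<'EOF'
Host *.amazonaws.com
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
HOME="$SCRATCH" juju bootstrap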
The upside is that it might work for automated scripts run by a normal user, and not just within a dedicated testing environment.
What are the downsides?
- Still no SSH key verification.
- If you have custom ssh configuration for an environment (such as identity files or proxies for a given OpenStack deployment), it will need to be copied over to the new location and then modified. All of that might be fragile, and all of it feels like a hack to us.
- Doesn't help the simple Juju use case (without automated scripts).
In sum, like the other workarounds, it is still problematic.
Workaround: Devil May Care, And He Might Like Your Hacks Too
Want hacks? We can give more! Another approach would be to programmatically make a tty and run the juju command in it. You can then provide the tty with any input you need. Talk to benji if you're interested.
It would work, but it doesn't verify keys and it is a major hack.
In sum, this is crazy and fun! Talk to benji! But let's not promote this as a good solution.
Change Juju: Slow But Safe
Those are all the workarounds we know. If you are interested in long-term fixes for the problem, on #juju, mgz reported that one approach is to scrape the console output for the host key, verify it, and add it to your ~/.ssh/known_hosts. [UPDATE: smoser wrote a post about this approach a couple of years ago.]
The console output is reportedly available quite quickly in OpenStack, but reportedly slow to appear in EC2--as long as 10 minutes sometimes--so unfortunately my understanding is that it wouldn't work at the moment as a general-purpose Juju change.
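Roughly, the idea looks something like this (a sketch only; the client command, the instance naming, and the exact markers cloud-init prints to the console all depend on your provider and image):

# Sketch only: pull the instance's console log and harvest the SSH host
# keys that cloud-init prints there, then record them in known_hosts so
# ssh never has to ask.  "nova console-log" is the OpenStack client; EC2
# has an equivalent get-console-output call.
HOST=ec2-1-2-3-4.compute-1.amazonaws.com
INSTANCE_ID=i-0123abcd   # the new machine's instance id
nova console-log $INSTANCE_ID |
    sed -n '/-----BEGIN SSH HOST KEY KEYS-----/,/-----END SSH HOST KEY KEYS-----/p' |
    grep -v 'SSH HOST KEY KEYS' |
    while read -r key; do
        echo "$HOST $key" >> ~/.ssh/known_hosts
    done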
Change Juju: Mostly Careful after Extra Preparation
Also on #juju, smoser pointed out that another solution would be for Juju to generate an SSH public/private key pair on the local system, and then pass it through user-data to cloud-init on a custom image that accepts it. Juju would then connect to the host with those initial credentials, and then destroy them and create new ones. According to smoser, the credentials that were sent insecurely via user-data are usable for less than a minute. This is what cloud-sandbox does.
This might be the best compromise. It requires a custom virtual machine image, as we understand it, which might be a significant problem in terms of rollout and usability; and for some uses the user-data insecurity might be unacceptable. Juju devs would have to weigh the pros and cons.
[UPDATE: smoser corrected me that this approach does not require a custom virtual machine image. Beyond that, I had not clarified that the user-data approach was specific to and necessary only for the bootstrap node. We have other out-of-band methods available after the bootstrap. Finally, he clarified that only the temporary public key needs to be sent via user-data, and that he does not feel that this is a security concern at all. That makes sense. This sounds like a long term winner to me.]
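To make the user-data half of this concrete, here is one plausible shape for it (a sketch under our own assumptions, not necessarily how cloud-sandbox or a future Juju would wire things up; ssh_authorized_keys is a standard cloud-config directive, and sending only the temporary public key this way matches smoser's correction above):

# Sketch only: generate a throwaway key pair locally and deliver just the
# PUBLIC half to the new node via cloud-config user-data.  The private
# half never leaves the local machine, and the node would discard the
# temporary key once real credentials are in place.
ssh-keygen -q -t rsa -N '' -f /tmp/juju-bootstrap-key
cat > /tmp/user-data.yaml <<EOF
#cloud-config
ssh_authorized_keys:
  - $(cat /tmp/juju-bootstrap-key.pub)
EOF
# The user-data file is then handed to the provider when the instance is
# started (for example, euca-run-instances --user-data-file ...), and the
# first connection uses the matching private key:
#   ssh -i /tmp/juju-bootstrap-key ubuntu@<bootstrap-node>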
In summary...
We're hacking our ~/.ssh/config. Come on, it's fun, everybody's doing it!
2 comments:
On freenode's #juju, hazmat had an idea of a variant of the approach above, appropriate for automated scripts that want to use Juju. The scripts could locally modify $HOME before running Juju. Then they can configure a custom .ssh (and .juju, for that matter, to set up a custom environments.yaml) to do what they want.
Aside from the problems you mention, assuming juju uses the command-line ssh under the hood, this won't work! OpenSSH doesn't look at $HOME/.ssh/config; it reads /etc/passwd to find out the home directory and then uses that to find .ssh/config. I found this out the hard way :-)
Heh, good to know, Michael. Thank you.