SSH is Dead. Long live SSH.

SSH is dead, by which I mean: we don't need to expose our servers to the internet anymore.

There are two ways I've accomplished this recently:

  1. VPN (Tailscale)
  2. AWS Systems Manager

The use cases for these are slightly different, but both can get you SSHing into your private-network servers.

Tailscale

Tailscale is pretty great (in my humble opinion). What we can effectively do is join an AWS VPC from our local computers. This means we can connect to just about any resource in a given VPC over the private network.

We can SSH into a server, and use other services such as internal load balancers.

To use it, you spin up an EC2 server and install Tailscale on it. Once that's installed, we configure it as a subnet router. You tell Tailscale what IP addresses to "advertise" (the IPs of your VPC) and, assuming you have Tailscale running locally as well, you're in!
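
To give a rough idea, here's what that looks like on the EC2 instance, following Tailscale's subnet router setup - the 10.0.0.0/16 CIDR here is a placeholder for your VPC's actual range:

# install Tailscale (official install script)
curl -fsSL https://tailscale.com/install.sh | sh

# let the instance forward traffic for the rest of the VPC
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# advertise the VPC's IP range to your tailnet
sudo tailscale up --advertise-routes=10.0.0.0/16

Once you approve the advertised route in the Tailscale admin console, your local machine can reach instances by their private IPs, e.g. ssh ec2-user@10.0.1.23.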

The caveats:

  1. You'll want to advertise all the IP address ranges you want Tailscale to make reachable for you - if you're doing any VPC peering, you may want to advertise the IP ranges of the peered networks as well
  2. It's possible your VPC has the same private network IP address space as your local network (e.g. your wifi), in which case they'll conflict with each other
  3. Security groups can still get in your way. If a server's security group(s) block port 22, for example, then you can't SSH into that server via port 22 - you'll need an ingress rule allowing it, as sketched below.
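
As an example, here's what allowing SSH from within the VPC might look like with the AWS CLI - the security group ID and CIDR are placeholders for your own values:

# allow SSH from anywhere inside the VPC (adjust the group ID and CIDR to your setup)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr 10.0.0.0/16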

Follow the instructions and you should be good to go! Depending on your usage, you may be able to stay on the free plan.

AWS SSM

AWS Systems Manager provides a way to connect to an individual EC2 instance - basically starting an SSH session. It works with IAM permissions, so you don't need to set up SSH keys.

This means your instances can be in private subnets, and it will still work!

In fact, this should be your default way to connect to an instance, since you don't need to expose SSH to the world, and it's free.

To use it:

  1. Use a base image with the SSM agent running (it's preinstalled on most official AWS base images, like Amazon Linux and Ubuntu Server, but you can install it yourself as well)
  2. Ensure the EC2 has an instance profile whose role includes the AmazonSSMManagedInstanceCore managed policy, or equivalent permissions (a sketch of this is below)
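
If you're creating that role by hand, a quick sketch with the AWS CLI might look like this - the role and profile names are placeholders:

# a role that EC2 instances can assume
aws iam create-role \
  --role-name ssm-instance-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'

# attach the SSM managed policy to it
aws iam attach-role-policy \
  --role-name ssm-instance-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore

# wrap the role in an instance profile, which is what gets attached to the EC2
aws iam create-instance-profile --instance-profile-name ssm-instance-profile
aws iam add-role-to-instance-profile \
  --instance-profile-name ssm-instance-profile \
  --role-name ssm-instance-role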

When you spin up such an instance, you'll see it show up in SSM's inventory. Then you can "SSH" in via an aws command.

aws ssm start-session --target <instance-id>

Like all AWS features, there are gotchas:

  1. Make sure the EC2 has a Name tag or it won't get registered to SSM when it boots up
  2. Make sure an instance profile is used, and it has the correct role
  3. Make sure egress rules allow network traffic to SSM (usually by allowing egress anywhere)

In addition to point 3, it's possible to accidentally break SSM in wonky ways - for example, by adding an EC2 to a public subnet but not assigning it a public IP address (which, in most configurations, disallows network access to the internet and thus the SSM service).
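
If an instance isn't showing up, you can check what SSM has actually registered - your instance should appear here (with a PingStatus of Online) once the agent checks in:

# list the instances SSM knows about
aws ssm describe-instance-information \
  --query 'InstanceInformationList[].[InstanceId,PingStatus,PlatformName]' \
  --output table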

Fancy Alias

Writing aws commands is annoying, so let's not. I use this alias:

# file ~/.bashrc, or ~/.zshrc, etc
ssm() {
  aws ssm start-session --target "$@"
}

After sourcing that change to your rc file, you can use the new alias like this:

ssm <instance-id>

Fancy SSH Config

You can get really fancy with this and configure your local SSH client to use SSM in nifty ways. You can set an SSH ProxyCommand to use SSM, which lets you use SSH-based commands such as scp - then ssh <instance-id> or scp ./foo <instance-id>:/foo to your heart's content!
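
The AWS-documented setup for this looks roughly like the following in ~/.ssh/config (note that, unlike plain start-session, SSH over SSM still authenticates with an SSH key on the instance):

# file ~/.ssh/config
# route ssh/scp for instance IDs through Session Manager
Host i-* mi-*
  ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

With that in place, ssh ec2-user@i-0123456789abcdef0 (or scp to the same host) tunnels through SSM instead of the open internet.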

For additional tricks, check out this article.

RIP SSH.
