SSH with SOCKS as a VPN

When the only connection I have into a network is SSH, yet I need to do more work than is possible over a single SSH session, I set up a simple VPN-like solution using the following.

I add to my ~/.ssh/config file:

Host gateway
    DynamicForward 16000

Host *
    User shane
    ProxyCommand ~/.bin/ %h %p 16000

The first host listed is the machine where I connect using SSH. I set up a SOCKS proxy on port 16000. The second host section is generic for the entire environment. It specifies a ProxyCommand which calls a script that pipes the SSH connections over the SOCKS proxy when the proxy is up.

Here is the script:

#!/bin/sh
# netcat-openbsd needed for -x proxy option
HOST=$1 ; PORT=$2 ; PROXY=$3

netstat -an | grep LISTEN | grep -q "$PROXY"
STAT=$?
if [ $STAT = 1 ] ; then
    /bin/nc $HOST $PORT                       # proxy down: connect directly
else
    /bin/nc -x localhost:$PROXY $HOST $PORT   # proxy up: relay through it
fi

One thing to note is that you need to have the OpenBSD version of Netcat, not the GNU version. The OpenBSD version supports connecting to SOCKS proxies. If you are using Debian or Ubuntu, install the netcat-openbsd package.

The reason for using the script is that if a SOCKS proxy is not up, the SSH connection goes directly to the host. But if the proxy is running, all new SSH connections use the proxy.
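
The decision at the heart of the script can be exercised on its own. Here is a small sketch of how the LISTEN grep chooses between the two paths, fed canned `netstat -an`-style input so no live proxy is needed (the sample lines and the `proxy_up` helper name are mine, not from the script):

```shell
# proxy_up: succeed when a listener on the given port shows up in
# netstat -an style output read from stdin (the same check the script runs)
proxy_up() {
    grep LISTEN | grep -q "$1"
}

# canned netstat lines standing in for a live system
printf 'tcp 0 0 127.0.0.1:16000 0.0.0.0:* LISTEN\n' | proxy_up 16000 \
    && echo "proxy up: route through SOCKS"
printf 'tcp 0 0 127.0.0.1:22 0.0.0.0:* LISTEN\n' | proxy_up 16000 \
    || echo "proxy down: connect directly"
```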

I frequently use the initial SSH connection to open an IRC session; that way I'm unlikely to close it thinking it's an unused terminal and kill the proxy.

A SOCKS proxy is a general solution: you can send web, email, and other traffic through it to the machines behind the SSH gateway. Point your applications at the SOCKS proxy on localhost:16000 and everything should work.
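
For command-line tools, pointing at the proxy is often just an environment variable. A sketch using curl's ALL_PROXY variable (the `socks5h` scheme also resolves DNS through the proxy; 16000 matches the DynamicForward port above, and the internal host name is hypothetical):

```shell
# route curl traffic through the SOCKS proxy opened by DynamicForward
export ALL_PROXY=socks5h://localhost:16000

# a request to a host behind the gateway would now tunnel through it, e.g.:
#   curl http://internal-host/
echo "curl will proxy via: $ALL_PROXY"
```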

I like to take it a step further though for web browsing: I install FoxyProxy in Firefox and configure FoxyProxy to only send requests for * through the proxy. All other traffic is sent using the normal methods for the machine.

An example FoxyProxy config:

  • Create a new proxy, enable SOCKS5
  • Add a new pattern
  • Name the pattern
  • Provide a URL pattern (I prefer regular expression patterns; here’s an example: https?://.*)

Manual Puppet Deployments

After including WordPress in my puppet configuration, I noticed that puppet runs were taking significantly longer than before. I had included WordPress in puppet to facilitate easy upgrades and make a consistent publishing system for multiple independent blogs. I wasn’t aware of the network features of WordPress when I set this up, but in hindsight I would do it this way again, as management is painless. I may experiment with the network feature in the future.

To keep the flexibility of puppet without making all puppet runs take longer and longer, I moved the WordPress configuration to a separate manifest that is applied manually as needed.

To begin I created a manifest called manual-wordpress.pp that contains the following:

import "classes/*"

node '' {
    wordpress { "":
        domain  => "",
        path    => "/path/to/",
        db      => "database_name",
        db_user => "dbuser",
        db_pass => "dbpass",
    }
}

This manifest includes the node directive to ensure that the blog is installed only on the host I want it installed on. The manual-wordpress.pp file can be considered an equivalent of the site.pp file.

Under classes/ I have the same wordpress.pp file as mentioned in the installing WordPress via puppet post.

To complete the task of applying the manual manifests, I created a PHP script that calls puppet as follows:

sudo /usr/bin/puppet --color html --verbose manual-wordpress.pp

Puppet should be run as root so that the client can read the host’s SSL key files and therefore communicate with the puppet master to retrieve files.
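
The wrapper doesn’t have to be PHP-specific; the essential pattern is just “run puppet, capture the exit status, and report.” A shell sketch of that idea, with `true`/`false` standing in for the real puppet command line (the `run_manifest` helper is a hypothetical name of mine):

```shell
# run_manifest: run the given command (in practice, the puppet invocation
# above) and report success or failure based on its exit status
run_manifest() {
    "$@"
    status=$?
    if [ $status -eq 0 ] ; then
        echo "puppet run succeeded"
    else
        echo "puppet run failed (exit $status)"
    fi
    return $status
}

run_manifest true            # stand-in for the sudo puppet command
run_manifest false || true   # stand-in for a failed run
```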


Playing with EC2 Micro Instances

Last night I decided to give the t1.micro instance size a try on Amazon EC2. I used the Ubuntu 32-bit EBS AMI for the test.

ec2-run-instances \
    --kernel aki-5037dd39 \
    --instance-type t1.micro \
    --region us-east-1 \
    --key test1 \
    --user-data-file ~/svn/ec2/ \
    --group default \
    --instance-initiated-shutdown-behavior stop

The instance started as usual. Once it was up and configured (via Puppet), I was able to log in and start poking around.

During the course of my testing, I upgraded some packages to the latest versions (I ran sudo apt-get update ; sudo apt-get upgrade) and rebooted the instance. After the reboot, I was unable to log in. As I dug into the problem, I noticed the following on the console:

$ ec2-get-console-output i-87705bed | tail
[    0.850326] devtmpfs: mounted
[    0.850368] Freeing unused kernel memory: 216k freed
[    0.852042] Write protecting the kernel text: 4328k
[    0.852509] Write protecting the kernel read-only data: 1336k
init: console-setup main process (63) terminated with status 1
%Ginit: plymouth-splash main process (215) terminated with status 2
init: plymouth main process (45) killed by SEGV signal
cloud-init running: Thu, 16 Sep 2010 22:00:31 +0000. up 7.09 seconds
mountall: Disconnected from Plymouth

It appeared that there was a problem with the mounts. On another clean instance, I noticed that the /etc/fstab file had a reference to /mnt, which isn’t valid on t1.micro instances. All other instance types include ephemeral storage, but t1.micro instances do not.

Once I removed the conflicting line in /etc/fstab, the problem instance was able to boot normally.

For the curious, here’s my script:

#!/bin/bash
apt-get update
apt-get --yes install puppet
wget --output-document=/etc/puppet/puppet.conf
perl -pi -e 's/START=no/START=yes/' /etc/default/puppet
/etc/init.d/puppet restart

After discovering this problem with micro instances, I added the following to the script:

# instance-type comes from the EC2 metadata service
if [ "`curl -s http://169.254.169.254/latest/meta-data/instance-type`" = "t1.micro" ] ; then
    sed -i -e '5d' /etc/fstab
fi

This deletes the line in /etc/fstab that mounts the ephemeral storage.
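
Deleting line 5 by number works but is brittle if the AMI’s fstab layout ever changes. A pattern-based variant drops whichever line mounts /mnt instead; shown here against a sample fstab (the sample contents are illustrative, not copied from a real AMI):

```shell
# sample /etc/fstab contents (illustrative)
cat > /tmp/fstab.sample <<'EOF'
proc        /proc  proc  nodev,noexec,nosuid  0  0
/dev/sda1   /      ext3  defaults             0  1
/dev/sda2   /mnt   auto  defaults,nobootwait  0  2
EOF

# delete any line whose mount point is /mnt, rather than a fixed line number
sed -i -e '/[[:space:]]\/mnt[[:space:]]/d' /tmp/fstab.sample

cat /tmp/fstab.sample
```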


SSH Agent in GNU Screen

When starting a GNU Screen session, the current SSH agent is passed through to the new virtual terminal.  As long as you do not disconnect from the screen session, SSH agent forwarding should continue to work as normal.

Once you disconnect from the screen session and end the SSH connection, the SSH agent settings in the screen session are no longer valid.  If you reconnect to the screen session using a new SSH connection, the SSH agent socket has changed.

To fix this problem, I added the following bit of code to my ~/.bashrc file:

if ! [ -S "$SSH_AUTH_SOCK" ] ; then
    # delete old/lingering agent files
    for i in `find /tmp/ssh-* -maxdepth 2 -name 'agent*' -user $USER 2>/dev/null` ; do
        if ! [ -S "$i" ] ; then
            rm "$i"
        fi
    done
    unset i
    # set agent string
    SSH_AUTH_SOCK="`find /tmp/ssh-* -maxdepth 2 -name 'agent*' -user $USER 2>/dev/null | head -n1`"
    export SSH_AUTH_SOCK
    echo "Set SSH_AUTH_SOCK to $SSH_AUTH_SOCK"
fi

This code does the following:

  1. Check whether the current $SSH_AUTH_SOCK environment variable points at a valid socket
  2. If not, delete any old SSH agent socket files that may be lingering
  3. Set $SSH_AUTH_SOCK to the first valid SSH agent socket file found
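
The `-S` test is what separates live agent sockets from leftovers. A quick demonstration against a scratch directory (my own stand-in for /tmp/ssh-*, with a plain file playing the part of a dead agent socket):

```shell
mkdir -p /tmp/ssh-demo
touch /tmp/ssh-demo/agent.12345      # stale: a regular file, not a socket

for i in /tmp/ssh-demo/agent.* ; do
    if ! [ -S "$i" ] ; then          # same check as the ~/.bashrc snippet
        echo "removing stale $i"
        rm "$i"
    fi
done
```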

I normally open new virtual terminals to do work and close them when I’m done.  That way I always have a current environment for the applications to use.  If you’re someone who uses only one virtual terminal and leaves it running forever, this trick won’t work as well for you.

If you connect to a server using multiple SSH connections, there’s a chance that a new virtual terminal in the screen session could use the SSH agent socket from a different SSH connection.  If you disconnect the other SSH session, you may lose access to the SSH agent and need to open a new virtual terminal (or run source ~/.bashrc) to regain access to an SSH agent.
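
A common complementary trick, not described in this post (so treat the paths and setup as assumptions about your environment), sidesteps that multiple-connection ambiguity entirely: have every SSH login refresh a stable symlink, and use only the symlink inside screen:

```shell
# in ~/.ssh/rc (runs on each SSH login): refresh a stable symlink that
# points at this connection's real agent socket (path is my convention)
mkdir -p "$HOME/.ssh"
if [ -n "$SSH_AUTH_SOCK" ] && [ "$SSH_AUTH_SOCK" != "$HOME/.ssh/agent_sock" ] ; then
    ln -sf "$SSH_AUTH_SOCK" "$HOME/.ssh/agent_sock"
fi

# in ~/.bashrc, inside screen: always talk to the symlink
export SSH_AUTH_SOCK="$HOME/.ssh/agent_sock"
echo "SSH_AUTH_SOCK pinned to $SSH_AUTH_SOCK"
```

With this in place, reconnecting simply repoints the symlink, so long-lived screen windows never hold a stale socket path.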