Manual Puppet Deployments
After including WordPress in my puppet configuration, I noticed that puppet runs were taking significantly longer than before. I had included WordPress in puppet to facilitate easy upgrades and to provide a consistent publishing system for multiple independent blogs. I wasn’t aware of the network features of WordPress when I set this up, but in hindsight I would do it this way again, as management is painless. I may experiment with the network feature in the future.
To keep the flexibility of puppet without making all puppet runs take longer and longer, I moved the WordPress configuration to a separate manifest that is applied manually as needed.
To begin, I created a manifest called manual-wordpress.pp that contains the following:
import "classes/*" node 'host.example.com' { wordpress { "blog.shanemeyers.com": domain => "blog.shanemeyers.com", path => "/path/to/blog.shanemeyers.com", db => "database_name", db_user => "dbuser", db_pass => "dbpass", } }
This manifest includes the node directive to ensure that the blog is installed only on the host I want it installed on. The manual-wordpress.pp file could be considered the equivalent of the site.pp file.
Under classes/ I have the same wordpress.pp file as mentioned in the installing WordPress via puppet post.
To complete the task of applying the manual manifests, I created a PHP script that calls puppet as follows:
sudo /usr/bin/puppet --color html --verbose /path/to/manual-wordpress.pp
Puppet should be run as root so that the puppet client can access the host’s SSL key files and therefore communicate with the puppet master to retrieve files.
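Because the web server does not run as root, the sudo call in that PHP script only works non-interactively if sudo is configured to allow it. A minimal sketch of such a sudoers entry, added with visudo, assuming the web server runs as www-data (the user name is an assumption; adjust it and the path to your setup):

# hypothetical entry, added via visudo; "www-data" is an assumed web server user
www-data ALL=(root) NOPASSWD: /usr/bin/puppet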
Playing with EC2 Micro Instances
Last night I decided to give the t1.micro instance size a try on Amazon EC2. I used the Ubuntu 32-bit EBS AMI for the test.
ec2-run-instances ami-1234de7b --kernel aki-5037dd39 --instance-type t1.micro \
  --region us-east-1 --key test1 --user-data-file ~/svn/ec2/init-instance.sh \
  --group default --instance-initiated-shutdown-behavior stop
The instance started as usual. Once it was up and configured (via Puppet), I was able to log in and start poking around.
During the course of my testing, I upgraded some packages to the latest versions (I ran sudo apt-get update ; sudo apt-get upgrade) and rebooted the instance. After the reboot, I was unable to log in. As I dug into the problem, I noticed the following on the console:
$ ec2-get-console-output i-87705bed | tail
[ 0.850326] devtmpfs: mounted
[ 0.850368] Freeing unused kernel memory: 216k freed
[ 0.852042] Write protecting the kernel text: 4328k
[ 0.852509] Write protecting the kernel read-only data: 1336k
init: console-setup main process (63) terminated with status 1
%Ginit: plymouth-splash main process (215) terminated with status 2
init: plymouth main process (45) killed by SEGV signal
cloud-init running: Thu, 16 Sep 2010 22:00:31 +0000. up 7.09 seconds
mountall: Disconnected from Plymouth
It appeared that there was a problem with the mounts. On another, clean instance, I noticed that the /etc/fstab file had a reference to /mnt, which isn’t valid on t1.micro instances. All other instance types include ephemeral storage, but t1.micro instances do not.
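If you want to confirm this on a running instance, you can compare the instance type reported by the metadata service with the mounts listed in /etc/fstab; a quick check along these lines (the grep pattern is illustrative):

curl -s http://169.254.169.254/latest/meta-data/instance-type   # prints e.g. t1.micro
grep /mnt /etc/fstab                                            # shows the ephemeral-storage mount entry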
Once I removed the conflicting line in /etc/fstab, the problem instance was able to boot normally.
For the curious, here’s my init-instance.sh script:
#!/bin/sh
apt-get update
apt-get --yes install puppet
wget --output-document=/etc/puppet/puppet.conf http://example.com/puppet.conf
perl -pi -e 's/START=no/START=yes/' /etc/default/puppet
/etc/init.d/puppet restart
After discovering this problem with micro instances, I added the following to the init-instance.sh script:
if [ "`curl -s http://169.254.169.254/latest/meta-data/instance-type`" = "t1.micro" ] ; then sed -i -e '5d' /etc/fstab fi
This deletes the fifth line of /etc/fstab, which is the line that mounts the ephemeral storage.
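Deleting a fixed line number is fragile if the AMI’s default /etc/fstab ever changes. A slightly more defensive variant, assuming the ephemeral entry is the only line whose mount point is /mnt, would match on the mount point instead:

# delete whichever fstab line has /mnt as its mount point (assumes only the ephemeral entry matches)
sed -i -e '/[[:space:]]\/mnt[[:space:]]/d' /etc/fstab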
SSH Agent in GNU Screen
When starting a GNU Screen session, the current SSH agent is passed through to the new virtual terminal. As long as you do not disconnect from the screen session, SSH agent forwarding should continue to work as normal.
Once you disconnect from the screen session and end the SSH connection, the SSH agent settings in the screen session are no longer valid. If you reconnect to the screen session using a new SSH connection, the SSH agent socket has changed.
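To illustrate the symptom, here is a quick check you can run inside a reattached screen window (a sketch; the exact paths depend on your system):

echo $SSH_AUTH_SOCK    # still shows the socket path from the old SSH connection
ls -l $SSH_AUTH_SOCK   # fails, because that socket was removed when the old connection closed
ssh-add -l             # cannot reach the agent until SSH_AUTH_SOCK is corrected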
To fix this problem, I added the following bit of code to my ~/.bashrc file:
if ! [ -S "$SSH_AUTH_SOCK" ] ; then
  # delete old/lingering agent files
  for i in `find /tmp/ssh-* -maxdepth 2 -name 'agent*' -user $USER 2>/dev/null` ; do
    if ! [ -S "$i" ] ; then
      rm "$i"
    fi
  done
  unset i

  # set agent string
  export SSH_AUTH_SOCK="`find /tmp/ssh-* -maxdepth 2 -name 'agent*' -user $USER 2>/dev/null | head -n1`"
  echo "Set SSH_AUTH_SOCK to $SSH_AUTH_SOCK"
fi
This code does the following:
- Checks whether the current $SSH_AUTH_SOCK environment variable points to a valid socket
- If not, deletes any old SSH agent socket files that may be lingering
- Sets the $SSH_AUTH_SOCK environment variable to the first valid SSH agent socket file found
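After opening a new virtual terminal (so that ~/.bashrc runs again), a quick way to confirm the snippet found a working socket is to ask the agent to list its keys:

ssh-add -l    # should list the forwarded keys once SSH_AUTH_SOCK points at a live socket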
I normally open new virtual terminals to do work and close them when I’m done. That way I always have a current environment for the applications to use. If you’re someone who uses only one virtual terminal and leaves it running forever, this trick won’t work as well for you.
If you connect to a server using multiple SSH connections, there’s a chance that a new virtual terminal in the screen session could use the SSH agent socket from a different SSH connection. If you disconnect the other SSH session, you may lose access to the SSH agent and need to open a new virtual terminal (or run source ~/.bashrc) to regain access to an agent.