Over at the SmugMug Sorcery blog I wrote a new post about creating instance store HVM AMIs: http://sorcery.smugmug.com/2014/01/29/instance-store-hvm-amis-for-amazon-ec2/.
I use Mercurial for my personal projects, served with mercurial-server. I wanted a hook on the server that would clone the repository, so additional tasks could be performed against its contents without affecting the server's repository.
I started by creating a hook script at `/var/lib/mercurial-server/repos/somerepo/.hg/push-hook.sh`:

```bash
#!/bin/bash
PUSH_COPY="/var/lib/mercurial-server/checkout-for-push/somerepo"
if ! [ -d "$PUSH_COPY" ] ; then
    echo "CLONE: $PUSH_COPY"
    /usr/bin/hg clone /var/lib/mercurial-server/repos/somerepo "$PUSH_COPY"
else
    echo "UPDATE: $PUSH_COPY"
    /usr/bin/hg pull -R "$PUSH_COPY" -v -u
fi
echo "do more work here..."
```
Then I added the following to `/var/lib/mercurial-server/repos/somerepo/.hg/hgrc`:

```ini
[hooks]
changegroup = /var/lib/mercurial-server/repos/somerepo/.hg/push-hook.sh
```
This got me to the point of being able to check out the server repository, but updating it failed with the following message:

```
remote: UPDATE: /var/lib/mercurial-server/checkout-for-push/somerepo
remote: pulling from /var/lib/mercurial-server/repos/somerepo
remote: searching for changes
remote: 2 changesets found
remote: adding changesets
remote: calling hook outgoing.aaaaa_servelog: mercurialserver.servelog.hook
remote: transaction abort!
remote: rollback completed
remote: abort: outgoing.aaaaa_servelog hook is invalid (import of "mercurialserver.servelog" failed)
```
After a lot of digging and Google searches, I wasn't coming up with any answers. One person mentioned that a wrongly set environment variable could cause errors like this, so I dumped the environment and unset the variables one at a time to see if any were the culprit. I ended up needing to add `unset HGRCPATH` to the top of the hook script, before any `hg` commands run. My push script now looks like:
```bash
#!/bin/bash
unset HGRCPATH
PUSH_COPY="/var/lib/mercurial-server/checkout-for-push/somerepo"
if ! [ -d "$PUSH_COPY" ] ; then
    echo "CLONE: $PUSH_COPY"
    /usr/bin/hg clone /var/lib/mercurial-server/repos/somerepo "$PUSH_COPY"
else
    echo "UPDATE: $PUSH_COPY"
    /usr/bin/hg pull -R "$PUSH_COPY" -v -u
fi
echo "do more work here..."
```
And the output of a push command now looks a lot better:
```
pushing to ssh://hg@hg.example.com/somerepo
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 1 changesets with 1 changes to 1 files
remote: UPDATE: /var/lib/mercurial-server/checkout-for-push/somerepo
remote: pulling from /var/lib/mercurial-server/repos/somerepo
remote: searching for changes
remote: 3 changesets found
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 2 files
remote: resolving manifests
...
```
Over at the SmugMug Sorcery blog I posted about how we scale puppet in Amazon EC2: http://sorcery.smugmug.com/2013/01/14/scaling-puppet-in-ec2/. You should definitely take a look.
Ping to EC2 instances is not enabled by default. A lot of guides tell you to simply allow all ICMP traffic through in the security group configuration, but that is overkill. Add the following two rules to your security group and pinging the instance will work:
Custom ICMP rule -> Type: Echo Request
Custom ICMP rule -> Type: Echo Reply
While opening up additional ICMP types may be harmless, I always like to err on the side of only allowing what I explicitly want rather than allowing everything.
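If you manage security groups from the command line, the two rules might be expressed with the AWS CLI along these lines (a sketch; the group name and the `0.0.0.0/0` source are assumptions, and note that for ICMP rules `FromPort` carries the ICMP type while `ToPort` carries the code, with `-1` meaning any code):

```shell
# Allow inbound echo request (ICMP type 8) and echo reply (ICMP type 0)
aws ec2 authorize-security-group-ingress --group-name default \
    --ip-permissions 'IpProtocol=icmp,FromPort=8,ToPort=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
aws ec2 authorize-security-group-ingress --group-name default \
    --ip-permissions 'IpProtocol=icmp,FromPort=0,ToPort=-1,IpRanges=[{CidrIp=0.0.0.0/0}]'
```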
Bridging Networks using TP-Link Routers
Recently I wanted to set up a wireless network bridge between my garage and house without running any Ethernet cables. To do this I purchased a TP-Link TL-WR841ND 300Mbps Wireless N Router with hopes that it could talk to my existing router/access point in the house. The TP-Link supports a mode called WDS which enables bridging two or more wireless LANs.
When WDS is enabled, the remote access point acts as a bridge for both wired and wireless clients. This way a network can easily be expanded without the trouble of extra wiring. Wireless clients can connect to the remote access point and benefit from the increased wireless coverage area as well.
I first tried associating the garage router with my existing TP-Link DSL modem/router, but it turns out that a remote bridge must connect using WEP encryption instead of WPA2, which wasn't secure enough for my needs. Testing with WEP showed the bridge working exactly as expected.
To get WPA2 encryption working, I tried associating the remote access point with a 2wire router also in the house. The WDS connection was established, but the firewall in the 2wire would not allow connections to any devices other than the access point.
So in an attempt to combine the two methods, I purchased a second TL-WR841ND wireless router to live in the house and provide the final hop to the Internet for the remote access point.
To set up the network, I first connected the new house access point directly to a computer with an Ethernet cable, opened http://192.168.1.1/ on a browser, and made the following configuration changes:
- Under the DHCP tab, select Disable for the DHCP server, then click Save.
- Under the Forwarding -> UPnP tab, click the Disable button.
- Under the Wireless -> Wireless Settings tab, enter the following settings:
- SSID: name of new (bridge) network
- Region: enter the appropriate region for your location
- Channel: enter the number of the least-congested channel in your area
- Mode: 11bgn mixed
- Channel Width: Auto
- Max Tx Rate: 300Mbps
- Enable Wireless Router Radio: checked
- Enable SSID Broadcast: checked
- Enable WDS: unchecked (WDS is only enabled on the remote access point)
- Click Save
- Under the Wireless -> Wireless Security tab, enter the following settings:
- Select WPA-PSK/WPA2-PSK
- Version: WPA2-PSK
- Encryption: AES
- PSK Password: choose a password for your network
- Group Key Update Period: 0
- Under the Network -> LAN tab, enter an IP address for the new access point, click Save, then click reboot.
Once that was done, I was ready to connect the new house access point into the network using one of the four LAN Ethernet ports on the back of the device. I connected to the new access point using a laptop to verify that everything was working as expected, then moved on to configuring the garage access point.
The garage access point is configured similarly to above, with only a few changes:
- Under the DHCP tab, select Disable for the DHCP server, then click Save.
- Under the Forwarding -> UPnP tab, click the Disable button.
- Under the Wireless -> Wireless Settings tab, enter the following settings:
- SSID: name of new remote AP network
- Region: enter the appropriate region for your location
- Channel: same as house AP
- Mode: 11bgn mixed
- Channel Width: Auto
- Max Tx Rate: 300Mbps
- Enable Wireless Router Radio: checked
- Enable SSID Broadcast: checked
- Enable WDS: checked
- Click Survey to find the access point created above, then click Connect to connect to the house AP
- Key type: same as house AP
- Password: same as house AP
- Click Save
- Under the Wireless -> Wireless Security tab, enter the following settings:
- Select WPA-PSK/WPA2-PSK
- Version: WPA2-PSK
- Encryption: AES
- PSK Password: choose a password for your network
- Group Key Update Period: 0
- Under the Network -> LAN tab, enter an IP address for the new access point, click Save, then click reboot.
I then connected a computer to the garage access point using an Ethernet cable and, suddenly, I was online through the bridge. I also tested a wireless connection to the garage AP and was able to connect to the Internet through the remote wireless as well.
Now all devices on the network are able to communicate (including wired-only devices in the garage, thanks to the built-in 4-port switch). This enables me to move my extra devices to the garage and remove some noise from the house.
Using WDS ended up being relatively simple, but I must warn you that WDS does not work very well across different vendors (or even different models of the same vendor as I found above). I recommend using all of the same model of access points to have this work best.
To help get a server unlisted from a spam block list, try the following addresses for each blocking service:
- Barracuda
- Cisco IronPort SenderBase Security Network
- Hostkarma / Junk Email Filter (useful as it contains details about blocked emails)
- Trend Micro Email Reputation Service / MAPS
- DNS WL
- mx25.net
To get in touch with many major ISPs, these links are of help:
| ISP | Postmaster | Feedback loop | Whitelist |
|---|---|---|---|
| AOL | Postmaster | FBL | Whitelist |
| Comcast | Postmaster | FBL | |
| Hotmail / MSN | Postmaster | FBL | |
I will add to this list as I find more resources that I’m actually using for delisting. Another good source for information on feedback loops is Word to the Wise.
Simple Bash Completions
Having Bash complete hostnames when SSH'ing to a host is very useful. I use the following on my computers that don't have useful completion support:

```bash
# Build a hostname list from known_hosts: strip leading whitespace, drop
# comments, keep only the first hostname, and skip bracketed [host]:port entries
KNOWN_HOSTS_LIST=$(sed -e 's/^ *//' -e '/^#/d' -e 's/[, ].*//' -e '/\[/d' ~/.ssh/known_hosts | sort -u)
complete -W "$KNOWN_HOSTS_LIST" ping
complete -W "$KNOWN_HOSTS_LIST" ssh
complete -W "$KNOWN_HOSTS_LIST" telnet
complete -W "$KNOWN_HOSTS_LIST" traceroute
complete -c -f command sudo
complete -o dirnames cd
```
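The effect of the sed filtering (with the bracketed-entry expression written as `/\[/d`) can be illustrated on a small sample file with hypothetical hostnames:

```shell
# Sample known_hosts: a comment, two plain entries (one with an accompanying
# IP address), a bracketed [host]:port entry, and a duplicate line.
cat > /tmp/known_hosts.sample <<'EOF'
# comment
web1.example.com,192.0.2.10 ssh-rsa AAAA
db1.example.com ssh-ed25519 AAAA
[gw.example.com]:2222 ssh-rsa AAAA
web1.example.com,192.0.2.10 ssh-rsa AAAA
EOF
# Drop comments, keep only the first hostname, drop [host]:port entries
sed -e 's/^ *//' -e '/^#/d' -e 's/[, ].*//' -e '/\[/d' /tmp/known_hosts.sample | sort -u
# prints:
#   db1.example.com
#   web1.example.com
```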
If running Debian or Ubuntu, install the `bash-completion` package to pick up more useful completions.
SSH with SOCKS as a VPN
When the only connection I have into a network is SSH, yet I need to do more work than is possible over a single SSH session, I set up a simple VPN-like solution using the following.
I add the following to my `~/.ssh/config` file:

```
Host sshgw.example.net
    DynamicForward 16000

Host *.example.net
    User shane
    ProxyCommand ~/.bin/nc-ssh-autoproxy.sh %h %p 16000
```
The first host listed is the machine I connect to using SSH; it sets up a SOCKS proxy on port 16000. The second host section is generic for the entire environment. It specifies a `ProxyCommand` that calls a script which pipes SSH connections over the SOCKS proxy when the proxy is up.
Here is the `nc-ssh-autoproxy.sh` script:

```sh
#!/bin/sh
# netcat-openbsd needed for the -x proxy option
HOST=$1
PORT=$2
PROXY=$3

# Check whether anything is listening on the proxy port
netstat -an | grep LISTEN | grep -q "$PROXY"
STAT=$?
if [ $STAT = 1 ] ; then
    /bin/nc "$HOST" "$PORT"
else
    /bin/nc -x "localhost:$PROXY" "$HOST" "$PORT"
fi
```
One thing to note: you need the OpenBSD version of Netcat, not the GNU version, as only the OpenBSD version supports connecting to SOCKS proxies. If you are using Debian or Ubuntu, install the `netcat-openbsd` package.
The reason for using the `nc-ssh-autoproxy.sh` script is that if the SOCKS proxy is not up, the SSH connection goes directly to the host; but if the proxy is running, all new SSH connections use the proxy.
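The listen-port check at the heart of the script can be illustrated with a canned line of `netstat` output (hypothetical; a real run pipes live `netstat -an` instead):

```shell
# Fake netstat output showing a listener on the SOCKS port (16000)
netstat_sample='tcp4       0      0  127.0.0.1.16000        *.*        LISTEN'
if echo "$netstat_sample" | grep LISTEN | grep -q 16000 ; then
    echo "proxy up: route through SOCKS"
else
    echo "proxy down: connect directly"
fi
# prints: proxy up: route through SOCKS
```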
I frequently use the initial SSH connection to open an IRC session. That way I’m not likely to accidentally close it thinking it is an unused terminal and stop the proxy.
A SOCKS proxy is a general solution: you can send web, email, and other traffic through it to the machines behind the SSH gateway. Set your applications to use the SOCKS proxy on `localhost:$PORT` and everything should work.
I like to take it a step further though for web browsing: I install FoxyProxy in Firefox and configure FoxyProxy to only send requests for *.example.net through the proxy. All other traffic is sent using the normal methods for the machine.
An example FoxyProxy config:
- Create a new proxy, enable SOCKS5
- Add a new pattern
- Name the pattern
- Provide a URL pattern (I prefer regular expression patterns; here's an example: `https?://.*\.example\.net(:\d+)?/.*`)
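A corrected form of the example pattern (backslashes restored, and `\d` written as `[0-9]` since POSIX `grep -E` has no `\d`) can be checked against a few hypothetical URLs:

```shell
pattern='https?://.*\.example\.net(:[0-9]+)?/.*'
# A host under example.net, with a port, should match
echo 'https://wiki.example.net:8080/page' | grep -Eq "$pattern" && echo "proxied"
# Any other domain should fall through to the normal connection
echo 'https://example.org/page' | grep -Eq "$pattern" || echo "direct"
# prints:
#   proxied
#   direct
```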
Manual Puppet Deployments
After including WordPress in my puppet configuration, I noticed that puppet runs were taking significantly longer than before. I had included WordPress in puppet to facilitate easy upgrades and make a consistent publishing system for multiple independent blogs. I wasn’t aware of the network features of WordPress when I set this up, but in hindsight I would do it this way again, as management is painless. I may experiment with the network feature in the future.
To keep the flexibility of puppet without making all puppet runs take longer and longer, I moved the WordPress configuration to a separate manifest that is applied manually as needed.
To begin, I created a manifest called `manual-wordpress.pp` that contains the following:

```puppet
import "classes/*"

node 'host.example.com' {
    wordpress { "blog.shanemeyers.com":
        domain  => "blog.shanemeyers.com",
        path    => "/path/to/blog.shanemeyers.com",
        db      => "database_name",
        db_user => "dbuser",
        db_pass => "dbpass",
    }
}
```
This manifest includes the `node` directive to ensure that the blog is installed only on the host I want it installed on. The `manual-wordpress.pp` file could be considered an equivalent of the `site.pp` file.
Under `classes/` I have the same `wordpress.pp` file as mentioned in the installing WordPress via puppet post.
To complete the task of applying the manual manifests, I created a PHP script that calls puppet as follows:

```shell
sudo /usr/bin/puppet --color html --verbose /path/to/manual-wordpress.pp
```
Puppet should be run as root to enable the puppet client to access the host’s SSL key files and therefore communicate with the puppet master for retrieving files.
Playing with EC2 Micro Instances
Last night I decided to give the t1.micro instance size a try on Amazon EC2. I used the Ubuntu 32-bit EBS AMI for the test.
```shell
ec2-run-instances ami-1234de7b --kernel aki-5037dd39 --instance-type t1.micro \
    --region us-east-1 --key test1 --user-data-file ~/svn/ec2/init-instance.sh \
    --group default --instance-initiated-shutdown-behavior stop
```
The instance started as usual. Once it was up and configured (via Puppet), I was able to log in and start poking around.
During the course of my testing, I upgraded some packages to the latest versions (I ran `sudo apt-get update ; sudo apt-get upgrade`) and rebooted the instance. After the reboot, I was unable to log in. As I dug into the problem, I noticed the following on the console:
```
$ ec2-get-console-output i-87705bed | tail
[    0.850326] devtmpfs: mounted
[    0.850368] Freeing unused kernel memory: 216k freed
[    0.852042] Write protecting the kernel text: 4328k
[    0.852509] Write protecting the kernel read-only data: 1336k
init: console-setup main process (63) terminated with status 1
%Ginit: plymouth-splash main process (215) terminated with status 2
init: plymouth main process (45) killed by SEGV signal
cloud-init running: Thu, 16 Sep 2010 22:00:31 +0000. up 7.09 seconds
mountall: Disconnected from Plymouth
```
It appeared that there was a problem with the mounts. On another clean instance, I noticed that the `/etc/fstab` file had a reference to `/mnt`, which isn't valid on t1.micro instances: all other instance types include ephemeral storage, but t1.micro instances do not.
Once I removed the conflicting line in `/etc/fstab`, the problem instance was able to boot normally.
For the curious, here's my `init-instance.sh` script:

```sh
#!/bin/sh
apt-get update
apt-get --yes install puppet
wget --output-document=/etc/puppet/puppet.conf http://example.com/puppet.conf
perl -pi -e 's/START=no/START=yes/' /etc/default/puppet
/etc/init.d/puppet restart
```
After discovering this problem with micro instances, I added the following to the `init-instance.sh` script:

```sh
if [ "`curl -s http://169.254.169.254/latest/meta-data/instance-type`" = "t1.micro" ] ; then
    sed -i -e '5d' /etc/fstab
fi
```
This deletes the line in `/etc/fstab` that mounts the ephemeral storage.
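Deleting by fixed line number assumes the ephemeral entry is always line 5. A sketch of matching the `/mnt` mount point instead, demonstrated on a sample fstab (hypothetical contents):

```shell
cat > /tmp/fstab.sample <<'EOF'
proc       /proc  proc  nodev,noexec,nosuid  0 0
/dev/sda1  /      ext3  defaults             0 0
/dev/sda2  /mnt   auto  defaults,comment=cloudconfig  0 0
EOF
# Remove whichever line mounts /mnt, regardless of its position in the file
sed -i -e '/[[:space:]]\/mnt[[:space:]]/d' /tmp/fstab.sample
cat /tmp/fstab.sample
```

The same sed expression could replace the `5d` above, making the cleanup robust to future edits of the file.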