One of the advantages of using a cloud provider such as Amazon Web Services is the flexibility to quickly terminate and create new instances. You can even do this automatically with autoscaling.
But then you are faced with the problem of provisioning: do you create a base AMI for each server’s “role”?
That is one option, but it could easily turn into dozens of AMIs to keep track of, and it gives you no good way
to keep things up-to-date or roll out changes across the infrastructure. Configuration management is the obvious solution here, and Puppet
is a popular choice. One “issue” with AWS’s EC2 instances is that the machine’s hostname will be something like ip-10-0-0-113, while the public hostname will be something like ec2-75-101-128-23.compute-1.amazonaws.com. This is rather
inconvenient.
I’m going to explain how you can automatically set sane hostnames and CNAME DNS records for instances as they boot, and how to add them to and remove them from the puppetmaster automatically on boot and shutdown.
In my infrastructure, I identify a server’s “role” by a file, /etc/ROLE. This contains something like someapp_webserver, which identifies the application the server belongs to and its
role, in this case “webserver”. Since I wanted a machine’s hostname to identify what it does and which application it’s part of, I decided to format the hostnames as follows:
someapp-somerole-xxx.environment.mydomain.net
where xxx is the lowest available number beginning with 000 and environment is something like prod or staging (I use the environment variable from puppet to set this). So, for example, if a server’s
role (content of /etc/ROLE) is myawesomeapp_webserver and there are already the hostnames myawesomeapp-webserver-000.prod.mydomain.net and myawesomeapp-webserver-001.prod.mydomain.net, then the next
available hostname would be myawesomeapp-webserver-002.prod.mydomain.net. To set this automatically, we need some bash script magic that runs at boot, queries Route53, figures out which
hostname to use, sets it on the server, and creates the CNAME record. Here’s what my bash script looks like:
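In rough form, it does something like the following. This is a minimal sketch that assumes the aws CLI and curl are installed on the instance; ZONE_ID, DOMAIN, and the hard-coded environment are placeholders you would replace with your own values.

```bash
#!/bin/bash
# Boot script sketch: pick the next free hostname for this role, set it
# locally, register a CNAME in Route53, then start puppet.
set -eu

ZONE_ID="Z1234567890ABC"        # placeholder Route53 hosted zone ID
DOMAIN="mydomain.net"
ENVIRONMENT="prod"              # or: $(puppet agent --configprint environment)

ROLE=$(cat /etc/ROLE)           # e.g. someapp_webserver
PREFIX="${ROLE/_/-}"            # someapp_webserver -> someapp-webserver

# All existing CNAMEs for this role and environment, sorted.
EXISTING=$(aws route53 list-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" \
  --query "ResourceRecordSets[?Type=='CNAME'].Name" --output text \
  | tr '\t' '\n' | grep "^${PREFIX}-[0-9]\{3\}\.${ENVIRONMENT}\." | sort)

# Walk 000, 001, 002, ... and take the first number not already in use.
NUM=0
while grep -q "^${PREFIX}-$(printf '%03d' "$NUM")\." <<< "$EXISTING"; do
  NUM=$((NUM + 1))
done
NEW_HOSTNAME="${PREFIX}-$(printf '%03d' "$NUM").${ENVIRONMENT}.${DOMAIN}"

# Set the hostname on the box (adjust for your distro's conventions).
hostname "$NEW_HOSTNAME"
echo "$NEW_HOSTNAME" > /etc/hostname

# Point the new CNAME at this instance's EC2 public hostname.
PUBLIC_HOSTNAME=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
CHANGE_BATCH=$(cat <<EOF
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "${NEW_HOSTNAME}",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "${PUBLIC_HOSTNAME}"}]
    }
  }]
}
EOF
)
aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" --change-batch "$CHANGE_BATCH"

# Only now start puppet, so it registers under the new hostname
# (the puppet service itself is chkconfig'd off).
service puppet start
```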
There’s a lot going on there, so let me break it down a bit. This script looks at /etc/ROLE to determine the role and the pattern of what the hostname should look like.
It queries Route53 for a list of all the CNAME records matching the pattern. Then it begins iterating over the sorted hostnames from Route53, and when it finds a missing number,
it uses that as the machine’s hostname. The puppet environment is used to set the environment portion of the hostname. Once the hostname is set, puppet is started. The puppet
daemon should not run automatically at boot (chkconfig puppet off), but should be started by this script to ensure that it contacts the puppetmaster using the correct hostname
(not the EC2 assigned hostname).
You will need an accompanying script to handle removing the CNAME record from Route53 and deactivating the node on the puppetmaster on shutdown/reboot. Here’s my script for this:
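Again in rough form, a sketch of what that shutdown script can look like. It assumes the aws CLI, the node’s puppet SSL certs under /var/lib/puppet/ssl, and the older Puppet REST API’s certificate_status endpoint on port 8140; ZONE_ID and PUPPETMASTER are placeholders, and the exact puppetmaster calls will depend on your version and setup.

```bash
#!/bin/bash
# Shutdown script sketch: clean this node out of the puppetmaster and
# delete the CNAME record the boot script created.
set -eu

ZONE_ID="Z1234567890ABC"            # placeholder Route53 hosted zone ID
PUPPETMASTER="puppet.mydomain.net"  # placeholder puppetmaster address
NODE_NAME=$(hostname)               # the boot script set this to the FQDN
SSL_DIR=/var/lib/puppet/ssl

# Revoke and then delete this node's certificate on the master, using the
# node's own cert to authenticate (certificate_status endpoint from the
# older Puppet REST API). -k skips CA verification; prefer --cacert.
curl -k -X PUT -H "Content-Type: text/pson" --data '{"desired_state":"revoked"}' \
  --cert "$SSL_DIR/certs/$NODE_NAME.pem" --key "$SSL_DIR/private_keys/$NODE_NAME.pem" \
  "https://$PUPPETMASTER:8140/production/certificate_status/$NODE_NAME"
curl -k -X DELETE -H "Accept: pson" \
  --cert "$SSL_DIR/certs/$NODE_NAME.pem" --key "$SSL_DIR/private_keys/$NODE_NAME.pem" \
  "https://$PUPPETMASTER:8140/production/certificate_status/$NODE_NAME"

# Delete the CNAME record. Route53 requires the DELETE to match the
# existing record exactly (same TTL and value the boot script used).
PUBLIC_HOSTNAME=$(curl -s http://169.254.169.254/latest/meta-data/public-hostname)
CHANGE_BATCH=$(cat <<EOF
{
  "Changes": [{
    "Action": "DELETE",
    "ResourceRecordSet": {
      "Name": "${NODE_NAME}",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{"Value": "${PUBLIC_HOSTNAME}"}]
    }
  }]
}
EOF
)
aws route53 change-resource-record-sets \
  --hosted-zone-id "$ZONE_ID" --change-batch "$CHANGE_BATCH"
```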
This script issues a couple of API calls to the puppetmaster to deactivate the node and remove its certificates, and it removes the CNAME record from Route53. You’ll need to make sure the puppetmaster is
configured to accept these incoming API calls.
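For an older (pre-4.x) puppetmaster, that roughly means an auth.conf rule for the certificate_status endpoint, placed before the default deny rules. This is only a sketch; tighten the allow line to your own hosts rather than *.

```
# /etc/puppet/auth.conf on the puppetmaster
path /certificate_status
method find, save, destroy
auth yes
allow *
```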
With these scripts baked into your server image, you can now configure autoscaling and have human readable hostnames set automatically. You just need the user-data used by the autoscaling group to
set the value of /etc/ROLE.
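For example, the launch configuration’s user-data can be as small as a script that writes the role file before the boot script runs (the role name here is just illustrative):

```bash
#!/bin/bash
# User-data for the autoscaling launch configuration: write the role file
# that the boot script reads to build the hostname.
echo "someapp_webserver" > /etc/ROLE
```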
Hopefully someone finds this helpful. Happy DevOps-ing!