December 25, 2008

Day 25 - dotfiles and power users

Dotfiles are precious. They help you maintain your desired environment. My dotfiles have been built slowly over time as I find features I like or change the way I use a tool. I also learn by reading other people's rc files. Ignoring your ability to change the behavior and operation of your favorite tools will leave you at a very minimal level of productivity.

The more time you spend using a tool, the more time you should put into configuring it and learning about it. Using a program with only its defaults is, over time, a massive productivity killer. For example, the speed at which I could work in a unix shell skyrocketed when I learned that I could have vi keybindings in my shell (with 'set -o vi' or 'bindkey -v' in most shells).

As mentioned above, part of learning how to configure a tool is simply reading documentation, or searching online for how to do something. Another important part is learning from others by reading their dotfiles. In order for others to learn from your dotfiles, they must be available somewhere, so I heavily encourage you to post your rc files online. Don't just publish them; add comments describing what each configuration decision does. Knowledge grows faster when there's a community contributing to it, so post documented snippets online!

Rather than covering another best practice or tool, today's article attempts to fill your mind with some useful things you may want to try in your own tools. Covered below are some of my configurations for various tools. Each configuration has a link to its respective documentation (if available online). Further, my hope is not that you agree with my configurations, but that you find options here that you didn't know about that might help you.

In the process of reading peer rc files and gathering data for this article, I found a neat website where people can publish their own dotfiles and view everyone else's uploads.

There are far too many tools and options to cover, so I'll cover the three tools closest to my heart: zsh, vim, and screen. However, before I get into it, I want to make a few important points.

  1. Vi mode in your shell is one of the best features available if you are familiar with vi. Bash, ksh, zsh, and tcsh all support 'vi mode' in varying degrees of compatibility. In your shell, type 'set -o vi' (bash, ksh) or 'bindkey -v' (zsh, tcsh), and be happy with your increase in productivity.
  2. Set your terminal (screen or terminal) title! There are lots of existing rc files that show you how to do this. For zsh, try searching zsh screen title.
  3. Don't ignore configuration of tools you use every day. Being a power user doesn't mean you automatically do lame things like recompiling vim with -O9999, it means understanding the tools you use and how to configure them to best fit your pattern of work and your style preferences.
To repeat one more time, publish and document your dotfile configurations. Ok, on to some options for zsh, vim, and screen.
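To make point 1 above concrete, here's what the vi-mode setting looks like in rc-file form (a minimal sketch; the comments note which file each line belongs in):

```shell
# In ~/.zshrc (zsh) or ~/.tcshrc (tcsh):
bindkey -v

# In ~/.bashrc (bash) or your ksh startup file:
set -o vi
```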

December 24, 2008

Day 24 - Message Brokers

You know that cron job you have that runs every 5 minutes, checks if there is work to do, and does work only if there is work to do? Stop that. The old ways of primitive, distributed processing should be put behind you. Cease the old habit of having your pipeline members check "Is there input?" periodically; whether it's asking mysql for signal data, looking for an empty file as a signal, or whatever. There's a better way: use a message broker.

Let's take a small example. You have a machine database which includes useful data about your hardware such as hardware type, mac addresses, services, etc. You were smart and decided that your dhcp configs would be autogenerated from this database, so now your dhcp server has a cron job that runs every 5 minutes and regenerates the dhcp config and restarts dhcp. Bonus points if you only restart dhcp if the config is actually different.

What you should have done is hook whatever interface changes (or permits changes to) your machine database into sending a message telling your dhcp server to regenerate its config. How can we easily do that?

Message brokers act as a channel for processes to communicate with each other easily, and they facilitate reliable, cross-platform, cross-language, cross-network messaging. Brokers support a variety of messaging models (see further reading). The messaging model we want here is the 'store and forward' kind, since we only have one writer (the machine database) and one reader (the dhcp config updater), but if we had more readers on a single channel, we would want a 'publish and subscribe' (pubsub) model. Message brokers support multiple independent channels for your processes to communicate over; for convenience, you choose the name of each channel.

What are our options? AMQP, Advanced Message Queueing Protocol, is a fancy standard that is supported by software called message brokers. Popular message brokers include ActiveMQ, RabbitMQ, and OpenAMQ. In addition to AMQP, there are other protocols designed for messaging, such as JMS and STOMP. STOMP is simple and can work on just about any message broker. With STOMP on ActiveMQ, you can do both queue and topic message models.

I'm assuming you have a message broker that supports STOMP already configured. If you don't, try out ActiveMQ. There are other brokers that support STOMP. Alternately, you can use StompConnect to add STOMP functionality to anything supporting JMS.

First, we'll want to write the code that sends a message. Since STOMP's message contents are just text, let's send the 'UPDATED' message to notify that the machines database has been modified. Here's an example in ruby:

require "rubygems"
require "stomp"   # install with 'gem install stomp'

# Connect to the stomp server
client = Stomp::Client.open("stomp://mystompserver:5906")

# Send "UPDATED" to the destination '/topic/dhcp'
client.send("/topic/dhcp", "UPDATED")
Assuming your machines database (or the interface to it) can be told to run this script when a modification occurs, you are halfway to completing this project.

The other part is the receiver. You need a script that will listen for notifications and regenerate the dhcp config as necessary.

require "rubygems"
require "stomp"

while true do
  client = Stomp::Client.open("stomp://mystompserver:5906")

  client.subscribe("/topic/dhcp", :ack => :client) do |msg|
    if msg.body == "UPDATED"
      # hypothetical regeneration script; substitute your own
      system("/usr/local/bin/regenerate-dhcp-config")
    end
    client.acknowledge(msg)
  end

  client.join
end

The receiver code is a bit longer. We subscribe to the same topic as the sender and send the server an acknowledgement that we received the message. The 'client.join' at the end is using a thread join function to wait until the stomp client disconnects or dies. We wrap the whole bit in an infinite loop so we will reconnect in the event that the stomp server dies.

With this configuration, any changes made to your machines database will cause a dhcp config regeneration. The benefits here are twofold: first, you no longer wake up every 5 minutes trying to do work, and second, any changes are propagated to your dhcp server immediately (for some small definition of immediately).

Message brokers are extremely useful in helping automate your production systems, especially across platforms and languages. There are STOMP, AMQP, and JMS libraries for just about every language you might use. Lastly, with the availability of free and open source messaging tools and libraries, the cost of deploying a broker is pretty low while the gains in automation and reliability can be high.

Further reading:


December 23, 2008

Day 23 - Change Management

This post was contributed by Matt Simmons. Thanks, Matt! :)

It's been said that change is the only thing that ever stays the same, and whoever said that probably worked in IT. Transitions are a part of life, but we administrators are burdened by what I would judge to be more than our fair share.

Too frequently, we find ourselves picking up the pieces from the last major system change we made, while at the same time designing the next iteration of the infrastructure that we'll be putting in place. How many times have you chosen an implementation that wasn't ideal now, because a bigger change was just around the corner, and you wanted to "future proof" your design? Bonus points for having to make that decision due to a previous change that was still being implemented. No matter how precisely you've planned a major upgrade, snags and snafus will rear their ugly heads.

Is this something that we just have to deal with? Are we at the mercy of Murphy, or are there ways we can induce these issues to work to our benefit? Sure, it would be easy if we had a crystal ball, but too often we don't even have a rough guess as to where our plans will encounter problems.

Change itself isn't the enemy. Change promotes progress, and from the 10,000ft view, our long-term goals should work towards this progress. Dealing with change is a natural and positive endeavor.

Instead of being thrown about by the winds of chance, let's put some sails on our boat, and see if we can make headway by trying to manage the change on our terms. If we know that problems are going to be encountered, and we face those facts before we edit the first configuration, then we've taken the first step towards real change management.

The enemies of successful change (and the resulting progress) are imprecise requirements and lack of project leadership. Unless you plan around these pitfalls, your project may very well go into ventricular fibrillation, flip-flopping back and forth, unable to decide between two unforeseen evils midway through the work flow. While it's possible to recover from this with an injection of leadership, it's much easier to inoculate against the problem in the beginning.

If you're going to be planning a big project, you will probably want to follow a methodology. There are just about as many methods of managing a change as there are people who want you to pay them to do it, but with IT projects, I've found what I consider to be the most efficient for me. Your mileage may vary, of course.

  1. Team and goal formation

    Assuming your change is moderate to large scale, you've (hopefully) got a team of people involved, and one of them has been appointed leader. This is the point where you want to decide on your goals. Determine what success will be defined as at the end of the project, and how best to get there.

    Many times we don't yet know what or how success will be defined, or even what the target should be. Because of this, it's natural to perform step 2 before your goals have been decided upon. In fact, I'd recommend it.

  2. Analysis (Research) & Information Organization

    Too often (or not often enough, depending on your viewpoint) we're asked to do too much with too little. Frequently, we don't even know how to do it. This analysis step is here to allow you to make informed decisions, and to acquire the skills and resources necessary to succeed in your task. Sometimes the resources are people, in the form of new employees or contractors, or both.

  3. Design

    By this time, you know what the task entails, but you don't have a road map of how you're going to get there. This step makes you the cartographer, planning the route from where you are to the implementation of your project and beyond. Some details of the design may change during development, but it's important to have the major framework laid out in this step as you proceed.

  4. Development

    In a perfect world, you would take the design produced in step three and translate it straight into something usable. We all know that this rarely, if ever happens. Instead, you encounter the first set of really difficult problems in this stage. Issues spring up with the technology that you're using, or with kinks in the design that you thought were smoothed over, but weren't. Development appears to follow Hofstadter's Law: 'It always takes longer than you expect, even when you take into account Hofstadter's Law'. Thorough testing at the end of the development stage will prevent misery in the next step.

  5. Implementation

    Here we find the second repository of unforeseen bugs and strange glitches that counteract your carefully planned designs. The good thing about issues at this point is that, provided you've tested thoroughly enough in development, you won't find many show stoppers. On the other hand, sometimes these bugs can appear as niggling details and intermittent issues, hard to reproduce.

  6. Support

    If you're designing, developing, and implementing a product, support is just another part of the game. This is where you pay for how carefully you performed the preceding steps. Garbage In, Garbage Out, they say, but because you've designed and built a solid system, your support tasks will be light, possibly just educating the users and performing routine maintenance.

  7. Evaluation

    Remember that part in step 1, where you decided what success would be defined as? Dust it off and evaluate your project according to those requirements. Discuss with your team what you could have improved on, and don't forget to give credit where it is due. Hard work deserves appreciation.

This method is really a modified ADDIE design, so named because it consists of Analysis, Design, Development, Implementation, and Evaluation. We've added a couple of steps to help it flow better in the IT world we live in. There are certainly other methods to look at; Instructional Systems Design (ISD) is another well-known one.

However you decide to manage change, it's important to stay with your plan and follow through. Remember to work and communicate with your teammates, and don't stress because the project is too big. Just take it one step at a time, follow your plan, and you'll get the job done.

December 22, 2008

Day 22 - What's the problem?

From Wikipedia's software development process article, "the most important task in creating a software product is extracting the requirements or requirements analysis." Requirements analysis is an early part in the engineering process. This means learning what problem needs to be solved and the parameters and constraints with which you must solve them.

The wikipedia article goes on, "customers typically have an abstract idea of what they want as an end result, but not what software should do." My experience in systems administration is that customers may have an idea of what they want as an end result, but they may not know what problem they are solving and often present their idea in the form of a solution.

To describe this with an example, let's say you have a small team of sysadmins who are familiar with mysql and your group supports a few mysql deployments in your company.

A customer says, "I need postgresql installed." This is a request for action, not a description of the problem. You are only given the solution that the customer believes will bring about their desired end result. Do you simply install postgresql for them, or do you ask why they need it? If you already have a well-supported mysql deployment, you should be asking why you need to support another database.

Ask about the problem they need solved. Get details. Why does he or she need postgres? Can the existing mysql deployment and knowledge be used instead? Customers who simply ask for actions ("please implement this solution") are often unaware of existing, similar options already available. It's also possible that this customer is trying to solve a problem that doesn't exist, doesn't affect your company, or isn't feasible to solve completely.

If you get requirements, you might find they are simply "I need a database that speaks SQL." Alternately, you might find that the requirements include "I need to run this 3rd party tool which requires postgres." Dig deeper. What does this tool do? Can its features be provided by another tool that doesn't require burdening your team with additional products to support? Is the problem the customer wants to solve even in the scope of your team?

In addition to getting the necessary information about the problem, you should also make sure you are given other constraints and parameters. Is there a deadline? What is the scope of the problem, who is affected, etc? What is the priority?

Let's examine another common situation. Another customer says, "I need apache on serverfoo restarted." Again, you should ask for a description of the problem. What are the symptoms the customer is observing? Restarting apache is an action that could bring about a solution, but what are you solving? What is broken? What if a customer reports "mail is down?" What does "mail is down" mean? What are the symptoms being observed?

When digging for a description of the problem from your customer(s), be careful not to offend the customer. It's easy to dismiss the customer as an idiot if the information you are given doesn't make sense or doesn't help you fix a problem. This issue can easily occur when a non-domain-expert interacts with a domain expert. Remember that perspective is reality, and that "mail is down" makes total sense to your customer but is confusing to you. Make sure your fellow sysadmins follow the advice in this paragraph, too.

Asking for requirements can be a tool to help push back on bad ideas. Sometimes a management hammer comes down from above and says you must implement something that you disagree with or don't understand. Being a domain expert, you might be disagreeing or lacking understanding because the request doesn't make sense. Ask for requirements! Sometimes ideas manifest themselves into requests (or mandates) without the idea being actually thought out.

Lastly, always remember you can say, "no." Not every idea is a good one. Bad ideas can come with urgency. Be understanding of any urgency from your customers, but remember that you have the most information about what makes a change bad. Otherwise, why would they be asking you or your team to do it? Be aware of things people will say to convince you to do something even though you can show it is incorrect, such as "the CEO said we have to do this." Facts are your ally, so use facts to show why a proposal is wrong, why a request doesn't make sense, or why a set of requirements is impossible to fulfill.

December 21, 2008

Day 21 - Out-of-Band Management

Knowing I don't have to drive to the datacenter to reboot a machine gives me a warm fuzzy feeling. Do you have remote out-of-band management on your servers? Do you need it? If you need it, read on.

Remote management features include power management, KVM, serial console, and other things. Remote management is most critical when the host is not fully booted: when it's off, or you need to configure bios settings, or debug a kernel crash.

When deciding which vendor and model to buy, consider which of these features will save you time and money in the long term. More features often means more money; for example, KVM-over-LAN will probably increase the server cost by quite a bit, so be sure to only buy features you need.

Remote management systems offer many different interfaces: serial console, web browser, IPMI, OPMA, SSH, telnet, and others. Serial (i.e., RS-232) costs the most to own: the others work over the network you already provide, while serial port control usually requires a separate device to give you remote access to each serial port.

The most basic remote management feature, I think, is power management: Power on, off, and reset. Power management alone comes in two main forms, smart power strips and remote access controllers (RAC). Smart power strips offer an interface for controlling the state of each power port. RACs live closer to your system and connect to the power, reset, and other controls on your system motherboard (via wires or a separate interface).

Smart power strips are an easy way to provide remote power control to systems that don't have RACs, but there are drawbacks. A smart power strip toggling power won't do anything if your server is plugged into a UPS that's plugged into your power strip (for obvious reasons). Further, if your servers have redundant power supplies, you'll need one managed power port per power supply, and rebooting a given server requires turning off all of its power ports before turning any back on.

RAC modules come in varying forms. Like smart power strips, they offer interfacing over serial or network, depending on the model. Avoid serial if you can for reasons already stated. There are some standardized RAC network interfaces, such as IPMI and OPMA. Exact vendor support varies. Many Dell and HP server models come with IPMI. SuperMicro offers 'Supermicro Intelligent Management' which supports IPMI. Rackable's RAC goes by the 'Roamer' name, some of which support IPMI. Recent Intel chipsets support AMT (branded with the 'vPro' name).

IPMI RACs live on the server itself, sharing its power and often sharing layer-1 connectivity with an onboard network device. IPMI can be configured while the server is online, which lends itself to easy automation. In Linux, for example, you'll want the IPMI kernel drivers (from OpenIPMI) and the ipmitool tool. ipmitool will let you talk to the local system's IPMI (via the OpenIPMI kernel drivers) or to remote hosts using the IPMI protocol.
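As a sketch of what remote power control looks like with ipmitool, here are the standard chassis power subcommands aimed at a made-up BMC host; the hostname and username are assumptions, and the echo makes this a dry run:

```shell
# BMC_HOST and BMC_USER are hypothetical; substitute your management
# controller's address and credentials.
BMC_HOST=ipmi.example.com
BMC_USER=admin

# Standard ipmitool chassis power subcommands over the network ('lan'
# interface). 'echo' makes this a dry run; remove it to issue the commands.
CMDS=$(for action in status on off cycle; do
  echo "ipmitool -I lan -H $BMC_HOST -U $BMC_USER chassis power $action"
done)
echo "$CMDS"
```

ipmitool will prompt for a password, or accept one via its -P flag; see its man page for the details.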

Simple power management isn't the only feature provided by the RACs mentioned here. IPMI, the protocol, supports serial-over-lan, sensor information, event logging, etc, but the features supported will vary by hardware. I don't have experience with OPMA or Intel AMT, but from their respective descriptions, they sound similar to IPMI in features.

Be sure to include out-of-band management (power, serial, etc.) when considering your future purchases. I won't define your server requirements for you, but for a point of note, even Dell's cheapest 1U rack server appears to come with IPMI support, so there may not be any reason to buy hardware that lacks remote, out-of-band management.


December 20, 2008

Day 20 - Ganglia

Ganglia is a monitoring tool designed originally to help scalably monitor computing grids and clusters. How can it help you, even if you don't run traditional computing grids or clusters?

Ganglia is an RRDtool-based (like Cacti) monitoring and graphing system. Ganglia differs from Cacti in that configuration is much more automatic. Ganglia's design centers around two programs: gmond and gmetad. The gmond program listens for metric reports from other gmond programs (or tools that emit the same messages). The gmetad program periodically polls a single gmond for data on an entire cluster. The trick here is that every time gmond gets data, it sends that data via multicast to the other gmonds, so every gmond has state for the whole cluster. I presume that the actual gmond used by gmetad is chosen at random, and if the chosen gmond host fails, another gmond host is chosen.

In addition to clusters (one gmetad for N gmonds), Ganglia supports a higher level collection they call a grid. A grid is automatically learned when you have one gmetad polling from another gmetad. I am unaware if you can have more than these levels (host, cluster, grid).

Multicast: This means your network gear will need multicast routing enabled if you hope to span broadcast domains with this monitoring. Alternately, gmond can be configured to send updates to a unicast address which can avoid needing multicast routing and other potentially difficult network features.

Both gmond and gmetad have reasonably easy-to-use configuration files and come with very reasonable default values. Simply running gmond and gmetad from the default configurations will result in data you can access easily. The primary Ganglia human interface is through a webserver.

Getting data out of Ganglia is easy. The historical data is stored in RRD files in a known location organized by cluster and hostname, so you can use your favorite rrdtool interface to query data. The current data is stored on any gmond, which is queryable by connecting to ganglia's xml port (default 8649). The service listening on that port will dump the metric data by cluster and host in XML. XML might make you groan, but its use will help you write tools (like nagios checks) that consume the current data.

My first question after playing with Ganglia for a few minutes was, "How do I monitor my network gear?" Typical network gear won't allow you to run arbitrary binaries on them. Luckily, Ganglia comes with a tool for broadcasting metric messages, gmetric. With gmetric, you can spoof the source of a piece of data and easily claim that it came from your switch or router. This tool is also the easiest way to extend the metrics ganglia monitors for you. For example:

% gmetric -S "" -n uptime -v 3644 -t uint32 -u seconds
  spoofName: myrouter    spoofIP: 
And the metric reported on the xml port is:
<HOST NAME="myrouter" IP="" REPORTED="1229761211" TN="28" TMAX="20" DMAX="0" LOCATION="unspecified" GMOND_STARTED="0">
<METRIC NAME="uptime" VAL="3644" TYPE="uint32" UNITS="seconds" TN="28" TMAX="60" DMAX="0" SLOPE="both" SOURCE="gmetric"/>
You can also specify the lifetime of the value, which will cause it to automatically be dropped from your gmond processes.
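That XML dump is easy to consume from a script. As a sketch of what a nagios-style check might do, here is the sample output above parsed with Ruby's standard REXML library (in real use you would read the XML from a socket connected to port 8649, not a hardcoded string):

```ruby
require "rexml/document"

# The sample <HOST>/<METRIC> output shown above, as a string.
xml = <<-XML
<HOST NAME="myrouter" IP="" REPORTED="1229761211">
<METRIC NAME="uptime" VAL="3644" TYPE="uint32" UNITS="seconds"/>
</HOST>
XML

doc = REXML::Document.new(xml)
host = doc.root.attributes["NAME"]
doc.elements.each("HOST/METRIC") do |metric|
  # prints: myrouter uptime=3644 seconds
  printf("%s %s=%s %s\n", host, metric.attributes["NAME"],
         metric.attributes["VAL"], metric.attributes["UNITS"])
end
# In real use: require "socket"; xml = TCPSocket.new("gmondhost", 8649).read
```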

In addition to the gmetric command, there are C and Python interfaces to gmetric. And if you want your own software to emit gmetric messages, there's an embedded gmetric library you can use.

My second question was, "What if a host is retired, is renamed, or is reconfigured?" If a host never again reports data about itself, the last state is still kept on every gmond and gmetad. To expire hosts that haven't spoken in a while, you should set the host_dmax value in gmond.conf to some value of seconds after which the host's state will be dropped. However, I am not sure the RRDs for this host will be cleaned up automatically. Host renaming probably requires that you rename the rrd directory holding the host's data if you wish to maintain historical data across the rename. If you reconfigure a host, deleting no-longer-used metric rrds is probably prudent. All of the above changes will likely require walking around in the ganglia rrd storage directory in your gmetad.

My last concern with ganglia was that the monitoring unit was a host. I think most often in services, not hosts. Thankfully, there are workarounds. Since you can spoof hostnames with gmetric, I decided to try something not-hostname-like to identify a service+host combo. I tried to spoof "service/hostname," but Ganglia uses this as the directory name and fails mkdir for not doing it recursively. Choosing another delimiter, comma, works fine:

Note: gmetric -S "foo/bar" will succeed, but gmetad will crash trying to write to that file path (mkdir not so smart). If you try this, you'll have to stop all gmond and gmetad instances then start them all again to clear the knowledge of the host named "foo/bar"

% gmetric -S ",aaa123" -n "apache errors" -v 5 -t uint32 -u "errors"
 spoofName: apache,aaa123    spoofIP: 
However, it appears that ganglia keys on IP as unique, so a second entry spoofed as ",aaa123" will collide with the first in the web interface. Further testing revealed that the IP portion of the spoof is not validated. If we use the unique combination of "service,host" as both the IP and hostname, everything is peachy:
% gmetric -S "apache,aaa123:apache,aaa123" ...
% gmetric -S "mysql,aaa123:mysql,aaa123" ...
Pushing only apache-related data to the 'apache' host prefixes will help you organize your monitoring by service. This means you could easily view only apache data for a host, etc.

Ganglia makes me happy because I can worry only about giving it data. Adding a new data source only requires knowing how to get the data and feed it periodically to gmetric. After that, the data is automatically available in Ganglia's web interface. I fed (with gmetric) some fake data about mysql connections to see what happens, and this was the result.

The wish-list of Ganglia features includes talk of making it easy to provide custom graphs and views. Currently, adding your own custom graphs requires hacking on the PHP code that presents the web interface. Glancing at that PHP doesn't make me cringe (it's easy to read!), so extending ganglia by adding your own views is probably reasonable.

Considering what I've found so far with Ganglia, the shortcomings mentioned here are easily worked around, and the potential benefits of less-painfully-configured data trending and monitoring seem quite good.

Oh, by the way, gmond works in Windows. The project has a cygwin binary that works by itself and can report data about a windows host. You can also use gmetric from windows hosts to report info about themselves.


December 19, 2008

Day 19 - Visibility and Communication

A friend said to me tonight, "Once again, at a company function, the CEO has forgotten operations exists." This is a visibility problem that often hits operations and support teams.

Good systems administration is about hidden work and effort nobody ever sees. You are probably accustomed to only being highly visible when there is a problem - interrupts asking if something is broken, when it will be fixed, etc. If this is the only visibility you receive, how can you expect to be loved and adored by the world? You need to help yourself and your teammates with visibility. Your manager should help you, too.

Improving local visibility, that of your teammates and manager, is just as important as external visibility to your customers (employees or otherwise).

Keep track of your work: code commits, ticket resolutions, etc. A habit I developed while working at Google was to maintain a weekly report of things done. I've since modified this habit to include not only finished tasks, but things not done yet, progress blockers, and future todos. At the end of each week, send this data to your manager. Have the rest of your team do the same. For bonus points, send it to your team, so your coworkers will know what you did last week.

This weekly tracking will help you do two things: first, to maintain a high quality stream of communication to your manager and your coworkers, and second, to help you better track things that need to get done. If you're lucky (and you probably are), a fellow coworker will see that you have something on your todo list that he or she would like to do and offer to relieve you of this burden.

Tracking this data will also help you show how you can or can't take on that new project for time management reasons. Further, it's a huge help to have a document of accomplishments for career advancement.

To enhance visibility and communication with your manager, have periodic (weekly, etc) one-on-one meetings. Email is good for status reports, like above, but face to face contact is best for discussion. It's a two-way street, so use this time to make sure your visibility and perceived performance is what you expect it to be. Additionally, make requests of your manager if you have any. If you submit status reports that include things that are blocking you, your manager should ask how he or she can help remove these blocks.

Visibility to your customers and to your management should be handled differently. Your manager should be the funnel of information up (and down!) the management stack. Make sure he or she is performing this task. A good time to ask about this is in your one-on-one meetings.

Your customers are very important. Your work will directly, positively or negatively, affect their work. This is power and responsibility that can lead to resentment and anger if not handled properly. Creating interaction policies and informed expectations is critical to customer visibility and happiness.

Your interaction policy should explain how to contact your team, and be sure it's accessibly documented. I find bug systems to be great for tracking customer requests or problem reports, so require usage of this system for such things. Define escalation criteria, such as "if a critical problem is not responded to in X minutes, please email this pager address." You need to define "critical" in the previous statement, too. Don't use email alone for problem reporting, as it doesn't easily lend itself to historical tracking.

Set expectations! Planned changes that will cause outages should be announced ahead of time, at the start of the work, and at the end of the work. Any changes in planned change should be announced in a clear way. Announce known issues to anyone affected as soon as you are aware of the problem and include a contact (if not you), a time estimate on repair, and a description of the scope of the outage. Define an SLA for any service you support. An SLA is a common form of expectation declaration.

Additionally, don't waste someone's time. If you send an email about an upgrade to a specific component that only a subset of your customers use, then put a very clear header at the top, such as:

This is regarding an upgrade to the internal mysql servers. If you don't know what this is or don't use these systems, you can stop reading now.

Lastly, visibility and related communication does not have to be manually generated. Automation is sexy, and automatically informing customers about information important to them is a great way to avoid getting 15 tickets filed for the same problem. Have a web-based dashboard that includes a list of known problems with links to related trouble tickets, a list of upcoming planned changes, perhaps a "tip of the day," and any other useful information you see fit. There are plenty of content management systems available for free to help you get this dashboard site up and running in a very short time.

Healthy visibility is about good communication. Systems can go down and customers can still be happy, because you've involved them in the process by informing them and setting appropriate expectations. Your manager will be happy because he or she knows what everyone is working on without having to ask. Happy customers and happy managers mean happy and appreciated sysadmins, even when things are on fire.

December 18, 2008

Day 18 - Logging Tools

Logging is good, right? There are lots of logging libraries for various languages to help you with logging events and behavior in your software.

You can log to files, log to email, log to the network, log from the network, and log to stdout, etc. As a software developer, you get to make a choice on where, how, and why you log. What if you're a systems administrator? What if the software developer chose to write everything to stdout without identifying information such as timestamps?

If the tools you are using all have configurable logging, you may not benefit from this article. If you have something without configurable logging, you're still in luck. There are a few tools out there that will help you turn output into good logging.

For this task, you have at least two options: multilog or logger.


Logger is a helpful command-line interface for sending syslog messages. Since it uses syslog, you get all the beneficial configurability of syslog.conf. The logger tool comes standard on just about every unix-like OS (Linux, Solaris, FreeBSD, etc.), though features vary between implementations. Here's an example of logger usage:
% logger "hello world"
% tail -1 /var/log/messages
Dec 17 22:20:13 snack jls: hello world
The default tag (-t flag) is $USER, or "jls" for me. Most programs use the tag for the program name and pid. If you run logger with no message, it reads messages from stdin:
% echo "hurray logging" | logger -t myprocess
% tail -1 /var/log/messages
Dec 17 22:21:58 snack myprocess: hurray logging
If you're on FreeBSD, logger can log to a remote host using logger -h <hostname>, meaning you can log to a remote host without modifying syslogd.conf, if you desire.

As a more useful example, here's how to make a cronjob log its output to syslog:

0 3 * * * rsync -av /data filer:/backups/data | logger -t "backup-data"
All of rsync's output will show up in syslog logs:
Dec 17 22:31:48 snack backup-data: sending incremental file list
Dec 17 22:31:48 snack backup-data: data/
< other output trimmed >
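One caveat with the cron example above: the pipe only captures rsync's stdout, so error messages on stderr would be mailed by cron rather than logged. A sketch of the same crontab entry that captures both streams:

```shell
# Redirect stderr into stdout so rsync errors also reach syslog.
0 3 * * * rsync -av /data filer:/backups/data 2>&1 | logger -t "backup-data"
```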
Configurability: You can choose where your log messages go (by file or host) with /etc/syslog.conf. Log rotation is handled by logrotate (Linux) or newsyslog (FreeBSD), and most systems come with log rotation enabled by default.
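For example, you can give a job its own logfile by combining logger's -p flag with a syslog.conf rule. This is a sketch; the local1 facility and the filename are arbitrary choices for illustration:

```shell
# /etc/syslog.conf rule (facility local1 chosen arbitrarily):
#   local1.*                                  /var/log/backups.log
#
# crontab entry logging at that facility:
0 3 * * * rsync -av /data filer:/backups/data | logger -p local1.info -t "backup-data"
```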


The multilog tool is part of the daemontools suite of tools. Multilog's configuration is done completely on the command-line. With multilog, you can timestamp logs and even filter them to different logfiles. As a bonus, it supports logfile rotation.

Multilog logs to directories, not files. Each directory contains a few files: current, lock, and state. Other files that will show up here are rotated logfiles. A very simple example of multilog with timestamps follows:

% echo "Hello there" | multilog t ./log
% ls log
current  lock  state
% cat log/current
@400000004949f46513a31f5c Hello there
The wacky @4000....f5c is a timestamp in tai64n format. To convert tai64n to local, readable time, use the tai64nlocal command that comes with daemontools:
% tai64nlocal < log/current
2008-12-17 22:57:31.329457500 Hello there
The timestamps in the logs are very high precision, as you can see. And yes, tai64nlocal will convert to the correct timezone based on your environment (/etc/localtime or the TZ environment variable).

As a more complex example, we can have multilog log lines starting with 'ERROR' in them to another log directory in addition to the normal one. Here's how:

multilog t '-*' '+* ERROR*' ./errors '+*' ./log
This invocation timestamps each line; 't' must be the first config option if you want timestamps. Then come a few line selections: '-*' deselects everything, and '+* ERROR*' selects any line starting with ERROR. The '* ' before 'ERROR' is needed because timestamps are added before selection happens, so the leading '*' matches the timestamp. Awkward, but it is what it is.
% alias mylog="multilog t '-*' '+* ERROR*' ./errors '+*' ./log"
% echo "Hello world" | mylog
% echo "ERROR something is wrong" | mylog
% cat errors/current
@400000004949f71c36d3a30c ERROR something is wrong
% cat log/current 
@400000004949f71c362af374 Hello world
@400000004949f71c36d3a30c ERROR something is wrong
Once you understand multilog's configuration syntax, you can do some neat things with it, as shown above. Using multilog in the earlier cronjob example:
0 3 * * * rsync -av /data filer:/backups/data | multilog t /var/log/data-backups
And the output looks like this:
% head -2 /var/log/data-backups/current
@400000004949f9930274befc sending incremental file list
@400000004949f993038ece04 data/

# Let's make it readable:
% head -2 /var/log/data-backups/current | tai64nlocal
2008-12-17 23:19:37.041205500 sending incremental file list
2008-12-17 23:19:37.059690500 data/

One benefit of tai64n is that your log files sort correctly simply by being prefixed with the timestamp (no sort(1) call necessary), so if you have gigs and gigs of logs, you can use binary search to very quickly narrow down a small time range to investigate. Syslog's timestamp format does not sort properly, so you can't do this.
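You can check this lexical-sort property with plain coreutils: sort -c exits successfully (and silently) when its input is already ordered. Using the two sample lines from above:

```shell
# tai64n prefixes sort lexically, so a multilog file is already in time order.
printf '@400000004949f71c362af374 Hello world\n@400000004949f71c36d3a30c ERROR something is wrong\n' > log.txt
sort -c log.txt && echo "log is already in time order"
```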

Logging is super useful. The tools described here will help you bring sane logging with timestamps and log rotation to any program that only outputs to stdout.

December 16, 2008

Day 17 - Time Management

This post was contributed by Ben Rockwood. Thanks, Ben!

During the holiday season there seems to be a mad rush between Thanksgiving and Christmas. An odd force compels us to hurry things up and get organized or finish projects before the long Christmas break. The great and glorious pay off is a new January, fresh with possibility and reward. Out with the old, in with the new.

The new year offers something special and unique... perspective. Maybe you don't finish your projects prior to Christmas but things somehow feel different in the new year. New plans, new schedules, and a fresh perspective. So how can we get that perspective on a more regular basis?

Principles of Time Management

  1. Write everything down:

    Really, everything. Work or home, write it all down. If a thought crosses your mind, it should be recorded. The thought may be "get mail", or "see new Bond film", or "implement new backup solution". If it's not written down, you will think about it again, and again, and again. So get it out of your head.

    I encourage you to set aside 30 minutes to an hour for this purpose: go to some place relaxing, get a cup of coffee, and just let your mind flow. Things you always wanted to do as a kid. A book, comic, or movie you never really understood or only caught part of. A hobby you've always wanted to try. A skill you think would be fun and useful. A new language you've wanted to learn (programming or spoken).

    In this alone you'll feel a great sense of relief and comfort.

  2. Keep it all in a central place:

    I'll offer some systems below, but which system you use isn't as important as simply using it consistently. For sysadmins this may actually be a small collection of places, such as your company ticket system and a personal planner. If you simply use scraps of paper or a legal pad you'll inevitably lose it, so opt for a specific piece of software, organizer, or notebook. When your brain knows that all your projects and tasks are in a place it can easily reference, it will leave your conscious mind alone.

  3. Keep multiple lists

    You should not have just a single simple TODO list... life just isn't that linear. Rather, you should have a big "braindump" list of things to do, and then break that out into daily, weekly, and monthly lists, or whatever granularity you need.

    The sad fact is that in our minds we tend to carry a hodge-podge of TODO tasks, large and small alike. We constantly churn through this in our conscious mind, and eventually it becomes overwhelming. When you lay everything out, look at that list, ask yourself "What can I accomplish this month?", and then create that list, you start making life manageable.

    This is key. When you have everything written down, you can create smaller, more approachable sublists to execute on it and have a greater sense of confidence that you might actually do it.

  4. Create daily TODO lists:

    Each day you should have a TODO list. This is the end-result of your other lists, which can actually be directly executed on. Get mail, walk dog, buy milk, install backup agents on systems 1-14, upgrade customerX's MySQL instance, close at least 4 tickets. These tasks need to be very granular... do-able.

    When you have daily lists you get two big benefits. Firstly, you have a historical record of what you were doing day by day. What were you trying to get done on March 23rd? Now you can look back and find out. Secondly, you can "push" tasks from one day to the next. Don't have time to finish something today? Push it onto tomorrow's list now and move on to other tasks you can complete.

    This is something paper planners do very well, but is difficult to accomplish in software task managers.

  5. 'Clean the Garage' is a Project:

    The key to using software management tools, such as 'Things' or 'OmniFocus', is to think in projects. A "project" is any goal or objective that is not accomplished in a single task. As an example, replacing a lightbulb could be either one. If you have lightbulbs and you just need to swap one, it's a task. However, if you're out of lightbulbs and need to go to the store first, it is now a project consisting of the tasks "Buy lightbulbs" and "Replace bulb in hallway".

    I make special emphasis of this because if you don't think like this the software tools will be very difficult to manage. You'll just have growing piles of TODO's that seem unrelated and you'll spend more time digging through lists of tasks than accomplishing them.

  6. Think out the steps:

    It's very important not to think just about the end result you want, such as "Backup Oracle Database", but rather to think through its individual tasks and then lay them out. Even tasks that are fairly simple can seem overwhelmingly complex when your mind floods your consciousness with all the possible permutations and unclear decisions you will need to make. If you break it down, you can stay more relaxed, focused, and objective. You may even need to create sub-projects to evaluate your options, such as "Benchmark RMAN", "Evaluate BakBone Oracle Agent", etc.

Management Systems

  1. Franklin Covey Paper Planners

    Franklin Covey is a leading name in time management tools. If you haven't heard of them, think of the old "DayRunners" or other paper planners you've seen, but more flexible and customizable.

    When you get started with Franklin Covey you will need to select and purchase a binder, starter kit, and various types of filler pages. If you visit a store, an employee will walk you through it, and if you visit their website, there is a guide. Planners come in all sizes and styles to meet your needs directly. I personally prefer the smallest size, so that I can put it in a pocket.

    The advantages of a paper planner are that you have a permanent record of each day's work and schedule to archive, it's easy and fast to use, and you can take it everywhere you go. On the downside, it's more expensive than software. Expect a nice setup to run you about $80.

    If you are on a budget, you can emulate the same thing in any cheap binder. Many people use Moleskine notebooks for this purpose.

  2. OmniFocus

    OmniFocus for the Mac is the most popular time management software around these days. It's very powerful but can take some time to learn. Thankfully, there are some excellent tutorial videos, and the online help is useful.

    The software lets you easily gather new tasks in an inbox and then later sort them into categories and projects. One of the most useful concepts in OmniFocus is that of "context". Any particular place in which tasks can be done is a "context". For instance, you can only do laundry when you're at home, and you can only check the tape robots at the office. When you assign contexts to tasks, you can later look only at tasks you can actually accomplish right now. You might need to "Wash the dog" today, but you don't need to look at that task while you're in the office working.

    The key to OmniFocus is to use it as it was intended. If you don't take time to actually learn how the software is intended to be used you'll find it nifty for about a week and then start getting irritated or frustrated.

    One added bonus of OmniFocus is that if you have an iPhone you can buy OmniFocus for iPhone and sync it with your desktop, bringing the "everywhere you want to be" advantage of paper to OmniFocus.

  3. Things

    Things is also for the Mac and currently free. It adopts many of the concepts of OmniFocus but is not nearly as strict and rigid. Rather than structure, it relies on tagging tasks, which can allow you to organize more freely.

    I highly recommend that anyone considering the software route start with OmniFocus's free trial. Learn OmniFocus and then move to tools like Things. If you don't, and just try Things first, you'll immediately find it too free-flowing and unhelpful.

    There is also an iPhone version of Things; however, at last check it did not support syncing with the desktop.

Day 16 - Relax!

Always make time for relaxation or distraction. Systems administration comes with interrupts, strategy changes, management decisions, and beeping pagers. All of these can lead to stress. Stress is bad and can lead to burnout.

There are plenty of ways to manage stress, and you can easily find volumes of books on the subject. Mostly, I think managing stress is making a context switch away to something you enjoy doing: exercise, reading, learning, TV, whatever. You had things you enjoyed doing before the pager started beeping at 4AM on Saturdays, right?

Managing stress doesn't just mean relaxing, it might mean setting expectations for your fellow coworkers. This means explaining (and having your manager back you), for instance, that the build server doesn't need a serious SLA that includes late nights and weekends.

When pondering what to publish tonight, I decided that I needed a break. Writing and researching these articles has taken away from my normal non-work activities and added some stress. With that said, I'm going to take some of my own advice, keep this short, and go play with my doggy and hang out with the wife.

See you all tomorrow!

Further reading:

  • Puppet FAQ - If you destress by learning things (like I sometimes do), check out Puppet.

December 15, 2008

Day 15 - Documentation

Documentation is like automation. Good documentation will save you and your coworkers time, effort, and mistakes. Bad documentation will frustrate, anger, and annoy. No documentation means you get more interruptions, and you spend time not making progress on other tasks.

There are many things to document: designs, changes, APIs, policies and procedures. Each document should focus on a specific audience. Sending a change notification to your own group could be terse: "I rebooted system3 because <reason>." Documenting an important procedure that your customers will be invoking should be well written and probably not terse.

Good documentation takes effort and thought. Documentation should be written for a known audience, so choose your audience before writing, and state the intended audience at the beginning of the document. If you're revising a document, make sure the intended audience will still benefit after your revisions.

Documentation is about content, audience and findability. If the content is wrong or out of date, then your documentation is hurting the situation. If you don't know your audience, you can't best help the people who most need your information. If your document can't be found, no one will know it exists.

The content of your document is very important. The assumptions of prior knowledge in your content must be molded around your chosen audience. For instance, don't use an acronym without defining it unless you are certain your audience will know what it is or how to find out what it is. Content isn't necessarily always text. Good documents include diagrams or links to other resources, where possible.

The medium (wiki, email, paper, etc) of your documentation should reflect the needs of that documentation. Don't document long-term information over email. A printed new-hire todo list is good to have on said new-hire's desk on the first day. A network design should be published (perhaps on your wiki) with details including the problem, the solution, and benefits.

If the medium is electronic, you need to consider findability. Findability means that any person needing a piece of information can find it. Findability is difficult to provide without good search facilities and/or an easily browsable structure. Books have indexes to enhance findability, and your documentation should, too. These days, a wiki is a reasonable choice for holding your documents, as wikis provide search and structure, which improve findability. If you roll out a major change, update your documentation and send an email to your audience (interested parties) indicating the change and where to learn about it.

You probably have a whole bucket of things that merit documentation. Prioritize them by what will gain you the most first. If you get multiple interrupts a week from customers asking a frequently asked question, documenting the answer and responding with "Look on the wiki for <foo>" is a good way to keep an interrupt short. Helping customers help themselves helps you. Of course, by "customer" I mean the people you, as a sysadmin, support. Even if they're other employees, they're still customers.

Documentation, like code, needs to be tested. Testing documentation means having someone from your intended audience read it and report their level of understanding. If they didn't understand the information you were expressing, then you need to revise and re-test. If a document is intended for your customers, don't have a fellow team member review it.

Further, documentation, like code, suffers bit rot if neglected. Unmaintained documentation means people reading it will be misinformed. Don't ignore your existing documentation; it's worth more to you updated than neglected.

Knowing that documentation is important means that you should prioritize the act of improving and creating documentation among your other duties and tasks. Such things take time and effort, so be sure to consider documentation when budgeting your own time on work.

Lastly, it's worth pointing out that some documentation can lead to automation. For example, a well-documented alert (or failure scenario) playbook often looks like a flowchart detailing operations to perform to debug and fix a problem. This kind of detail often lends itself to being transformed into a script. Once you have the script, you could have your alert system run the new script instead of paging you, or just use the script to automate collection of diagnostic information to help you debug the problem more quickly.

December 14, 2008

Day 14 - UDPcast

In the introduction of "The Practice of System and Network Administration," the authors point out a few critical things to do first. One of these is to "start every new host in a known state."

Having a single system image for each platform you support is a great way to save yourself time and mistakes. If you deploy Ubuntu on your servers, you can save time by installing Ubuntu on a single server, configuring some defaults (which you document somewhere or automate), and then taking a snapshot of the hard drive for cloning. Once you've done this, you have a very easy method for moving a host from an unknown state to a known state: clone the drive.

Automation depends highly on starting from a known state and ending in a known state. Some platforms give you a system for automated system installation, such as Solaris and JumpStart, or Red Hat and KickStart. These tools often perform all the normal tasks an installation would, which may take more time than you can budget for if you need to reimage machines often. Having a standard system image you push to every server (or set of servers) will help you automate both the installation and repair processes at the same time.

Building an imaging infrastructure yourself isn't the most trivial of tasks. Lots of moving parts, depending on the tools and hardware you have: dhcp, tftp, pxe, disk cloning, an image file server, etc. There are many disk imaging and cloning tools available to you. Some are stand-alone suites, some are single tools that can help you build an imaging infrastructure.

If you don't have the budget to invest in imaging tools, or can't find a project (commercial or open source) that suits your needs, you may have to build your own. For sending and receiving disk images, you should use UDPcast.

UDPcast is a tool that can use multicast to transfer data to multiple systems at once. High packet rates lead to dropped packets, which can lead to broken file transfers, so UDPcast compensates by periodically checking whether each connected client has received all packets, retransmitting any that were dropped. It supports unicast too, if you only have one client or can't support multicast between the sender and receiver.

The UDPcast project has good documentation and tools for building an imaging system under various situations. UDPcast itself consists of two programs, udp-sender and udp-receiver. All configuration is done with command-line flags. To show UDPcast in a simple example, on two different hosts, A and B, on the same LAN:

HostA % echo "hello" > /tmp/udptest
HostA % udp-sender --file=/tmp/udptest
Udp-sender 2007-12-28
Using mcast address
UDP sender for /tmp/udptest at on eth0 
Broadcasting control to
At this point, HostA is listening on the address listed above. When it says "broadcasting control," it means it broadcast a hello packet so that any already-active udp-receiver processes know a sender is available.
HostB % udp-receiver --file=/tmp/testoutput
Udp-receiver 2007-12-28
UDP receiver for /tmp/testoutput at on eth0
received message, cap=00000009
Connected as #0 to
Listening to multicast on
Press any key to start receiving data!
sending go signal
bytes=              9  (  0.00 Mbps)              9 
Transfer complete.

HostB % cat /tmp/testoutput
hello
Seems simple enough. If you wanted to use it for imaging, you would send a cloned disk image with udp-sender and write it directly to your target hard drive, for example with "udp-receiver --file=/dev/hda".

You might get better performance if you use compression with your transfer. Keep your disk image in a compressed format and then use udp-receiver's --pipe flag to decompress it. If you compress with gzip, 'udp-receiver --pipe "gzip -dc"' will decompress the stream before writing to disk. Personally, I've had better experience with lzop for compression: gzip beats lzop slightly in compression ratio, but lzop makes up for it with much faster decompression and less CPU usage.
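Putting the pieces together, a compressed imaging transfer might look like the following sketch. The image path and target device are hypothetical, and lzop's -dc flags mirror gzip's here; double-check the device name before writing to it:

```shell
# On the image server: serve a pre-compressed disk image.
udp-sender --file=/images/base-system.img.gz

# On each machine being imaged: decompress the stream while writing to disk.
udp-receiver --pipe "gzip -dc" --file=/dev/sda

# Or, with an lzop-compressed image, trade a little ratio for faster decompression:
udp-receiver --pipe "lzop -dc" --file=/dev/sda
```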

How can UDPcast fit into your imaging infrastructure?

If you're stuck rolling your own system, reimaging a machine might involve netbooting Linux and using udp-receiver to dump a disk image to /dev/sda. You'll need another server broadcasting these disk images on request; for that, udp-sender has a daemon mode. If daemon mode doesn't suit you, running it in a loop works too.
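The loop just mentioned can be as simple as this sketch (image path hypothetical):

```shell
#!/bin/sh
# Re-run udp-sender forever so each new batch of receivers gets served.
while true; do
  udp-sender --file=/images/base-system.img.gz
done
```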

Automation needs a starting point, and automated system installation is critical to saving yourself (or others) trips to the datacenter.

As a side note, UDPcast works for deploying Windows disk images too, not just Linux, Solaris, and other unix-like platforms.

December 13, 2008

Day 13 - Accessible Automation

Modern systems administration often involves saving yourself (and your company) time and money. If I had to list skills (of people) and features (of software) by priority, automation would be near the top.

This is usually why I get so mad at software that doesn't lend itself easily to automation. Day 6 pointed out some potential difficulty in automating Tripwire. I'm willing to forgive Tripwire since it's security software. I don't expect security guys to think about systems administration problems.

But what about something like Cacti? Cacti is a monitoring tool aimed at helping you track data and data trends on your systems. You can configure it to graph many things on many hosts; sounds like a systems administration tool, right? Monitoring sounds sysadmin-ey, and sysadmin implies automation. Therefore, I expect Cacti to fall happily into my family of other automated configurations. If I have a few hundred machines in various known configurations, can I easily put this data into Cacti and keep it up to date?

Cacti's main interface is a web interface, and such things do not easily lend themselves to automation.

Searching for 'cacti automation' will point you at Cacti's command-line scripts (add_device.php, for example). These scripts only support adding things, not modifying or deleting them. If you want to do that, you're almost on your own. Automation features started showing up in Cacti 0.8.6 after a feature request noting there was no way to mass-add devices. This request led to a few new functions and the scripts mentioned previously.

With that version and beyond, you can add devices, graphs, and other things, in an easily automated way. For all other interactions, you'll need to click your way through the web interface. Removing devices, etc... click click click. If you want to import or export graph, data, or host templates, you can do that using the web interface, but only one template at a time.

There is a file called "api_automation_tools.php" (along with the other api_xxxx.php scripts) in Cacti that sounds promising, but it has no documentation. Many of the functions are self-documenting by name or simplicity, but others are in great need of documentation. Reading over the code, it's not obvious to me how I can do automated maintenance of Cacti's devices, graphs, etc. Looking at Cacti's roadmap, I don't see 'improvements in automation' on it. Plugins are on the roadmap, but it's unclear whether plugins will help with automation. There is an existing plugin effort for Cacti, but none of the plugins appear to aid automation.

My guess is that the available automation is poorly documented or nonexistent for two reasons: first, the developers didn't build Cacti with systems administrator priorities in mind, and second, no one has made the feature request. The second point makes me wonder: is Cacti only used by people with a handful of systems to monitor? As I research Cacti, it feels more and more like a tool for people who have plenty of time to click a few zillion times keeping Cacti's information aligned with reality, not for people who want automation.

Automation should be accessible, and it should be documented. If I can't find Cacti's automation documentation (if it even exists), then it's not accessible automation. Yes, you could read the code and figure out exactly which functions you need to call to modify or remove a device, graph, or whatever. Or you could skip the code and modify the configuration directly in the storage system. Hacks produced with this method are troublesome and will not scale, and you will be lucky if they survive the upgrade to the next version. Further, reading the code so you can implement your own hacks is not accessible automation.

If an open source tool fits your needs but lacks automation, get involved in the community, if there is one. File bug reports and feature requests, or send patches if possible. If a commercial tool fits your needs but lacks automation, contact the vendor. Find out if they can implement what you need, and how long it might take. Don't get stuck with a tool that sucks away resources because the best maintenance interface is with the mouse and keyboard.

I'm not trying to pick on Cacti, specifically. Many software tools are simply not written with accessible automation or other important sysadmin features in mind. This is a very important feature to consider when looking for tools to solve a problem, because, as I've repeated previously, automation saves you time, effort, and errors.

December 12, 2008

Day 12 - Capistrano or Puppet?

I'm compelled to write about this subject today because I've received this question multiple times since sysadvent began.

Capistrano or Puppet? Both.

Puppet provides you with a way to specify a state your system should be in. Puppet's features will help you keep a machine in the same state. If someone hand-edits an apache config, you can have puppet automatically replace it with the correct one and reload apache, for example. Puppet runs on each of your servers.

Capistrano lets you describe what to do to a system or set of systems: Upload a file, run a program, restart a service, etc. Capistrano runs from your workstation and does work for you on remote systems.

So, why both?

Puppet needs a source of state information. If you always run on the bleeding edge of your configurations, you can feed the puppet master from the head revision of your revision control system. Bleeding edges tend to be bloody for a reason, though. You could instead feed puppet from a branch of your revision control, which avoids both the head revision and the need for Capistrano to do the feeding. Or you could deploy new state to puppet with Capistrano on a planned release schedule.

Puppet lets you specify that the 'httpd' package should be installed, and even what version. If you maintain your own package repository, you can control what version is installed (which you should). To upgrade the 'httpd' package, you could use Capistrano to upload new packages to your package repository and to deploy puppet manifests to keep 'httpd' automatically updated to whatever version you decide.
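As a sketch, the puppet side of this might look like the manifest below. The package name, version, and paths are illustrative, not taken from any real setup:

```puppet
# Illustrative manifest: pin httpd to a specific version from your own
# package repository, and keep the service running with the managed config.
package { "httpd":
  ensure => "2.2.3-11",
}

file { "/etc/httpd/conf/httpd.conf":
  source => "puppet:///apache/httpd.conf",
  notify => Service["httpd"],    # reload apache when the config changes
}

service { "httpd":
  ensure => running,
}
```

Bumping the `ensure` version in revision control and letting puppet converge is what replaces hand-run upgrades.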

As an example, here's how the state management with puppet and capistrano might look for your apache configuration:

  1. Modify httpd.conf in revision control, check it in.
  2. Use capistrano to push the new state to your puppet masters.
  3. Puppet will see the new state and apply necessary changes. [*]
[*] This will only occur automatically if you run puppet periodically (like through cron) rather than manually.
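The capistrano task for step 2 can be quite short. Here's a sketch; the hostnames, role name, and manifest path are hypothetical, and it assumes the puppet masters have a subversion working copy of your manifests:

```ruby
# Capfile sketch: push checked-in manifests out to the puppet masters.
role :puppetmaster, "puppet1.example.com", "puppet2.example.com"

task :push_manifests, :roles => :puppetmaster do
  # Update each master's working copy to the revision you want live.
  run "svn update /etc/puppet/manifests"
end
```

Reverting a bad change is then just `svn` revert/checkin plus `cap push_manifests` again.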

If the change was bad, you can revert the change in revision control and again use capistrano to push the new files.

You can use puppet and capistrano to do similar tasks, if you wish, but I find they are best suited to complement each other. Let puppet focus on automated state maintenance, and let capistrano help you deploy new packages and new configurations.

Further reading:

December 11, 2008

Day 11 - Home away from home

Logging in to a machine that isn't your own workstation can be scary. You are subject to the decisions made in someone else's configuration files that don't always align with your own configurations: different shell, different default shell configuration, different default editor, different editor configuration, etc.

This is a scary and unproductive place. Suddenly 'ls' output is colored, or vim uses a different indenting configuration, or worse, the default editor is not your favorite editor (which doesn't have to be vim). Dedication to mastering your basic tools over time has helped you create the One True Configuration for each tool; deviation from this configuration means a loss of productivity. You need to bring your home (directory) with you.

When your home directory doesn't magically appear through the miracles of network filesystems, you may need to fix the problem another way. One potential solution is to make sure that you copy all your configuration files (.vimrc, .zshrc, whatever) to every host you're going to login to. This doesn't scale. Further, it means everyone else has to repeat the same process for their own files.

The fix is to create a system which automatically keeps your home directory, on every machine, populated with your configurations. You can do this minimally with revision control and a cron job, but I prefer to add rsync to this process.

Step one is to make a place in your revision control system for people to create home directories. For example, declare that the path /trunk/home in your repository is where you should dump your home directory contents. This means if my username were 'jordan', then I'd check my '.vimrc' in as '/trunk/home/jordan/.vimrc' and should expect it to show up on any system I have access to.

Step two is to pick a server that has access to both revision control and other servers. Set up a cron job here that will check out and keep-updated your entire /trunk/home path. Run an rsync daemon here that exports this /trunk/home for other servers to update with. Set the rsync module name to 'homedirs' for readability.

Step three is to deploy a cron job on every necessary server that copies down all the obvious files from someserver::homedirs with rsync. You do have automation that lets you install a cron job on all of your servers, right? ;)
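Such a cron job can be a single line. A sketch, where the script path, schedule, and log location are all made up for illustration:

```shell
# /etc/cron.d/synchome sketch: pull home directories every 30 minutes.
*/30 * * * * root /usr/local/sbin/sync-homedirs.sh >> /var/log/sync-homedirs.log 2>&1
```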

Before you go and write the one line of rsync invocation that it would take to copy someserver::homedirs to /home, you should take care to note the potential security implications of doing this as root. If I have a file checked in called /home/jls/test/shadow, and on one of the servers I sneakily symlink /home/jls/test/ to /etc, and you run the rsync blindly as root, you just let me overwrite your /etc/shadow file (or something else evil). Malicious or accidental, doing a single rsync may not be the best solution.

The fix is to run the rsync as each user. You can get the list of users to copy down by running 'rsync someserver::homedirs' to get the list of directories, which should include your usernames. Check out the completed version of the sync home directories script.
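A minimal sketch of that per-user approach is below. The rsync listing format and module name are assumptions (the completed script linked above may differ), and this sketch echoes the command it would run rather than running it:

```shell
# 'rsync someserver::homedirs' prints one line per directory, roughly:
#   drwxr-xr-x  4096 2008/12/11 10:00:00 jordan
# The username is the last field; the "." entry for the module root is skipped.
listing="drwxr-xr-x 4096 2008/12/11 10:00:00 jordan
drwxr-xr-x 4096 2008/12/11 10:00:00 pete"

users=$(echo "$listing" | awk '$NF != "." { print $NF }')

for user in $users; do
  # su to each user so the rsync runs with that user's privileges,
  # defusing the symlink attack described above. (echo = dry run here)
  echo su - "$user" -c "rsync -a someserver::homedirs/$user/ /home/$user/"
done
```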

You should now be able to modify your home directory files in revision control and have them automatically propagate without your assistance.

Further reading:

The 'run rsync as the user' security idea from Pete Fritchman.

December 10, 2008

Day 10 - Config Generation

A few days ago we covered using a yaml file to label machines based on desired configuration. Sometimes part of this desired configuration includes using a config file that needs modification based on attributes of the machine it is running on: labels, hostname, ip, etc.

Using the same idea presented in Day 7, what can we do about generating configuration files? Your 'mysql-slave' label could cause your my.cnf (mysql's config file) to include settings that enable slaving off of a master mysql server. You could also use this machine:labels mapping to automatically generate monitoring configurations for whatever tool you use; nagios, cacti, etc.

The older ways of doing config generation included using tools like sed, m4, and others to modify a base configuration file inline, or writing a script with lots of print statements to generate your config. Both are poor choices compared to present-day technology: templating systems. Most (all?) major languages have templating systems: ruby, python, perl, C, etc. For the sake of providing an example, I'll limit today's coverage to ruby and ERB.

ERB is a ruby templating tool that supports conditionals, in-line code, in-line variable expansion, and other things you'll find in other systems. It gets bonus points because it comes standard with ruby installations. That one bonus means that most people (using ruby) will use ERB as their templating tool (Ruby on Rails does, for example), and this manifests itself in the form of good documentation and examples.
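For a first taste of ERB before the larger example: `<%= %>` expands a ruby expression in place, and `result(binding)` renders the template using your local variables. The names here are made up:

```ruby
require "erb"

# '<%= expr %>' expands an expression into the output text.
template = ERB.new("Hello, <%= name %>! You have <%= labels.size %> labels.")

name = "jordan"
labels = ["frontend", "memcache"]

# 'binding' hands the template our local variables.
puts template.result(binding)  # prints: Hello, jordan! You have 2 labels.
```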

Let's generate a sample nagios config using ruby, ERB and yaml. Before that, we'll need another yaml file to describe what checks are run for each label. After all, the 'frontend' label might include checks for process status, page fetch tests, etc, and we don't want a single 'check-frontend' check since mashing all checks into a single script can mask problems.

You can view the hostlabels.yaml and lablechecks.yaml to get an idea of the simple formatting. Using this data, we can see which hosts have the 'frontend' label and should be monitored using the 'check-apache' and 'check-frontend-page-test' checks.

The ruby code and ERB are about 70 lines total, perhaps too much to write here, so here are the files:
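A compressed sketch of the same idea fits here, though. The yaml content is inlined and hypothetical (stand-ins for hostlabels.yaml and lablechecks.yaml), and the real configgen.rb surely differs:

```ruby
require "erb"
require "yaml"

# Inline stand-ins for the two yaml files described above.
hostlabels  = YAML.load("host1.example.com: [frontend]")
labelchecks = YAML.load("frontend: [check-http-getroot, check-https-getroot]")

# Invert host:labels into label:hosts for hostgroup generation.
label_hosts = Hash.new { |h, k| h[k] = [] }
hostlabels.each { |host, labels| labels.each { |l| label_hosts[l] << host } }

template = ERB.new(<<'TEMPLATE')
<% label_hosts.each do |label, hosts| %>
define hostgroup {
  hostgroup_name <%= label %>
  members <%= hosts.join(",") %>
}
<% labelchecks[label].each do |check| %>
define service {
  hostgroup_name <%= label %>
  service_description <%= label %>.<%= check %>
  check_command <%= check %>
}
<% end %>
<% end %>
TEMPLATE

puts template.result(binding)
```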

Running 'configgen.rb' with all the files above in the same directory produces this output. Here's a small piece of it:
define hostgroup {
  hostgroup_name frontend
}

define service {
  hostgroup_name frontend
  service_description frontend.check-http-getroot
  check_command check-http-getroot
}

define service {
  hostgroup_name frontend
  service_description frontend.check-https-certificate-age
  check_command check-https-certificate-age
}

define service {
  hostgroup_name frontend
  service_description frontend.check-https-getroot
  check_command check-https-getroot
}
I'm not totally certain this generates valid nagios configurations, but I did my best to make it close.

If you add a new 'frontend' server to hostlabels.yaml, you can regenerate the nagios config trivially and see that the 'frontend' hostgroup now contains a new host:

define hostgroup {
  hostgroup_name frontend
}
(There's also a new host {} block declaring the new host, not shown in this post.)

Automatically generating config files moves you into a whole new world of sysadmin zen. You can regenerate any configuration file if it is corrupt or lost. No domain knowledge is required to add a new host or label. Knowing the nagios (or other tools) config language is only required when modifying the config template, not the label or host definitions (a time/mistake saver). You could swap nagios out for another monitoring tool and still make sure the underlying concepts (frontend has http monitoring, etc) are consistent. Being able to automatically generate configs means that you probably have both the templates and the source data (our yaml files here) stored in revision control, which is a whole other best practice to focus on.

Further reading:

December 9, 2008

Day 9 - Lock File Practices

The script started with a simple, small idea. Some simple task like backing up a database or running rsync. You produce the script matching your requirements and throw it up in cron on some reasonable schedule.

Time passes, growth happens, and suddenly your server is croaking because 10 simultaneous rsyncs are happening. The script runtime is now longer than your interval. Being the smart person you are, you add some kind of synchronization to prevent multiple instances from running at once, and it might look like this:


lock="/tmp/cron_rsync.lock"
if [ -f "$lock" ] ; then
  echo "Lockfile exists, aborting."
  exit 1
fi

touch "$lock"
rsync ...
rm "$lock"
You have your cron job put the output of this script into a logfile so cron doesn't email you when the lockfile's stuck.

Looks good for now. A while later, you log in and need to do work that requires this script temporarily not run, so you disable the cron job and kill the running script. After you finish your work, you enable the cron job again.

Due to your luck, you killed the script while it was in the rsync process, which meant the 'rm $lock' never ran, which means your cron job isn't running now and is periodically updating your logfile with "Lockfile exists, aborting." It's easy to not watch logfiles, so you only notice this when something breaks that depends on your script. Realizing the edge case you forgot, you add handling for signals, just above your 'touch' statement:

trap "rm -f $lock; exit" INT TERM EXIT
Now normal termination and signal (safely rebooting, for example) will remove your lockfile. And there was once again peace among the land ...

... until a power outage causes your server to reboot, interrupting the rsync and leaving your lockfile around. If you're lucky, your lockfile is in /tmp and your platform happens to wipe /tmp on boot, clearing your lockfile. If you aren't lucky, you'll need to fix the bug (you should fix the bug anyway), but how?

The real fix means we'll have to reliably know whether or not a process is running. Recording the pid isn't totally reliable unless you check the pid's command arguments, and it doesn't survive some kinds of updates (name change, etc). A reliable way to do it with the least amount of change is to use flock(1) for lockfile tracking. The flock(1) tool uses the flock(2) interface to lock your file. Locks are released when the program holding the lock dies or unlocks it. A small update to our script will let us use flock instead:


lockfile="/tmp/cron_rsync.lock"
if [ -z "$flock" ] ; then
  lockopts="-w 0 $lockfile"
  exec env flock=1 flock $lockopts $0 "$@"
fi

rsync ...
This change allows us to keep all of the locking logic in one small part of the script, which is a benefit on its own. The trick here is that if '$flock' is not set, we exec flock with this script and its arguments. The '-w 0' argument tells flock to exit immediately if the lock is already held. This solution provides locking that expires when the shell script exits under any conditions (normal exit, signal, sigkill, power outage).

You could also use something like daemontools for this. If you use daemontools, you'd be better off making a service specific to this script. To have cron start your process only once and let it die, you can use 'svc -o /service/yourservice'.

Whatever solution you decide, it's important that all of your periodic scripts will continue running normally if they are interrupted.

Further reading:

  • flock(2) syscall is available on solaris, freebsd, linux, and probably other platforms
  • FreeBSD port of a different flock implementation: sysutils/flock
  • daemontools homepage

December 8, 2008

Day 8 - One-off graphs

You've got your nagios and cacti configurations all diligently tracking information for you. The data it monitors is available for viewing in graphs at your will, but what about the data it doesn't monitor?

You get a report that your apache servers are randomly serving errors and that the problem started last week. You don't have cacti watching this data, so you check the logs. The few-hundred-megs of logs are probably too much for you to eyeball, and seeing this data, now, in a graph, would help you out.

This report means two things: 1) Verify the report and fix the problem, and 2) add apache error servings to your monitoring system. What are your options for #1 and getting that data graphed now? Common sysadmin graph staples might include things like rrdtool, gnuplot, cacti, and others. Some of these tools are designed for recording and graphing data slowly over time and others require configuration changes or other complexities. It's possible you may be able to import historical data into your monitoring or trending system (cacti, etc), but if you don't know how, you have to graph it by yourself.

The path of least resistance is probably the best path when it comes to doing one-offs for visualizing or grabbing data. This means using the tool that requires the least amount of steps to go from data to graph with easy ability to iterate in case your graph output isn't helpful due to display decisions like scaling, etc.

Tools that help you do this include gnuplot, R, rrdtool, and Excel (or other spreadsheets that graph). These tools might help you manipulate the data before you graph it, but I'm going to assume that you've already got the data in some reasonable format (space, tab, comma delimited X and Y values).

We have apache access logs and want to see 500 error code trends. One approach might be to graph the ratio of 200 to 500 response codes (200 is OK, 500 is internal server error), or just to graph the 500s alone.

Making a useful graph depends much on how you aggregate your data. Do you aggregate on the hour, minute, 10 minute, second? You can go with your gut feeling, or you can take another approach. When gathering data, keep the data in the highest-precision format you have. In this case, we have data on the second precision.

# The '500' here matches the response code from apache logs
% egrep '" 500 [0-9]+ "' /b/access | sed -e 's/^.*\[//; s/\].*$//' | tee err500
01/Dec/2008:21:23:54 -0500
01/Dec/2008:21:24:08 -0500
01/Dec/2008:21:27:09 -0500
02/Dec/2008:05:23:34 -0500
08/Dec/2008:00:44:59 -0500
08/Dec/2008:00:45:55 -0500
< remainder of output cut >

# We count the instances per second by piping this output to 'uniq -c'
% uniq -c err500 > counts
If your graphing tool helps you make aggregation decisions such as "total 500s in an hour", then that's a help. Otherwise, you'll need to aggregate yourself before feeding your graph tool. RRDtool lets you do this by using the 'average' RRA and multiplying the value by the time interval. From what I can tell, gnuplot doesn't let you modify input data before graphing in a way that would let you aggregate values. R lets you do this easily as it's a statistics scripting language.

Data input for time series might require additional steps to convert the date into a value your graphing tool understands. Gnuplot accepts string time values and lets you specify the strptime(3) format string. RRDtool updates require times be specified in terms of unix epoch. R, from what I can tell, needs to be given numbers (like rrdtool). Excel hopefully has time parsing options, but I haven't tried.

Further, if your data doesn't have a point at every unit of your graph, you will end up with odd-looking results when graphing with lines. This sways in favor of rrdtool: gnuplot and other graphing tools often don't accept this lack of data gracefully, while RRDtool has explicit support for 'unknown' data points and is generally more oriented toward time-series plotting.

Output is important too. Your graph is less helpful if the axes aren't readable; this means you need readable dates on your time axis. Both gnuplot and rrdtool allow you to configure the X (time) axis labels and steps. It's difficult to do in R, from what I've tried and read.

For all the reasons above that help us see time-series data visualized most effortlessly, I would normally pick rrdtool. However, past experience has had me spend more time fighting rrdtool (read: pebcak) when I'm in a hurry, so I'll try gnuplot today. I fully confess in failing tonight trying to rush and re-learn rrdtool ;)

In gnuplot, you specify time as an input, from most any format, with:

set xdata time
set timefmt "%d/%b/%Y:%H:%M:%S"
If you want output to a file, use:
set terminal png size 580,300
set output "/tmp/apache.png"
The timefmt uses format strings specified by strptime(3). To graph the last year's worth of data in gnuplot (including the above xdata and timefmt lines):
set xtics rotate right
set xrange ["01/Jan/2008:00:00:00":"01/Dec/2008:00:00:00"]
set yrange [0:]
set format x "%Y/%b/%d"
plot "counts" using 2:1
Since 'uniq -c' (used above) outputs in the format 'count value' and our 'value' here is a timestamp for use with the x axis, we have to tell gnuplot to use the 2nd column for X and the count for Y.

This generates an ugly and not totally useful graph, because visualizing errors at per-second granularity leaves the data too sparse and noisy to show a trend.

Changing from seconds to another unit just requires some simple summation. Round each timestamp down to the desired interval (10 minutes, for example) and then do another summation ('uniq -c') on the output; any tool that supports strptime will help you, such as this small ruby script.
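The rounding step might look like this in ruby. The bucket size and the input line are illustrative, and the linked strptime.rb may do it differently:

```ruby
require "date"

BUCKET = 3600  # aggregate into one-hour buckets (seconds)

line  = "01/Dec/2008:21:23:54 -0500"  # an apache timestamp, as extracted above
epoch = DateTime.strptime(line, "%d/%b/%Y:%H:%M:%S %z").to_time.to_i

# Round down to the start of the bucket; running 'uniq -c' over this
# output then counts errors per hour instead of per second.
bucket = epoch - (epoch % BUCKET)
puts bucket
```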

If we sum errors by hour, the new graph gets a bit more useful, showing some days having very high error spikes compared to the average. As a note, since the output of strptime.rb is in unix epoch, I had to change the timefmt to "%s" and the xrange to '["1199145600":"1228089600"]'

Note: if you use gnuplot with its default output device (don't run 'set terminal png'), you get a GUI that lets you zoom in and out, which is pretty handy.

This is another case of having the right tools for the job. I've used statistics tools like SAS before, and while writing this article it felt like using such a tool for simple, fast visualizations and analyses would be easier. It's possible R, Octave, or other math/stats tools provide this. On the other hand, I've never once heard of a sysadmin colleague using statistical tools; is this indicative of a problem?

Further reading:

Visualization periodic table
Neat periodic table showing lots of different visualization methods with examples
R's homepage

December 7, 2008

Day 7 - Host vs Service

An important distinction when talking about servers and services is to treat them separately. Build automation in terms of configuration sets, not in terms of servers.

I tend to think of servers, machines, devices, whatever, as having labels or tags. Each label refers to a particular configuration set. Your automation tools should know what labels are on a host and only apply changes based on those labels. Modern administration tools such as Capistrano and Puppet are designed with this distinction in mind. Capistrano calls them 'roles' and puppet calls them 'classes,' but ultimately they're just some kind of name you apply to configuration or change.

Labels can be anything, but they should be meaningful. You might have "mysql-debug" and "mysql-production" service labels which both cause mysql to install but the debug version means you have heavier logging features enabled like full query logging, etc.

Configuring with labels instead of individual hosts helps you scale up. Managing configuration changes for a specific service lets you make one change to a service and have it deploy on any host having that service. Further, if you buy new server hardware, simply adding the appropriate labels to a host will let your automation system do the hard work of installation and configuration.

It helps you scale down, too. Here's a fictional example:

Quality control requested a production-like environment to test release candidates before pushing to production, but the budget will only allow you to use two server hosts for this. Production uses many more than this. If you automate based on labels instead of hosts, you could easily spread the required services across your two servers by simply labelling them, and automation would take care of the installation and configuration.

Assuming you have the development time or the tools available, you can use labels all over your automation:

  • Generate dns entries for all hosts with a specific label
  • Configure your monitoring system based on labels on a host
  • Configure firewall rules
  • Configure backup policy
  • etc...
A simple implementation of this would be a small yaml file with host:label mappings:
somehost.example.com:   # hostname is illustrative
  - mysql-debug
  - memcache
  - frontend
The deployment of these labels is up to you and the needs of your automation system. Keeping this in revision control gives you history with logs. Along with the other automation code and configuration you should be keeping in revision control, you might just be one step closer to being able to do more while working less.
With puppet
If you're using puppet, telling each host what its labels (aka puppet classes) are is easy: you need only write a script to help puppet know what classes to apply to a host (or node, in puppet's terms). This document will show you how in puppet.
With capistrano
You'll want some piece of code that turns your yaml file of host:label entries into 'role <label>, <host1>, <host2>, ...' statements. Something like this may do (ruby; I called our yaml file 'hostlabels.yaml'):
# roles.rb
require "yaml"

labelmap = Hash.new { |h, k| h[k] = [] }  # default hash value is an empty array
hosts = YAML.load(File.read("hostlabels.yaml"))
hosts.each do |host, labels|
  labels.each { |label| labelmap[label] << host }
end
labelmap.each do |label, hosts|
  role label, *hosts
end
And in your Capfile:
load "roles"   # use 'load' not 'require'

task :uptime, :roles => "frontend" do
  run "uptime"
end
And now 'cap uptime' will only hit servers listed in your yaml file as having the label 'frontend'. Cool.
I wanted to provide an example with cfengine, too, but I'm not familiar enough with the tool and my time ran out learning how to do it.

The yaml file example is not totally ideal, but it's a start if you have nothing. Evolutions beyond the simple host:services are the state configuration management tools where you store information about what is truth - such as for every machine that exists, mac addresses, IPs, service labels, hardware type, etc. It might include the class of "enterprise inventory management" suites by Oracle and others, too.