Installing CentOS via Netboot in VMware · 4 May 2009, 18:28

A quick blurb as I always forget this:


No trailing slash on the domain and no leading or trailing slash on the URI path
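The example that originally accompanied this note did not survive; hypothetically, for an HTTP netinstall source the rule translates to something like this (values are illustrative, not from the original post):

```shell
# Hypothetical netinstall fields illustrating the slash rule:
#   Web site name:    mirror.centos.org     # no trailing slash
#   CentOS directory: centos/5/os/i386      # no leading or trailing slash
```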

— Max Schubert



Getting ruby 1.8.7 and newer to compile with readline support on Red Hat Enterprise Linux (RHEL4 and RHEL5) · 4 April 2009, 06:20

Paraphrased from

First, ensure you have the following packages installed:
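The package list itself was lost from the post; the packages readline support typically needs on RHEL4/RHEL5 are the following (an assumption, verify against your repositories):

```shell
# Assumed package set -- the original list did not survive; these are
# the usual readline build dependencies on RHEL.
sudo yum install readline readline-devel ncurses ncurses-devel
```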

Then make sure you remove the system ruby and ruby-devel packages, otherwise gems and other extensions might find the wrong version of ruby when they look for compile flags etc:
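The removal command was lost from the post; going by the surrounding text, it was presumably:

```shell
# Remove the distro ruby so gems and extensions don't pick up its
# compile flags (command reconstructed from the text, not original).
sudo yum remove ruby ruby-devel
```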

After unpacking the source for ruby, do the usual:

./configure --prefix=/usr/local
make all
sudo make install

Now do the following from the ruby source directory:

cd ext/readline
/usr/local/bin/ruby extconf.rb
sudo make install

To ensure that ruby now has readline support, run

/usr/local/bin/ruby -rreadline -e 1

If you get no output (which is the expected result), voila, readline support is now active.

— Max Schubert



Getting ruby gem mysql native extension to install on RHEL5 / CentOS 5 · 2 April 2009, 08:04


If you are on a 32-bit platform:

gem install mysql -- --with-mysql-config=/usr/bin/mysql_config --with-mysql-lib=/usr/lib/mysql

If you are on a 64-bit platform:

gem install mysql -- --with-mysql-config=/usr/bin/mysql_config --with-mysql-lib=/usr/lib64/mysql

— Max Schubert



Which rubies do not work with rubygems 1.3.x? · 2 April 2009, 07:28

Anything later than ruby 1.8.7. I spent a few hours learning that lesson:

Thanks to my coworker, Ryan Richins, for pointing me to a version that does work!

— Max Schubert



New perl module: Sys::Syslog::OO · 15 March 2009, 19:19

I just uploaded Sys::Syslog::OO to my Github account and CPAN.
Sys::Syslog::OO is a thin object-oriented wrapper around Sys::Syslog designed to make it easy to integrate Sys::Syslog in object-oriented projects.

Special thanks to my managers Mike Fischer and Jason Livingood at Comcast for allowing me to release my work to the OSS community.

— Max Schubert



New perl module: Schedule::Week · 13 March 2009, 16:07

I just uploaded Schedule::Week to my Github account and CPAN. Schedule::Week lets a developer easily create, manipulate, serialize, and deserialize a weekly hour-by-hour schedule.

Special thanks to my managers Mike Fischer and Jason Livingood at Comcast for allowing me to release my work to the OSS community.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 4.5 - Scalable Performance Data Graphing · 13 March 2009, 11:40

In my previous post on Scalable Performance Data Graphing with Nagios, I discussed how our team is using PNP, NPCD, and modpnpsender.o to send performance data from our polling server to our report server and then process it.

A week ago our report server hit its upper limit on the number of PNP performance data events it could process (8800 events every 5 minutes). We trend on a dozen or so poller and report server metrics, including the age of events in the NPCD queue; where the queue had previously drained completely for one minute out of every five, within a 24-hour period it grew to over 30,000 backlogged events and climbing. This backlog meant the RRD files (and consequently the PNP UI) were up to 15 minutes behind reality as well.

I started at the beginning of the week with tuning our NPCD threads and sleep time parameters. At first I tried starting more threads and sleeping less, but the server was so overwhelmed that this had the opposite effect; the queue grew.

Next I played with starting fewer threads and eventually found that 10 threads every 5 seconds was at least letting NPCD start to drain the queue. After 48 hours (!) the queue was down to 3k events, with no events in queue older than 119 seconds. Better, but not good enough for us to say the problem was fixed.

My colleague, Ryan Richins, remembered seeing documentation on the PNP site about integrating rrdcached with PNP. I vaguely remembered it too, so I gave it a second look. Ryan, meanwhile, downloaded the source for the latest stable version of RRDtool but found that rrdcached was not included. He re-read the PNP page and we eventually downloaded the latest trunk snapshot of RRDtool, knowing that it might not be production ready. This version did contain rrdcached.

The configure options we used are:

export LDFLAGS="-L/lib64 -lpango-1.0 -lpangocairo-1.0"
CPPFLAGS="-I/usr/include/cairo -I/usr/include/pango-1.0"
CPPFLAGS="$CPPFLAGS -I/usr/include/pango-1.0/pango"
export CPPFLAGS

./configure \
  --prefix=/path/to/base \
  --enable-perl \
  --disable-ruby \
  --disable-lua \
  --disable-python \
  --disable-tcl \
  --with-rrdcached

After installing rrdcached, we downloaded the latest version of PNP and looked at the rrdcached integration. We have made a lot of local changes to our copy of the perfdata processing script, so we back-ported the RRD integration code into it (just 6 lines or so of code!). We tested rrdcached on our integration environment with this /etc/default/rrdcached file:

OPTS="-w 60 -z 6 -f 120 -F -j /path/to/temp/dir -t 10"

On our integration environment, this performed quite well, so we rolled it out to production. In production, performance was not doing as well, so I changed the parameters to this, which seems so far to be the best combo I have found for us:

OPTS="-w 75 -z 1 -f 900 -F -j /path/to/journal/dir -t 8"
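For reference, those switches map to rrdcached's write/flush timers per the rrdcached manual:

```shell
# rrdcached flag cheat sheet (see rrdcached(1)):
#   -w 75   write values older than 75 seconds to disk
#   -z 1    jitter writes with up to 1 second of random delay
#   -f 900  force a flush of anything older than 900 seconds
#   -F      flush all pending updates on shutdown
#   -j DIR  keep a journal in DIR for crash recovery
#   -t 8    use 8 write threads
OPTS="-w 75 -z 1 -f 900 -F -j /path/to/journal/dir -t 8"
```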

IMPORTANT NOTE – when using rrdcached, if you need to restart it and npcd with a new set of parameters, do the following:

If you do not let all processes stop before restarting rrdcached, you will lose data.
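The step list itself was lost from this post; a plausible sequence consistent with the warning (init-script names assumed) is:

```shell
# Assumed restart sequence -- exact script names vary by install.
service npcd stop          # stop the daemon feeding events first
service rrdcached stop     # rrdcached flushes its journal on shutdown
# wait for every rrdcached process to actually exit before restarting,
# or pending updates in the journal will be lost
while pgrep -x rrdcached >/dev/null; do sleep 1; done
service rrdcached start    # start with the new parameters
service npcd start
```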

So after this change our queue came down to about 1.5k events, but we were still constantly processing events (no empty minute). My coworker and I started discussing using memcached to queue events, and then a light went on for us .. why not use a RAM disk for the NPCD queue? Since rrdcached keeps a journal file in case it crashes, the risk from a crash is losing at most 5 minutes of data, which was acceptable to us. By the way, this idea was not an original one; I had seen it on the Nagios / PNP lists before, I just had not considered it fully.

So, I changed the queue directory for npcd to be this (RHEL 5.x):


I also changed the path for the process_perfdata.log file to be


The size of the queue with 8800 events per 5 minutes is only 1-2 MB on average, so there is not much risk of clogging real memory on this system (we have about 500 MB free according to top and snmpd).
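The actual paths were lost from the post; on RHEL 5.x a tmpfs-backed directory such as /dev/shm is the obvious choice, so the change might look like this (paths are illustrative):

```shell
# Illustrative only -- the post's real paths did not survive.
# /dev/shm is the stock tmpfs mount on RHEL 5.x.
mkdir -p /dev/shm/npcd/spool
chown nagios:nagios /dev/shm/npcd/spool
# then point npcd's perfdata spool directory, and the
# process_perfdata.log path, at the tmpfs tree and restart npcd
```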

I then restarted npcd and watched our NPCD metrics. Voila! Back down to 4 minutes to process all PNP files and an empty minute to spare :), with a max event queue size of 1.2k.

One thing Ryan and I expected to see during all this, but did not, was a decrease in CPU I/O wait. The load on the server has decreased slightly and the server definitely feels more responsive, but we are still seeing a consistent 20-25% I/O wait on the CPU.

Maybe there is more we can tune? Ideas welcomed.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system and welcome to Shaofeng Yang, our new teammate!

— Max Schubert



PNP-aware version of Drraw released · 27 February 2009, 06:43

I have been looking for a while for a tool to let me and our users create custom web-based dashboards from PNP RRD files using a web interface.

On the PNP users mailing list someone mentioned a perl-based tool called Drraw that creates dashboards from RRD files. I installed it (very easy!) and it is quite a cool tool for this purpose: very full-featured and flexible. The tool was written by Christophe Kalt.

I saw a few things I did not like about it:

So I added code to the project that will read from the XML meta-data descriptors PNP creates along with RRD files so that when you go to create a new template/graph in Drraw you see the DS names as specified in the perfdata output from your Nagios plugins. I also cleaned the CSS up, renamed the CGI to index.cgi, and included a little Apache configuration snippet to make it easy to set up Drraw in Apache with the index.cgi file being used as the directory index.

Hope you find it useful; I am interested in integrating this functionality into PNP .. if you are interested as well and are familiar with PNP, perl, and PHP, write me; I welcome help.

Update – Christophe added PNP integration code to the project independently of my work; his release JUST came out today :). Feel free to use my variant, but I am discussing with him and will be talking with other developers about rolling my changes back into the main line and helping with the project as a developer.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 4 - Scalable Performance Data Graphing · 17 January 2009, 19:28

In the last three parts of this short series I have discussed the techniques my teammate and I have used at work to tune our Nagios poller so that it completes all configured checks within a five minute interval. I will now discuss how we are storing and graphing this data in a way that will scale as our installation continues to grow (we are currently graphing data from over 5000 checks every 5 minutes).

The Nagios Plugin API and Performance Data sections of the online Nagios documentation discuss plugin development and the current performance data format specifications in great detail; check out both if you are not familiar with Nagios plugin performance data.

While Nagios does not come out of the box with a performance data graphing framework, it should come as no surprise that there are a number of ways to send performance data from Nagios to external graphing systems:

As with any other configuration choice in Nagios, each method has benefits and drawbacks in terms of its implementation difficulty and its effect on Nagios performance and resource utilization on the host running Nagios.

Since we are focusing on scalable graphing, our goals are as follows:

There are a number of graphing frameworks available for Nagios; in this article I will focus on PNP (PNP Is Not Perfparse). It is a mostly well-documented, flexible framework. For Nagios administrators who currently use both Cacti and Nagios, I highly recommend considering PNP as an alternative: it eliminates the need to administer device and service configurations in two places and avoids double-polling to gather both fault management and trending data.

PNP consists of four discrete components:

PNP can integrate with Nagios in a number of ways:

The first two methods above place all the disk I/O burden associated with RRD files on the Nagios poller; while this is perfectly fine for smaller installations, it is not good for larger ones. Additionally, methods one and two cause Nagios to pause while it runs the perfdata processing commands. In our environment this caused check scheduling skew at an unacceptable rate: with just 1800 services we were skewing by over a minute a day, i.e. a check initially scheduled to run at minute 01 of the hour was running at minute 02 by day two.

modpnpsender.o is a NEB module that registers for service events; when a service event occurs within Nagios modpnpsender opens a TCP connection to a remote server, sends an XML representation of the event to the remote side, then closes the socket. This transaction does not take more than a second or two depending on where in the network your reporting host sits in comparison to your Nagios poller. We made a few minor modifications to the code (which we will release in the near future) to enhance the functionality of the NEB module.

Our first modification was to add fork() code to the NEB module. While the Nagios documentation says never to fork in a NEB module, without the fork we found that our service check schedule was skewing almost as significantly with the NEB module in place as it had been when calling the perfdata processing commands directly from Nagios via nagios.cfg. This occurred because Nagios waits for the NEB module to finish processing before it continues. With the fork() code in place the skew disappeared completely, and we have not seen any system instability due to the additional fork() calls.

Our second modification was to make the XML buffer size in the modpnpsender.c source a C #define, as the code had a hard-coded buffer too small to accommodate the 4096 bytes of output that Nagios allows; for checks with long perfdata output this buffer was being overrun, causing Nagios to segfault.

The third PNP architecture works much better than the first two with our modpnpsender.c modifications in place; the NEB module opens a socket and sends XML to the report server, which reads the data from the socket via inetd and updates the RRD files associated with each metric. The problem with this method is that the report server effectively experiences a denial of service attack every polling cycle if thousands of performance data records are sent to it at once. In our case, thousands of perfdata records would arrive within two minutes, nearly knocking the server over for a few minutes each run.

My first attempt to ameliorate this problem was to have each instance sleep for a random number of seconds, ranging from 15 to 60, before the RRD update processing occurred. While this helped, it still left the kernel tracking thousands of processes at once and did not lower the impact of each check cycle enough to be satisfactory.
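The random back-off described above amounts to a one-liner at the top of each processing instance; a sketch (the sleep itself is commented out so the sketch runs instantly):

```shell
# Sketch of the per-instance random back-off: pick a delay in
# [15, 60] seconds before doing the RRD updates.
delay=$(( (RANDOM % 46) + 15 ))
# sleep "$delay"   # back off, then run the RRD update processing
echo "back-off: ${delay}s"
```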

The solution I found to this was the fourth option for PNP data processing listed above, which is a hybrid of the methods the PNP developers outline in the online documentation:

So far this method is much more effective in our environment than the others are at keeping load averages and I/O wait times on the report server at reasonable levels. We are currently processing over 5000 checks on 1200 hosts in four minutes with a load average of 2 or less on the report server and I/O wait CPU percentages of 20% or less. All perfdata is ingested into RRD files within 4 minutes.

In addition to our PNP graphing, we also have a daemon running on the report server that reads the Nagios perfdata output and sends it to a corporate data warehouse for long term trending.

There it is; a scalable graphing architecture with Nagios and PNP that we believe will allow us to graph thousands more checks per five minute period than we are doing now without having to upgrade hardware.

In the next article in this series I will discuss how to use PNP to monitor the performance of your Nagios poller and report server.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 3 - Tuning The Poller · 25 December 2008, 18:48

The main Nagios configuration file, nagios.cfg, gives Nagios administrators a high degree of control over the tuning process. This flexibility can be intimidating to new Nagios users. In this article I will attempt to demystify some of the settings contained in this file and explain how they can be used to make the Nagios poller operate in a manner that meets your requirements.

All parameters in this article are discussed in the online Nagios documentation extensively; read that documentation before you read this article if you have not already.

The configuration settings that will most likely be of interest to you when tuning Nagios are:

If you are using host and service dependencies, then these parameters become important as well:

Host check performance tuning goes hand-in-hand with service performance tuning as failed service checks can trigger on-demand host checks. It is also important to properly define your host alive check, retry interval and max attempts parameters as well to keep host checks from dragging down poller performance.

Service check related parameters


This variable holds the number of seconds (or fractions of a second) Nagios should sleep between running service checks. In a large installation with solid hardware and tight timing goals, I recommend setting this parameter as low as possible, as any additional delay introduced between checks makes your overall check schedule skew more quickly over time.

Warning – if you do not configure Nagios with the --enable-nanosleep flag, you can only use positive integers for this parameter.

With nanosleep enabled, we were able to reduce this number to as low a fraction as .01 seconds (1/100th of a second). Anything lower will fail; if you use zero, Nagios will complain and exit.


This parameter indicates how many minutes Nagios has to execute all service checks in your configuration when it is restarted. If you care about performance data reaching your data store at regular intervals, set this value to your metrics interval; for example, if your performance data tool expects 5-minute samples, set this value to 5 to ensure that all service checks run within a 5-minute window. This can give administrators headaches when many checks are being run and some are time sensitive for trending while others (like robotic checks) run longer and are oriented more towards fault management than trending or deviation-from-trend alerting. One way to reduce the time you spend managing poller performance for trending is to set up two instances of Nagios (it is free to use, after all): put all time-sensitive checks on one instance and all longer-running checks on the other. This gives your team one instance that is tightly bounded by time and watched to ensure it hits its interval requirements, while the other can be looser on timing and take on larger numbers of longer-running checks.


Nagios is designed to be flexible and work on a wide variety of hardware. This is a good thing. The inter-check delay method parameter lets the administrator tell Nagios how aggressively to schedule service checks against a monitored host. It tunes the delay Nagios uses between scheduled checks on a host; the more delay, the less resource impact imposed on that host. There are four settings for this parameter:

While the configuration notes say never to use the n method in production, if you have hardware that can handle it, this will give you a huge performance boost. The hosts we monitor can all take 5-10 service checks at once without noticeable performance impact, and our Nagios poller can handle running over 1000 service checks at any given instant; changing from the s method to the n method lowered the time we take to complete all our checks by a significant amount. Our team uses Nagios to populate a time-series database as well as to do fault management.


This parameter determines how service checks are interleaved across hosts as the scheduler builds the initial service check queue. It has two settings:

Let's pretend we have 250 hosts and 1000 checks, 4 checks per host. If you set this parameter to 1, the scheduler will schedule all four checks on host 1, then all checks on host 2, then host 3, up to host 250 .. in essence checking all hosts serially. If you set it to 250, the scheduler will first schedule check 1 across all 250 hosts, then check 2 across all 250 hosts, then check 3, and finally check 4. When finishing checks quickly matters, we have found that setting the interleave value to the number of hosts in your configuration gives a performance boost and helps soften the per-host impact of setting the inter-check delay to none.


This parameter tells Nagios how many checks it may run in parallel. Leaving it at 0 (unlimited) minimizes the time Nagios takes to complete all checks but also maximizes the load on the Nagios host and the bursts of network traffic Nagios produces; setting max_concurrent_checks to 1 would force Nagios to execute just one check at a time. We use the 0 setting, as we again are tied to a 5-minute interval for all checks for trending purposes, we are fortunate enough to have decent hardware (dual dual-core CPUs, SCSI disks, a nice network), and we have enough network bandwidth that bursting every few minutes does not bother anyone.

Finally, if you are using service dependencies, be sure to set cached_service_check_horizon to a number of seconds equal to the smallest service check interval in use. When a service that depends on another service needs to be checked, Nagios first checks the depended-on service; if one service is depended on by many others, this setting keeps Nagios from re-executing the depended-on service check plugin more often than necessary.

Tuning is not easy but if you have the time and resources to invest in it the results can be fantastic. With the parameters discussed in this section, my team has been able to have our Nagios instance execute over 3500 checks across 900+ servers in about three minutes, still well within our five minute ‘hard’ ceiling.
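Pulling the service-check discussion above together, the nagios.cfg fragment might look like this. The values are the ones discussed; the parameter names are the standard nagios.cfg ones, supplied here as an assumption since the per-setting headings did not survive in this post:

```shell
# nagios.cfg fragment -- values from the discussion above; the
# parameter names are standard nagios.cfg settings (an assumption,
# since the original per-setting headings were lost).
sleep_time=0.01                      # fractional; needs --enable-nanosleep
max_service_check_spread=5           # all service checks within 5 minutes
service_inter_check_delay_method=n   # no delay; only on capable hardware
service_interleave_factor=250        # ~= number of hosts in the config
max_concurrent_checks=0              # unlimited parallel checks
cached_service_check_horizon=60      # ~= smallest service check interval
```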

Host check related parameters

A key consideration in optimizing host checks is ensuring that your host check method completes quickly and minimizing the number of times it verifies a host is up. An example in the Nagios documentation uses the ICMP ECHO (ping) check that ships with Nagios to illustrate this: it is more effective to have a check ping once and retry up to ten times on failure than to have a single check that pings ten times every time it runs.


As with the service check inter-check delay, this is the amount of time Nagios inserts between host checks when initially scheduling them after a restart. We use the n value to finish quickly; other values will reduce the network impact of initial host checks.


How long Nagios has to complete all host checks, in minutes. We set this to 5 as that is our maximum polling interval. The longer the interval, the lower the load on your Nagios poller.


How long Nagios should cache host check results. This comes in handy when using service dependencies, as Nagios will perform a host check each time a depended-on service fails. Caching host check results for the interval of a typical depended-on service will reduce the number of ‘on demand’ live host checks Nagios has to do and help keep your “all checks done” intervals low.
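The host-check settings discussed above can be sketched as a nagios.cfg fragment as well; as before, the parameter names are the standard nagios.cfg ones, assumed here because the per-setting headings were lost:

```shell
# nagios.cfg fragment for the host-check settings discussed; names
# are standard nagios.cfg parameters (an assumption -- the original
# headings did not survive).
host_inter_check_delay_method=n   # no delay between initial host checks
max_host_check_spread=5           # all host checks within 5 minutes
cached_host_check_horizon=15      # seconds to trust a cached host result
```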

In this article we have briefly covered some of the more important nagios.cfg performance tuning parameters. Learning to tune Nagios effectively is a process that takes time, patience, and experience with Nagios. If you are a long time Nagios administrator or have worked with a number of network/host/service fault and performance management tools, then the time you invest may well pay off many times over in the success of your Nagios deployment.

In my next article I will discuss the methods my team is using to ingest and graph Nagios performance data using PNP in a way that we hope will scale to large numbers of hosts and services.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert


