Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 4 - Scalable Performance Data Graphing · 17 January 2009, 19:28

In the last three parts of this short series I have discussed the techniques my teammate and I have used at work to tune our Nagios poller so that it completes all configured checks within a five minute interval. I will now discuss how we are storing and graphing this data in a way that will scale as our installation continues to grow (we are currently graphing data from over 5000 checks every 5 minutes).

The Nagios Plugin API and Performance Data sections of the online Nagios documentation discuss plugin development and the current performance data format specification in great detail; check out both if you are not familiar with Nagios plugin performance data.
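As a quick refresher, a plugin appends performance data after a pipe character in its output, in the form 'label'=value[UOM];[warn];[crit];[min];[max]. The short plugin below is only a sketch of that contract; the metric, thresholds, and hard-coded measurement are invented for illustration.

    /* users_check.c -- illustrative Nagios plugin that emits perfdata.
     * The metric ("users"), thresholds, and hard-coded measurement are
     * made up for this example; only the output format and exit codes
     * follow the Nagios plugin API. */
    #include <stdio.h>

    #define WARN 20
    #define CRIT 50

    int main(void)
    {
        int users = 12;      /* pretend we measured something here */
        const char *state;
        int code;

        if (users >= CRIT)      { state = "CRITICAL"; code = 2; }
        else if (users >= WARN) { state = "WARNING";  code = 1; }
        else                    { state = "OK";       code = 0; }

        /* Everything after the '|' is performance data, which Nagios
         * hands to its perfdata processing commands and NEB modules. */
        printf("USERS %s - %d users logged in|users=%d;%d;%d;0;\n",
               state, users, users, WARN, CRIT);
        return code;
    }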

While Nagios does not come out of the box with a performance data graphing framework, it should come as no surprise that there are a number of ways to send performance data from Nagios to external graphing systems:

* have Nagios run a performance data processing command after every host or service check
* have Nagios write performance data to a file that is periodically processed, either by Nagios itself or by an external program
* use a NEB (Nagios Event Broker) module to ship check results to another system as they occur

As with any other Nagios configuration choice, each method has benefits and drawbacks in terms of its implementation difficulty and its effect on Nagios performance and resource utilization on the host running Nagios.

Since we are focusing on scalable graphing, our goals are as follows:

* keep the disk I/O associated with RRD updates off the Nagios poller
* avoid anything that causes Nagios to pause or the check schedule to skew
* ingest every check's performance data within the five minute polling interval
* leave room to graph thousands of additional checks without a hardware upgrade

There are a number of graphing frameworks available for Nagios; in this article I will focus on PNP (PNP Is Not Perfparse). It is a mostly well-documented, flexible framework. For Nagios administrators who currently use both Cacti and Nagios, I highly recommend considering PNP as an alternative: it eliminates the need to administer device and service configurations in two places and avoids double-polling to get both fault management and trending data.

PNP consists of four discrete components:

* process_perfdata.pl, the script that parses performance data and hands it to RRDtool
* the RRD files themselves, which store the time-series data
* a PHP web front end that renders the graphs
* PHP template files that control how each check's metrics are graphed

PNP can integrate with Nagios in a number of ways:

1. synchronous mode, in which Nagios runs process_perfdata.pl immediately after every check
2. bulk mode, in which Nagios appends performance data to a file and periodically pauses to run process_perfdata.pl against that file
3. NEB module mode, in which the modpnpsender event broker module sends each check result over the network to a collector on a remote host
4. a hybrid of the above, which I describe later in this article

The first two methods above place the entire disk I/O burden associated with the RRD files on the Nagios poller; while this is perfectly fine for smaller installations, it is not good for larger ones. Additionally, methods one and two cause Nagios to pause while it runs the perfdata processing commands. In our environment this caused check scheduling skew at an unacceptable rate: with just 1800 services we were skewing by over a minute a day, i.e. a check initially scheduled to run at minute 01 of the hour was running at minute 02 of the hour by day two.
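For reference, here is a sketch of what the second (bulk mode) setup looks like in nagios.cfg; the synchronous variant instead sets service_perfdata_command so the processing command runs after every single check. The paths and interval below are illustrative, not our production settings, and PNP's own documentation also defines a specific service_perfdata_file_template, omitted here for brevity.

    # nagios.cfg -- illustrative sketch of PNP's bulk-mode perfdata settings
    process_performance_data=1

    # Nagios appends one record per check result to a spool file...
    service_perfdata_file=/usr/local/nagios/var/service-perfdata
    service_perfdata_file_mode=a

    # ...and pauses every 15 seconds to run the processing command
    # (a wrapper around process_perfdata.pl) against that file.
    service_perfdata_file_processing_interval=15
    service_perfdata_file_processing_command=process-service-perfdata-file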

modpnpsender.o is a NEB module that registers for service events; when a service event occurs within Nagios, modpnpsender opens a TCP connection to a remote server, sends an XML representation of the event, then closes the socket. Depending on where your reporting host sits in the network relative to your Nagios poller, this transaction takes no more than a second or two. We made a few minor modifications to the code (which we will release in the near future) to enhance the module's functionality.

Our first modification was to add fork() code to the NEB module. While the Nagios documentation says never to fork in a NEB module, without the fork we found that our service check schedule skewed almost as badly with the NEB module in place as it had when Nagios ran the perfdata processing commands directly via the options in nagios.cfg. This happens because Nagios waits for a NEB module to finish processing before it continues. With the fork() code in place the skew disappeared completely, and we have not seen any system instability from the additional fork() calls.
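To illustrate the idea, here is a sketch of the pattern inside a service-check callback. This is not our actual patch: send_xml_to_report_server() stands in for modpnpsender's real TCP/XML send routine, the callback is assumed to have been registered for NEBCALLBACK_SERVICE_CHECK_DATA in nebmodule_init(), and zombie reaping (a double fork or waitpid()) is omitted for brevity.

    /* Sketch of the fork() pattern inside a NEB service-check callback. */
    #include <unistd.h>
    #include <sys/types.h>

    #include "nebstructs.h"    /* nebstruct_service_check_data  */
    #include "nebcallbacks.h"  /* NEBCALLBACK_SERVICE_CHECK_DATA */
    #include "broker.h"        /* NEBTYPE_SERVICECHECK_PROCESSED */

    /* Stand-in for the module's real send routine. */
    void send_xml_to_report_server(nebstruct_service_check_data *cd);

    int handle_service_check_data(int callback_type, void *data)
    {
        nebstruct_service_check_data *cd =
            (nebstruct_service_check_data *)data;

        if (cd == NULL || cd->type != NEBTYPE_SERVICECHECK_PROCESSED)
            return 0;

        if (fork() == 0) {
            /* Child: the slow network transaction happens here, so the
             * Nagios core -- which blocks until the callback returns --
             * is not held up by it. */
            send_xml_to_report_server(cd);
            _exit(0);
        }

        /* Parent: return immediately so Nagios keeps scheduling checks. */
        return 0;
    }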

Our second modification was to make the XML buffer size in the modpnpsender.c source file a C #define parameter. The code had a hard-coded buffer size that was not large enough to accommodate the 4096 bytes of output that Nagios allows, so for checks with long perfdata output the buffer was being overrun, causing Nagios to segfault.
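A minimal sketch of the idea follows; the macro name, size, and XML wrapper are illustrative only, and the real module builds a much larger XML document.

    /* Demonstrates sizing the XML buffer above the 4096-byte plugin
     * output limit and using a bounded write, so an oversized payload
     * is truncated instead of overrunning the buffer. */
    #include <stdio.h>
    #include <string.h>

    #define XML_BUFSIZE 8192   /* replaces a hard-coded, too-small array */

    int main(void)
    {
        char perf_data[4096];
        char xml_buf[XML_BUFSIZE];

        /* Simulate worst-case perfdata output from a check. */
        memset(perf_data, 'x', sizeof(perf_data) - 1);
        perf_data[sizeof(perf_data) - 1] = '\0';

        snprintf(xml_buf, sizeof(xml_buf),
                 "<SERVICEPERFDATA>%s</SERVICEPERFDATA>", perf_data);
        printf("built %zu bytes of XML\n", strlen(xml_buf));
        return 0;
    }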

With our modpnpsender.c modifications in place, the third PNP architecture works much better than the first two: the NEB module opens a socket and sends XML to the report server, which reads the data from the socket via inetd and updates the RRD files associated with each metric. The problem with this method is that the report server effectively experiences a denial of service attack every polling cycle when thousands of performance data records are sent to it at once. In our case, thousands of perfdata records would arrive within two minutes, nearly knocking the server over for a few minutes each run.

My first attempt to ameliorate this problem was to have each instance sleep for a random number of seconds, between 15 and 60, before performing its RRD update processing. While this helped, it still left the kernel tracking thousands of processes at once and did not lower the impact of each check cycle enough to be satisfactory.

The solution I found to this was the fourth option for PNP data processing listed above, which is a hybrid of the methods the PNP developers outline in the online documentation.

So far this method has been much more effective in our environment than the others at keeping load averages and I/O wait times on the report server at reasonable levels. We are currently processing over 5000 checks on 1200 hosts in four minutes, with a load average of 2 or less on the report server and I/O wait CPU percentages of 20% or less; all perfdata is ingested into the RRD files within four minutes.

In addition to our PNP graphing, we also have a daemon running on the report server that reads the Nagios perfdata output and sends it to a corporate data warehouse for long term trending.

There it is: a scalable graphing architecture with Nagios and PNP that we believe will allow us to graph thousands more checks per five minute period than we do now, without having to upgrade hardware.

In the next article in this series I will discuss how to use PNP to monitor the performance of your Nagios poller and report server.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert


