Nagios deep dive: retention.dat and modified_attributes · 23 November 2010, 07:26

When the Nagios core daemon (typically started by a script in /etc/init.d/) starts up, it follows a rather involved process to turn the configuration files, and the domain-specific language (DSL) contained within them, into in-memory objects – the 10,000-foot view of this process is:

modified_attributes tells Nagios which attributes of an object should be loaded into memory as Nagios reads object state from retention.dat; the code that uses this field (the retention-data code lives in the xdata/ directory of the source tree) uses bit flags – bitwise OR to set, bitwise AND to test – to record which attributes should be read into memory for an object and which should be ignored.

From include/common.h:

#define MODATTR_NONE                            0
#define MODATTR_CHECK_COMMAND                   512
#define MODATTR_NORMAL_CHECK_INTERVAL           1024
#define MODATTR_RETRY_CHECK_INTERVAL            2048
#define MODATTR_MAX_CHECK_ATTEMPTS              4096
#define MODATTR_CHECK_TIMEPERIOD                16384
#define MODATTR_CUSTOM_VARIABLE                 32768

The default value for modified_attributes is 0 (MODATTR_NONE): ignore all attributes from retention.dat that have counterpart constants in common.h.

When one of the listed fields changes as Nagios runs, Nagios ORs the constant that represents the field into modified_attributes; this lets the retention.dat parsing code know which attributes to read into memory when the object is parsed from retention.dat at startup.

A common use case showing this process: an operator disables notifications and disables active checks for a host or service through the web interface.

When these two actions are processed, Nagios core records that the notifications_enabled and active_checks_enabled fields were changed from their configured values by setting modified_attributes to 3, the result of code similar to this:

modified_attributes |= MODATTR_NOTIFICATIONS_ENABLED;
modified_attributes |= MODATTR_ACTIVE_CHECKS_ENABLED;

When Nagios is stopped, it serializes all objects from memory to disk – the modified_attributes attribute is one of the attributes written to disk.

Our team has taken the approach of writing out our own retention.dat files, based on object state stored in a database, as part of our current distributed Nagios implementation. Knowing how modified_attributes works fixed a long-standing bug in our code that caused attributes modified in-flight for hosts and services to be ignored when Nagios started. We hope this short article helps you avoid the same bug.

Special thanks to my managers Mike Fischer and Eric Scholz at Comcast (a great place to work as a developer!) for allowing me to share information learned while at work based on our use of open source software with the community – and special thanks to Ryan Richins for his work with me on uncovering the cause of this bug in our custom Nagios configuration distribution code.

— Max Schubert



Nagios Performance Tuning - use the RAM (but be careful!), Luke · 5 January 2010, 22:04

We found that migrating as many queues and files as we reasonably can within our Nagios architecture to RAM disks makes a huge difference in the performance of a large Nagios installation. We currently poll more than 15,000 services on more than 2,000 hosts in under 5 minutes, 24×7×365.

We use RHEL5; by default RHEL mounts /dev/shm as a RAM disk with 50% of physical RAM available to the partition.

Our use of RAM disks for temporary storage is controversial; a number of users on the Nagios users and developers lists have told me that disks with big caches should be as fast as RAM since files end up cached in memory, but our experience has been that nothing beats a RAM disk for a fast queue directory or file. Our experience has also taught us that when moving queues to RAM it is very important to implement supporting code that ensures important data is persisted across reboots or can easily be re-created after one.

Our experience is based on machines with SCSI disks in RAID 0, 5, and 1+0 configurations.

Queues and files we moved to RAM that sped up our Nagios architecture noticeably (by over 40% in total):

Nagios (nagios.cfg)

Moving log_file, object_cache_file, and status_file to RAM speeds up the CGIs in a larger environment. Moving temp_file, temp_path, check_result_path, and state_retention_file to RAM lowers Nagios' latency in a larger environment.
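As a sketch, these directives can all be pointed at a directory under the tmpfs mount; the /dev/shm/nagios paths below are hypothetical, not our actual layout:

```
# nagios.cfg (sketch; /dev/shm/nagios is a hypothetical path)
log_file=/dev/shm/nagios/nagios.log
object_cache_file=/dev/shm/nagios/objects.cache
status_file=/dev/shm/nagios/status.dat
temp_file=/dev/shm/nagios/nagios.tmp
temp_path=/dev/shm/nagios/tmp
check_result_path=/dev/shm/nagios/checkresults
state_retention_file=/dev/shm/nagios/retention.dat
```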

We have also taken the more radical step of moving all configuration files, as well as plugins, into RAM. We use ePN extensively, and every time Nagios runs an ePN plugin it checks whether the plugin has changed on disk; after moving plugins to RAM we noticed a speed-up.

IMPORTANT NOTE – Do not move everything to RAM without putting in custom, periodic scripts or other processes that back up important files from RAM to real disk so that if the host crashes they can be quickly recovered or re-created!
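A minimal sketch of such a safety net, assuming a hypothetical RAM-disk directory and backup location: a cron job that copies the RAM-disk contents to real disk every few minutes, with a matching copy in the other direction at boot time.

```
# crontab fragment (sketch; both paths are hypothetical)
*/5 * * * * nagios rsync -a --delete /dev/shm/nagios/ /var/nagios/ramdisk-backup/
```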

SNMPTT (snmptt.ini)

The spool file for checks is a good one to move to RAM and speeds up processing.
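Assuming you run snmptt in daemon mode, the setting involved is spool_directory in snmptt.ini; the path here is hypothetical:

```
# snmptt.ini (sketch)
spool_directory = /dev/shm/snmptt/
```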

PNP (npcd.conf and process_perfdata.conf)

The NPCD queue is another directory we moved to RAM and noticed a nice jump in processing time for NPCD.
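The directory in question is set by perfdata_spool_dir in the NPCD configuration file; again, the path is hypothetical:

```
# NPCD configuration (sketch)
perfdata_spool_dir = /dev/shm/perfdata/
```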


Moving any of the above queues to RAM disks will increase the overall speed of your Nagios architecture; the Nagios-specific configuration changes make a very noticeable difference, but at the price of some additional supporting code to ensure the robustness of critical data. We developed this list over a period of three to six months, so take your time if you decide to implement any of the changes mentioned in this article; also make sure you have Nagios trending metrics in place beforehand so you can see what kind of difference, if any, the above changes make to your installation.

Special thanks to my managers Eric Scholz, Mike Fischer, and Jason Livingood for allowing us to share our experiences and knowledge with the general public, and extra special thanks to my teammates Ryan Richins and Shaofeng Yang for their work with me in creating an ever-changing and improving Nagios architecture that is stable and gives us incredible performance.

We are still hiring :), contact me if you are interested in working on a terrific team doing interesting and innovative work.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 5 - Circular Dependency Checking · 6 August 2009, 12:36

NOTE – we are using Nagios 3.0.3, which does not have the very cool patch for the circular dependency checking algorithm recently introduced into the Nagios 3.1.x release tree.

Our startup times for our Nagios instances jumped dramatically today (more than 6x) due to some of our users adding large numbers of new services to their hosts that are associated with their hosts through the

service -> hostgroup -> host

relationship I have discussed often and that we make heavy use of. We always want our Nagios instances to finish starting within a 5-minute interval, as we push most of the performance data we get back from checks into a long-term trending data warehouse.

We also test every configuration release in an integration and test environment before doing a deployment.

With this in mind, we decided to try turning off circular dependency checking on startup for our production Nagios instances.

On one instance this reduced startup time from 763 (!) seconds to 16 seconds; on the other, startup time was reduced from 158 seconds to 6 seconds.

There you have it, a simple way to dramatically reduce startup times, but again, only do this if you test your configuration beforehand in an environment with circular dependency checking on.

— Max Schubert



Nagios patch withdrawal: only send recovery escalation notifications for services if a problem escalation notification was sent · 24 July 2009, 13:16

Well, I hate to say it, but mea culpa: I had to withdraw the first version of the patch from an earlier article (which I have hidden for now to make sure others do not download it) that was supposed to fix escalation recovery notification behavior.

My first attempt at the patch was overly naive; if you downloaded it, please remove it from your installation, as it will most likely not work for you. It does work for us, but our configuration is unusual and very different from how most people use Nagios.

I have a new version in place at my job and I will be releasing that version next week or the week after next. Why might you trust this new one after my poor first attempts?

My apologies if you downloaded and used the earlier patch; thankfully it will not corrupt data, it just does not do what I promised it would do.

The current version is working for us, and with typical configurations as well. I am not going to repeat the mistakes I made last time, as I know how frustrating it is to back out code.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 4.5 - Scalable Performance Data Graphing · 13 March 2009, 11:40

In my previous post on Scalable Performance Data Graphing with Nagios, I discussed how our team is using PNP, NPCD, and modpnpsender.o to send performance data from our polling server to our report server and then process it.

A week ago our report server hit its upper limit on the number of PNP performance data events it could process (8,800 events every 5 minutes). We trend a dozen or so poller and report server metrics, including the age of events in the NPCD queue; over a 24-hour period our queue went from having one completely empty minute in every 5 to more than 30,000 backlogged events, and growing. This backlog meant the RRD files (and consequently the PNP UI) were up to 15 minutes behind reality as well.

I started at the beginning of the week with tuning our NPCD threads and sleep time parameters. At first I tried starting more threads and sleeping less, but the server was so overwhelmed that this had the opposite effect; the queue grew.

Next I played with starting fewer threads and eventually found that 10 threads every 5 seconds at least let NPCD start to drain the queue. After 48 hours (!) the queue was down to 3,000 events, with no event in the queue older than 119 seconds. Better, but not good enough for us to say the problem was fixed.

My colleague, Ryan Richins, remembered seeing documentation on the PNP site about integrating rrdcached with PNP. I had vaguely remembered seeing it, so I gave it a second look. Ryan, meanwhile, downloaded the latest stable source but did not find that rrdcached was included. He then re-read the PNP page, and we eventually downloaded the latest trunk snapshot of RRD Tool, knowing that it might not be production ready. This version did contain rrdcached.

The configure options we used are:

export LDFLAGS="-L/lib64 -lpango-1.0 -lpangocairo-1.0"
CPPFLAGS="-I/usr/include/cairo -I/usr/include/pango-1.0"
CPPFLAGS="$CPPFLAGS -I/usr/include/cairo"
CPPFLAGS="$CPPFLAGS -I/usr/include/pango-1.0/pango"
export CPPFLAGS

./configure \
  --prefix=/path/to/base \
  --enable-perl \
  --disable-ruby \
  --disable-lua \
  --disable-python \
  --disable-tcl \
  --with-rrdcached

After installing rrdcached, we downloaded the latest version of PNP and looked at the rrdcached integration. We have made a lot of local changes to our perfdata processing script, so we back-ported the RRD integration code into it (just 6 lines or so of code!). We tested rrdcached in our integration environment with this /etc/default/rrdcached file:

OPTS="-w 60 -z 6 -f 120 -F -j /path/to/temp/dir -t 10"

On our integration environment, this performed quite well, so we rolled it out to production. In production, performance was not doing as well, so I changed the parameters to this, which seems so far to be the best combo I have found for us:

OPTS="-w 75 -z 1 -f 900 -F -j /path/to/journal/dir -t 8"
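For reference, here is that line again annotated with what each flag does, as I read the rrdcached(1) man page (the journal path remains a placeholder):

```
# /etc/default/rrdcached (annotated sketch)
#   -w 75   write dirty values out to the RRD files every 75 seconds
#   -z 1    jitter write scheduling by up to 1 second to avoid bursts
#   -f 900  scan the whole cache for old values every 900 seconds
#   -F      flush all pending updates on shutdown
#   -j DIR  journal directory used for crash recovery
#   -t 8    number of write threads
OPTS="-w 75 -z 1 -f 900 -F -j /path/to/journal/dir -t 8"
```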

IMPORTANT NOTE – when using rrdcached, if you need to restart it and npcd with a new set of parameters, stop npcd and any other processes writing to rrdcached first, and let them exit cleanly before restarting rrdcached.

If you do not let all processes stop before restarting rrdcached, you will lose data.

So after this change our queue came down to about 1,500 events, but we were still constantly processing events (no empty minute). My coworker and I started discussing using memcached to queue events, and then a light bulb went on for us .. why not use a RAM disk for the NPCD queue? Since rrdcached keeps a journal file in case it crashes, the risk from a crash is losing 5 minutes of data or less, which was acceptable to us. By the way, this was not an original idea; I had seen it on the Nagios / PNP lists before, I just had not considered it fully.

So, I changed the queue directory for npcd to be this (RHEL 5.x):


I also changed the path for the process_perfdata.log file to be


The size of the queue with 8,800 events per 5 minutes is only 1-2 MB on average, so there is not much risk of exhausting real memory on this system (we have about 500 MB free according to top and snmpd).

I then restarted npcd and watched our NPCD metrics. Voila! Back down to 4 minutes to process all PNP files and an empty minute to spare :), with a max event queue size of 1.2k.

Something Ryan and I expected to see during all this, but did not, was a decrease in CPU I/O wait. The load on the server has decreased slightly and the server definitely feels more responsive, but we still see a consistent 20-25% I/O wait on the CPU.

Maybe there is more we can tune? Ideas welcomed.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system and welcome to Shaofeng Yang, our new teammate!

— Max Schubert



PNP-aware version of Drraw released · 27 February 2009, 06:43

I have been looking for a while for a tool to let me and our users create custom web-based dashboards from PNP RRD files using a web interface.

On the PNP users mailing list someone mentioned a Perl-based tool for creating dashboards from RRD files called Drraw. I installed it (very easy!) and it is quite a cool tool for this purpose; very full-featured and flexible. The tool was written by Christophe Kalt.

I saw a few things I did not like about it:

So I added code to the project that reads the XML metadata descriptors PNP creates alongside RRD files, so that when you create a new template/graph in Drraw you see the DS names as specified in the perfdata output of your Nagios plugins. I also cleaned up the CSS, renamed the CGI to index.cgi, and included a small Apache configuration snippet that makes it easy to set Drraw up in Apache with index.cgi used as the directory index.

Hope you find it useful. I am interested in integrating this functionality into PNP; if you are too, and are familiar with PNP, Perl, and PHP, write me, I welcome help.

Update – Christophe added PNP integration code to the project independently of my work; his release JUST came out today :). Feel free to use my variant, but I am talking with him, and will be talking with other developers, about rolling my changes back into the main line and helping with the project as a developer.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared. Part 3 - Tuning The Poller · 25 December 2008, 18:48

The main Nagios configuration file, nagios.cfg, gives Nagios administrators a high degree of control over the tuning process. This flexibility can be intimidating to new Nagios users. In this article I will attempt to demystify some of the settings contained in this file and explain how they can be used to make the Nagios poller operate in a manner that meets your requirements.

All parameters in this article are discussed in the online Nagios documentation extensively; read that documentation before you read this article if you have not already.

The configuration settings that will most likely be of interest to you when tuning Nagios are:

If you are using host and service dependencies, then these parameters become important as well:

Host check performance tuning goes hand-in-hand with service check tuning, as failed service checks can trigger on-demand host checks. It is also important to define your host alive check, retry interval, and max attempts parameters properly to keep host checks from dragging down poller performance.

Service check related parameters


sleep_time

This variable holds the number of seconds (or partial seconds) Nagios should sleep between running service checks. In a large installation with solid hardware and tight performance goals, I recommend setting this parameter as low as possible, since any additional delay introduced between checks makes your overall check schedule skew more quickly over time.

Warning – if you do not configure Nagios with the --enable-nanosleep flag, you can only use positive integers for this parameter.

With nanosleep enabled, we were able to reduce this number to as low as .01 seconds (1/100th of a second). Anything lower will fail; if you use zero, Nagios will complain and exit.


max_service_check_spread

This parameter indicates how long Nagios has, in minutes, to execute all service checks in your configuration when it is restarted. If you care about performance data arriving in a performance data store at regular intervals, set this value to the interval you are using for metrics; for example, if your performance data tool expects 5-minute samples, set this value to 5 to ensure that all service checks run within a 5-minute window. This can give administrators headaches when many checks are being run and some are time-sensitive for trending while others are longer checks (like robotic checks) oriented more towards fault management than trending / deviation-from-trend alerting. One way to reduce the time you spend managing poller performance for trending is to set up two instances of Nagios (it is free to use, after all): put all time-sensitive checks on one instance and all longer-running checks on the other. This gives your team one instance that is tightly bounded by time and watched to ensure it hits its time interval requirements, while the other can be a little looser on timing and take on larger numbers of longer-running checks.


service_inter_check_delay_method

Nagios is designed to be flexible and work on a wide variety of hardware. This is a good thing. The inter-check delay method lets the administrator tell Nagios how aggressive to be when running the set of service checks scheduled against a managed host: it tunes the delay Nagios uses between scheduled checks on a host, and the more delay, the less resource impact imposed on that host. There are four settings for this parameter: n (none, no delay), d ("dumb," one second between checks), s (smart, Nagios calculates an even spread automatically), and x.xx (a fixed delay of x.xx seconds).

While the configuration notes say never to use the n method in production, if you have hardware that can handle it, this method will give you a huge performance boost. The hosts we monitor can all take 5-10 simultaneous service checks without noticeable negative performance impact, and our Nagios poller is able to run over 1,000 service checks at any given instant; changing from the s method to the n method lowered the time we took to complete all our checks by a significant amount. And our team is using Nagios to populate a time-series database as well as to do fault management.


service_interleave_factor

This parameter determines how many checks are scheduled initially on each host at a time as the scheduler creates the initial service check queue. It has two settings: s (smart, Nagios calculates the interleave factor automatically) or a positive integer x.

Let's pretend we have 250 hosts and 1,000 checks: 4 checks per host. If you set this parameter to 1, the scheduler will schedule all four checks on host 1, then all checks on host 2, then all checks on host 3, up to host 250 .. in essence checking all hosts serially. If you set it to 250, the scheduler will first schedule check 1 across all 250 hosts, then check 2 across all 250 hosts, then check 3, and finally check 4. In situations where finishing checks quickly matters, we have found that setting the interleave value to the number of hosts in your configuration gives a performance boost and helps offset the effect of setting the inter-check delay to none.


max_concurrent_checks

This parameter tells Nagios how many checks it is allowed to run at once. Leaving it at 0 (unlimited) minimizes the time Nagios takes to complete all checks but also maximizes the load on the Nagios host and the bursts of network traffic Nagios produces. Setting max_concurrent_checks to 1 would force Nagios to execute just one check at a time. We use the 0 setting because we are tied to a 5-minute interval for all checks for trending purposes, we are fortunate enough to have decent hardware (two dual-core CPUs, SCSI disks, a nice network), and we have enough network bandwidth that bursting every few minutes does not bother anyone.

Finally, if you are using service dependencies, be sure to set cached_service_check_horizon to a number of seconds equal to the smallest service check interval in use. When a service that depends on another service needs to be checked, Nagios will first check the depended-on service; if one service is depended on by many others, this setting keeps Nagios from re-executing the depended-on service's check plugin more often than it needs to.
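Pulling the service-check parameters together, here is a hypothetical nagios.cfg fragment in the spirit of the tuning described above; the values are illustrative, not recommendations:

```
# nagios.cfg (sketch; values illustrative)
sleep_time=0.05
max_service_check_spread=5
service_inter_check_delay_method=n
service_interleave_factor=250
max_concurrent_checks=0
cached_service_check_horizon=300
```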

Tuning is not easy but if you have the time and resources to invest in it the results can be fantastic. With the parameters discussed in this section, my team has been able to have our Nagios instance execute over 3500 checks across 900+ servers in about three minutes, still well within our five minute ‘hard’ ceiling.

Host check related parameters

A key consideration in optimizing host checks is to ensure that your host check method completes quickly and to minimize the number of times it verifies a host is up. An example given in the Nagios documentation uses the ICMP ECHO (ping) check that comes with Nagios to illustrate this, telling how it is more effective to have a check ping once and then repeat up to ten times on failure rather than have a single check that pings ten times each time it is run.


host_inter_check_delay_method

As with the service inter-check delay, this is the amount of time Nagios inserts between host checks when initially scheduling them after a restart. We use the n value to finish quickly; other values will reduce the network impact of the initial host checks.


max_host_check_spread

How long Nagios has, in minutes, to complete all host checks. We set this to 5 as that is our max polling interval. The longer the interval, the lower the load will be on your Nagios poller.


cached_host_check_horizon

How long Nagios should cache host check results. This comes in handy when using service dependencies, as Nagios will perform a host check each time a depended-on service fails. Caching host checks for the interval of a typical depended-on service will reduce the number of 'on demand' live host checks Nagios has to do and help keep your "all checks done" intervals low.
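A matching sketch for the host-check side (values again illustrative, not recommendations):

```
# nagios.cfg (sketch; values illustrative)
host_inter_check_delay_method=n
max_host_check_spread=5
cached_host_check_horizon=15
```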

In this article we have briefly covered some of the more important nagios.cfg performance tuning parameters. Learning to tune Nagios effectively is a process that takes time, patience, and experience with Nagios. If you are a long time Nagios administrator or have worked with a number of network/host/service fault and performance management tools, then the time you invest may well pay off many times over in the success of your Nagios deployment.

In my next article I will discuss the methods my team is using to ingest and graph Nagios performance data using PNP in a way that we hope will scale to large numbers of hosts and services.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared: Part 2 · 31 October 2008, 17:29

One of the first questions to ask your customer when designing a Nagios implementation should be “how many devices and services will we be monitoring?” It is important to ask this question early on in the process as the answers will affect how you design your Nagios-based system.

Another important question to ask is whether the system will be used to gather long-term (months/years) trending information or not. If it will be used as an ingest system for long-term trending information, then timing becomes important: making sure your service check intervals are consistent over time is critical if the metrics it gathers are to have value for your organization / customer.

Why? Isn't 5 minutes always 5 minutes to Nagios? Imagine you have a 5-minute metric. If, over time, that metric's scheduling slips steadily forward or backward from the original 5-minute intervals you schedule it for, because of configuration decisions that cause Nagios to pause or 'fall behind,' you will end up with gaps in metrics and intervals that are hard to compare against each other. For example, if your original schedule for a metric is

0 5 10 15 20

and then over the course of time it slips to

8 13 18 24 28

now your hour to hour comparisons are skewed, and if the scheduling skew continues, eventually you will have gaps in metrics.

So, given the above two questions:

You have some early architecture decisions to make, so your first priority should be to spend generous amounts of time reading and understanding the comprehensive online Nagios documentation. The Nagios documentation includes useful information on how to prepare for a larger installation, architecture patterns to follow when designing your systems, and very good information on Nagios configuration parameters that will help keep your systems executing checks quickly without becoming overwhelmed.

If your Nagios system will be trending hundreds or thousands of devices with thousands of service checks, you should think about having your Nagios poller and Nagios reporting / graphing functions exist on different servers. If you can, dedicate a second server to trending and notification and, if you have the luxury, use a third server just for notifications. The less I/O strain you put on your master poller, the more likely it will be able to hit whatever performance expectations you have.

Using a second server to offload trending and reporting also helps ensure that all performance data designated for trending actually make it to whatever graphing package you use and then to graphs.

On the other hand, if your system will only be used for fault management (not trending), you will be able to use less expensive hardware and will not necessarily require the expense or complexity of a multi-server setup. Same goes for cases in which you are monitoring a few hundred services on 50-100 servers.

Nagios does a lot of fork()ing when it runs service checks, so your Nagios master poller should have generous RAM and at least two CPUs. I have not come up with sizing formulae yet, nor have I found a sizing calculator for Nagios, but when I find one or work out general rules of thumb I will post them.

Your reporting / trending server will experience high levels of disk I/O activity, so a generous amount of RAM and SCSI disks in RAID 1+0 (or 6, 0+1) is highly recommended.

Also, if you can avoid it, do NOT use VMware or other BIOS-emulating virtual machine technology for your Nagios instances .. they generally will not be able to handle the fast processing a large Nagios installation requires, and some virtualization technologies have problems with time sync, which is a huge deal killer for Nagios' scheduling.

The next blog entry in this series will focus on the Nagios master polling server.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared: Part I · 30 October 2008, 16:48

This is the start of a short series of articles on Nagios 3.x performance tuning. If you have not read the Nagios performance tuning guide, please do so before reading this series of articles. Everything I discuss in these articles was done after applying the well thought-out, useful tips contained in the online Nagios documentation.

My teammate and I just went through a round of tuning our pre-production Nagios instance, gathering baseline performance information and data to let our team give management reasonable capacity estimates of how many services and hosts we can monitor in our environment with Nagios.

Pre-production hardware is HP DL185 (2x of these servers in use):

Nagios configuration:

Nagios poller:

Nagios report server:

For initial testing and tuning we are polling ~ 250 hosts with a total of ~ 1800 checks, all checks are SNMP, all scheduled at 5 minute intervals. Some are gets, some are summarizations of walks.

We are using PNP for graphing (NEB module mode, run via inetd), RRD updates happen on a second server dedicated to reporting and visualization.

At the beginning of our tuning adventure we were seeing:

This would barely be OK if we were just doing fault management (barely), but we want to send all perfdata not only to PNP but also to a large time-series warehouse database another team maintains. That meant our 5-minute samples needed to stay close to the same intervals over time, as the warehouse stores raw samples for years and many other teams pull data from it for graphing, reports, and other analysis.

After two weeks of tuning we have reduced our check execution time (all 1,800 checks!) to under 60 seconds, with an average scheduling skew of just 7 seconds at the end of 24 hours with our tuned configuration in place. All performance data is successfully being graphed by PNP as well. Our current configuration does this without knocking over our Nagios polling server, our PNP server, or the hosts we poll .. and we have room to poll many more services and hosts using the same two servers.

How did we get from start to finish? More science than art :p. I am usually a very intuitive developer but this time my teammate and I found we had to take a more scientific approach .. and it worked.

Stay tuned as I will be posting a series of short articles on my blog about Nagios performance tuning and scaling.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert



First Nagios 3 Enterprise Monitoring Book Review! · 28 September 2008, 13:50

Finally, someone reviewed our book. Wee! (And a huge thanks to the reviewer!)

— Max Schubert
