Nagios Performance Tuning: Early Lessons Learned, Lessons Shared: Part 2 · 31 October 2008, 17:29

One of the first questions to ask your customer when designing a Nagios implementation should be “how many devices and services will we be monitoring?” It is important to ask this early in the process, as the answer will affect how you design your Nagios-based system.

Another important question to ask is whether the system will be used to gather long-term (months/years) trending information. If it will be used as an ingest system for long-term trending data, then timing becomes important: making sure your service check intervals stay consistent over time is critical if the metrics the system gathers are to have value for your organization or customer.

Why? Isn’t 5 minutes always 5 minutes to Nagios? Imagine you have a metric collected every 5 minutes. If, over time, its scheduling constantly slips forward or backward from the original 5-minute intervals because of configuration decisions that cause Nagios to pause or ‘fall behind,’ you will end up with gaps in your metrics and with intervals that are hard to compare against each other. For example, if your original schedule for a metric is

0 5 10 15 20

and then, over the course of time, it slips to

8 13 18 24 28

now your hour-to-hour comparisons are skewed, and if the scheduling skew continues, you will eventually have gaps in your metrics.
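To make that concrete, here is a minimal sketch of how a fixed 5-minute interval is declared on a Nagios 3 service; the host, service, and command names are invented for illustration, and it assumes the default interval_length of 60 seconds in nagios.cfg:

```
# With interval_length=60 (the default), check_interval is effectively
# "minutes between checks", so check_interval 5 means every 5 minutes.
define service {
    use                   generic-service     ; assumes a generic-service template exists
    host_name             edge-router-01      ; hypothetical host
    service_description   Interface Utilization
    check_command         check_snmp_ifutil   ; hypothetical command, defined elsewhere
    check_interval        5                   ; normal checks every 5 interval_length units
    retry_interval        1                   ; recheck every minute while in a soft state
    max_check_attempts    3
}
```

Declaring the interval is the easy part; keeping the scheduler close to that interval under load is what the rest of this series is about.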

So, given the answers to those two questions, you have some early architecture decisions to make, and your first priority should be to spend generous amounts of time reading and understanding the comprehensive online Nagios documentation. It includes useful information on how to prepare for a larger installation, architecture patterns to follow when designing your systems, and very good coverage of the configuration parameters that will help keep your system executing checks quickly without becoming overwhelmed.
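As a hedged example of the kind of main configuration parameters that documentation covers, a nagios.cfg tuned for a larger installation often touches directives like the following. The values shown are illustrative starting points, not recommendations for your environment:

```
# nagios.cfg excerpt -- illustrative values only; test against your own workload.

# Skip some per-check overhead on large installations.
use_large_installation_tweaks=1

# Environment macros are expensive when thousands of checks fork every few minutes.
enable_environment_macros=0

# 0 = no limit on parallel checks; set a cap only if the poller becomes overwhelmed.
max_concurrent_checks=0

# How often (in seconds) queued check results are reaped, and the maximum
# time a single reaping pass may take.
check_result_reaper_frequency=10
max_check_result_reaper_time=30

# Let Nagios "smartly" spread and interleave checks so they do not all fire
# at the same moment.
service_inter_check_delay_method=s
service_interleave_factor=s
```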

If your Nagios system will be trending hundreds or thousands of devices with thousands of service checks, think about putting the Nagios poller and the Nagios reporting / graphing functions on different servers. If you can, dedicate a second server to trending and reporting and, if you have the luxury, use a third server just for notifications. The less I/O strain you put on your master poller, the more likely it is to meet whatever performance expectations you have.

Using a second server to offload trending and reporting also helps ensure that all performance data designated for trending actually makes it to whatever graphing package you use and, from there, into graphs.
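One hedged sketch of what that offload can look like, using Nagios 3’s built-in perfdata file support: the poller appends performance data to a local spool file, and a periodic command ships the file to the reporting host. The paths, template, and ship-perfdata command below are placeholders for illustration; your graphing package dictates the exact format it expects:

```
# nagios.cfg excerpt -- hypothetical paths and command name.

# Append each check's performance data to a spool file on the poller.
process_performance_data=1
service_perfdata_file=/var/spool/nagios/service-perfdata
service_perfdata_file_mode=a
service_perfdata_file_template=$TIMET$\t$HOSTNAME$\t$SERVICEDESC$\t$SERVICEPERFDATA$

# Every 60 seconds, run a command that rotates the spool file and pushes it
# to the reporting / graphing server.
service_perfdata_file_processing_interval=60
service_perfdata_file_processing_command=ship-service-perfdata

# commands.cfg excerpt -- ship-perfdata.sh is a stand-in for whatever transfer
# mechanism (rsync, scp, NSCA, etc.) fits your environment.
define command {
    command_name  ship-service-perfdata
    command_line  /usr/local/nagios/libexec/ship-perfdata.sh /var/spool/nagios/service-perfdata reporting-host.example.com
}
```

The key point is that the master poller only appends to a local file; the heavier parsing and RRD updates happen on the reporting server.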

On the other hand, if your system will only be used for fault management (not trending), you can use less expensive hardware and will not necessarily need the expense or complexity of a multi-server setup. The same goes for cases in which you are monitoring only a few hundred services on 50-100 servers.

Nagios does a lot of fork()ing when it runs service checks, so your Nagios master poller should have generous RAM and at least two CPUs. I have not come up with sizing formulae yet, nor have I found a sizing calculator for Nagios, but when I find one or work out general rules of thumb I will post them.

Your reporting / trending server will experience high levels of disk I/O, so a generous amount of RAM and SCSI disks in RAID 1+0 (or 6, or 0+1) are highly recommended.

Also, if you can avoid it, do NOT use VMware or other BIOS-emulating virtual machine technology for your Nagios instances. Virtual machines generally cannot keep up with the processing demands of a large Nagios installation, and some virtualization technologies have problems with time synchronization, which is a deal killer for Nagios’ scheduling.

The next blog entry in this series will focus on the Nagios master polling server.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert



Nagios Performance Tuning: Early Lessons Learned, Lessons Shared: Part I · 30 October 2008, 16:48

This is the start of a short series of articles on Nagios 3.x performance tuning. If you have not read the Nagios performance tuning guide, please do so before reading this series. Everything I discuss in these articles was done after applying the well thought-out, useful tips contained in the online Nagios documentation.

My teammate and I just went through a round of tuning our pre-production instance of Nagios, gathering baseline performance data so our team can give management reasonable capacity estimates for how many services and hosts we can monitor in our environment with Nagios.

Pre-production hardware is the HP DL185 (two of these servers in use):

Nagios configuration:

Nagios poller:

Nagios report server:

For initial testing and tuning we are polling ~250 hosts with a total of ~1800 checks; all checks are SNMP-based and all are scheduled at 5-minute intervals. Some are simple gets, some are summarizations of walks.
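For context, a single “get” style check in this kind of setup boils down to the standard check_snmp plugin wrapped in a command definition. The command name, community string, OID, and thresholds below are placeholders, not our production configuration:

```
# commands.cfg excerpt -- placeholder names and values.
define command {
    command_name  check_snmp_generic
    command_line  $USER1$/check_snmp -H $HOSTADDRESS$ -C $ARG1$ -o $ARG2$ -w $ARG3$ -c $ARG4$
}

# Referenced from a service as, for example:
#   check_command  check_snmp_generic!public!IF-MIB::ifInErrors.1!10!50
```

Checks that summarize walks follow the same command pattern but naturally do more SNMP work per execution.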

We are using PNP for graphing (NEB module mode, run via inetd); RRD updates happen on a second server dedicated to reporting and visualization.

At the beginning of our tuning adventure we were seeing:

This would barely be OK if we were just doing fault management, but we want to send all perfdata not only to PNP but also to a large time-series warehouse database another team maintains. That meant our 5-minute samples needed to stay close to the same intervals over time, because the warehouse stores raw samples for years and many other teams pull data from it for graphing, reports, and other analysis.

After two weeks of tuning we have reduced our check execution time (all 1800 checks!) to under 60 seconds, with an average scheduling skew of just 7 seconds at the end of 24 hours with the tuned configuration in place. All performance data is being graphed successfully by PNP as well. Our current configuration does this without knocking over our Nagios polling server, our PNP server, or the hosts we are polling, and we have room to poll many more services and hosts using the same two servers.

How did we get from start to finish? More science than art :p. I am usually a very intuitive developer, but this time my teammate and I found we had to take a more scientific approach, and it worked.

Stay tuned as I will be posting a series of short articles on my blog about Nagios performance tuning and scaling.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at if you are interested.

— Max Schubert



First Nagios 3 Enterprise Monitoring Book Review! · 28 September 2008, 13:50

Finally, someone has reviewed our book. Wee! (And a huge thanks to the reviewer!)

— Max Schubert



Nagios and JSON with statusjson.cgi: a winning combination! · 27 September 2008, 13:35

Yann JOUANIN’s new JSON output CGI for Nagios, called Nagios2JSON, significantly improves the utility of Nagios as glue for an organization’s monitoring infrastructure.

I have been using the JSON CGI to build a custom web interface to Nagios, to help convince some important people I work with that Nagios is more flexible and easier to mold without hacking than any other open source fault management framework available today. Working with JSON lets me focus on my front end and treat Nagios purely as a data source. Really, really cool!
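For example, a front end can pull current state with nothing more than an HTTP request to the CGI. The URL below assumes a standard /nagios/cgi-bin/ install protected by basic auth; check the Nagios2JSON documentation for the exact path and the query parameters it supports:

```
# Hypothetical example: fetch current status as JSON and hand it to your UI code.
curl -s -u nagiosadmin:secret \
  "http://nagios.example.com/nagios/cgi-bin/statusjson.cgi"
```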

Some of the limitations of the current implementation (the project is very young):

I really cannot say enough good things about this add-on, and I really hope Yann continues to work on it.

— Max Schubert



Nagios 3 Enterprise Network Monitoring Book · 26 July 2008, 16:22

I led a team of five very capable and talented authors to create this book for Syngress Publishing. The book discusses in depth how to integrate Nagios into a large organization and also provides a nice cookbook-style plugin chapter that focuses on SNMP.

If you have purchased this book and have any questions, please go to and there you will find a link to the book mailing list along with all of the source code from the book.

The authors who worked with me to create this book are:

A special thank you to each of them for their hard work on this project. You all rock!

— Max Schubert