Semintelligent

Nagios Performance Tuning: Early Lessons Learned, Lessons Shared: Part I · 30 October 2008, 16:48

This is the start of a short series of articles on Nagios 3.x performance tuning. If you have not read the Nagios performance tuning guide, please do so before reading this series of articles. Everything I discuss in these articles was done after applying the well-thought-out, useful tips contained in the online Nagios documentation.

My teammate and I just went through a round of tuning our pre-production instance of Nagios, gathering baseline performance data so our team could give management reasonable capacity estimates for how many services and hosts we can monitor with Nagios in our environment.

Our pre-production hardware is the HP DL185 (two of these servers in use):

Nagios configuration:

Nagios poller:

Nagios report server:

For initial testing and tuning we are polling ~250 hosts with a total of ~1800 checks. All checks are SNMP-based and scheduled at 5-minute intervals; some are single gets, others are summarizations of walks (a sketch of one of these definitions follows).
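As a rough sketch of what such a check looks like (these are not our production objects; the host name, community string, and OID are placeholders), a 5-minute SNMP get in Nagios can be defined like this:

    # Hypothetical command and service definitions for a 5-minute SNMP get.
    # check_snmp is the stock Nagios plugin; the host, community, and OID
    # below are placeholders, not values from our environment.
    define command {
        command_name  check_snmp_get
        command_line  $USER1$/check_snmp -H $HOSTADDRESS$ -C public -o $ARG1$
    }

    define service {
        use                    generic-service
        host_name              router01
        service_description    ifInOctets
        check_command          check_snmp_get!.1.3.6.1.2.1.2.2.1.10.1
        normal_check_interval  5   ; 5 minutes with the default interval_length of 60
    }

The summarization checks are the same idea, with the command_line pointing at a script that walks an SNMP subtree and rolls the results up into a single status and perfdata string.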

We are using PNP for graphing (NEB module mode, with process_perfdata.pl run via inetd); RRD updates happen on a second server dedicated to reporting and visualization.
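For readers who have not used that mode, the general idea is that perfdata leaves the poller via PNP's NEB module and arrives at the report server, where inetd spawns process_perfdata.pl to update the RRD files. Wiring process_perfdata.pl into inetd looks roughly like the following; the service name, port, user, and paths are hypothetical, so check the PNP documentation for the exact invocation your version expects:

    # /etc/services on the report server -- hypothetical name and port
    pnp-perfdata    5668/tcp

    # /etc/inetd.conf -- spawn process_perfdata.pl per connection; the
    # user and paths here are illustrative, not our production values
    pnp-perfdata  stream  tcp  nowait  nagios  /usr/local/pnp/libexec/process_perfdata.pl  process_perfdata.pl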

At the beginning of our tuning adventure we were seeing:

This would barely have been acceptable if we were doing fault management alone, but we want to send all perfdata not only to PNP but also to a large time-series warehouse database another team maintains. That database stores raw samples for years, and many other teams pull data from it for graphing, reports, and other analysis, so our 5-minute samples needed to stay close to true 5-minute intervals over time.

After two weeks of tuning we have reduced our check execution time (all 1800 checks!) to < 60 seconds, with an average scheduling skew of just 7 seconds at the end of 24 hours with our tuned configuration in place. All performance data is successfully being graphed by PNP as well. Our current configuration does this without knocking over our Nagios polling server, our PNP server, or the hosts we are polling ... and we have room to poll many more services and hosts using the same two servers.
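If you want to gather the same kind of baseline on your own installation before and after tuning, the nagiostats utility that ships with Nagios reports average check latency and execution time. A minimal invocation, assuming the standard install paths, looks like:

    # Summary of check latency / execution time from the running daemon;
    # the paths are the stock install locations -- adjust for your system.
    /usr/local/nagios/bin/nagiostats -c /usr/local/nagios/etc/nagios.cfg

    # Or pull specific values in MRTG-friendly form, e.g. average active
    # service check latency and execution time:
    /usr/local/nagios/bin/nagiostats --mrtg --data=AVGACTSVCLAT,AVGACTSVCEXT \
        -c /usr/local/nagios/etc/nagios.cfg

Sampling those numbers periodically over a day gives a decent picture of how latency and execution time trend as you change settings.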

How did we get from start to finish? More science than art :p. I am usually a very intuitive developer, but this time my teammate and I found we had to take a more scientific approach ... and it worked.

Stay tuned as I will be posting a series of short articles on my blog about Nagios performance tuning and scaling.

Special thanks to Mike Fischer, my manager at Comcast, for allowing me to share my experiences at work online; special thanks to Ryan Richins, my talented teammate, for his hard work with me on our Nagios system. We are looking for another developer to join our team in the NoVA area; write me at maxs@webwizarddesign.com if you are interested.

— Max Schubert
