One of the earliest documented major computer network failures was an unexpected ARPANET outage on October 27, 1980. For several hours, the entire network was unusable. The subsequent investigation traced the problem to a hardware failure that flooded the network with data packets.
One particular Interface Message Processor (IMP), the early equivalent of what we know today as a router, ran out of control, consuming so many resources that no other IMP could communicate. Just as a denial-of-service attack can take a web server down today, the October 1980 event locked up ARPANET all across the country.
The event’s impact on users triggered a serious, systematic look at how to manage a network, its software, and its component devices. Since that landmark day, IT monitoring has changed and grown more complex. Keeping up with that growing complexity can be a difficult task.
What IT Monitoring Used to Be
As businesses began using personal computers and local area networks in the 1980s, the primary goal was to interconnect users with printers and storage systems. Using connectivity like ISDN lines, companies began linking their LANs and offices together around the nation and the world. At that time, software applications and the infrastructure that supported the user base were relatively simple. There was no cloud computing; everything ran on premises. So it was fairly easy to monitor applications and the hardware that handled the processing, storage, and printing.
IT people wrote scripts that would ping servers to confirm they were up and running, and monitoring tools evolved that helped automate such scripts. Simple load checks were developed that would send notices if traffic associated with various components (a server, a subnet, etc.) passed a preset threshold.
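The spirit of those early checks can be captured in a few lines of Python. This is a minimal sketch, not a reconstruction of any specific tool; the hostnames and the traffic threshold are hypothetical.

```python
import subprocess

def host_is_up(host: str, timeout_s: int = 2) -> bool:
    """Ping a host once; True if it answered (Linux-style ping flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        capture_output=True,
    )
    return result.returncode == 0

def over_threshold(traffic_bytes: int, threshold_bytes: int) -> bool:
    """Flag a component whose observed traffic exceeds a preset threshold."""
    return traffic_bytes > threshold_bytes

# A live check would look like: host_is_up("server1.example.com")
# Threshold check on illustrative numbers:
print(over_threshold(traffic_bytes=1_500_000, threshold_bytes=1_000_000))  # prints True
```

A cron job wrapping these two functions, plus an email on failure, was essentially the state of the art before dedicated monitoring protocols arrived.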
Then, in 1988, the Simple Network Management Protocol (SNMP) was published as an open standard in its Version 1 incarnation. It drastically simplified the chore of monitoring IT infrastructure by detecting when something ceased to be available. For the first time, it gave IT personnel a purpose-built tool for watching over network operation.
Modern IT Monitoring Challenges
Now, more than 30 years later, network engineers still build on the monitoring techniques created decades ago while growing their skills to meet the needs of modern IT monitoring. SNMP Version 3 secures and extends the monitoring that today’s far more sophisticated networks demand, covering everything the “old school” concept of network monitoring addressed but with greater functionality.
While SNMP is still valuable to IT teams, today’s more complex networks may also require new methodologies in order to provide a holistic view into the health and performance of networks and applications.
For example, in a software-defined network (SDN), routers no longer compute the forwarding tables that define how packets are routed through the network. That control function is delegated to an SDN controller, which defines the path packets follow. The controller software routinely remaps physical components (e.g., routers) to maintain optimum efficiency.
Monitoring an SDN with its inherent complexity and continuous change requires special skill sets in which network engineers focus more on rules and services than the traditional hardware focus on routers, switches, and firewalls. That’s just one example of how new methodologies (like REST/JSON, API calls, or WMI) are required to ensure the performance and uptime of increasingly complex networks.
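To make the REST/JSON idea concrete, here is a minimal sketch of polling a health endpoint using only the Python standard library. The endpoint URL and the shape of the JSON response are hypothetical, not any particular controller’s API.

```python
import json
from urllib.request import urlopen  # stdlib; no third-party HTTP client needed

def evaluate_health(payload: dict) -> bool:
    """Interpret a JSON health response (field names here are hypothetical)."""
    return payload.get("status") == "ok" and payload.get("error_rate", 1.0) < 0.01

def poll(url: str) -> bool:
    """Fetch a health endpoint and evaluate it; the URL is illustrative."""
    with urlopen(url, timeout=5) as resp:
        return evaluate_health(json.load(resp))

# Evaluating a sample payload without touching the network:
sample = json.loads('{"status": "ok", "error_rate": 0.002}')
print(evaluate_health(sample))  # prints True
```

The point is the shift in focus: instead of polling a device for an SNMP OID, the monitor interprets structured service-level data returned by an API.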
Monitoring applications is also more complex today, given that an application might be deployed on premises, in one or more clouds serving different geographic office locations, or in a combination of both. Monitoring such applications requires an entirely different approach than the “old school” monitoring techniques.
Yet another issue: serverless computing, in which the cloud provider manages the servers, requires special consideration. Think Microsoft Azure, Amazon AWS, IBM Cloud, and Google Cloud. Likewise, software microservices and containers add another dimension to the monitoring task.
The design of present-day networks has made network and infrastructure monitoring a multidisciplinary job. Network monitoring, application performance monitoring, and server monitoring are each distinct disciplines that require specific tools and skill sets.
Coping with the Complex Monitoring Needs of Modern IT Environments
It’s a given that monitoring processes and tools have grown complex in recent years and are likely to become even more so. With the rapid growth of machine learning and artificial intelligence, some experts predict those technologies will gradually begin to appear as “intelligent agents.” For now, though, in these early days of machine learning and AI, humans still need to monitor networks.
To make matters worse, monitoring is not on most IT professionals’ lists of exciting job responsibilities. Monitoring can be tedious and feel like a distraction from activities that create added value for the organization.
With your team probably spending more time and resources on monitoring today than in the past, outsourcing your monitoring frees up your team to focus on value-added activities. Outsourced NOCs have the expertise and the mandate to monitor your complex infrastructure full-time and keep up with trends and changes in technology.
iGLASS monitors your infrastructure and detects, fixes, and reports outages all day, every day, 24×7. We’re the go-to partner to intelligently monitor your network, servers, applications, and websites around the clock. Learn about the entire suite of services you could begin using almost immediately.
Start a conversation by giving us a call at 1-888-YOUR-NOC or sending us an email and tell us about your monitoring needs.