New Relic Extends Its APM Methods to Monitor Infrastructure

The same mechanism that monitors the performance of active applications on servers is now being leveraged to report on infrastructure performance. But what is New Relic measuring now, and why?

Scott Fulton III, Contributor

November 11, 2016

New Relic Infrastructure, part of its expanded Digital Intelligence Platform. [Courtesy New Relic]

As applications displace virtual machines as the center of the virtualization and cloud space, the tools that monitor application performance are inevitably evolving into infrastructure monitoring tools as well.  That means, as hyperscale architectures become the norm in data centers, the way customers manage workloads on data center platforms will look a lot more like application performance management.

This week, APM maker New Relic took the next steps in that direction, in a move that surprised no one following this space.  In a release scheduled for November 16, New Relic’s Digital Intelligence Platform (one screen from which is shown above, and yes, it actually did name the thing “DIP”) promises to extend visibility of the performance of data center workloads to what it describes as “the full stack.”

“We’ve put a lot of investment into adding a new product to our portfolio,” said Greg Unrein, New Relic’s vice president for performance analytics, in an interview with Data Center Knowledge, “in a way that naturally complements the data we’ve been collecting, and the problems we’ve been solving for customers with their end user performance monitoring and their application monitoring.  New Relic Infrastructure adds to that, allowing you to see deeply what’s going on with the infrastructure that those applications are running on.”

Metrics System

At issue is the validity of the metrics, especially when they’re cast from the presumed perspective of an end user.  Supposedly, if anyone is disappointed by poor performance, it’s the customer at the far end of the scale.  New Relic — and Dynatrace, AppDynamics, and others in the APM space — have all shifted their marketing approaches to a kind of “customer-centric” theme.

But of course, users of mobile apps and Web sites are not supposed to experience infrastructure, any more than users of electricity experience the power grid (except if they jam a finger or two in a light socket).  If New Relic’s aim is truly to report on the status of data center infrastructure the way the customer sees it, then what happens if the customer sees nothing?  Put another way, is anyone sure we’re using the right metrics to measure data center performance?

Unrein told the story of one client that publishes online media, demand for some of which tends to spike when shows first become available, or when their popularity trends upward on social media.  The client needs to be able to determine when to pre-scale availability of that programming, to meet anticipated demand.  The criterion important to this client at such times involves a kind of ratio between the assessed resource requirements of the program and the operating status of the servers hosting its feed.

A simple APM could tell you how well a live program is streaming, but only while it’s happening, after the underlying resources have already been provisioned.  Unrein said this client needed a way to put assessed performance data to use before adapting the virtual infrastructure to suit demand.
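
Neither Unrein nor New Relic shared implementation details, but the kind of ratio he describes can be sketched in a few lines of illustrative Python: compare the resources a program is expected to need against the headroom its hosting servers report, and pre-scale when the gap closes.  Every figure and name below is a hypothetical stand-in, not New Relic’s method.

```python
# Hypothetical sketch of a pre-scaling check: compare a program's assessed
# resource needs against the reported headroom of the servers hosting its feed.
# All figures and names here are illustrative, not New Relic's implementation.

from dataclasses import dataclass

@dataclass
class StreamForecast:
    expected_concurrent_viewers: int      # e.g., inferred from social-media trend signals
    cpu_cores_per_1k_viewers: float       # assessed from past streams
    bandwidth_gbps_per_1k_viewers: float

@dataclass
class HostPoolStatus:
    idle_cpu_cores: float                 # aggregate headroom across the pool
    idle_bandwidth_gbps: float

def prescale_factor(forecast: StreamForecast, pool: HostPoolStatus) -> float:
    """Return the ratio of anticipated demand to available capacity (>1.0 means scale up)."""
    needed_cores = forecast.expected_concurrent_viewers / 1000 * forecast.cpu_cores_per_1k_viewers
    needed_gbps = forecast.expected_concurrent_viewers / 1000 * forecast.bandwidth_gbps_per_1k_viewers
    # The tighter of the two resources drives the decision.
    return max(needed_cores / pool.idle_cpu_cores, needed_gbps / pool.idle_bandwidth_gbps)

if __name__ == "__main__":
    forecast = StreamForecast(expected_concurrent_viewers=250_000,
                              cpu_cores_per_1k_viewers=0.5,
                              bandwidth_gbps_per_1k_viewers=0.8)
    pool = HostPoolStatus(idle_cpu_cores=90.0, idle_bandwidth_gbps=120.0)
    ratio = prescale_factor(forecast, pool)
    if ratio > 1.0:
        print(f"Pre-scale: anticipated demand outstrips headroom by {ratio:.1f}x")
    else:
        print(f"Headroom sufficient (utilization ratio {ratio:.2f})")
```

The forecast side of that ratio is exactly the kind of assessed performance data Unrein says the client wants to put to work ahead of time.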

Part of what Unrein implied was this:  If we’re capable of measuring performance from the customer’s perspective, and we know from experience which parts of that performance are attributable to the design of the application, then by deduction the other factors must be functions of the infrastructure.  Simple logic, in his opinion, reveals what the right metrics should be at the right times.

Read Alert

As with other APM platforms, New Relic’s DIP operates through the use of agents injected into running software workloads.  In the case of new containerized (Docker) workloads, this means injecting container images with agent code as they’re being built, and before they’re deployed.
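
New Relic documents the exact injection steps for each runtime; the sketch below is only a hedged, general illustration of the idea, using the company’s Python APM agent, whose initialize and wsgi_application calls are part of its public API.  The configuration-file path and the trivial WSGI app are placeholders, and the infrastructure agent itself ships separately as a host-level package.

```python
# Illustrative only: baking New Relic agent code into a containerized Python
# workload's entry point so it is active from the moment the container starts.
# The config path and the WSGI app are placeholders.

import newrelic.agent

# The .ini file (copied into the image at build time) carries the license key
# and application name.
newrelic.agent.initialize("/app/newrelic.ini")

@newrelic.agent.wsgi_application()
def application(environ, start_response):
    """A trivial WSGI app; the decorator instruments every request it serves."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello\n"]
```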

The lingua franca of these agents is New Relic Alerts, which are events that trigger some level of notification.  That notification can be set to take place under pre-determined conditions, defined as rules.

Nate Heinrich, New Relic’s senior product manager for Alerts, admitted that with respect to its expanded platform, when a rule relates to a performance factor perceived by the customer, the meaning of “customer” has also been expanded.

“A customer can be an internal customer — someone consuming your service — or it can be an end user,” said Heinrich.  “Those measurements, and how they change over time, can have rules applied to them.  When they’re triggered, they can send events and Web hooks out to controlling systems, so you can automate some changes — whether it be auto-scaling or other types of configuration management changes to accommodate the event that occurred.”
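
New Relic publishes its own webhook payload formats, which the sketch below makes no attempt to reproduce.  It is only a minimal, hypothetical receiver showing the loop Heinrich describes: an alert condition fires, a webhook lands at a controlling system, and an automated change follows (stubbed here as a scale_up call).  The payload field names and the scale_up helper are assumptions, purely for illustration.

```python
# Minimal, hypothetical webhook receiver illustrating the pattern Heinrich
# describes: an alert condition fires, a webhook is posted, and a controlling
# system reacts (here, a stubbed auto-scaling call). The payload field names
# and scale_up() helper are illustrative assumptions, not New Relic's schema.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def scale_up(target: str) -> None:
    """Placeholder for whatever orchestration or config-management call you'd make."""
    print(f"Scaling up capacity for {target}")

class AlertWebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")

        # Hypothetical fields: a condition name and a severity attached to the event.
        condition = payload.get("condition_name", "unknown")
        severity = payload.get("severity", "info")

        if severity == "critical":
            scale_up(condition)

        self.send_response(204)  # acknowledge receipt, no body
        self.end_headers()

if __name__ == "__main__":
    # Point the alert channel's webhook URL at this listener.
    HTTPServer(("0.0.0.0", 8080), AlertWebhookHandler).serve_forever()
```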

DIP into DCIM?

Here is where the new platform must distinguish itself from a typical APM, which notifies operators of some kind of performance event by way of a visual dashboard — for instance, turning a gold happy face red and sad when page load times become slow.  If you’re going to be monitoring infrastructure, the results of that monitoring need to be applicable to the tools that manage the infrastructure.

Enabling an automated incident response may be a work in progress for DIP.  Heinrich said the company is experimenting with new ways to “surface” this information for DevOps — for example, opening up its NRDB data store to pattern recognition and other AI methods.  But these experiments are all internal, for the time being.

“Having that data in NRDB will allow customers to leverage being able to group, slice, and dice their incident lifecycle information in any way they want,” he said, “and be able to compare that with interesting events that occur.”
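
The grouping, slicing, and dicing Heinrich mentions happens through NRQL, the query language New Relic exposes over NRDB.  The sketch below assumes the Insights query endpoint and a per-account query key; the event type and attributes in the query are illustrative rather than a prescribed schema.

```python
# Hedged sketch of slicing NRDB data with NRQL via the Insights query API.
# The account ID and query key are placeholders, and the event type and
# attributes in the NRQL are illustrative, not a prescribed schema.

import json
import urllib.parse
import urllib.request

ACCOUNT_ID = "1234567"          # placeholder account
QUERY_KEY = "YOUR_QUERY_KEY"    # placeholder; query keys are issued per account

def run_nrql(nrql: str) -> dict:
    """Run an NRQL query against NRDB and return the decoded JSON result."""
    url = ("https://insights-api.newrelic.com/v1/accounts/"
           f"{ACCOUNT_ID}/query?nrql={urllib.parse.quote(nrql)}")
    request = urllib.request.Request(url, headers={"X-Query-Key": QUERY_KEY,
                                                   "Accept": "application/json"})
    with urllib.request.urlopen(request) as response:
        return json.load(response)

if __name__ == "__main__":
    # Example: how many slow page loads were seen in the past day, broken out by app?
    results = run_nrql("SELECT count(*) FROM PageView "
                       "WHERE duration > 2 SINCE 1 day ago FACET appName")
    print(json.dumps(results, indent=2))
```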

So DIP is not an infrastructure lifecycle manager, at least not yet, but it could soon become the nerve center for one.  However, New Relic told us it doesn’t intend to enter the DCIM space directly any time soon.

“There are many systems that do the orchestration, provisioning, and management of the application’s lifecycle from an infrastructure perspective,” explained New Relic’s Greg Unrein.  “We don’t care to be in that business, specifically.  We want to provide the data that is needed to make good decisions — automated or not — about how that system is behaving, from that end user and application performance perspective.”

New Relic has scheduled a live webinar on the subject of its latest addition, entitled “New Relic Infrastructure 101,” for Friday, Nov. 18, at 11 am PST (2 pm EST). Registration is required.

About the Author

Scott Fulton III

Contributor

Scott M. Fulton, III is a 39-year veteran technology journalist, author, analyst, and content strategist, the latter of which means he thought almost too carefully about the order in which those roles should appear. Decisions like these, he’ll tell you, should be data-driven. His work has appeared in The New Stack since 2014, and in various receptacles and bins since the 1980s.
