AppDynamics Adds Microservices Monitoring for Hyperscale

Scott Fulton III, Contributor

August 2, 2016

It’s a new style of application development that is having an impact on how modern, hyperscale data centers are managed: microservices.  It’s a design that enables individual functions of an application to scale up to meet high demand, as an alternative to replicating entire virtual machines.  Following the example of Netflix, more organizations are shifting new workloads to microservice design, including in production.  It’s more efficient, it uses less energy, and it may actually be a more sensible way to design applications in the end.

But it’s a bear, or something else that starts with “b,” to manage.  Now, an emerging player in the performance management space named AppDynamics is building out the latest version of its App iQ APM suite (with a small “i”), adding a new component designed to monitor the performance of applications that use microservices.

“Amazon has up to 150 microservices that are hit any time a page is built,” remarked Matt Chotin, AppDynamics’ director of product management, in an interview with Datacenter Knowledge.  “One of the big challenges that an organization is going to face is keeping track of the environment.  Not having to manually configure your monitoring system, and the ability to automatically build an app map, is huge.  Manually monitoring and tracking that would just be very difficult.”

More Than One Way to Slice a Loaf

Maybe.  Theoretically speaking, any application is the collection of all its constituent services.  If you slice an application into its services, while preserving the relationships between those services, you really shouldn’t be changing the application at all.  So monitoring the application (if you’re doing it right) should not change.

But there’s one critical difference:  When services scale up individually, the behavior of the application as a whole can change dramatically.

“Microservices are loosely coupled services that are maintained and deployed independently,” writes Donnie Berkholz, who directs research into development and DevOps practices for 451 Research, in a note to Datacenter Knowledge.  “They resemble service-oriented architecture (SOA) but in a lightweight and composable form, without all the XML and monolithic middleware.

“Our data shows that more companies are moving toward a dual agility- and risk-driven approach to IT, versus the classic penny-pinching view.  Microservices serve as a strategic investment to make that transition.”

AppDynamics’ approach to monitoring microservices-driven applications, in an environment that shares space with conventional applications (“monoliths”), involves what the company calls a business transaction.  That’s a fairly common phrase for something that is, in this context, quite specific:  a chain of events that represents a service being performed — for example, a request for data from a database, followed by a response that may contain the data or an error message.

“AppDynamics creates business transactions,” reads the company’s documentation, “by detecting incoming requests at an entry point and tracking the activity associated with the request at the originating tier and across distributed components in the application environment.  A default detection scheme exists for each type of framework associated with an entry point (such as a Web Service).”
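
To make that concrete, here is a minimal sketch, in Python, of the general idea behind a business transaction: a request detected at an entry point gets a correlation ID, and every downstream call it triggers is timed under that same ID so the whole chain can be reported as one unit.  The class, service names, and timings below are illustrative assumptions, not AppDynamics’ agent code.

```python
import time
import uuid

# Illustrative sketch only: one correlation ID ties together every step a
# request triggers, so the chain can be timed and reported as a single
# "business transaction."  This is not AppDynamics agent code.

class BusinessTransaction:
    def __init__(self, entry_point):
        self.txn_id = uuid.uuid4().hex      # ID carried across tiers/services
        self.entry_point = entry_point      # e.g. "GET /checkout"
        self.segments = []                  # (service name, duration in ms)

    def timed_call(self, service_name, fn, *args):
        start = time.time()
        result = fn(*args)                  # the downstream service call
        self.segments.append((service_name, (time.time() - start) * 1000.0))
        return result

    def report(self):
        total = sum(ms for _, ms in self.segments)
        print(f"[{self.txn_id[:8]}] {self.entry_point}: {total:.1f} ms total")
        for service, ms in self.segments:
            print(f"    {service}: {ms:.1f} ms")

# Hypothetical downstream microservices the entry point fans out to.
def inventory_lookup(sku):
    time.sleep(0.02)
    return {"sku": sku, "in_stock": True}

def payment_authorize(amount):
    time.sleep(0.05)
    return {"approved": True, "amount": amount}

# The entry point: the incoming request is detected here, and everything
# it touches is recorded against the same transaction.
txn = BusinessTransaction("GET /checkout")
txn.timed_call("inventory-service", inventory_lookup, "ABC-123")
txn.timed_call("payment-service", payment_authorize, 49.99)
txn.report()
```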

So the person or people charged with monitoring services with AppDynamics don’t actually need to understand the architecture of the application, or contact the person or people who do.  In monitoring the normal behavior of the application, App iQ can detect when and where API requests take place.  Once again, theoretically, these would be the same points for a monolithic app as for a microservices app.

But once App iQ has definitions in place for these business transactions, it can ascertain and present key performance indicators for them, both inside and outside of microservices contexts.

Coalescence

“The important thing that you focus on, when you look at the broad overview,” said AppDynamics’ Chotin, “is how these [transactions] are impacted as all those microservices are assembled together.  But, with microservices, you’re talking about teams.  You can’t necessarily tell fifty distinct teams, ‘Hey, team, everybody look at what’s happening with the overall user experience, and then don’t worry about your individual services.’  Because in the end, one of those services might be a culprit.”

Put another way, in a microservices context, an appropriate APM will need to be able to ascertain performance metrics for the transaction model as a whole, and for the individual components that make up each instance of that transaction as it traverses the organization’s data center assets.
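
As a rough sketch of that dual view (with made-up service names and latencies, not AppDynamics output), the same per-call timings can be rolled up two ways: per service, so each team sees its own component, and per transaction, so the organization sees the end-to-end experience.

```python
from collections import defaultdict
from statistics import median

# Assumed sample data: one record per hop of each transaction,
# as (transaction_id, service, latency_ms).
calls = [
    ("t1", "cart-service", 12), ("t1", "pricing-service", 48), ("t1", "auth-service", 5),
    ("t2", "cart-service", 11), ("t2", "pricing-service", 230), ("t2", "auth-service", 6),
]

per_service = defaultdict(list)   # each team's view of its own component
per_txn = defaultdict(float)      # the end-to-end (user experience) view

for txn_id, service, ms in calls:
    per_service[service].append(ms)
    per_txn[txn_id] += ms

for service, samples in sorted(per_service.items()):
    print(f"{service}: median {median(samples):.0f} ms over {len(samples)} calls")

for txn_id, total in sorted(per_txn.items()):
    print(f"transaction {txn_id}: {total:.0f} ms end to end")
```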

“To gain the agility and low risk that microservices promise, it must be possible to innovate quickly and fail forward to minimize the time to recovery,” writes 451’s Berkholz.  “Both of these require the ability to deploy quickly and independently of other microservices, and low risk is at odds with the manual tweaking that is common in many production settings, so automated pipelines and infrastructure are a must.

“If pieces are segmented into small chunks, and the overall user experience is not significantly impacted when one fails, this creates a dramatic benefit over monoliths that are either up or down.”

App iQ, Chotin told us, is constructed with an API that is already being put to use in environments with automation servers such as Jenkins and load-balancing proxies such as NGINX Plus.  That said, AppDynamics is angling to be the central repository for all application performance data.
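
One way such an API might be exercised from an automated pipeline, sketched here with placeholders: a build step queries a transaction’s recent error rate before promoting a deployment.  The URL, JSON fields, and threshold are assumptions for illustration, not AppDynamics’ actual REST interface.

```python
import json
import urllib.request

# Hypothetical deployment gate: ask the monitoring system for a business
# transaction's recent error rate and stop the rollout if it is too high.
# The URL and JSON fields are placeholders, not a real AppDynamics endpoint.
MONITORING_URL = "https://monitoring.example.com/api/transactions/checkout/health"

def error_rate_ok(threshold=0.01):
    with urllib.request.urlopen(MONITORING_URL, timeout=10) as resp:
        health = json.load(resp)
    return health.get("error_rate", 1.0) <= threshold

if __name__ == "__main__":
    if not error_rate_ok():
        raise SystemExit("Checkout transaction unhealthy; halting the deployment.")
    print("Checkout transaction healthy; proceeding with the deployment.")
```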

“The challenge of the enterprise is that you have legacy infrastructure and legacy environments, new monitoring environments — you have so many different things going on,” remarked Chotin.  “To have different monitoring tools for all of these pieces is difficult.  Not only is your environment complex, but your monitoring is complex.  We have a unified view of how monitoring should work, and that stems from the business transaction.”

[Corrections were made to inadvertent spelling errors in the first draft.]

