Making Automation Work in Your IT Department
The establishment of an intelligent infrastructure that can anticipate and adapt to the fast rate of changing business and market demands has become the primary objective of the CIO, writes Jonathan Crane of IPsoft.
April 22, 2014
Jonathan Crane is Chief Commercial Officer at IPsoft. A communications industry leader for more than 35 years, he has held executive positions at corporations including MCI, Savvis, ROLM, Marcam Solutions and Lightstream.
To capture the business advantages brought forth by technology forces such as cloud architectures, advances in automation, ubiquitous mobility and the proliferation of information, today’s IT departments must undergo a radical transformation. Establishing an intelligent infrastructure that can anticipate and adapt to rapidly changing business and market demands has become the primary objective of the CIO.
The crucial measures of this new architecture will be reliable and predictable performance and near-100 percent availability, achieved, as always, alongside a reduction in operational costs. To free teams to focus on assimilating new and disruptive technologies, labor automation coupled with informed, intelligent labor will be the enabler of success in this new era.
Getting started with automation can be a daunting endeavor, though, given the variety of tools on the market, each with its own pros and cons. Knowing where to even begin can be bewildering for many CIOs, IT managers and their teams, but two criteria offer a useful starting point.
Criteria #1: Process Integration
Most importantly, organizations must evaluate process integration. Do you want an automation tool that can automate single, simple tasks, or do you need a solution capable of broader applications that address linked activities? More than likely, you’ll want the latter.
For instance, allocating resources in response to capacity shortages is an event that can (and should) be automated, but it is simply one link in a chain of actions. A number of other steps are involved: the organization will want to measure and report on application performance, identify the underlying cause of the shortage and obtain approvals for the response, all before the capacity shortage can be remedied. It’s important to remember that the end goal of automation is to go above and beyond humans’ manual capabilities; people simply aren’t able to efficiently manage end-to-end processes in one fell swoop. Selecting a solution that is capable of automating an entire process flow can deliver significantly greater value in time and cost savings.
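To make the distinction concrete, here is a minimal, illustrative Python sketch of that chain. It is not any vendor’s API, and every function, host name and threshold in it is a hypothetical placeholder; the point is simply that the single automated task (allocating resources) sits inside a larger flow of measurement, diagnosis and approval.

```python
# A minimal sketch of automating a full process flow rather than a single task.
# All function names, hosts and thresholds are hypothetical placeholders.

def measure_utilization(host: str) -> float:
    """Stand-in for a monitoring query; returns utilization as a percentage."""
    return 92.0  # hypothetical reading

def diagnose(host: str, utilization: float) -> str:
    """Stand-in for root-cause analysis of the capacity shortage."""
    return "sustained load from nightly batch job"

def request_approval(host: str, cause: str) -> bool:
    """Stand-in for a change-approval step (ticket, email, chat, etc.)."""
    return True

def allocate_resources(host: str) -> None:
    """The single task most tools automate: adding capacity."""
    print(f"Provisioning additional capacity for {host}")

def remediate_capacity_shortage(host: str, threshold: float = 85.0) -> None:
    """The full flow: measure, report, diagnose, obtain approval, then act."""
    utilization = measure_utilization(host)
    if utilization < threshold:
        return  # nothing to do
    cause = diagnose(host, utilization)
    print(f"{host}: {utilization}% utilization; suspected cause: {cause}")
    if request_approval(host, cause):
        allocate_resources(host)

remediate_capacity_shortage("app-server-01")
```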
Criteria #2: Process Flow
The second critical criterion to evaluate as you select an automation tool is the flow of IT processes. Do they consist of predictable, well-defined “if A, then B” tasks, like the resource provisioning example? Or are they constantly in flux, with sequences of actions differing from one day to the next? Determining which of these process frameworks characterizes your IT environment will then lead you down one of two paths: scripted automation or autonomics.
Scripted Automation: For Controlled Environments
Let’s say your IT department falls into the former category – it consists of pre-set processes that remain largely unchanged, like rebooting a server at 6:00 a.m. every day or signaling that capacity usage has exceeded a given threshold. For standard workflows and processes like these, the big-name IT vendors have long served this need and are well-established scripted automation vendors. They can even provide out-of-the-box functionality with scripted templates, ready-made for commonly occurring tasks.
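For illustration, below is a minimal Python sketch of the kind of fixed, “if A then B” check such a tool might run on a schedule (for example, from cron each morning). The path and threshold are assumptions for the example, not settings from any particular product.

```python
# A minimal scripted-automation sketch: a fixed threshold check run on a schedule.
# The path and threshold are illustrative assumptions only.
import shutil

CAPACITY_THRESHOLD = 0.80  # alert when 80 percent of the disk is used

def check_capacity(path: str = "/") -> None:
    usage = shutil.disk_usage(path)
    used_fraction = usage.used / usage.total
    if used_fraction > CAPACITY_THRESHOLD:
        # In a real deployment this would raise a ticket or page an engineer.
        print(f"ALERT: {path} is {used_fraction:.0%} full")
    else:
        print(f"OK: {path} is {used_fraction:.0%} full")

if __name__ == "__main__":
    check_capacity()
```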
While these automation brands can produce significant time and cost savings in repetitive and predictable IT environments, their reliance on scripted automation can become a hindrance when more complex tasks are introduced. In heterogeneous environments that cross processes and domains, engineers could spend hours or even days scripting a single, specialized automation execution, only for that process to change, requiring a modified script and putting the engineer back to square one.
Autonomics: For Complex Environments
For enterprises that operate in complex, ever-changing environments, scripting could turn into a full-time job, reversing the potential resource savings of automation. A better solution for these types of infrastructures is autonomics, which can essentially script itself by observing engineers’ day-to-day activity to emulate how they interpret and respond to service issues. It takes simple task execution a step further by adding a contextual element to automate the entire process based on environmental triggers. The more it “sees,” the more its knowledge base grows, and the more it is eventually able to reduce the workload of IT engineers.
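As a deliberately simplified illustration of that idea, the sketch below stores observed service, symptom and resolution details in a small knowledge base and escalates only when it encounters a context it has not seen before. It is conceptual only; real autonomic platforms are far more sophisticated, and all of the names here are hypothetical.

```python
# A conceptual sketch of the autonomic loop: record how engineers resolve
# incidents, then replay those resolutions when the same context recurs.
# All service names, symptoms and resolutions are hypothetical.

knowledge_base: dict[tuple[str, str], str] = {}

def observe_resolution(service: str, symptom: str, resolution: str) -> None:
    """Record what an engineer did for a given service/symptom pair."""
    knowledge_base[(service, symptom)] = resolution

def handle_incident(service: str, symptom: str) -> None:
    """Act autonomously if the context is known; otherwise escalate to a human."""
    resolution = knowledge_base.get((service, symptom))
    if resolution:
        print(f"Auto-resolving {symptom} on {service}: {resolution}")
    else:
        print(f"Escalating {symptom} on {service} to an engineer")

# An engineer handles the first occurrence; the system handles the next one.
handle_incident("billing-api", "queue backlog")             # escalates
observe_resolution("billing-api", "queue backlog", "restart consumer workers")
handle_incident("billing-api", "queue backlog")             # auto-resolves
```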
While the concept of autonomics may seem abstract, its potential savings are very real. In some cases, it can automate up to 80 percent of low-level, repetitive processes and can reduce mean-time-to-resolution from 40 minutes to just a few minutes. With the help of autonomics, large enterprises can cut their IT staff by up to one-half, redeploying that headcount to more strategic tasks that drive greater business value.
Scripted Automation vs. Autonomics: Their Common Ground
With today’s IT landscape maturing by the day, achieving operational efficiency is more than just a nice touch – it’s an absolute necessity. Deploying automation is critical to improving an organization’s bottom line, but ensuring its success means finding the right solution. And that means tapping into one that removes people-intensive administration from the equation – whether that’s removing humans from repeatable environments that lend themselves to scripted automation, or removing them from the scripting process itself. No matter what kind of solution you choose, there will always come a time when human intervention is required. The key is picking the right tool to minimize it as much as possible and, in turn, maximize operational – and commercial – success.