
HPE Aruba Networking Blogs

AIOps: Well, How Did We Get Here?

By Dave Logan, Vice President and CTO for the Americas, Aruba

So, I've been writing this blog for many weeks now. It started out as my thoughts on "Why do we desperately need AIOps?" and it's had a few false starts. It's not that I didn't know what to write; rather, I have too much to write about, and I realized I first need to paint the picture of how we got into the operations management situation we are in.

TL;DR – Enterprise IT needs to fully re-commit to a framework of "Experience Management," in which end-user digital experiences are characterized and assured by IT, even when IT does not own the end-to-end delivery or infrastructure for those experiences. Network-based observability and AIOps figure heavily into the solution for this need.

I've been in the IT industry for more than 30 years and have engaged with many enterprise organizations on how they deliver mission-critical applications across networks in a secure, scalable and effective manner. I've watched many technology trends sneak up on us: the move to SaaS applications and its cousin, cloud, and the acceptance of BYOD. IT executives now face a fairly interesting challenge: higher business expectations paired with less visibility into and control of the infrastructure and software.

In this first blog of a series, I’m simply going to reflect on these changes and challenges, and follow on with how they need to be addressed through new thinking, retooling and new processes.

Remember when IT was in control?

Enterprise IT used to control almost every aspect of the digital experience for every enterprise user.

Twenty years ago, the dominant enterprise IT architecture model involved server-side applications, which were either licensed from vendors or developed natively by enterprise IT and installed on enterprise-owned servers in enterprise-owned data centers. Applications were installed on the client endpoint, which was 90% likely to be a Windows-Intel architecture platform that was purchased, customized and supported by the IT organization. These client-server or web-based networked applications then used an entirely enterprise-controlled network for their delivery.

Complementing this architectural model for end-to-end digital experiences was an end-to-end observability model. Since enterprise IT controlled all the major applications being utilized and their architecture, it was a simple matter of ensuring that each application and end system had the right instrumentation for effective management. Vendors of enterprise software and hardware provided key components of the digital services chain, and the IT organization worked very closely with these vendors to ensure that use cases, architecture and operations requirements were all met.

Client endpoints typically had Microsoft's Windows Management Instrumentation (WMI) software agent running, while servers and their applications were instrumented by agents from vendors like BMC and Mercury Interactive, and the network was instrumented by the equipment vendors using SNMP-based agents. Data from all these agents was then collected by platforms like Microsoft Systems Management Server (SMS) and HP OpenView and used during monitoring and troubleshooting activities.

This methodology wasn't really "experience assurance" in the more well-defined sense; instead, confirming that the architectural components were available and apparently working properly implied that end-user expectations were likely being met. The enterprise IT organization was (and largely still may be) organized around the major subcomponents of the architecture: client platforms, server platforms and applications, and the network including its sub-types. In the end, the IT organization's specific technology expertise, coupled with its direct control over every end-user application and direct measurement of every application component's behavior and performance, worked pretty well.

Almost every element of this model and architecture has changed

It started with the SaaS revolution in the first decade of the new millennium, when vendors of web-based applications began to directly market and operate their solutions on behalf of enterprise departments and lines of business instead of IT. Twenty years later, SaaS-delivered software is the dominant application delivery model. Salesforce is an amazing singular example of the pivot to the SaaS model, going from $5.4M in revenue in 2000 to $1.6B in 2010 and $21B in 2020.

Importantly, due to the SaaS model of application delivery, IT is no longer in control of the use cases for the application, nor is the vendor of the application; the sponsoring department is in control. And IT likely has no detailed knowledge of how the application is built or how it is behaving; that is under the control of the SaaS vendor.

As enterprise organizations have gradually adopted SaaS-based applications for every major department or line of business, the IT operations architecture of server-side design, control and visibility has been rendered ineffective. IT can measure SaaS application availability and performance using “black box” testing tools and methods, but the prior strategy of “instrument and measure everything” to rapidly pinpoint issues and their root causes is no longer viable. End-user expectations for highly available and highly performant applications can no longer be met directly by IT.
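The "black box" approach mentioned above can be illustrated with a minimal sketch: probe a SaaS endpoint from the outside, record whether it responds and how quickly, and infer health from that alone. This is an assumption-laden illustration using only the Python standard library, not any specific vendor's tool; the URL below is hypothetical.

```python
import time
import urllib.request

def probe(url, timeout=5.0):
    """Black-box check: fetch the URL and time the round trip.

    Returns availability, HTTP status (None on failure), and latency in ms.
    We can only observe the external behavior -- nothing about the
    SaaS vendor's internal architecture is visible to us.
    """
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        pass  # DNS failure, timeout, connection refused, HTTP error, etc.
    latency_ms = round((time.monotonic() - start) * 1000.0, 1)
    return {
        "url": url,
        "available": status is not None and status < 400,
        "status": status,
        "latency_ms": latency_ms,
    }

# Example (illustrative endpoint, not a real service):
# print(probe("https://app.example-saas.com/health"))
```

A real monitoring tool would run probes like this on a schedule from multiple vantage points and alert on trends, but the fundamental limitation is the same: when the probe fails, this method alone cannot tell you *why*.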

Cloud is simply a variation on this same theme. Cloud started as a mechanism to instantly host or instantly scale an enterprise-owned application stack, but then evolved to the point where the software components of the application stack itself became cloud-ified in various models (IaaS, PaaS, etc.). While the enterprise does have more options for visibility and control than with SaaS, the end result for the enterprise IT organization is the same: an overall loss of visibility into the entire application architecture and how well it is meeting end-user experience expectations.

Ten years after the emergence of SaaS came the bring-your-own-device revolution, or BYOD, which seems innocuous but in reality has been even more impactful from an IT experience-assurance perspective. Innovations in end-user computing hardware and software, coupled with a bold play by Apple to disintermediate the cellular carriers from making decisions about phone features, finally resulted in a completely new end-user enterprise computing platform: the iPhone. Certainly, RIM and its BlackBerry product had defined the enterprise mobile device use case in the late 1990s, allowing employees to gain access to email, but the iPhone and the subsequent release of the iPad and Android-based phones and tablets launched BYOD through consumerization.

Once the Apple App Store was available, it was only a matter of time until enterprise-capable mobile applications became available and could connect to enterprise application instances. BYOD gave rise to BYOA, or bring your own application, where users extended their productivity options by purchasing and installing their own mobile applications. People connected their newly installed mobile applications to their enterprise's email and productivity apps, as well as to their own SaaS application subscriptions, including Dropbox, Evernote and Google G Suite. While IT organizations generally fought off BYOD (and BYOA) at first due to security concerns, and then tried to control BYOD platforms through mobile device management (MDM) systems, BYOD is largely embraced by the enterprise today, frequently in an untouched or uncontrolled manner.

A loss of visibility similar to the one caused by SaaS adoption has occurred due to BYOD. Again, enterprise IT once controlled a fairly homogeneous environment consisting mostly of Windows-based PCs with tightly controlled operating system configurations and application loads, along with the aforementioned visibility agent software like WMI. IT knew what to expect from how the device and its applications would behave, and when they behaved abnormally, how to troubleshoot the issues.

Now, the enterprise end-user client platforms include personal mobile devices from many vendors (typically multiple devices per user), utilizing many different OS versions from Apple and Google, each with their own unique blend of personal and enterprise apps. No more standardization, no more embedded instrumentation. None of the IT operations tools set up to monitor and manage the enterprise client platforms are effective at ensuring the BYOD endpoints are performing well and are delivering a robust end-user experience for the entirety of their productivity apps.

And now we have the emergence of BYOT – bring your own thing – where consumer IoT devices (like smart TVs and Amazon Alexa devices) are being brought into enterprise settings en masse by departments and end users. As with BYOD, there are no visibility or control tools for these systems. To make matters worse, these IoT devices are always networked and are configured in a manner optimal for a consumer setting (i.e., a house or other dwelling), not an enterprise setting, which can frequently wreak havoc inside the enterprise environment.

Netting out 20 years of progress

There are two important and often-overlooked side conditions to consider, resulting from SaaS and BYOD/A/T.

First, while formal IT budgets may be flat or even decreasing, the overall IT (or digital experience, technology and software) spend of a given enterprise organization has dramatically increased as a result of department and line-of-business technology budgets and employee technology purchases. There is more technology to manage and more experiences to assure, with less overall IT budget allocated for the purpose.

Second, the psychology of expectation-setting behind SaaS, cloud and BYOD is incredibly important to consider, and it's simple to understand. When we as humans are handed a laptop or asked to use an application, it is merely a tool in which we are passively invested, or one that must be used as a condition of our employment.

When we are personally responsible for sponsoring our own devices (I like my iPhone) and our own applications (the HR department picks Workday), the departments and the entire user community have deep ownership of these choices; they become significantly more important, and expectations for their availability and performance are far higher. We selected the Concur expense management system because it worked really well on our iPhones and Android tablets, and it had better work well!

Netting it out: in a 20-year journey, the posture of enterprise IT has evolved from complete experience definition and control to diffuse control and visibility. IT has lost visibility and control over a significant portion of the enterprise server and application architecture, and over how it delivers end-user experiences. IT has lost control and visibility over the client platform endpoints. IT likely doesn't even know all the devices attached to the network or which client platforms are in use. Lines of business, functional departments and the entire end-user community are now setting their own, higher expectations for their digital experiences.

And yet, the IT organization remains responsible for the performance assurance and security-risk management of the entire enterprise and the digital experiences for the entire end-user community.

Come with me as I discuss in future blogs how regaining visibility and control is a key requirement, and a journey in and of itself. I believe this journey, which will embed new instrumentation into a network-based observability architecture, can be accomplished by leveraging the principles of AIOps and focusing on the concepts found in Network-as-a-Service. Let's regain control of what currently seems uncontrollable.
