HPE Aruba Networking Blogs

The secrets of AI networking part 2: data science meets real customer problems

By Jose Tellado, VP of AIOps, HPE Aruba Networking

In my previous post I outlined the keys to AI networking success: a comprehensive and relevant data lake, cloud scale, and the combination of data science with deep networking and security domain expertise to attack high-priority customer challenges.

AI networking is a long game. It’s easy to come up with hypotheses about what kinds of AI models and data will move the needle, making the network admin’s job easier while keeping the organization safe. It turns out that many of these initial ideas just don’t pan out: the model doesn’t deliver, the outcomes are noisy, the training or inference data isn’t complete, diverse, or granular enough, or the impact is too small.

That’s why it is essential to have data scientists who are also network and security experts work through all the steps required to reach a customer-impactful AI networking solution. The first step is identifying network and security challenges, and that’s where network domain expertise comes in handy. For example: “outdoor passerby client devices are dragging down wireless performance or degrading end-user applications,” “we are seeing application-outlier IoT client devices on the network,” or “suboptimal configurations are degrading service or application performance.”
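To make the “application-outlier IoT client devices” challenge concrete, here is a minimal sketch of one way such devices might be flagged: a simple z-score test on per-client daily traffic. The device names and byte counts are hypothetical, and real detection pipelines would use far richer telemetry than a single metric.

```python
from statistics import mean, stdev

def flag_outlier_clients(bytes_per_client, z_threshold=3.0):
    """Flag clients whose daily traffic is a statistical outlier (z-score)."""
    values = list(bytes_per_client.values())
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [client for client, b in bytes_per_client.items()
            if abs(b - mu) / sigma > z_threshold]

# Hypothetical per-device daily byte counts for a fleet of IoT cameras;
# cam-05 is transferring orders of magnitude more than its peers.
traffic = {"cam-01": 1.2e6, "cam-02": 1.1e6, "cam-03": 1.3e6,
           "cam-04": 1.2e6, "cam-05": 9.8e8}
print(flag_outlier_clients(traffic, z_threshold=1.5))  # → ['cam-05']
```

A single z-score is only illustrative; with one extreme device in a small fleet, the outlier inflates the standard deviation itself, which is one reason production models use more robust statistics and many signals at once.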

Next is prioritizing these challenges to identify where analytics, AI modeling, and AI deployment and automation can make the most difference. This is the data science side of the equation. Once there is agreement that AI can deliver a significantly better outcome, models are developed, trained, tested, validated, deployed, monitored in production, and retrained or tuned periodically. Our minimum goal is at least a 25% improvement, with 95% confidence, on whatever AI-powered problem we are tackling. After deployment, we continuously monitor how much the model improves outcomes once its recommendations are applied. In many cases we do much better than that, with 100–200% improvements in throughput, coverage, and service or application responsiveness for a number of the AI models we deploy.
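The “25% improvement with 95% confidence” bar can be checked with standard statistics. Here is a minimal sketch, using a bootstrap lower confidence bound on relative improvement; the throughput samples are hypothetical, and this is one generic way to frame such a check, not HPE Aruba Networking’s actual validation pipeline.

```python
import random
from statistics import mean

def improvement_lower_bound(before, after, n_boot=5000, alpha=0.05, seed=0):
    """One-sided bootstrap lower bound on relative improvement
    (mean(after) - mean(before)) / mean(before)."""
    rng = random.Random(seed)
    boots = []
    for _ in range(n_boot):
        b = [rng.choice(before) for _ in before]  # resample with replacement
        a = [rng.choice(after) for _ in after]
        boots.append((mean(a) - mean(b)) / mean(b))
    boots.sort()
    return boots[int(alpha * n_boot)]  # 5th percentile → 95% lower bound

# Hypothetical throughput samples (Mbps) before/after applying a recommendation
before = [100, 110, 95, 105, 98, 102, 97, 108]
after = [140, 150, 135, 145, 138, 148, 142, 137]
lb = improvement_lower_bound(before, after)
print(f"95% lower bound on improvement: {lb:.1%}")
print("meets 25% bar" if lb >= 0.25 else "does not meet bar")
```

With these illustrative numbers, the mean improvement is roughly 39%, and even the pessimistic end of the bootstrap distribution stays above the 25% threshold.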

There is also an infrastructure dimension to the AI networking long game. As the network engineers who design access points, switches, and gateways work closely with the data scientists, they become more familiar with the type, breadth, granularity, and volume of data that AI models thrive on. As a result, hardware and software are AI native, from two perspectives. First, access points, switches, and gateways are specifically designed to deliver high-coverage, accurate, and granular AI data, with each generation of hardware and software producing increasingly rich data “loads”. Second, these “sensing” network devices can seamlessly execute the remediations, recommendations, and insights that come from the AI models, in many cases automatically. In some instances we are on our fourth generation of AI-native hardware and software products, and our AI efficacy reflects this highly tuned data generation and the resulting actions.

Here’s an example. Many vendors design wireless access points (APs) to minimize cost. But when you design for cost, you often leave out support for the key telemetry that enables the models to work. Some APs are so “lite” that they can’t even report their associated clients’ applications through DPI engines, or their own instantaneous power utilization. Such data blind spots mean that no matter how powerful the AI, and despite some small-scale POC demos, it lacks the critical information needed for networking and security anomaly detection, troubleshooting, and optimization. Over the years we’ve learned to continually refine the AI telemetry we need and to make sure our infrastructure provides it while remaining price competitive.
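A blind-spot check of this kind can be expressed very simply: compare what a device reports against the fields the models need. The field names below are purely illustrative, not an actual HPE Aruba Networking telemetry schema.

```python
# Hypothetical minimal set of AI-relevant fields an AI-native AP might emit.
REQUIRED_FIELDS = {"client_mac", "rssi", "app_category", "tx_bytes",
                   "rx_bytes", "channel_util", "power_draw_w"}

def telemetry_blind_spots(record: dict) -> set:
    """Return the AI-relevant fields a device failed to report."""
    return REQUIRED_FIELDS - record.keys()

# A cost-optimized "lite" AP that omits DPI application data and power telemetry
lite_ap_record = {"client_mac": "aa:bb:cc:dd:ee:ff", "rssi": -61,
                  "tx_bytes": 120_000, "rx_bytes": 480_000,
                  "channel_util": 0.37}
print(sorted(telemetry_blind_spots(lite_ap_record)))
# → ['app_category', 'power_draw_w']
```

The point of the sketch is that missing fields are knowable in advance: a model that depends on per-application visibility simply cannot be trained or served from hardware that never reports it.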

This kind of insight comes from the fact that we have been delivering AI networking solutions for almost a decade. I think it is fair to say that we have gone from a good “1.0” effort at the start to a highly effective set of AI-powered network management and security solutions, as our data sets have greatly expanded and our data science and domain expertise have matured and coalesced.

Here’s what some of our customers have found with this approach:

  • In addition to positive feedback on the free connectivity, our new Wi-Fi 6E gets high marks internally. This includes business staff, who report a 66 percent reduction in the time required to upload camera footage, drone footage, and other video media daily.1
  • For IT, network intelligence has reduced deployment labor costs 30 percent, cut device configuration time 50 percent, and slashed troubleshooting time-to-resolution by another 50 percent.1
  • With HPE Aruba Networking Central, we’re resolving Wi-Fi trouble calls up to 50 percent faster, reducing IT overhead while improving user experiences.

When you are late to the game, a common approach to AI is to start many different projects with small, synthetic, noisy, or dirty data sets that don’t allow AI models to generalize to real-world scenarios, and then see what happens—in other words, experiment. That may be fine as an organization comes up to speed on AI techniques, but it doesn’t work when running an enterprise network. As you can see from the results, we don’t ship experiments.

There are no easy shortcuts, and that’s bad news for newcomers and startups. As we have seen from the GenAI explosion, practically anyone can send a question to ChatGPT through an API and generate what looks like a reasonable answer, though not with the coverage and accuracy networking practitioners would expect. What’s missing is the mindful work: collaborative, AI-native hardware, software, and cloud design, plus the extensive testing that validates results are useful and the efficacy monitoring in production. As I said, AI networking is a long game, and here’s an overused but apt analogy: AI networking is like fine wine; no amount of muscle, money, or bluster can make up for the thoughtful time required to produce a great product. Don’t rely on experiments cloaked in marketing.

So a secret of AI networking is a customer focus with tight collaboration between teams, technology, and infrastructure.  Data science meets networking meets data meets time.  In my next blog, I’ll explain how not all the data is the same and what that means for your network.

1. Powering a vibrant city life, Hewlett Packard Enterprise, 2024