
HPE Aruba Networking Blogs

Let’s welcome GenAI’s arrival in HPE Aruba Networking Central

By Karthik Ramaswamy, Blog Contributor
This post was coauthored with Alan Ni.

Organizations around the world are embracing AI faster than expected as a force multiplier for ITOps productivity. To support this shift in networking, GenAI LLM technology will soon appear in HPE Aruba Networking Central's AI Search feature as part of our existing AIOps suite of capabilities. This post highlights how cutting-edge GenAI techniques will improve the accuracy and responsiveness of search and navigation, and details how our LLMs are responsibly implemented and differentiated from earlier GenAI implementations in the networking space.

Using multiple LLMs within Central allows us to advance its conversational and summarization capabilities faster, more accurately, and more securely than ever before, resulting in an even stronger search experience. Best of all, the rollout of these new production-grade capabilities started earlier this month, with an anticipated completion across our global footprint by April.

See GenAI in action

AI Search LLM enhancements 

For the past two years, our AI Search tool has been available at the top of the Central GUI, designed to help users easily find answers to questions about their environments by leveraging advanced natural language processing technology.

HPE Aruba Networking Central AI Search 

With the incorporation of multiple HPE-trained and -tuned LLMs, we are performing an "engine swap" for AI Search. You'll get the latest and greatest in search accuracy, response times, and data privacy, with no change to the look and feel of interacting with AI Search.

Improving search accuracy with user intent: We're utilizing proprietary, trained-and-tuned LLM transformers to better understand the intent of questions entered into AI Search. Accurately understanding the intent of a user's question is paramount for better responses and improved user satisfaction. Since its introduction, AI Search has been asked over 3 million questions, and we have trained our LLMs on this extensive base dataset. (Read more about the importance of AI training and data lakes from our colleague and AIOps lead Jose Tellado.) As a result, AI Search better understands and answers questions phrased in network jargon, provides type-ahead autocomplete, and introduces search-driven navigation to other parts of the GUI directly from the AI Search interface.
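To make the idea of intent classification concrete, here is a minimal, purely illustrative sketch. The real AI Search uses proprietary trained-and-tuned LLM transformers; this toy keyword scorer (with made-up intent names and keyword sets) only shows the general pattern of mapping a free-form question to an intent that drives the rest of the answer pipeline.

```python
# Toy intent classifier: map a free-form query to a coarse intent.
# The intent labels and keyword sets below are hypothetical stand-ins
# for what a trained transformer model would learn.

INTENT_KEYWORDS = {
    "troubleshoot": {"down", "offline", "failing", "error", "why"},
    "how_to":       {"how", "configure", "enable", "activate", "set"},
    "inventory":    {"list", "show", "count", "devices", "aps", "switches"},
}

def classify_intent(question: str) -> str:
    """Return the intent whose keyword set best overlaps the query tokens."""
    tokens = set(question.lower().replace("?", "").split())
    scores = {intent: len(tokens & kw) for intent, kw in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(classify_intent("How do I configure a WLAN?"))  # → how_to
print(classify_intent("Why is AP-1 offline?"))        # → troubleshoot
```

A production system would replace the keyword overlap with a transformer's learned representation of the query, but the contract is the same: question in, intent out.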

Document summarization: TL;DR ("too long; didn't read") our 20,000+ pages of technical publications on our products? Don't worry, we'll forgive you for that and have you covered! One of the most common question types AI Search receives is "how to" configure or activate certain functions within our networking products. AI Search's GenAI functionality now generates human-like, summarized answers for many of those queries, along with links to the foundational documents its generative output is created from. This can be a significant time saver for network operators trying to find the documentation answer they need.
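The "summarized answer plus source links" pattern described above can be sketched as a tiny retrieve-then-summarize pipeline. This is not HPE's implementation: the real system runs HPE-hosted LLMs over 20,000+ pages of documentation, while the toy below scores a two-entry in-memory corpus and uses the top passage's first sentence as a stand-in summary. The document titles and URLs are hypothetical.

```python
# Toy retrieve-then-summarize pipeline: pick the most relevant document,
# produce a short answer, and cite the source it came from.
# Corpus entries, titles, and URLs are invented for illustration.

DOCS = [
    {"title": "Configuring WLANs", "url": "https://example.com/wlan",
     "text": "Create a WLAN under Devices > Config. Assign an SSID and a security profile."},
    {"title": "Firmware Upgrades", "url": "https://example.com/firmware",
     "text": "Schedule firmware upgrades from the Maintenance tab. Upgrades can be staged per group."},
]

def retrieve(query: str) -> dict:
    """Return the document sharing the most words with the query."""
    q = set(query.lower().split())
    return max(DOCS, key=lambda d: len(q & set(d["text"].lower().split())))

def answer(query: str) -> str:
    """Summarize the top document (toy: its first sentence) and cite it."""
    doc = retrieve(query)
    summary = doc["text"].split(". ")[0] + "."
    return f"{summary} (source: {doc['title']}, {doc['url']})"

print(answer("how do I configure a wlan ssid"))
```

In the production version, a generative model would compose the summary from the retrieved passages rather than copying a sentence, but citing the foundational documents alongside the generated answer is the key design choice the post describes.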

Response times: Anyone who has used ChatGPT, Gemini, or Copilot will understand that each query comes with a trade-off. That trade-off can be the multiple seconds ChatGPT takes to respond to your question, the "contribution" of your question data to Gemini's data lake for future learning, or the large amount of compute needed to continually train Copilot's models. We've designed and are leveraging multiple purpose-built LLM transformers to reduce or eliminate these trade-offs for our users. Keeping our LLMs self-contained allows us to provide faster response times and greater search performance.

Data privacy comes first 

Security-first and data privacy principles are core to what we do. They are also fundamental to good AI. Use of tools like ChatGPT has created grave privacy and security concerns among many enterprises, and rightfully so. Any corporate intellectual property entered into these tools creates significant privacy and ownership issues. Our engineering teams thought very deliberately about this issue and designed a solution that takes advantage of GenAI advancements without violating our security-first principles. With HPE Aruba Networking Central, we have implemented multiple locally trained and hosted LLMs to take advantage of the human understanding and generative qualities of GenAI without the risk of data leaks via external API queries to and from our data lake. Specifically, we have a dedicated language model that identifies PII/CII (personal and corporate identifiable information) on the platform. This function allows AI Search to better understand device and site names in queries, yielding more accurate answers, and it obfuscates that identified data before it reaches our training data lakes.
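The obfuscation step above can be illustrated with a minimal sketch: identify likely PII/CII in a query and replace it with typed placeholders so only the redacted form reaches a training data lake. The production system uses a dedicated language model for this identification; the regex patterns below (and the device-naming convention they assume) are invented purely for illustration.

```python
import re

# Toy PII/CII redactor: swap identified device names and email addresses
# for typed placeholders before the text is stored for training.
# The naming patterns here are hypothetical; a real deployment would use
# a trained model rather than fixed regexes.

DEVICE_RE = re.compile(r"\b(?:AP|SW|GW)-[\w-]+\b", re.IGNORECASE)
EMAIL_RE = re.compile(r"\b[\w.]+@[\w.]+\.\w+\b")

def redact(query: str) -> str:
    """Replace identified device names and emails with typed placeholders."""
    query = DEVICE_RE.sub("<DEVICE>", query)
    query = EMAIL_RE.sub("<EMAIL>", query)
    return query

print(redact("Why is AP-Lobby-3 down? Email bob@corp.example"))
# → Why is <DEVICE> down? Email <EMAIL>
```

Note the split responsibility the post describes: the live search engine still sees the original names so it can answer accurately, while only the redacted text is retained for training.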

Coming to a network near you! 

Generative AI is incredibly powerful, and the industry is just scratching the surface of real-world AIOps applications. We are excited about LLM-powered AI Search: it represents a huge benefit for today's HPE Aruba Networking Central users and is the first of many GenAI use cases we are working on as we move to the next generation of Central.

 Stay tuned! 

Want to experience our LLMs in action along with the other security, scaling, automation, and orchestration features HPE Aruba Networking Central has to offer?  Sign up here for a future test drive.