ELASTIC AI ASSISTANT FOR OBSERVABILITY

Elastic has extended its Elastic AI Assistant, a generative AI assistant powered by the Elasticsearch Relevance Engine™ (ESRE), to Observability. Currently in technical preview, the Assistant aims to redefine how Site Reliability Engineers (SREs) identify and resolve problems by eliminating manual data hunting across silos. By providing context-aware information, it deepens SREs' understanding of application errors, log messages, alerts, and code efficiency.

KEY FEATURES:

  1. Interactive Chat Interface: Elastic AI Assistant offers an interactive chat interface where SREs can converse with the Assistant and visualize relevant telemetry data in one place. The interface also integrates proprietary data and runbooks, providing additional context.
  2. Access to Private Information: Users can share private data, such as runbooks, incident histories, and case data, with the AI Assistant. An inference processor, driven by the Elastic Learned Sparse Encoder, grants the Assistant access to the most pertinent data for answering questions and completing tasks.
  3. Knowledge Expansion: The AI Assistant can continuously expand its knowledge base through user interactions. SREs can teach it about specific problems, enabling the Assistant to offer support for similar scenarios in the future. This includes composing outage reports, updating runbooks, and enhancing automated root cause analysis, ultimately expediting issue resolution.
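To make the second point concrete, here is a minimal sketch of an Elasticsearch ingest-pipeline body that applies the Elastic Learned Sparse Encoder (ELSER) inference processor to private documents such as runbooks before indexing. The pipeline description and field names are illustrative assumptions, not Elastic's actual configuration:

```python
# Sketch: an ingest-pipeline body with an ELSER inference processor, so that
# private runbook text gets sparse embeddings for semantic retrieval.
# Field names and the pipeline description are hypothetical examples.

def build_elser_pipeline(model_id=".elser_model_2",
                         input_field="content",
                         output_field="content_embedding"):
    """Return an ingest-pipeline definition with an ELSER inference processor."""
    return {
        "description": "Embed runbook text with ELSER for semantic retrieval",
        "processors": [
            {
                "inference": {
                    "model_id": model_id,
                    "input_output": [
                        {"input_field": input_field,
                         "output_field": output_field}
                    ],
                }
            }
        ],
    }

pipeline = build_elser_pipeline()
# The body would be registered with the Elasticsearch REST API, e.g.:
#   PUT _ingest/pipeline/runbooks-elser
```

Once documents flow through such a pipeline, the Assistant can retrieve the most pertinent private context at question time rather than relying only on what the LLM was trained on.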

HOW IT WORKS:

Imagine you’re an SRE who receives an alert indicating that a log entry threshold has been exceeded. While Elastic Observability provides some insights, you need further analysis of the log spike. This is where the AI Assistant steps in. It sends a pre-built prompt to your configured Large Language Model (LLM), which not only provides a description and context of the issue but also offers recommendations on how to proceed. You can then initiate a chat with the AI Assistant to delve deeper into your investigation.
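As a rough illustration of this flow, the sketch below assembles a contextual prompt from an alert's fields before handing it to an LLM. The template and field names are hypothetical assumptions for illustration, not Elastic's actual pre-built prompt:

```python
# Hypothetical sketch: assembling a contextual prompt from an alert's fields.
# The template and alert fields are illustrative, not Elastic's real prompt.

def build_alert_prompt(alert: dict) -> str:
    """Render a contextual prompt for an exceeded log-threshold alert."""
    return (
        "You are assisting an SRE.\n"
        f"Alert: {alert['rule_name']}\n"
        f"Observed: {alert['observed_count']} log entries in {alert['window']} "
        f"(threshold: {alert['threshold']}).\n"
        f"Top log pattern: {alert['top_pattern']}\n"
        "Explain the likely cause and recommend next investigation steps."
    )

prompt = build_alert_prompt({
    "rule_name": "Log threshold breached on checkout-service",
    "observed_count": 12840,
    "window": "5m",
    "threshold": 1000,
    "top_pattern": "connection refused to payments-db:5432",
})
```

Because the prompt carries the alert's concrete numbers and the dominant log pattern, the LLM's description and recommendations are grounded in the incident at hand rather than a generic question.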

The AI Assistant’s chat interface supports natural language queries and enables you to:

  • Obtain conclusions and context, and receive recommendations from your private data (powered by Elastic Learned Sparse Encoder) and the connected LLM.
  • Analyze responses and output from the AI Assistant.
  • Summarize information throughout the conversation.
  • Generate Lens visualizations within the chat.
  • Execute Kibana® and Elasticsearch® APIs through the chat interface.
  • Perform root cause analysis using specific APM functions.
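To illustrate the kind of Elasticsearch API call the Assistant could execute on your behalf, here is a sketch of a query DSL body that counts error-level log entries per service over the last 15 minutes. The index fields follow common Elastic conventions (`log.level`, `service.name`, `@timestamp`), but their use here is an assumption for illustration:

```python
# Illustrative sketch: a query DSL body the Assistant might run via the
# Elasticsearch search API to break down recent error logs by service.
# Field names follow common Elastic conventions but are assumptions here.

def error_count_query(minutes: int = 15) -> dict:
    """Build a query counting error-level logs per service in a recent window."""
    return {
        "size": 0,  # aggregations only, no individual hits
        "query": {
            "bool": {
                "filter": [
                    {"term": {"log.level": "error"}},
                    {"range": {"@timestamp": {"gte": f"now-{minutes}m"}}},
                ]
            }
        },
        "aggs": {
            "by_service": {
                "terms": {"field": "service.name", "size": 10}
            }
        },
    }

body = error_count_query()
```

Surfacing this kind of breakdown directly in the chat spares the SRE from switching to Discover or writing the query DSL by hand mid-investigation.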

With Elastic’s AI Assistant, SREs can gain deeper insights into issues, understand their impact on the business, and leverage private data that LLMs aren’t trained on. This tool aims to streamline observability analysis, reduce manual data retrieval, and enhance AIOps capabilities.

In summary, Elastic’s AI Assistant for Observability is set to transform the way SREs operate, providing context-aware insights and efficient problem-solving capabilities. This extension of the Elastic AI Assistant is poised to become a powerful tool in the observability landscape, helping SREs enhance their workflow.


Do you have any questions or would you like a tailored solution? Please, feel free to contact us!