Elastic has extended its Elastic AI Assistant, a generative AI assistant powered by the Elasticsearch Relevance Engine™ (ESRE). The AI Assistant, currently in technical preview for Observability, aims to redefine how Site Reliability Engineers (SREs) identify and resolve problems, eliminating manual data hunting across silos. By providing context-aware information, Elastic’s AI Assistant helps SREs understand application errors, log messages, alerts, and code efficiency.
Imagine you’re an SRE who receives an alert related to an exceeded log entry threshold. While Elastic Observability provides some insights, you need further analysis of the log spike. This is where the AI Assistant steps in. It sends a pre-built prompt to your configured Large Language Model (LLM), which not only provides a description and context of the issue but also offers recommendations on how to proceed. Additionally, you can initiate a chat with the AI Assistant to delve deeper into your investigation.
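To make the flow concrete, here is a minimal sketch of the idea behind such a pre-built prompt: alert context is assembled into a single request that asks the LLM for a description of the issue and recommended next steps. The function and field names below are illustrative assumptions, not Elastic’s actual implementation.

```python
# Illustrative sketch only: the names and structure are assumptions,
# not Elastic's real prompt. It shows the general pattern of packaging
# alert context into one LLM request.

def build_alert_prompt(alert: dict) -> str:
    """Assemble a prompt from alert fields for a configured LLM."""
    return (
        "You are assisting a Site Reliability Engineer.\n"
        f"Alert: {alert['rule_name']} (status: {alert['status']})\n"
        f"Reason: {alert['reason']}\n"
        "Describe the likely cause of this alert and recommend next steps."
    )

# Hypothetical log-threshold alert, like the one in the scenario above:
alert = {
    "rule_name": "Log entry threshold exceeded",
    "status": "active",
    "reason": "1,205 error log entries in the last 5 minutes (threshold: 100)",
}
prompt = build_alert_prompt(alert)
```

The resulting prompt string would then be sent to whichever LLM connector the operator has configured, and the model’s reply (description, context, recommendations) is what surfaces in the Observability UI.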
The AI Assistant’s chat interface accepts natural language queries, so you can continue the investigation conversationally instead of manually hunting through data across silos.
With Elastic’s AI Assistant, SREs can gain deeper insights into issues, understand their impact on the business, and leverage private data that LLMs aren’t trained on. This tool aims to streamline observability analysis, reduce manual data retrieval, and enhance AIOps capabilities.
In summary, Elastic’s AI Assistant for Observability is set to transform the way SREs operate, providing context-aware insights and efficient problem-solving capabilities. This extension of the Elastic AI Assistant is poised to become a powerful tool for SREs to enhance their workflow.