Risks and Opportunities of AI in Incident Management
Description
Large Language Models can serve as a powerful "sidekick" in resolving incidents. Our talk opens by exploring what LLMs can do when things go wrong: from parsing your codebase to help debug, to writing ad-hoc testing scripts, to brainstorming solutions with engineers. Organizations of all sizes should consider these capabilities. Not only do they reduce the time sunk into incidents, they free that time up for feature development, compounding the advantage.

LLMs aren't perfect, though, and their common failure modes become critical when applied to incident response. We'll cover some of these failures and how they'd look in the context of incident response, including hallucination, misprioritization, and black-boxing. This isn't the end of the world; organizations just need to weigh these risks against the speed and convenience of LLM-assisted incident response. To mitigate the risk, organizations need to invest in people. We'll show how the resilience, adaptability, and knowledge of your incident response teams can compensate for the risks of LLMs.

Emily Arnott, Community Manager at Blameless (@emilyarnott8). Emily is the community manager at Blameless, an incident workflow solution. She loves seeking out the cutting edge in how companies stay online.

Recorded Sept 21, 2023 at Strange Loop 2023 in St. Louis, MO. https://thestrangeloop.com
Speakers
Emily Arnott
Emily is the Community Relations Manager at Blameless, where she fosters a place for discussing the latest in SRE. She has also presented talks at SREcon, Conf42, and Chaos Carnival.