Security-operations and threat-intelligence teams are chronically short-staffed, overwhelmed with data, and pulled between competing demands, all problems that large language model (LLM) systems can help remedy. But a lack of experience with these systems is holding many companies back from adopting the technology.
Organizations that implement LLMs will be better able to synthesize intelligence from raw data and deepen their threat-intelligence capabilities, but such programs need support from security leadership to stay properly focused. Teams should apply LLMs to solvable problems, and before they can do that, they need to evaluate how useful LLMs would be in their organization's environment, says John Miller, head of Mandiant's intelligence analysis group.