An AI Agent Didn’t Hack McKinsey. Its Exposed APIs Did.
The recent McKinsey incident should serve as a wake-up call for enterprises deploying AI: security has to extend beyond the model layer. Most organizations are still hardening the model while the real risk sits in the action layer, the APIs, internal services, and shadow integrations an AI agent can reach and manipulate.

The technical details reported in the McKinsey case, an internal AI platform with a broad API footprint that included unauthenticated endpoints, show how large that exposure can become. And it is not an isolated case: the McDonald's AI hiring incident pointed to the same structural problem of weakly governed APIs and exposed administrative access.

The risk is not the model itself but what the agent can do: retrieve data, call APIs, and reach into downstream systems. The AI security market remains concentrated on prompts, model behavior, and output controls. Those matter, but they are only one layer of the problem. The industry framing of AI security is still too narrow; the attack surface is no longer just the model, it is the full connected system around it.

Shadow APIs connected to agents are a particularly dangerous category. Internal or lightly governed APIs that were never meant to face the outside world become part of the external attack surface the moment an agent can call them.

Addressing this means shifting attention from the model to the action layer: inventory the APIs, endpoints, and MCP servers each agent can touch, and size the blast radius of every agent before it ships. One practical pattern is routing every agent action through a deny-by-default gateway, sketched below.
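As a minimal sketch of that pattern, here is what a deny-by-default action gateway could look like, assuming an agent runtime where every tool call is routed through a single chokepoint. The class names, policy fields, and the "lookup_invoice" action are hypothetical illustrations, not anything from the incident reports:

```python
"""Sketch of a deny-by-default action-layer gateway for an AI agent.

Assumption (not from the incidents discussed above): the agent runtime
routes every tool invocation through ActionGateway.call, so any API the
agent might reach must first be registered with an explicit policy.
"""

from dataclasses import dataclass
from typing import Any, Callable


@dataclass(frozen=True)
class ActionPolicy:
    """Scope of one permitted agent action."""
    endpoint: str            # URL pattern this action is scoped to (documented here;
                             # enforced by the real HTTP client in a full implementation)
    methods: frozenset[str]  # HTTP verbs permitted, e.g. {"GET"}
    requires_auth: bool      # refuse to run without an attached caller identity


class ActionGateway:
    """Deny-by-default registry: the agent can only invoke registered actions."""

    def __init__(self) -> None:
        self._actions: dict[str, tuple[ActionPolicy, Callable[..., Any]]] = {}

    def register(self, name: str, policy: ActionPolicy,
                 handler: Callable[..., Any]) -> None:
        self._actions[name] = (policy, handler)

    def call(self, name: str, *, caller_identity: str | None,
             method: str, **kwargs: Any) -> Any:
        if name not in self._actions:
            # Shadow APIs never reach the agent: unregistered means unreachable.
            raise PermissionError(f"action {name!r} is not in the allowlist")
        policy, handler = self._actions[name]
        if policy.requires_auth and caller_identity is None:
            raise PermissionError(f"action {name!r} requires an authenticated caller")
        if method.upper() not in policy.methods:
            raise PermissionError(f"{method} not permitted for action {name!r}")
        return handler(**kwargs)


gateway = ActionGateway()
gateway.register(
    "lookup_invoice",
    ActionPolicy(endpoint="/billing/invoices/{id}",
                 methods=frozenset({"GET"}), requires_auth=True),
    handler=lambda invoice_id: {"invoice_id": invoice_id, "status": "paid"},
)

# Allowed: read-only, authenticated, registered.
gateway.call("lookup_invoice", caller_identity="agent-svc",
             method="GET", invoice_id="42")

# Blocked: anything the agent was never explicitly granted raises PermissionError.
# gateway.call("delete_customer", caller_identity="agent-svc", method="POST")
```

The point of the design is that the blast radius becomes enumerable: the registry is a complete list of what the agent can do, so governance reviews inspect one allowlist instead of chasing every integration an agent might discover on its own.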