Secrets Sprawl 2026: How AI Is Accelerating the Credential Leak Crisis
GitGuardian's latest report reveals 28.65 million hardcoded secrets leaked to GitHub in 2025—a 34% YoY increase. AI service leaks surged 81%, with MCP configs becoming a new attack vector.
GitGuardian just dropped its State of Secrets Sprawl 2026 report, and the numbers are staggering: 28.65 million new hardcoded credentials were committed to public GitHub in 2025—a 34% year-over-year increase and the largest single-year jump on record.
But the real alarm isn't just the volume. It's the AI factor.
AI service-related credential leaks exploded to 1.27 million, an 81% surge from 2024. This includes 113,000 exposed DeepSeek API keys, tens of thousands of OpenAI and Claude credentials, and a disturbing wave of LLM infrastructure configurations leaking into public repositories.
The Claude Code Hidden Cost
The report found that commits assisted by Claude Code had a 3.2% secret leak rate, compared with the 1.5% baseline across all public GitHub commits. This isn't a tool failure—it's a workflow problem: developers using AI to accelerate coding are skipping security reviews.
MCP Configs: The New Credential Goldmine
The explosive growth of AI coding assistants has introduced a new attack surface. Model Context Protocol (MCP)—the standard connecting AI assistants to external tools—is being adopted faster than security practices can keep up.
GitGuardian identified 24,008 unique secrets exposed in MCP-related configuration files across public GitHub, with 2,117 valid credentials (8.8% of all MCP findings).
The root cause? Documentation itself. Popular MCP setup guides often include copy-paste examples with hardcoded API keys:
```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "BSAAa1B2C3d4E5f6G7h8I9j0K1l2M3n4"
      }
    }
  }
}
```
This configuration contains a valid Brave Search API key. Developers copy-paste it, tweak the service name, and commit the entire file—key and all—to their repository.
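This class of leak is mechanically easy to spot before commit. The sketch below is a minimal heuristic, not GitGuardian's actual detector: it walks an MCP config and flags any `env` entry whose name looks secret-bearing but whose value is a literal string rather than a `${VAR}` placeholder. The function name and regex are illustrative assumptions.

```python
import json
import re

# Hypothetical heuristic: env variable names that usually carry credentials.
SECRET_KEY_HINT = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD)", re.IGNORECASE)

def find_hardcoded_env_values(config: dict) -> list[str]:
    """Return 'server.VAR' paths whose env values look like literal secrets."""
    findings = []
    for server, spec in config.get("mcpServers", {}).items():
        for name, value in spec.get("env", {}).items():
            # ${VAR} references are fine; literal values on secret-like names are not.
            is_placeholder = value.startswith("${") and value.endswith("}")
            if SECRET_KEY_HINT.search(name) and not is_placeholder:
                findings.append(f"{server}.{name}")
    return findings

config = json.loads("""
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "env": {"BRAVE_API_KEY": "BSAAa1B2C3d4..."}
    }
  }
}
""")
print(find_hardcoded_env_values(config))  # ['brave-search.BRAVE_API_KEY']
```

A check like this can run in a pre-commit hook so the literal key never reaches the repository in the first place.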
The AI Security Paradox
AI-assisted coding is reshaping software development velocity. Public GitHub commits hit 1.94 billion in 2025, up 43% YoY. The active developer base grew 33%.
But when organizations scale creation faster than governance, credentials spread like weeds.
Ironically, AI assistants aren't the root cause. The report reveals a critical nuance:
- Claude Code's 3.2% leak rate is concerning
- But this isn't tool failure—it's human workflow failure
- Developers still control what gets accepted, edited, ignored, or pushed
- Even as coding assistants improve guardrails, people override warnings or prompt models to behave insecurely
The leak still happens through a human workflow. This distinction matters.
Defense Strategy: From Detection to Prevention
Reactive secret detection can't keep pace with AI-accelerated development. Organizations must shift left—blocking leaks before code leaves the developer's machine.
1. Local Pre-Commit Hooks
Integrate secret scanning in .pre-commit-config.yaml:
```yaml
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: v1.35.0
    hooks:
      - id: ggshield
        language: python
        stages: [pre-commit]
        args: ['secret', 'scan', 'pre-commit']
```
This automatically scans every commit, blocking pushes that contain potential secrets.
2. Externalize MCP Configurations
Never embed credentials directly in MCP config files. Use environment variable references:
```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    }
  }
}
```
Then set actual values in your shell profile (e.g., .zshrc):

```shell
export BRAVE_API_KEY="your-actual-key-here"
```
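If the tooling consuming the config expands `${VAR}` references itself, nothing more is needed. Where it doesn't, the resolution step is simple to do in application code. A minimal sketch (the function name is an illustrative assumption, not part of the MCP spec):

```python
import os

def resolve_env_refs(env: dict) -> dict:
    """Replace ${VAR}-style values with the corresponding environment variable."""
    resolved = {}
    for name, value in env.items():
        if value.startswith("${") and value.endswith("}"):
            # Strip '${' and '}' and look the name up in the environment.
            resolved[name] = os.environ.get(value[2:-1], "")
        else:
            resolved[name] = value
    return resolved

os.environ["BRAVE_API_KEY"] = "example-key"
print(resolve_env_refs({"BRAVE_API_KEY": "${BRAVE_API_KEY}"}))
# {'BRAVE_API_KEY': 'example-key'}
```

Either way, the config file committed to the repository contains only the placeholder, never the key.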
3. Establish AI Assistant Guardrails
When using Claude Code, GitHub Copilot, or other AI coding assistants:
- Never let AI generate code containing real credentials
- Always review AI-generated configuration files, especially for API keys and database connection strings
- Create .cursorrules or similar AI behavior guidelines in your projects, explicitly requiring placeholders instead of real credentials
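As an illustration, such a rules file might read as follows. The wording is a hypothetical example, not a standardized format:

```text
# .cursorrules (hypothetical example)
- Never emit real credential values in generated code or configuration.
- Use ${ENV_VAR} placeholders for all API keys, tokens, and passwords.
- When editing a config file, preserve existing placeholder references.
```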
Conclusion
28.65 million hardcoded credentials isn't statistical noise—it's the byproduct of 2025's software development velocity. AI-assisted coding democratized software creation, but also democratized the ability to unknowingly expose organizational attack surfaces.
Secrets Sprawl won't fix itself. It requires parallel evolution of tools, processes, and culture. Catching credentials before they leave the developer's machine is the critical control point we can actually influence.
Because once that key hits GitHub, it's not your secret anymore.
Data Source: GitGuardian State of Secrets Sprawl Report 2026
Keywords: Secrets Sprawl, API Security, AI Security, GitGuardian, Claude Code, MCP, DevSecOps, Credential Leaks, Pre-Commit Hooks