Log compression for AI agents
Codag is a drop-in compression and incident-memory layer for AI agents. It collapses a million log lines into one schema-valid capsule before your LLM sees them, and clusters recurring incidents over time so your agent can triage the next outage by recalling the last one.
Wrap any log call your agent already makes, no hassle.
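A minimal sketch of what "wrapping a log call" could look like. The `codag_wrap` decorator and the capsule it returns are hypothetical placeholders for illustration, not the actual Codag API; the point is that the agent's existing fetch function stays unchanged:

```python
from functools import wraps

def codag_wrap(fetch_fn):
    """Hypothetical wrapper: compresses whatever the wrapped fetch returns."""
    @wraps(fetch_fn)
    def wrapped(*args, **kwargs):
        lines = fetch_fn(*args, **kwargs)
        # Placeholder compression: the real service would return a
        # schema-valid capsule with cited evidence, not this stub.
        return {
            "service": args[0] if args else "unknown",
            "line_count": len(lines),
            "evidence": [],
        }
    return wrapped

@codag_wrap
def fetch_logs(service):
    # Stand-in for whatever log fetch the agent already makes.
    return [f"{service} log line {i}" for i in range(1000)]

capsule = fetch_logs("api")
# The agent now receives one small dict instead of 1,000 raw lines.
```

The design point is that compression hooks the call site the agent already has, rather than requiring a new log pipeline.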
PRIVATE BETA · API KEYS BY APPROVAL
{
  "service": "api",
  "window": "14:22:11–15:22:11",
  "compression": 29063,
  "evidence": [
    { "role": "root_cause",  "line": 412847, "text": "psycopg2.OperationalError" },
    { "role": "trigger",     "line": 412831, "text": "pool acquire 480ms" },
    { "role": "consequence", "line": 412854, "text": "pool exhausted, queue=18" }
    // + 15 more, all cited
  ]
}
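Because the capsule is plain JSON, an agent can index the evidence by role before reasoning over it. A minimal sketch, using the field names from the example above (a plain hyphen stands in for the en dash in the window string):

```python
import json

capsule_json = """
{
  "service": "api",
  "window": "14:22:11-15:22:11",
  "compression": 29063,
  "evidence": [
    {"role": "root_cause",  "line": 412847, "text": "psycopg2.OperationalError"},
    {"role": "trigger",     "line": 412831, "text": "pool acquire 480ms"},
    {"role": "consequence", "line": 412854, "text": "pool exhausted, queue=18"}
  ]
}
"""

capsule = json.loads(capsule_json)

# Index evidence by role so the agent can jump straight to the root cause.
by_role = {e["role"]: e for e in capsule["evidence"]}

# Every claim carries its source line number, so answers stay citable.
root_cause_line = by_role["root_cause"]["line"]  # 412847
```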
Pairs with any agent
For Claude Code users
Wrap any log fetch your agent already makes. Same Claude. Same answers. A fraction of the token bill, and more context headroom for the lines that actually matter.
Token cost · per incident
29,063× cheaper per incident
@ Opus 4.7 rates · 1.2M lines · 24.6M → 847 tokens
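The headline ratio is just input tokens divided by output tokens. With the counts quoted above (24.6M raw tokens in, 847 capsule tokens out) the arithmetic lands near 29,000×; the small gap to the capsule's 29,063 figure presumably comes from slightly different token counts than the rounded 24.6M:

```python
tokens_in = 24_600_000   # raw log tokens, from the figure above
tokens_out = 847         # capsule tokens

ratio = tokens_in / tokens_out
summary = f"{ratio:,.0f}x cheaper per incident"  # "29,044x cheaper per incident"
```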