Codag

The questions we get most.

Does Codag replace my LLM?

No. Codag sits in front of your LLM. Send raw logs to us, get a structured IncidentCapsule back, then pass that capsule to whatever agent you're already using: Claude, GPT, Gemini, a local Ollama model, anything.
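To make the hand-off concrete, here is a minimal sketch of that flow in Python. The field names on the capsule and the prompt wording are illustrative assumptions, not Codag's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical shape of an IncidentCapsule; the real schema may differ.
@dataclass
class IncidentCapsule:
    service: str
    severity: str
    template: str                       # de-duplicated log template
    sample_lines: list = field(default_factory=list)

def capsule_to_prompt(capsule: IncidentCapsule) -> str:
    """Fold the structured capsule into a plain-text prompt for any agent."""
    samples = "\n".join(capsule.sample_lines)
    return (
        f"Incident in {capsule.service} (severity: {capsule.severity}).\n"
        f"Log template: {capsule.template}\n"
        f"Sample lines:\n{samples}\n"
        "Suggest a likely root cause."
    )

capsule = IncidentCapsule(
    service="checkout",
    severity="high",
    template="payment gateway timeout after <*> ms",
    sample_lines=["payment gateway timeout after 5012 ms"],
)
prompt = capsule_to_prompt(capsule)
```

Because the capsule is plain structured data, the same prompt-building step works unchanged whether the agent behind it is Claude, GPT, Gemini, or a local model.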

Will my logs be used to train models?

No. Submitted log payloads are processed in memory and discarded after the response, apart from a short-lived idempotency cache. We never train or fine-tune on your data, and we never share it. See the privacy policy.
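The pattern described above, keep only a short-lived response cache keyed by request id, can be sketched in a few lines. This is an illustration of the concept, not Codag's implementation; the class name and TTL are made up:

```python
import time

class IdempotencyCache:
    """In-memory cache keyed by request id; entries expire after ttl seconds.

    Illustrative only: retries within the TTL get the cached response,
    and after expiry nothing about the request remains in memory.
    """

    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._entries = {}  # request_id -> (expires_at, response)

    def get(self, request_id: str):
        entry = self._entries.get(request_id)
        if entry is None:
            return None
        expires_at, response = entry
        if time.monotonic() > expires_at:
            del self._entries[request_id]  # expired: response is gone
            return None
        return response

    def put(self, request_id: str, response):
        self._entries[request_id] = (time.monotonic() + self.ttl, response)

cache = IdempotencyCache(ttl=60.0)
cache.put("req-123", {"status": "ok"})
```

A retry of `req-123` within the TTL returns the cached response instead of reprocessing the payload; an unknown or expired id returns nothing.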

How fast is the warm path?

Under 1 second per incident on cache hits, which is roughly 95% of production traffic once your services have warmed up.

Which log formats are supported?

Anything line-oriented. JSON logs, syslog, framework-specific formats (Hadoop, Spark, HDFS, K8s), unstructured app logs. The preprocessing stage handles parsing and PII redaction before templating.
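As a rough sketch of what a redact-then-template pass looks like, here is a minimal Python version. The regexes and the `<*>` wildcard convention are illustrative assumptions, not Codag's actual preprocessing rules:

```python
import re

# Illustrative preprocessing pass: redact common PII patterns, then
# collapse variable tokens so identical events map to one template.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "<IP>"),
]

def redact(line: str) -> str:
    """Replace PII spans with placeholder tokens before any storage."""
    for pattern, token in PII_PATTERNS:
        line = pattern.sub(token, line)
    return line

def template(line: str) -> str:
    """Replace remaining numeric values with a wildcard token."""
    return re.sub(r"\b\d+\b", "<*>", line)

line = "login failed for alice@example.com from 10.0.0.7 after 3 attempts"
print(template(redact(line)))
# → login failed for <EMAIL> from <IP> after <*> attempts
```

Because the pass is line-oriented, the same two steps apply whether the input line came from JSON logs, syslog, or an unstructured application log.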

How do I get started?

Install the CLI, run codag init, and your agent gets the MCP server wired up automatically. Two commands. See the product page for the full setup.