Support
Pick the channel that fits. We're a small team and respond fast.
Email
Best for billing, account issues, or anything sensitive.
GitHub Issues
Bug reports, feature requests, eval reproductions. Public.
Twitter / X
Quick questions and product updates.
Book a demo
See codag wired into a real log pipeline.
FAQ
Does codag replace my LLM?
No. Codag sits in front of your LLM. Send raw logs to us, get a structured IncidentCapsule back, then pass that capsule to whatever agent you're already using: Claude, GPT, Gemini, local Ollama, anything. The sketch below shows the shape of that flow.
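A minimal Python sketch of that flow, for illustration only: the codag endpoint, payload fields, and capsule shape are assumptions, not the documented API. The agent half uses a local Ollama model through its OpenAI-compatible endpoint, but any of the models above slots in the same way.

```python
import os
import requests

# 1. Send raw logs to codag and get back a structured IncidentCapsule.
#    NOTE: endpoint URL and payload shape are hypothetical -- check the
#    API docs for the real ones.
resp = requests.post(
    "https://api.codag.example/v1/incidents",
    headers={"Authorization": f"Bearer {os.environ['CODAG_API_KEY']}"},
    json={"logs": open("app.log").read()},
)
capsule = resp.json()  # the structured IncidentCapsule

# 2. Hand the capsule to whatever agent you already run -- here, a local
#    Ollama model via its OpenAI-compatible endpoint.
reply = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "llama3",
        "messages": [
            {"role": "user", "content": f"Diagnose this incident:\n{capsule}"},
        ],
    },
).json()
print(reply["choices"][0]["message"]["content"])
```

The point is the division of labor: codag turns noisy raw logs into one structured object, and your agent only ever sees that object.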
Do you store my logs?
No. Submitted log payloads are processed in memory and discarded after the response, apart from a short-lived idempotency cache. We don't train on, fine-tune with, or share your data. See the privacy policy.
How fast is it?
Under 1 second per incident on cache hits, which is roughly 95% of production traffic once your services have warmed up.
What log formats can I send?
Anything line-oriented: JSON logs, syslog, framework-specific formats (Hadoop, Spark, HDFS, K8s), and unstructured app logs. The preprocessing stage handles parsing and PII redaction before templating; the sketch below shows what those stages look like.
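To make "parse, redact PII, then template" concrete, here is a toy line-oriented preprocessor in Python. The regexes, placeholder tokens, and JSON handling are invented for the example; they illustrate the stages, not codag's actual rules.

```python
import json
import re

# Illustrative patterns only -- real PII redaction is broader than this.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")
NUMBER = re.compile(r"\b\d+\b")

def preprocess(line: str) -> str:
    # Parse: accept JSON logs, fall back to the raw line for syslog or
    # unstructured app logs.
    try:
        parsed = json.loads(line)
        message = parsed.get("message", line) if isinstance(parsed, dict) else line
    except json.JSONDecodeError:
        message = line
    # Redact PII before anything leaves the process.
    message = EMAIL.sub("<EMAIL>", message)
    message = IPV4.sub("<IP>", message)
    # Template: collapse variable tokens so identical events share one key.
    return NUMBER.sub("<NUM>", message)

print(preprocess('{"message": "user bob@example.com failed login from 10.0.0.7"}'))
# -> user <EMAIL> failed login from <IP>
```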
How do I set it up?
Install the CLI, run codag init, and your agent gets the MCP server wired up automatically. Two commands. See the product page for the full setup.