# Hydra Brand Document

**Confidential | April 2026**

---

## 1. Point of View

AI coding tools made every engineer on your team faster. They did not make anyone smarter about what those engineers are shipping.

51% of daily AI tool users report more code quality problems than before they adopted the tools. 53% report more vulnerabilities. The output velocity went up. The quality floor went down. The gap between how fast code is being written and how carefully it is being reviewed widens every month.

Every tool in the market addresses one side of this. GitHub Copilot, CodeRabbit, Qodo, Augment -- they all do the same thing: read your PR, post a comment, stop. The human still triages the comment, decides whether it is real, writes the fix, opens the PR, creates the ticket, and closes the ticket. The tool handed the problem back.

Nobody has closed the loop.

The technology to close it has existed for less than 18 months. Multi-agent autonomous execution at production confidence on well-defined problem categories crossed a reliability threshold in late 2024. The Cloudflare internal system runs seven agents, completes a full review in 3 minutes 39 seconds, and costs $1.68. The capability exists. The product that uses it end-to-end does not.

Hydra closes the loop. It reads your entire codebase, not just what changed this week. It builds a profile of your application so it knows what matters most for this specific repo. It finds the issue, creates the ticket, writes the fix using your own codebase's conventions, verifies nothing broke, opens the PR, and closes the ticket. No human in the critical path.

The category is autonomous code governance. No one owns it yet.

---

## 2. Positioning Statement

For engineering teams of 20 to 200 developers using GitHub and AI coding agents, who are shipping more code than they can safely review, Hydra is the autonomous code governance agent that reads your entire codebase, finds issues across three layers of analysis, and closes them -- from detection to merged fix to closed ticket -- without a human in the critical path. Unlike every code review tool on the market, Hydra does not stop at the comment. It does the work.

---

## 3. One-liner

Hydra reads your codebase, finds the problems, and closes the tickets. No human in the critical path.

---

## 4. Tagline

Autonomous code governance.

---

## 5. Elevator Pitch

Your team is writing 3x more code than they were 18 months ago. Your review capacity is the same. Every tool in the market finds the problem and hands it back to you as a comment. Hydra finds the problem, writes the fix, opens the PR, and closes the Linear ticket. It also builds a full profile of your codebase so every developer's AI coding session gets smarter the moment they open it. One install. A few minutes later: codebase issues discovered, one fixed autonomously, one Linear ticket closed. No other tool delivers all three.

---

## 6. Messaging Pillars

### Pillar 1: The Loop Closes

Every other tool stops at detection. Hydra executes.

**Proof points:**
- Finding identified, Linear ticket created, fix written, PR opened, ticket closed on merge -- no human in the critical path
- Baseline tests established before every change; post-fix audit run before every PR opens
- Focused on bugs and security issues first -- where functionality does not change, and AI validation is straightforward

**Example copy:**
> "Connect a repo. Hydra finds the issues. The Linear tickets close themselves."

---

### Pillar 2: The Codebase Knows Itself

Hydra reads everything, not just what changed this week.

**Proof points:**
- Discovery builds an application profile in ~16 minutes: architecture, conventions, how-to guides, category weighting (customer-facing app vs. internal tool vs. external API)
- Three-step audit: deterministic scanner patterns, 6 parallel agents across ~40 dimensions, Opus meta-review to catch what structured analysis missed
- Self-improvement loop: 78 scanner patterns generated for Hydra's own tenant; 20 meet the global quality threshold. Every audit makes the next one better.

**Example copy:**
> "When Discovery runs, it writes an AI profile of your repo. Every audit after that starts from a codebase that already knows itself."

---

### Pillar 3: The Team Gets Smarter Passively

One install. The whole team benefits. No one has to do anything.

**Proof points:**
- Hydra patches your repo's `CLAUDE.md` to point to its documentation. Every developer using Claude Code on that repo automatically gets Hydra's architecture context, conventions, and how-to guides injected into every AI session -- without installing anything.
- Freeform Notes inject custom context (constraints, in-progress decisions, known issues) directly into every LLM prompt on that repo
- Related Repos: link `compliance-frontend` to `compliance-backend` and a type change in one surfaces the mismatch in the other

**Example copy:**
> "One developer installs Hydra. The .hydra directory merges. Every teammate's next Claude Code session just got smarter."

---

## 7. Product: Four Layers

### Find: Discovery + Audit

Hydra reads the entire codebase first. Discovery runs once per repository, costs a few dollars, and produces:

- Comprehensive architecture outline of the codebase
- Conventions and definition-of-done for that specific project
- Patterns for how features are developed in this codebase
- How-to guides unique to the project (e.g., "how to write Playwright tests in this repo")
- An application profile: is this customer-facing or internal? What categories matter most?

The application profile changes how Hydra weights analysis. A customer-facing app gets accessibility, PII, and security weighted higher. An internal tool weights performance over accessibility. An external API prioritizes security and input validation. No competitor does this.
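
The weighting mechanics are internal to Hydra and not spelled out here; as a minimal illustrative sketch in Python (profile names, categories, and multipliers below are all hypothetical), profile-driven analysis amounts to scaling each finding's severity by how much its category matters for this kind of application:

```python
# Illustrative only: profiles, categories, and multipliers are hypothetical,
# not Hydra's actual configuration.
PROFILE_WEIGHTS = {
    "customer_facing": {"accessibility": 1.5, "pii": 1.5, "security": 1.4},
    "internal_tool":   {"performance": 1.3, "accessibility": 0.6},
    "external_api":    {"security": 1.6, "input_validation": 1.5},
}

def weighted_severity(category: str, base_severity: float, profile: str) -> float:
    """Scale a finding's severity by how much this category matters for the app."""
    return base_severity * PROFILE_WEIGHTS.get(profile, {}).get(category, 1.0)
```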

**The .hydra directory:** After Discovery, Hydra opens a PR that creates a `.hydra/` directory in the repository containing the architecture overview, `hydra.md` (the AI entry point), how-to guides, and documented conventions.

**The three-step audit:**

1. **Deterministic Scanner Patterns** -- grep patterns and a rules engine. No LLM cost and no false positives from model creativity. Runs first, fast and cheap. Replaces traditional static analysis (Semgrep, SonarQube linting rules) and serves as the foundation the LLM layers build on.

2. **Tool-Guided LLM Analysis (6 Parallel Agents, ~40 Dimensions)** -- Six specialized agents run in parallel, each covering a category: security, UX (accessibility, internationalization), performance, code quality, architecture, and others. Each agent has structured checklists plus open-ended problem finding. Application profile weighting applies here.

3. **Opus Meta-Review** -- A final pass using Claude Opus with a deliberately open prompt: find what steps 1 and 2 missed. No checklist. No category constraint. This catches non-obvious issues -- the architectural decision that looks fine in isolation but is problematic in context.
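
The orchestration code is not public; the sketch below is a hypothetical reconstruction of the three-step shape, with every helper stubbed out. Only the structure -- deterministic pass first, six agents in parallel, open-ended meta-review last -- comes from this document:

```python
# Hypothetical sketch of the three-step audit. All helpers are stubs;
# the real scanner, agents, and meta-review are Hydra internals.
import asyncio

AGENT_CATEGORIES = ["security", "ux", "performance",
                    "code_quality", "architecture", "general"]  # illustrative names

def run_scanner_patterns(repo: str) -> list[dict]:
    """Step 1: deterministic grep/rules pass. No LLM cost; runs first."""
    return []  # stub

async def run_agent(repo: str, category: str, profile: str) -> list[dict]:
    """Step 2 worker: one specialized agent -- structured checklist plus open-ended search."""
    return []  # stub

async def opus_meta_review(repo: str, findings: list[dict]) -> list[dict]:
    """Step 3: deliberately open prompt, no checklist -- find what steps 1 and 2 missed."""
    return []  # stub

async def run_audit(repo: str, profile: str) -> list[dict]:
    findings = run_scanner_patterns(repo)
    for batch in await asyncio.gather(
            *(run_agent(repo, c, profile) for c in AGENT_CATEGORIES)):
        findings.extend(batch)
    findings.extend(await opus_meta_review(repo, findings))
    return findings  # usage: asyncio.run(run_audit("my-repo", "customer_facing"))
```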

---

### Fix: Autonomous Execution

When Fix is triggered, Hydra:

1. Creates a Linear ticket instantly (full context, severity, effort estimate, risk assessment)
2. Establishes baseline test coverage before any change
3. Implements the fix using the codebase's own conventions
4. Verifies the change against baseline tests
5. Runs the fix through full audit quality metrics
6. Opens a PR linked to the Linear ticket
7. Closes the ticket on merge

**Where autonomous execution is currently focused:** Bugs and security issues -- where functionality does not change. This makes AI validation straightforward. Feature additions require human validation; bug fixes do not.

**The guardrails:** Baseline test coverage before every change. Post-fix audit before every PR opens. Codebase context informs the fix approach. These are engineering controls, not marketing language.
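
The pipeline's code is not shown anywhere in this document; the sketch below is a hypothetical reconstruction of the control flow with every helper stubbed. What is grounded in the text is the ordering: ticket first, baseline before any change, audit before the PR opens.

```python
# Hypothetical control-flow sketch of the fix pipeline. Every helper is a stub.
def create_linear_ticket(finding): return {"id": "HYD-1", "finding": finding}
def establish_baseline_tests(finding): return None        # stub: record a passing baseline
def implement_fix(finding): return {"diff": "..."}        # stub: fix in repo conventions
def verify_against_baseline(change): return True          # stub: is the baseline still green?
def post_fix_audit(change): return True                   # stub: full audit quality metrics
def open_pr(change, ticket): print(f"PR opened, linked to {ticket['id']}")

def run_fix(finding):
    ticket = create_linear_ticket(finding)       # 1. ticket first, with full context
    establish_baseline_tests(finding)            # 2. guardrail: tests before any change
    change = implement_fix(finding)              # 3. uses the codebase's own conventions
    if not verify_against_baseline(change):      # 4. verify nothing broke
        return "abandoned: baseline regression"
    if not post_fix_audit(change):               # 5. full audit before the PR opens
        return "abandoned: failed post-fix audit"
    open_pr(change, ticket)                      # 6. PR linked to the Linear ticket
    return "ticket closes on merge"              # 7. webhook-driven, not shown here
```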

---

### Improve: Code Quality + Kaizen

**Improve** targets the code quality category: dead code removal, naming conventions, structural refactors, documentation gaps, dependency cleanup. Not bug fixes. Not security patches. The things that work but make the codebase slower to operate in.

Each improvement is scoped, estimated for effort, and executed using Hydra's Discovery context. Every improvement produces:
- A PR with the change, linked to a Linear ticket
- A change plan in the PR description
- A manual testing plan for the reviewer
- A post-improvement audit to verify no regressions

**Kaizen:** Autonomous improvement loops with a defined budget and focus area. Set both and let it run. Kaizen uses Git worktrees so parallel improvements to non-overlapping sections of the codebase cannot conflict.

Exit conditions:
- Budget reached
- 3 consecutive marginal iterations (diminishing returns)
- Nothing new to improve in the focus area

Real numbers: 7 loops, $2.75 total. A full autonomous improvement cycle costs less than a coffee.
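
A minimal sketch of the exit logic those conditions imply. The thresholds, the value metric, and both helpers are assumptions; only the three exit conditions and the budget framing come from this section:

```python
# Hypothetical sketch of Kaizen's exit conditions. Helpers and thresholds are stubs.
MARGINAL = 0.1  # assumed "worth it" threshold for one iteration

def next_improvement(focus_area):      # stub: pull the next scoped improvement
    return None

def execute_in_worktree(improvement):  # stub: run in an isolated Git worktree
    return 0.40, 0.5                   # (cost in USD, estimated value)

def kaizen(focus_area: str, budget_usd: float, marginal_limit: int = 3) -> str:
    spent, marginal_streak = 0.0, 0
    while True:
        improvement = next_improvement(focus_area)
        if improvement is None:
            return "exit: nothing left to improve in the focus area"
        cost, value = execute_in_worktree(improvement)
        spent += cost
        if spent >= budget_usd:
            return "exit: budget reached"
        marginal_streak = marginal_streak + 1 if value < MARGINAL else 0
        if marginal_streak >= marginal_limit:
            return "exit: diminishing returns (3 consecutive marginal iterations)"
```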

---

### Govern: PR Review + Attribution + Self-Improvement

**PR Review:** Two modes -- scanner patterns only (fast, deterministic) or full audit (all three steps). Maintains a living PR comment that updates as the code changes. Integrates with GitHub Checks: a green check or red X that can block merges until Hydra passes.

**Author Attribution:** Findings linked to the Git author who wrote the code. Findings per 1,000 lines of code by author. Hidden by default. Enabled by engineering managers for refactor prioritization and team planning. Not recommended as a primary performance signal for individual engineers.

**Self-Improvement Loop:** During LLM analysis, when an agent finds an issue that could be detected deterministically, it generates a suggested scanner pattern. Patterns that meet quality thresholds are added to that tenant's scanner library. Tenants can opt into global contributions -- patterns that generalize across codebases improve Hydra for all users. 78 scanner patterns generated from Hydra's own codebase audits. 20 meet the global quality threshold. False-positive patterns are automatically deactivated.
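
A sketch of the promotion gate that paragraph describes. The precision metric, thresholds, and field names are hypothetical; the tenant library, the opt-in global contribution, and the automatic deactivation of false-positive patterns come from the text:

```python
# Hypothetical sketch of scanner-pattern promotion. Metric and thresholds assumed.
from dataclasses import dataclass

@dataclass
class ScannerPattern:
    regex: str
    true_positives: int = 0
    false_positives: int = 0

    @property
    def precision(self) -> float:
        total = self.true_positives + self.false_positives
        return self.true_positives / total if total else 0.0

def triage(pattern: ScannerPattern, tenant_lib: list, global_lib: list,
           opt_in_global: bool = False, global_threshold: float = 0.95) -> str:
    if pattern.precision < 0.5:          # false-positive patterns are deactivated
        return "deactivated"
    tenant_lib.append(pattern)           # good enough for this tenant's library
    if opt_in_global and pattern.precision >= global_threshold:
        global_lib.append(pattern)       # generalizes: improves Hydra for all users
    return "active"
```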

---

## 8. Context Injection Features

**Freeform Notes:** User-defined hints per repository, injected at the prompt level into every LLM prompt Hydra runs on that repo. If you know something Discovery would not surface -- a team constraint, a decision in progress, a known issue -- write it here. Immediately active. No re-run required.

Example uses:
- "We are mid-migration from PostgreSQL to CockroachDB. Do not suggest PostgreSQL-specific patterns."
- "The auth module is being replaced in Q3. Flag issues but do not fix."
- "This repo targets WCAG 2.2 AA. Weight accessibility findings accordingly."

**Related Repos:** Link repositories for cross-repo context. When a repo has a related repo configured, Hydra's analysis and fix execution consider patterns, types, and conventions from both repos. A type change in the backend surfaces the mismatch in the frontend audit.

---

## 9. Target Audience and ICP

### Primary ICP

Engineering teams of **20-200 developers**:
- SaaS, tech, or fintech company
- Using GitHub
- Already using at least one AI coding agent (Cursor, Claude Code, GitHub Copilot)
- Using Linear for issue tracking
- Python, TypeScript, Go, or Rust primary language

**Why this profile specifically:** They feel the AI Velocity Paradox daily. Linear is installed, so the ticket lifecycle loop is immediately visible value. Size is right: real code review problem, fast procurement decisions. They use Claude Code, which means CLAUDE.md injection works immediately.

### Secondary ICP

VP Engineering or CTO at **500+ person engineering orgs**:
- Already bought Snyk or SonarQube (budget exists and is proven)
- Documented technical debt problem
- Compliance requirements creating governance pressure

### Who to avoid in year one

- Companies requiring on-prem deployment (pre-GA infrastructure work required first)
- Government / FedRAMP requirements
- Companies not using Linear (Jira integration is on the roadmap)
- Companies not using GitHub (architecture is GitHub-native today)

---

## 10. Buyer Personas

### The Engineer

**Their problem:** You are getting flagged for things a machine should have caught. You are also being asked to review 20 PRs a week when you used to review 5. The review queue never empties. And when an AI tool does flag something, it hands it back to you as another item on the list.

**What they care about:** Real bug detection. Low false positive rate. Not being embarrassed by a bad autonomous fix. The three-step analysis answers the accuracy question; the guardrails (baseline tests, post-fix audit) answer the trust question.

**What wins them:** Benchmark score. First fix in under 10 minutes. No noise. When it finds something, it is a real problem.

**What they say internally:** "It actually fixes things. And when it flags something, it's real."

---

### Engineering Manager

**Their problem:** Your senior engineers are spending half their week in the PR review queue. Technical debt is accumulating faster than it is being resolved. When a governance tool surfaces 200 findings, someone on your team still has to triage all 200. The tool did not save time -- it redistributed the work.

**What they care about:** Metrics they can report upward (cycle time, issues per sprint, debt velocity). Whether it creates new management overhead. Whether the team likes it.

**What wins them:** Seeing closed Linear tickets instead of open comments. Not having to manage the tool. The self-improvement loop means Hydra gets better without their involvement.

**The proof point that closes them:** Kaizen runs autonomous improvement loops until there is nothing left to improve. 7 loops. $2.75. No engineer hours.

**What they say internally:** "The debt backlog is shrinking and no one is doing it manually."

---

### VP Engineering / CTO

**Their problem:** You have visibility into velocity but not quality. Your team is generating 3x more code than 18 months ago. Your review process has not scaled. You have no governance layer across all your repos, and every tool you evaluate requires your team to change how they work.

**What they care about:** Visibility across the entire codebase, cost per developer vs. the alternative, the self-improvement loop (what makes Hydra a long-term platform, not a point tool), integration with existing GitHub and Linear workflows.

**What wins them:** "Your team is generating 3x more code with AI tools. Your review capacity has not scaled. Hydra closes that gap -- and it gets better the more repos you run it on."

**The proof point that closes them:** Hydra's own tenant has generated 78 scanner patterns from its own codebase audits. 20 meet the global quality threshold. The system builds its own tools.

**What they say internally:** "It gets better on its own. We don't have to manage it."

---

### CISO

**Their problem:** 53% of daily AI tool users report more vulnerabilities than before they adopted the tools. Your security tooling finds CVEs in your dependency tree. It does not find SQL injection vulnerabilities in your actual code logic. You have a Snyk-clean codebase that may have critical code-level security bugs. And when a tool does surface something, you still have to track whether anyone fixed it.

**What they care about:** Detection of actual code-level security bugs (not just CVEs in dependencies). False positive rate. Compliance reporting. No-training data policy. Model provider. SOC 2 certification.

**What wins them:** "Hydra finds SQL injection vulnerabilities in your actual code. Snyk finds CVEs in your packages. These are different problems. Hydra fixes what it finds and creates the audit trail automatically."

**What they say internally:** "It finds what Snyk can't find. And it closes the ticket."

---

## 11. Buyer Stage Messaging

### How the buying decision actually happens

**Step 1 -- Discovery:** A developer installs Hydra on a shared repository. Their teammates start benefiting from Hydra's CLAUDE.md injection without installing anything. A PQL signal fires when 3+ users are active from the same company domain.

**Step 2 -- Team adoption:** The engineering manager notices closed Linear tickets and improved cycle time. They become an internal champion before any sales conversation happens. By the time sales reaches out, the champion already has numbers to justify the purchase.

**Step 3 -- Leadership signs:** VP Engineering or CTO gets a Hydra pitch from their own team. Enterprise contract follows. Contract size: $30,000-$100,000+ annually. PQLs convert at 5-6x the rate of MQLs.

---

## 12. Pricing

| Tier | Price | What you get | Target buyer |
|---|---|---|---|
| **Free** | $0 | Full discovery. Full audit. 5 fixes/month. 1 doc run/month. 5 Linear cycles/month. CLAUDE.md injection active. No credit card required. | Individual developer, early adopter, OSS projects |
| **Team** | $20/dev/month | Unlimited fixes. Unlimited doc runs. Unlimited Linear cycles. Up to 5 repos. Custom agent rules. | Engineering teams of 5-50 developers |
| **Business** | $40/dev/month | Everything in Team. Unlimited repos. Priority fix queue. Audit logs. Usage reporting. Jira integration. | Growing orgs 50-500 developers |
| **Enterprise** | Custom ($100K+ annually) | SSO. RBAC. SAML. VPC deployment. Compliance reporting. SLAs. Dedicated support. Custom model config. Global contribution controls. | VP Engineering / CTO at 500+ dev orgs |

**Competitive position:** Team at $20/dev/month is below Augment ($60-$200/user/month), below Sourcegraph Cody ($59/user/month), and competitive with CodeRabbit and Graphite. Priced to build the installed base, not to maximize early revenue.

---

## 13. Proof Point Bank

### Product claims

- Discovery runs in ~16 minutes and costs a few dollars per repo
- Three-step analysis: deterministic scanner patterns + 6 parallel agents across ~40 dimensions + Opus meta-review
- 78 scanner patterns generated from Hydra's own tenant audits
- 20 patterns meet the global quality threshold
- Benchmark at version 18 after extensive iteration; designed to be consistent and deterministic
- Kaizen: 7 loops, $2.75 total cost
- Fix workflow: baseline tests before every change, post-fix audit before every PR
- CLAUDE.md injection: every developer using Claude Code gets Hydra's context passively, no install required
- Target: first fix in under 10 minutes from install

### Market claims

- 51% of GitHub commits are now AI-generated or AI-assisted -- Harness Research Report, March 2026
- 51% of daily AI tool users report more code quality problems -- Harness Research Report, March 2026
- 53% of daily AI tool users report more security vulnerabilities -- Harness Research Report, March 2026
- Cloudflare internal review system: 3 min 39 sec median review time, $1.68 per full 7-agent review -- Cloudflare engineering blog
- PLG free-to-paid conversion benchmark: 8-15% in 90 days -- OpenView Partners / Profitwell
- PQL vs. MQL conversion rate: 5-6x higher -- Paddle research
- Augment: 1.03 bugs per PR vs. 0.54 for human reviewers -- Augment production data

### Competitor comparisons

- GitHub Copilot code review (CCR) always leaves a "Comment" review. It never approves or blocks a PR. It cannot fix anything, create a ticket, or document anything. -- GitHub documentation
- Augment Code F1 score: 53.8% on Code Review Bench (Martian, third party). No fix execution -- hands off to IDE.
- Qodo F1 score: 60.1% on proprietary benchmark. No discovery, no fix execution, no self-improvement loop.
- CodeRabbit: 2 million repos, 13 million PRs reviewed. Reviews diffs only. No fix execution. -- CodeRabbit Series B, September 2025
- Semgrep autofix creates a draft PR. Human reviews and merges.
- SonarQube: 400,000+ organizations. Advisory by design. Cannot find security bugs in actual code logic.
- Sourcegraph Cody: exited PLG July 2025. Enterprise-only at $59/user/month. The PLG market they abandoned is open.
- Amazon CodeGuru: formally retired late 2025. Replaced by Amazon Q Developer, a broad AI assistant, not a specialized governance product.

---

## 14. Capability Comparison

| Capability | Qodo | Augment | CodeRabbit | GitHub CCR | Semgrep | SonarQube | **Hydra** |
|---|---|---|---|---|---|---|---|
| Codebase discovery + documentation | No | No | No | No | No | No | **Yes** |
| Application profiling + context weighting | No | No | No | No | No | No | **Yes** |
| Deterministic scanner patterns | Partial | No | No | No | Yes | Yes | **Yes** |
| Multi-agent parallel LLM analysis | 15+ agents | Context engine | No | No | No | No | **6 agents / 40 areas** |
| Opus meta-review pass | No | No | No | No | No | No | **Yes** |
| CLAUDE.md injection (passive distribution) | No | No | No | No | No | No | **Yes** |
| Self-improvement + global pattern library | No | No | No | No | No | No | **Yes** |
| Autonomous fix execution (default) | No | No | No | No | Draft PR only | No | **Yes** |
| Linear ticket lifecycle closure | No | No | No | No | No | No | **Yes** |
| Freeform Notes (per-repo context injection) | No | No | No | No | No | No | **Yes** |
| Kaizen (continuous autonomous improvement loops) | No | No | No | No | No | No | **Yes** |
| Author attribution (findings per 1K lines) | No | No | No | No | No | No | **Yes** |
| PLG distribution (free tier) | No | No | Yes | Bundled | Yes | Community | **Yes** |

---

## 15. Objection Handling

**"We already have SonarQube."**
SonarQube finds issues and surfaces them. Someone on your team still triages every finding, decides what to fix, writes the fix, opens the PR, and closes the ticket. Hydra finds issues, creates the ticket, writes the fix using your codebase's own conventions, runs it through a full audit to verify it didn't break anything, and closes the ticket. SonarQube is a reporting tool. Hydra is the execution layer SonarQube will never be. And Hydra's three-step analysis starts with the same deterministic scanning SonarQube does, then goes further.

**"We already have GitHub Copilot Enterprise."**
Copilot Enterprise code review always leaves a "Comment" review -- it never approves or blocks a PR. It cannot fix anything, create a ticket, or document anything. It is a single-model system with a 4,000-character instruction limit. Hydra runs a three-step analysis (deterministic scanner, six parallel agents across ~40 dimensions, Opus meta-review), fixes what it finds, and closes the Linear ticket. Copilot is the conversation. Hydra is the execution. They are not the same product.

**"We're not comfortable with autonomous code execution."**
Every team starts there. Configure Hydra to fix only bugs and documentation autonomously for the first 30 days, with a review gate on security fixes. After 30 days, look at what ran autonomously: Hydra established baseline test coverage before every change and ran a post-fix audit after. If you trust the output, expand the categories. Most teams expand to a much broader autonomous scope within 60 days because the guardrails work. You stay in control of what autonomous means.

**"We need SOC 2 before we can sign."**
SOC 2 Type I is on the pre-GA roadmap as a critical priority. A pilot scoped to non-production repositories is typically something a security team can approve without a full certificate in hand. We can provide our security questionnaire and architecture documentation now. Many enterprise pilots run on a BAA/DPA bridge while SOC 2 certification completes.

**"Can it run on-prem or in our VPC?"**
VPC deployment with proper tenant isolation is on the pre-GA infrastructure roadmap. Current infrastructure is moving to Fargate with a separate VPC as the first security milestone before public launch. Share the confirmed timeline based on where that work stands.

---

## 16. Homepage Hero

**Variant A**
Headline: Your codebase fixes itself.
Subhead: Hydra finds bugs across your entire codebase, executes the fix, and closes the Linear ticket. No human in the critical path.
CTA: Connect a repo

**Variant B**
Headline: Find it. Fix it. Close the ticket.
Subhead: Connect a repo. Hydra reads the whole thing, runs a three-step analysis, and resolves issues autonomously -- from detection to merged PR.
CTA: Get started free

**Variant C**
Headline: The code review that actually finishes.
Subhead: Every other tool posts a comment and stops. Hydra writes the fix, opens the PR, and closes the ticket.
CTA: Connect a repo

---

## 17. GitHub App Marketplace Listing

**App name line:**
Hydra -- autonomous code governance agent

**Description:**
Connect a repo. Hydra reads the entire codebase, builds an application profile, and runs a three-step analysis: deterministic scanner patterns, six parallel agents across ~40 dimensions, and an Opus meta-review pass. When it finds a bug or security issue, it creates a Linear ticket, writes the fix using your repo's own conventions, verifies nothing broke, and opens the PR. The ticket closes on merge. Hydra also publishes a .hydra/ directory and patches your CLAUDE.md -- every developer on the team using Claude Code gets the architecture context automatically from their next session.

**Three feature bullets:**
- Three-step analysis catches what diff-only review misses: deterministic patterns, 6 parallel agents, Opus meta-review
- Autonomous fix execution: baseline tests before every change, post-fix audit before every PR opens
- CLAUDE.md injection: one install improves every teammate's AI coding session, passively

---

## 18. Boilerplates

### 150-word version

Hydra is the autonomous code governance agent that closes the loop every other tool leaves open.

Connect a repo. Hydra reads the entire codebase -- not just what changed this week -- and builds an application profile that weights analysis by what matters for your specific product. It runs a three-step analysis: deterministic scanner patterns, six parallel agents across forty dimensions, and an Opus meta-review pass to catch what structured analysis missed.

When it finds a bug or security issue, it creates a Linear ticket, writes the fix using your repo's own conventions, establishes baseline tests, verifies nothing broke, and opens the PR. The ticket closes on merge.

One developer installs Hydra. Every teammate using Claude Code automatically gets the architecture context and documentation in every AI coding session. No install required. No behavior change.

Autonomous code governance. The loop closes.

---

### 100-word version

Hydra reads your entire codebase, runs a three-step analysis (deterministic scanner, six parallel agents across forty dimensions, Opus meta-review), and when it finds a bug or security issue, it creates the Linear ticket, writes the fix using your repo's conventions, verifies nothing broke, and opens the PR. The ticket closes on merge. No human in the critical path.

One developer installs Hydra. Every teammate using Claude Code gets the architecture context automatically in every AI session. No install. No behavior change.

The self-improvement loop generates scanner patterns from every audit. Kaizen runs improvement loops until there's nothing left. $2.75.

---

### 50-word version

Connect a repo. Hydra reads the whole thing, runs a three-step analysis, and resolves issues autonomously -- from finding to merged PR to closed Linear ticket. One install improves every teammate's AI coding session through automatic CLAUDE.md injection. The self-improvement loop gets better with every audit.

Autonomous code governance.

---

### 25-word version

Hydra finds bugs in your entire codebase, writes the fix, opens the PR, and closes the Linear ticket. No human required.

---

## 19. Voice Reference Sheet

### Write like this

1. "Connect a repo. Run Discovery. Every PR after that gets reviewed against the profile it built."
2. "Seven loops. $2.75. No engineer hours."
3. "Hydra finds SQL injection vulnerabilities in your actual code. Snyk finds CVEs in your packages. These are different problems."
4. "Baseline tests before every change. Post-fix audit before every PR opens. Hydra verifies its own work."
5. "The self-improvement loop generated 78 scanner patterns from Hydra's own codebase. 20 meet the global quality threshold. Every audit makes the next one better."
6. "Every other tool stops at detection. Hydra executes."
7. "Your team is shipping 3x more code than 18 months ago. Your review capacity is the same."

### Do not write like this

| Wrong | Right |
|---|---|
| "Seamlessly integrate with your existing workflows" | "Works with GitHub and Linear. No behavior change required." |
| "AI-powered code governance for modern teams" | "Autonomous code governance. Finds bugs. Closes tickets." |
| "Effortlessly improve your code quality" | "Kaizen runs improvement loops until there is nothing left. 7 loops. $2.75." |
| "We transform how teams think about technical debt" | "Debt backlog shrinks. No one does it manually." |
| "Next-gen autonomous code review platform" | "Three-step analysis. Autonomous fix execution. Closed Linear ticket as output." |
| "Leverages AI to surface actionable insights" | "Finds the issue. Creates the ticket. Writes the fix. Opens the PR." |

### Three questions before publishing any line

1. Would a senior engineer say this in Slack at 4pm?
2. Is there a number, a mechanism, or a specific moment -- or is it a claim that requires trust?
3. Does it show what Hydra does, or does it describe what Hydra is?

---

## 20. FAQ

**What is the difference between Hydra and CodeRabbit?**
CodeRabbit reads your diff and posts comments. You triage, write the fix, open the PR, and close the ticket. Hydra starts where CodeRabbit stops: it finds issues in your entire codebase (not just the diff), executes the fix, opens the PR, and closes the Linear ticket. Hydra also builds a full codebase profile that improves every AI coding session on the team.

**What is the difference between Hydra and Snyk?**
Snyk checks installed packages for CVEs and licensing issues -- it looks at your dependency tree, not your code. Hydra finds security bugs in actual code logic: SQL injection vulnerabilities, auth pattern flaws, input validation gaps. A team can be Snyk-clean and have critical code-level security bugs. Hydra finds those.

**How does the CLAUDE.md injection work?**
When Discovery runs, Hydra opens a PR that creates a `.hydra/` directory and inserts a reference into the repository's `CLAUDE.md` file pointing to `hydra.md`. Every developer on the team using Claude Code automatically loads Hydra's architecture documentation, conventions, and how-to guides when they work on that repository. They do not install anything. They do not change their workflow.
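
The exact text Hydra inserts is not specified in this document. As a hypothetical sketch, the patch step could lean on Claude Code's `@path` import syntax for `CLAUDE.md`, which is what makes passive loading possible:

```python
# Hypothetical sketch of the CLAUDE.md patch. The inserted text is assumed;
# "@.hydra/hydra.md" uses Claude Code's file-import syntax for CLAUDE.md.
from pathlib import Path

HYDRA_POINTER = "\n## Hydra\nArchitecture, conventions, and how-to guides:\n@.hydra/hydra.md\n"

def patch_claude_md(repo_root: str) -> None:
    claude_md = Path(repo_root) / "CLAUDE.md"
    existing = claude_md.read_text() if claude_md.exists() else ""
    if "@.hydra/hydra.md" not in existing:   # idempotent: patch at most once
        claude_md.write_text(existing + HYDRA_POINTER)
```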

**Is autonomous fix execution safe?**
Hydra establishes baseline test coverage before every change, executes the fix, verifies against the baseline, then runs the fix through a full audit before the PR opens. The current focus is on bugs and security issues -- categories where functionality does not change, making AI validation straightforward. Teams configure which categories are autonomous vs. review-gated.

**Do I need to use Linear?**
Linear is required to unlock the full ticket lifecycle loop today. Jira integration is on the pre-GA roadmap and will unlock the majority of enterprise engineering teams. The CLAUDE.md injection, three-step analysis, and autonomous fix execution work independently of Linear.

**What does Discovery cost?**
Discovery costs a few dollars per repository and completes in about 16 minutes, running in the background. It runs once per repository, as the first action when you add a repo to Hydra.

**How does pricing work?**
Free tier includes full Discovery, full three-step audit, 5 autonomous fixes per month, and CLAUDE.md injection -- no credit card required. Team tier ($20/dev/month) removes all usage limits and supports up to 5 repos. Business tier ($40/dev/month) adds unlimited repos, audit logs, usage reporting, and Jira integration. Enterprise starts at $100K/year with SSO, RBAC, VPC deployment, and compliance reporting.
