The Nuclear Suite

Professional-grade legal intelligence. Every capability verified, every assertion traced to its source. The whole workflow, from verified research to filed brief.

Patents Pending

Every legal assertion in your documents passes through a multi-stage verification pipeline. Our resolution engine locates full opinion text across multiple source tiers, using proprietary retrieval methods that go beyond what any single database provides. Other platforms check whether a citation looks real; we retrieve the actual opinion. Quotes are compared word-for-word against the source text. Legal propositions are validated against the holdings they claim to support. Treatment history confirms the authority has not been overruled, questioned, or limited. Every verified authority is cached in the Sovereign Library, so the next user who cites the same case gets instant verification at zero cost. The result: verified work product you can file with confidence.

The Agentic Associate is the platform's autonomous agent layer. Where Plan Mode chains a predefined sequence of steps, the Agentic Associate operates with agency. You give it a broad directive, and it decides how to explore, what to prioritize, when to execute, and when to escalate. It researches, analyzes, drafts, files, and takes action. It is the difference between "execute these five steps" and "handle this entire workflow, research through execution, and tell me when you need a decision."

The architecture is an Orchestrator + Sub-Agent Pool. The orchestrator receives your directive, decomposes it into task domains, and spawns specialized sub-agents for each. Each sub-agent gets a fresh 200K-token context window. Zero truncation, zero context bleed. This is not a single model trying to hold 2,000 pages in memory. It is a fleet of agents, each with full reasoning capacity over its assigned scope, reporting findings back to the orchestrator for cross-referencing and synthesis.
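
In outline, the orchestrator pattern is roughly this (a hedged sketch; all names are hypothetical): the directive is decomposed by document classification, each sub-agent works its own scope with a fresh budget, and the orchestrator merges the reports.

```python
from dataclasses import dataclass, field

CONTEXT_TOKENS = 200_000  # fresh window per sub-agent

@dataclass
class SubAgent:
    domain: str
    budget: int = CONTEXT_TOKENS
    findings: list = field(default_factory=list)

def orchestrate(directive, documents):
    # Decompose the directive by document classification: one sub-agent per domain.
    domains = sorted({doc["type"] for doc in documents})
    agents = [SubAgent(domain=d) for d in domains]
    for agent in agents:
        scope = [doc for doc in documents if doc["type"] == agent.domain]
        # Each sub-agent reasons only over its assigned scope.
        agent.findings = [f"{directive}: reviewed {doc['name']}" for doc in scope]
    # The orchestrator synthesizes the sub-agent reports.
    return {agent.domain: agent.findings for agent in agents}

report = orchestrate("find indemnity risk",
                     [{"type": "contract", "name": "MSA.pdf"},
                      {"type": "correspondence", "name": "email_14.eml"}])
```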

Sub-Agent Specialization. Sub-agents are not generic. The orchestrator assigns them based on document classification: contracts, correspondence, financial records, court filings, regulatory documents. Each carries task-specific instructions and extraction schemas.

Materiality Filter. Separates hard evidence from procedural noise. Findings are classified by materiality tier so the attorney can focus on what moves the needle.

Live Telemetry. Real-time visibility into what every sub-agent is doing. A scrolling feed streams activity as it happens. This is not a black box.

Global Token Budget. Tracks consumption across the entire session tree. You set a budget ceiling, and the system operates within it.
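
A budget ceiling of this kind reduces to a small accounting object shared across the session tree. A minimal sketch, assuming a simple charge-or-refuse policy (names hypothetical):

```python
class TokenBudget:
    """Session-wide ceiling shared by every agent in the tree."""

    def __init__(self, ceiling):
        self.ceiling = ceiling
        self.spent = 0

    def charge(self, tokens):
        # Refuse any work that would push the session past the ceiling.
        if self.spent + tokens > self.ceiling:
            return False
        self.spent += tokens
        return True

budget = TokenBudget(ceiling=500_000)
accepted = budget.charge(200_000)
rejected = budget.charge(400_000)  # would exceed the ceiling, so refused
```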

Intelligence Dividend. Every verified authority discovered is auto-ingested into the Sovereign Library, zero-rated for every future query.

Computer Control. Browser automation bridge for pulling documents from e-filing portals, checking docket entries, looking up entity information. Requires global settings toggle + per-session consent.

MCP Integration. Dynamic tool registration from external MCP servers. Extensible without platform changes.

Human-in-the-Loop Gates. The orchestrator pauses at configurable decision points. The attorney reviews, approves or redirects, and the agent continues. Autonomy never exceeds authorization.

Execution, Not Just Analysis. The Agentic Associate doesn't just read and report. It drafts documents, creates matter entries, files structured findings, builds timelines, and populates the entity graph. When paired with computer control, it can pull filings from external systems, check docket entries, and push results back into the matter record. It is a full execution agent, research through work product, not a summarizer.

Registered Tool. Accessible through natural language or GUI. Uses 11 internal entity-creation tools. Output is structured intelligence and work product integrated into the matter graph, not a chat message.

Every other legal AI waits for you to ask whether its citations are real. That is reactive verification — it only works if you already suspect a problem. By then, the hallucination may already be in your work product.

Proactive Authority Detection runs automatically on every AI response before you see it. It identifies every citation, resolves each one against external databases, checks shepardizing status, and flags bad law, all without a single additional prompt.

Conventional legal AI verification asks one question: does this citation appear in a database? If it matches, it passes. That catches hallucinations, but it catches nothing else. A citation can be real and still be wrong (cited for a holding it never announced, quoted with language it never used, or relied upon despite being overruled). Proactive Authority Detection treats each of these as an independent point of failure. Every authority is scrutinized across multiple dimensions. When something fails, you see the specific dimension, the specific deficiency, and the evidence from the source.
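
Multi-dimensional scrutiny can be pictured as independent checks whose failures are reported individually, rather than as a single pass/fail lookup. A toy sketch (field names hypothetical):

```python
def scrutinize(authority):
    # Each dimension is an independent point of failure.
    dimensions = {
        "existence": authority.get("resolved", False),
        "quote":     authority.get("quote_matches", False),
        "holding":   authority.get("supports_proposition", False),
        "treatment": authority.get("treatment") not in ("overruled", "superseded"),
    }
    failures = [name for name, passed in dimensions.items() if not passed]
    return {"passed": not failures, "failed_dimensions": failures}

# A real case, accurately quoted, but cited for a holding it never announced:
report = scrutinize({"resolved": True, "quote_matches": True,
                     "supports_proposition": False, "treatment": "followed"})
```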

Large language models fabricate legal citations 20-40% of the time. Not approximately wrong citations. Completely invented cases with plausible-sounding names, reporters, and page numbers. It gets worse with niche issues or when you need specific authorities from a specific state, because those cases appear less frequently in the training data. Ask any AI about Fourth Amendment digital privacy and it will confidently cite non-existent circuit court opinions alongside real Supreme Court landmarks. Every result from the Research Pipeline must survive a multi-stage elimination process. The system verifies that each authority exists, confirms it actually supports the claimed legal point, and ensures it has not been overruled. Fabricated results are caught and removed. What remains is a verified set of authorities you can cite without hesitation.

Opposing counsel makes sweeping claims. "Plaintiff has failed to plead any facts supporting fraudulent inducement." "The complaint contains no allegations of reliance." At the MTD stage, this is the most common battleground — the opponent claims something isn't pleaded, and you need to prove it is.

At every stage of litigation — but especially at the motion to dismiss — one side claims the other side did or didn't say something. These characterizations are often wrong, exaggerated, or misleading. But catching them requires painstaking cross-referencing: reading the opponent's brief, finding every factual claim about your pleading, then going back to your pleading to verify each one. At the MTD stage, defendants routinely claim "fails to plead" when the plaintiff did plead the element — just in different words, or in a different section, or implicitly through factual allegations the defendant chose to ignore. In discovery disputes, one side characterizes the other's responses inaccurately. In summary judgment, statements of undisputed facts are frequently disputed on closer reading.

Juxtaposition analysis automates this by comparing any two documents and systematically identifying:

  • Misstatements — Where opposing counsel characterizes your pleading inaccurately. Where they say your complaint "fails to allege X" but paragraph 47 alleges exactly that.
  • Gaps — Arguments the opponent raises that your pleading doesn't address — real vulnerabilities you need to shore up before responding.
  • Contradictions — Where the opponent's own brief contradicts itself or contradicts the record.
  • Unsupported claims — Where the opponent asserts something without citation or with miscited authority.
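
The misstatement-versus-gap distinction above reduces to a cross-reference: does any paragraph of the pleading actually contain the element the opponent says is missing? A deliberately naive sketch using substring matching (the real engine works semantically; all names are hypothetical):

```python
def classify_claims(opponent_claims, complaint_paragraphs):
    # A "fails to allege" claim is a misstatement if some paragraph of the
    # complaint actually pleads the element; otherwise it is a real gap.
    findings = []
    for claim in opponent_claims:
        hits = [num for num, text in complaint_paragraphs.items()
                if claim["element"] in text.lower()]
        findings.append({"claim": claim["text"],
                         "kind": "misstatement" if hits else "gap",
                         "paragraphs": hits})
    return findings

complaint = {47: "Plaintiff relied on Defendant's representations...",
             12: "Defendant knowingly misrepresented the asset values."}
claims = [{"text": "The complaint contains no allegations of reliance.",
           "element": "relied"},
          {"text": "Plaintiff pleads no facts showing damages.",
           "element": "damages"}]
findings = classify_claims(claims, complaint)
```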

Features include side-by-side document comparison, opposing document classification and characterization, gap detection between related documents, contradiction identification across documents, and adversarial argument mapping with specific textual references to both documents.

What takes an associate hours of cross-referencing — reading a 30-page MTD brief against a 50-page complaint, paragraph by paragraph — is completed in minutes with specific textual references to both documents. The output is essentially a blueprint for the opposition brief: here's everything they got wrong, here's everything they ignored, and here's what you actually need to worry about.

The shepardizing system checks whether a cited case is still good law by analyzing subsequent treatment — the same function that Westlaw's KeyCite and LexisNexis' Shepard's Citations provide.

Capabilities: Negative treatment detection (overruled, reversed, distinguished, limited, criticized), positive treatment detection (followed, cited favorably, affirmed), treatment verdicts, citing case identification, and confidence levels.
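
A treatment verdict can be thought of as a rollup over citing-case signals, with terminal negative signals dominating. A simplified sketch (the signal taxonomy is abbreviated; names hypothetical):

```python
NEGATIVE = {"overruled", "reversed", "distinguished", "limited", "criticized"}
TERMINAL = {"overruled", "reversed"}

def treatment_verdict(citing_cases):
    # citing_cases: list of (case_name, treatment_signal) pairs
    signals = {signal for _, signal in citing_cases}
    if signals & TERMINAL:
        return "bad law"
    if signals & NEGATIVE:
        return "caution"
    return "good law"

verdict = treatment_verdict([("Case A", "followed"),
                             ("Case B", "distinguished")])
```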

Cross-user benefit — Shepardizing results flow into the Sovereign Library. Every shepardizing run, regardless of how it was triggered, contributes treatment data that benefits every other user.

Every verified authority is cached in the Sovereign Library with full opinion text and treatment metadata. When any user on the platform verifies a case, every future lookup for that same authority is instant and costs zero tokens. The library grows with every verification run, every research query, every shepardizing analysis across all users.

Over time, the most cited authorities in American law accumulate comprehensive treatment histories: which courts followed the holding, which distinguished it, which questioned or overruled it. Full shepardizing is built in. Citing case history, treatment classification, adverse signal detection, and the actual quotes from citing opinions showing how each court characterized the authority.

The Sovereign Library is the shared foundation. Your Knowledge Base is private. The combination means the AI draws from verified public law and your firm's institutional knowledge simultaneously.

Traditional legal AI charges per query. Every citation check, every verification, every shepardizing run consumes tokens. The Intelligence Dividend inverts this model. Every authority that has already been verified in the Sovereign Library is zero-rated: zero token cost, zero latency, zero external API calls.

If your firm verifies Twombly across 50 different briefs, you pay for the first verification. Every subsequent hit is a dividend paid back in saved tokens. This applies across the entire platform: when any user verifies a case, every future user who cites that authority benefits.
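
The dividend mechanics reduce to a shared cache in which only the first verification pays. A minimal sketch (the cost figure is illustrative; names hypothetical):

```python
class SovereignLibrary:
    def __init__(self, cost_per_verification=1_000):
        self.cache = {}  # shared across every user on the platform
        self.cost = cost_per_verification

    def verify(self, citation):
        if citation in self.cache:
            return self.cache[citation], 0   # zero-rated cache hit
        record = {"citation": citation, "status": "verified"}
        self.cache[citation] = record
        return record, self.cost             # only the first verification pays

library = SovereignLibrary()
_, first_cost = library.verify("Twombly, 550 U.S. 544 (2007)")
_, repeat_cost = library.verify("Twombly, 550 U.S. 544 (2007)")
```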

At the Professional level, the Intelligence Dividend can represent thousands of dollars in monthly savings. The larger the library grows, the less every firm spends. This is an architectural advantage that compounds with scale.

The Knowledge Base is a firm-private document intelligence layer. While the Sovereign Library is shared across all users (verified case law, statutes, and rules that benefit everyone), the Knowledge Base is private to each firm or user. Your firm's contracts, templates, internal memos, client-specific research. None of it is shared with or accessible by any other user. Knowledge Base data is never used to train models. Attorney-client privilege and work product protection are preserved.

Legal work is cumulative. The research memo you wrote for Client A six months ago contains analysis relevant to Client B today. The contract template your partner refined over 15 years encodes institutional knowledge that no AI has. The Knowledge Base makes all of this institutional knowledge searchable by the AI, so every answer is informed not just by general legal knowledge and verified case law, but by your firm's own accumulated expertise.

Upload files in any major format. The platform extracts, indexes, and enriches every document with structured metadata so the AI can retrieve precisely what is relevant. In private mode, ingestion runs entirely through the encrypted enclave. Sovereign Shield users receive cryptographic proof that privilege was maintained throughout ingestion.

Every document carries rich metadata that powers intelligent retrieval across jurisdiction, practice area, legal issues, document type, court, and judge. Precise filtering ensures the AI retrieves jurisdiction-appropriate, practice-area-relevant results.

Search and retrieval includes semantic search (find documents by meaning, not just keywords), faceted filtering, duplicate detection, and save from analysis.

The Knowledge Base supports hierarchical organization. Firms can structure their knowledge by practice area, client, matter type, or any custom taxonomy.

Sovereign Shield is a private AI mode that routes all AI processing through Trusted Execution Environments (TEEs) with cryptographically attested zero data retention. When Sovereign Shield is active, no client data ever reaches a third-party AI provider in cleartext. The system is designed to comply with the agency and confidentiality standards established in U.S. v. Heppner (2026) for confidential AI inference in legal practice.

Legal AI has a confidentiality problem. Every prompt sent to an AI provider contains client data — case facts, strategy, privileged communications. For most firms, this is a calculated risk. But for government work, classified matters, M&A transactions under NDA, grand jury materials, or any engagement where data exposure is categorically unacceptable, that risk is a dealbreaker. Sovereign Shield eliminates this tradeoff.

Two tiers of protection are available. The Pro Tier runs AI inference on TEE-backed infrastructure with pay-per-token pricing and no infrastructure to manage. The Enterprise Tier runs AI inference on a dedicated, hardware-isolated enclave provisioned for the firm, with custom endpoint URL, air-gapped RAG, reproducible builds, KMS-gated encryption keys, and zero data retention.

Cryptographic attestation (Enterprise) means the system proves data protection cryptographically. The firm can verify, at any time, that the exact code running inside the enclave is the code that was audited and approved. This is not a policy. It is a mathematical proof.

The Security Proof UI provides a one-click toggle between public and private AI modes. Every feature on the platform works identically in private mode. The model routing is automatic.


Architectural Comparison: The Privilege Test

Privilege Factor       | Standard Cloud AI               | Sovereign Shield
Confidentiality Basis  | Contractual Policy (Variable)   | Hardware Attestation (Fixed)
Provider Access        | Technically Possible            | Mathematically Impossible
Training Risk          | Policy-dependent "de-linking"   | Physical Isolation
Kovel Compatibility    | Likely No (Third-party waiver)  | Yes (Direct Agency Tool)
Judicial Scrutiny      | Fails                           | Meets Kyllo Privacy Standard

Say you have 22 affirmative defenses to address in a motion to dismiss. Without Plan Mode, you have two bad options. First, handle all 22 at once. You type one massive prompt and get a response that's shallow on each defense. Second, handle them one at a time. Quality is good, but you're chained to your computer for the entire process.

Plan Mode solves this by allowing the AI to generate an execution plan, getting user approval once, and then executing the entire plan autonomously. Every stage receives the full capacity of the platform. The last affirmative defense gets the same depth as the first.
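
The approve-once, execute-all loop can be sketched in a few lines (names hypothetical): the plan is generated up front, approval happens exactly once, and every stage then runs with the same depth.

```python
def plan_mode(items, stage_fn, approve):
    # Generate the full execution plan up front.
    plan = [f"Address defense #{i + 1}" for i in range(len(items))]
    if not approve(plan):  # user approval happens exactly once
        return None
    # Then execute every stage autonomously, each at full depth.
    return [stage_fn(step, item) for step, item in zip(plan, items)]

defenses = ["laches", "waiver", "estoppel"]
results = plan_mode(defenses,
                    stage_fn=lambda step, defense: f"{step}: analyzed {defense}",
                    approve=lambda plan: len(plan) > 0)
```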

Batch work becomes autonomous. Brief sections, interrogatory responses, affirmative defenses, contract clauses. Any task that requires handling N items with consistent quality.

Plan Mode does not just chain arbitrary requests — it enables proven multi-stage legal workflows where each stage builds on the last. Consider responding to a motion to dismiss. The gold-standard process is:

  • Ingest the MTD and all supporting documents
  • Juxtaposition — systematically compare the MTD against your complaint to identify every misstatement, every "fails to plead" claim that is actually pleaded, every gap
  • Outline — generate an opposition structure that addresses each argument with the juxtaposition findings as ammunition
  • Draft with citations — write each section with supporting authority, verified through the research pipeline
  • Veracity — run the full verification suite on the completed draft to ensure every citation exists, every quote is accurate, every case is still good law

Without Plan Mode, the lawyer would need to manually trigger each of these stages, wait for completion, review, and initiate the next. With Plan Mode, the entire pipeline is defined upfront, approved once, and executed autonomously. The lawyer walks away and comes back to a verified opposition brief — not a rough draft, but a document where every factual claim has been cross-referenced, every legal citation has been checked, and every argument has been built on verified authority.

This pattern applies to any multi-stage legal workflow: contract review → deviation detection → redline → revision. Discovery review → extraction → privilege log. Research → outline → draft → cite-check. Plan Mode makes the AI's capability compositional — each feature builds on the others in sequence, producing results that no single feature could achieve alone.

AI-powered document type detection determines what kind of legal document the user has uploaded and applies the appropriate extraction engine.

Document review that takes an associate hours (reading an insurance policy to identify coverage gaps, reviewing a contract to find one-sided termination provisions) is completed in seconds with structured, navigable output.

Analysis Lenses are specialized UI views that present extracted data in document-type-appropriate formats. Every major legal document type has a dedicated lens.

Analysis Cards present structured findings across dozens of specialized categories, from party identification and timelines to risk flags and adversarial analysis.

Every AI has a context window. Send it a 200-page contract and attention fades. Middle pages get lost. The analysis of page 150 is shallower than page 5. This is the fundamental limitation of every other AI platform. We took a different approach. Our Document Intelligence Pipeline ensures that every section of every document receives full-depth analysis, regardless of document length. The AI gives page 150 the same attention it gives page 5. The platform applies dozens of specialized extraction engines, analysis lenses, and result cards to every document. The effective context window is limitless. No fidelity loss. No lost pages. No degraded analysis.

The Redline Review Engine reviews your document across two dimensions. Language quality (grammar, spelling, style consistency, clarity, legal accuracy, argument persuasiveness, document structure) and citation accuracy (citation verification, quote accuracy checking, shepardizing, proposition validation). Every authority in the document is verified.

Suggestions never drift or misalign, regardless of how many have been applied or in what order.

Suggestion system: Accept or reject individual suggestions, bulk accept/reject, formatting suggestions, AI-generated inline comments, suggestions stream in real-time.

The deviation detection engine identifies departures from standard and market terms in NDAs and contracts. A lawyer reviewing an NDA needs to know: what's standard, what's unusual, what's one-sided, and what's missing. The deviation engine compares the document against market norms and flags:

  • Non-standard clauses that diverge from typical terms
  • One-sided provisions that favor the other party
  • Missing protections that should be present
  • Unusual definitions that could create loopholes

Contract review that requires years of experience to do well — "is this non-compete reasonable?", "is this indemnification one-sided?" — becomes accessible to any lawyer on the platform. A first-year associate running deviation detection gets the benefit of a senior partner's pattern recognition, surfaced instantly against the specific document they're reviewing.

The Citations Tab is a dedicated panel that provides a comprehensive view of every citation in the active document, with a health score, filtering, and one-click correction.

The health score is an overall percentage (valid citations / total citations) with breakdown counts — how many are valid, how many have quick fixes available, how many have errors. At a glance, you know whether your brief is ready to file.
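
The score itself is simple arithmetic over citation statuses. A sketch matching the description above (status labels hypothetical):

```python
def health_score(citations):
    # Overall percentage (valid / total) plus breakdown counts.
    counts = {"valid": 0, "quick_fix": 0, "error": 0}
    for citation in citations:
        counts[citation["status"]] += 1
    total = len(citations)
    score = round(100 * counts["valid"] / total) if total else 100
    return score, counts

score, counts = health_score([{"status": "valid"}, {"status": "valid"},
                              {"status": "quick_fix"}, {"status": "error"}])
```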

Filtering lets you view all citations, or filter to just quick-fixes (format-level corrections the system can apply automatically), errors (validation failures requiring attention), or valid citations.

Each citation in the panel shows:

  • The citation text — clickable, navigating to the citation's position in the editor
  • A status badge — Valid in green, Quick Fix Available in yellow, Error in red, Ignored in purple, Accepted with a green checkmark
  • Validation errors as a bulleted list explaining what is wrong
  • A suggested correction with green highlighting showing the proposed fix
  • Formatting rule violations with the specific Bluebook rule cited and "Apply" / "Dismiss" buttons
  • User action tracking showing whether you have accepted, rejected, or not yet reviewed each suggestion

Bluebook compliance checking validates citations against a comprehensive set of rules:

  • Full citation format (B10.1.1)
  • Reporter abbreviation (B10.1.2)
  • Pincite format and "at" usage (R. 3.2)
  • Year and court in parenthetical (B10.1.1)
  • Case name abbreviation per Table T6 (R. 10.2.1)
  • Textual vs. citation sentence distinction (B10.2.1)
  • Signal usage and italicization (B1.2)
  • Period placement after abbreviations
  • Parenthetical ordering (R. 1.5)
  • Quotation mark alternation (R. 5.1)
  • Ellipsis format (R. 5.3)
  • Bracket usage for alterations (R. 5.2)
  • Parallel citations and string citation format
  • Short cite / Id. suggestions (R. 4.1, R. 10.9)
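
As one illustration, the pincite "at" usage check (R. 3.2) can be approximated with a regular expression. This is a deliberately simplified sketch, not the platform's actual rule engine:

```python
import re

# Short-form pincites like "Id. at 47" or "550 U.S. at 557" need "at"
# before the page number (Bluebook R. 3.2). Deliberately simplified.
SHORT_CITE = re.compile(r"\b(?:Id\.|\d+\s+[A-Z][\w.]*)\s+at\s+\d+")

def pincite_ok(text):
    return bool(SHORT_CITE.search(text))

good = pincite_ok("Id. at 47.")
bad = pincite_ok("Id. 47.")  # missing "at" before the pincite
```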

For statutory citations, the panel shows verification results including a direct source URL link to the authoritative text.

Refresh capabilities allow re-extraction of citations with or without preserving pending suggestions.

Beyond the Citations Tab, the platform provides:

  • Citation overrides — correct AI-extracted citations when the system gets it wrong
  • Citation analytics — health and quality metrics across the project
  • Bad law marking and batch correction
  • Citation autocomplete as you type in the editor
  • A citation decorator that visually highlights citations in document view with color-coded status
  • Citation navigation — click any citation in the panel to jump to its position in the editor
  • Citation tooltips — hover over any citation in the editor for an instant tooltip showing validation status, errors, suggested corrections, formatting rule violations, and action buttons to accept or ignore, all without leaving the document
  • Statute trust badges — visual indicators of authority reliability and verification status

The Orchestration Engine makes every feature possible. It ensures that every AI interaction, regardless of type, gets the same reliability guarantees: you see responses form in real-time, you can stop generation instantly, every token is tracked for billing, and the AI can take actions on your behalf mid-response. It is the reason the platform can offer dozens of integrated tools and analysis modes without any of them feeling bolted on.

The triage system decides how to handle every user message before the AI begins responding. A simple factual question doesn't need the same resources as a comprehensive motion to dismiss. A typo fix doesn't need the same approach as a multi-document discovery analysis.

Triage classifies every message in milliseconds and routes it to the optimal workflow. Simple questions get fast, direct answers. Complex tasks get the full depth of the platform. The system scales its resources to the task automatically, so you never wait longer than necessary and never get a shallow answer on something that matters.
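
Conceptually, triage is a fast classifier in front of the workflow router. A toy sketch using crude word-count and keyword signals (the real classifier is a model; names hypothetical):

```python
def triage(message):
    # Crude signals standing in for the platform's millisecond classifier.
    words = len(message.split())
    heavy = any(keyword in message.lower()
                for keyword in ("motion", "discovery", "brief", "draft"))
    if heavy:
        return "deep_workflow"
    return "fast_path" if words < 20 else "standard"

route_a = triage("What is the statute of limitations for fraud in NY?")
route_b = triage("Draft a comprehensive motion to dismiss for the Acme matter")
```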

Lawyers don't ask one question at a time. A single message might contain "summarize the NDA terms, check if the non-compete is enforceable in California, and draft a response to opposing counsel's demand letter." Without decomposition, the AI would try to handle all three in one pass, invariably shortchanging at least one. Decomposition ensures each sub-task gets the full capacity of the platform. The result is that a compound prompt produces the same quality as three individual prompts, without the user having to type three separate messages.

Legal work varies enormously in complexity. A billing narrative doesn't need the same model as a motion to dismiss. Intelligent routing means the user gets the best possible answer for each task while keeping costs proportional to value.

The platform has access to multiple AI models spanning different capability tiers. Each task is automatically routed to the optimal model for its complexity. For users who require that their data never touches a third-party AI provider, private-mode models run inside encrypted enclaves (see Sovereign Shield). Switching between public and private AI requires zero workflow changes.

Legal AI responses can be long. A comprehensive case analysis might be 3,000+ words. Without streaming, the user stares at a blank screen for 30-60 seconds. With streaming, they see the answer forming word by word, can assess whether it's heading in the right direction, and can stop generation at any time if it's not what they need.

All tools are accessible two ways — naturally through conversation ("schedule a deadline for the MSJ response on April 15th") or through explicit GUI controls in the interface. The AI decides when to invoke tools based on the user's natural language, but the user can also trigger them directly.

Deadline Management — The AI can create deadlines as a natural byproduct of document analysis ("this complaint must be answered within 21 days — I've created a deadline for April 7th"), and the user can query deadlines conversationally. Features include title, date, priority levels, project association, completion toggling, and gap alert dismissal.

Calendar Management — The AI can schedule events as a natural byproduct of case analysis, and every calendar event can be billable or non-billable. Features include event creation with title, time, location, attendees, reminders, RSVP management, multi-user availability checking, and billable/non-billable classification with time tracking integration.

Time Tracking — The AI can proactively create time entries as it works — when it drafts a motion, it logs the time spent — and gap detection identifies periods where the lawyer was active but didn't log time. Billing narratives are AI-generated using the present-tense "-ing" convention, with a submit/approve/reject workflow and per-project and global timers.

Veracity & Verification — The full verification engine, invokable as a tool.

Juxtaposition Analysis — Side-by-side document comparison.

Legal Research — Broad doctrinal research, targeted support/attack research, and interactive continuation.

Sovereign Library Query — Query the platform's growing verified legal text cache for instant authority lookup.

Document Ingestion — Import PDFs and other documents into your project for AI-powered analysis.

Project Creation — Create new legal matters directly from chat.

Editor Tools — The AI can directly edit the document open in the editor. Find and replace text, insert new text, apply formatting, and search the document. Suggestions never drift or misalign.

Tool Execution Architecture — The AI can use multiple tools within a single response. Every tool execution renders a structured card in the chat UI.

The synthesis engine takes the output of research, analysis, and document extraction and transforms it into coherent, structured answers that lawyers can use immediately. It can improve existing text, summarize lengthy analysis, rewrite content for a different audience, and combine findings from multiple sources into a single coherent narrative.

This is the bridge between raw AI capability and usable legal work product. A research pipeline might produce verified citations, a document extraction might produce structured clause data, and a juxtaposition analysis might produce a map of contradictions, but none of those outputs are a memo, a brief section, or a client letter. The synthesis engine takes all of those inputs and produces polished, structured prose that reads like it was written by a senior associate who reviewed every source.

The synthesis engine also powers the Authority Manifest — a comprehensive list of all authorities cited or relied upon across all project documents, with verification status, treatment data, and doctrinal clustering. For brief writing and hearing preparation, lawyers need to know every case they are relying on, whether it is still good law, and how it fits into their doctrinal framework. The authority manifest provides this at a glance.

Matter Pulse is a living synthesis of the entire matter — parties, issues, timeline, key authorities, procedural posture — that updates automatically as new information is added to the case. When a lawyer picks up a file after weeks away, they need to get up to speed quickly. Matter Pulse provides an always-current summary of the entire case, updated with each new document uploaded, each new research finding produced, and each new analysis completed.

Unlike a static case summary that someone writes once and never updates, Matter Pulse evolves with the matter. Upload a new deposition transcript and the party index updates. Run a research query that finds a new controlling authority and the authority section reflects it. Complete a juxtaposition analysis that reveals a previously unnoticed gap in the pleadings and the issues section flags it. The lawyer never has to manually synthesize the state of a case — Matter Pulse does it continuously.

This is particularly valuable for matters with multiple team members. When a junior associate runs research on Monday, the partner reviewing the file on Wednesday sees those findings reflected in Matter Pulse without anyone having to write a memo or send an email. The matter's intelligence is always current, always accessible, and always comprehensive.

Generic AI knows the law in general. Retrieval-augmented generation makes the AI know your case. When you ask "does the complaint adequately plead reliance?", the AI reads your actual complaint, not a hypothetical one. This is the system that gives the AI access to the user's actual case materials, not just its training data, when generating responses.

The platform draws from verified legal text, your firm's indexed documents, and your uploaded case materials. Retrieval is scoped by project and filtered by jurisdiction, practice area, and court tier. A lawyer working on a New York breach of contract case does not want California employment law results cluttering the AI's context. Filters ensure retrieval is jurisdiction-appropriate and practice-area-relevant.
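
Scoped retrieval is, at its core, layered filtering over document metadata before any semantic ranking. A minimal sketch (field names hypothetical):

```python
def retrieve(chunks, *, project, jurisdiction=None, practice_area=None):
    # Scope to the project first, then apply metadata filters.
    hits = [c for c in chunks if c["project"] == project]
    if jurisdiction:
        hits = [c for c in hits if c["jurisdiction"] == jurisdiction]
    if practice_area:
        hits = [c for c in hits if c["practice_area"] == practice_area]
    return hits

chunks = [
    {"id": 1, "project": "acme", "jurisdiction": "NY", "practice_area": "contracts"},
    {"id": 2, "project": "acme", "jurisdiction": "CA", "practice_area": "employment"},
    {"id": 3, "project": "other", "jurisdiction": "NY", "practice_area": "contracts"},
]
results = retrieve(chunks, project="acme", jurisdiction="NY")
```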

The AI does not forget what it retrieved in previous turns. Multi-turn research conversations feel continuous and coherent, as if the AI is building a working understanding of the case with each exchange.

Context windows are temporary. Litigation is not. Total Matter Awareness ensures that every document in your matter is permanently cataloged with identity, classification, and cross-document metadata. Whether a document was uploaded six months ago or six minutes ago, it remains accessible across every conversation and every team member. No data loss. No context fatigue. One unified, defensible record for the life of the matter.

This does not consume your context window. The AI retrieves documents through agentic lookup — searching the project manifest, identifying the relevant files, and pulling only the necessary content into context on demand. The full matter stays available without competing for token budget with the current conversation.

When you upload a filing, the system automatically extracts metadata: filing party, court, date filed, docket number, case caption. For same-case filings, it suggests ECF docket citations. For external authorities, it classifies the document by category — party brief, court order, case law, correspondence, law review — and builds a tiered identity map so the AI recognizes the document whether you call it "the MTD," "Defendant's Motion to Dismiss," or "ECF No. 45." Documents can be promoted from chat attachments, the Knowledge Base, or verified authorities directly into the permanent project manifest with one click.
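Caption metadata extraction of this kind might be sketched as follows. The regex patterns and field names here are hypothetical; a production extractor would be far more robust than three regexes:

```python
import re

# Hypothetical patterns -- real captions vary widely.
PATTERNS = {
    "docket": re.compile(r"Case No\.\s*([\w:.-]+)"),
    "ecf":    re.compile(r"ECF No\.\s*(\d+)"),
    "party":  re.compile(r"^(.*?),\s*Plaintiff", re.MULTILINE),
}

def extract_metadata(text):
    """Return whichever caption fields the patterns can find."""
    return {name: m.group(1).strip()
            for name, pat in PATTERNS.items()
            if (m := pat.search(text))}

caption = """UNITED STATES DISTRICT COURT
SOUTHERN DISTRICT OF NEW YORK
Jones Industries, Plaintiff, v. Acme Corp., Defendant.
Case No. 1:24-cv-01234    ECF No. 45"""

meta = extract_metadata(caption)
# meta == {"docket": "1:24-cv-01234", "ecf": "45", "party": "Jones Industries"}
```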

Multiple users can edit the same document simultaneously, with changes syncing in real time. Other users' cursor positions and text selections are visible live — you can see exactly what your colleague is looking at, including their highlighted text. Click a collaborator's avatar to scroll to their position in the document. Active users are displayed, and simultaneous edits merge without conflict.

Our editor provides multiple modes — enhanced, focused, and pageless — with a custom toolbar designed for legal-specific formatting. A command palette provides quick access to actions. Find and replace supports pattern matching. Focus mode strips away distractions for concentrated writing.

The AI is not just a chatbot sitting next to a document — it can reach into the document and make changes directly. "Bold all the case names in this brief." "Replace every instance of 'Defendant' with 'Respondent'." "Insert a new section addressing the statute of limitations after paragraph 12." This eliminates the copy-paste workflow where you ask the AI for text, then manually paste it into the right place. Suggestions never drift or misalign, regardless of how many edits have been applied.
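One common way edits of this kind stay aligned is to express suggestions as offset-anchored operations against the original text and apply them in reverse order, so earlier edits never shift later anchors. A toy sketch; the `apply_edits` helper is illustrative, not the platform's actual mechanism:

```python
def apply_edits(text, edits):
    """Apply (start, end, replacement) edits expressed against ORIGINAL
    offsets. Applying in reverse means each edit leaves all earlier
    offsets intact, so suggestions cannot drift as edits accumulate."""
    for start, end, replacement in sorted(edits, reverse=True):
        text = text[:start] + replacement + text[end:]
    return text

doc = "Defendant moved to dismiss. Defendant also answered."
edits = [(0, 9, "Respondent"), (28, 37, "Respondent")]
result = apply_edits(doc, edits)
# result == "Respondent moved to dismiss. Respondent also answered."
```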

A full review workflow supports submit for review, respond to review, and finalize or reject cycles. Review status banners keep everyone informed. AI-generated inline comments highlight areas for attention. Document locking during review prevents conflicting edits. AI-powered style detection learns from your existing documents and validates consistency, with manual style override and formatting source tracking for full control.

Legal documents go through dozens of revisions. A contract might be negotiated over weeks with changes from multiple parties. A brief gets revised as research develops. Without version control, lawyers rely on file names ("Motion_v3_FINAL_FINAL_revised.docx") or email chains to track changes — a system that inevitably fails. The version control system provides a structured, reliable history with word-level comparison.

Every version shows the version number with a bookmark icon for manually named versions, a custom label (e.g., "After partner review" or "Pre-filing draft"), who created it and when, word count, and character diff statistics showing additions and deletions with color coding (green for additions, red for deletions). Auto-save badges distinguish system-generated versions from manually created ones, and the latest version carries a "Current" badge. Version creators can attach optional comments and notes.

For each version you can view the full content of any historical version; compare it against the current version with a word-level diff (green highlighting for additions, red strikethrough for deletions, with a legend explaining the color coding); restore it to roll back to any previous state (the current version is automatically saved as a backup before restoration, so you can never lose work); or delete it permanently (owners only, and the current version cannot be deleted).
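Word-level comparison of this kind can be built on a standard sequence matcher. A sketch using Python's `difflib`; the run format returned here is an assumption about how a UI might consume the diff:

```python
import difflib

def word_diff(old, new):
    """Word-level diff as a list of ('equal'|'del'|'add', words) runs,
    which a UI could render as green additions and red strikethrough."""
    a, b = old.split(), new.split()
    runs = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag in ("replace", "delete"):
            runs.append(("del", a[i1:i2]))
        if tag in ("replace", "insert"):
            runs.append(("add", b[j1:j2]))
        if tag == "equal":
            runs.append(("equal", a[i1:i2]))
    return runs

old = "The motion is denied without prejudice"
new = "The motion is granted with prejudice"
diff = word_diff(old, new)
# Deleted: "denied without"; added: "granted with"; the rest is equal.
```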

The system automatically creates versions at regular intervals as a safety net — you never lose more than a few minutes of work. Manual versions can be created at any point with custom labels and comments — "version before sending to client" or "incorporates judge's feedback." You can toggle auto-saves on or off in the version list to focus on meaningful milestones.

The version history integrates with the Redline Review Engine. When the AI suggests changes during a review, each suggestion is tracked individually with accept/reject. The version history captures the state before and after review, so the lawyer can always see what the AI suggested and what they accepted.

The Placeholder System lets you define global placeholders (party names, dates, amounts, addresses) that auto-populate across all documents in a matter. A matter might reference "Defendant" in 50 places across 10 documents. Placeholders let you define it once and update everywhere. When the defendant's name changes (as it does when entities are added or removed from a case), one change propagates universally across every document.

Placeholders are detected and highlighted inline as you write, making it clear which terms are dynamic. They are organized by category — parties, dates, financial figures, addresses — for easy management. Completion progress tracking shows you how many placeholders have been filled versus how many remain open. Version history tracks changes to placeholder values over time, so you always know when a value changed and what it was before.

This eliminates one of the most error-prone aspects of legal document production: finding and replacing the same term across multiple documents. When you are producing a set of closing documents, updating the closing date in one placeholder updates it in the purchase agreement, the escrow instructions, the title documents, and every other document that references it. No manual find-and-replace, no missed instances, no inconsistencies.
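The propagation mechanic can be illustrated with a toy template renderer. The `{{token}}` syntax and `render` helper are assumptions for the sketch, not the platform's actual placeholder format:

```python
import re

def render(template, values):
    """Fill {{token}} placeholders from one shared value map, so a single
    update propagates to every document that references the token."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: values.get(m.group(1), m.group(0)),
                  template)

docs = {
    "purchase_agreement":  "Closing shall occur on {{closing_date}}.",
    "escrow_instructions": "{{defendant}} shall fund escrow by {{closing_date}}.",
}
values = {"defendant": "Acme Corp.", "closing_date": "June 30, 2025"}

values["closing_date"] = "July 15, 2025"   # one change ...
rendered = {name: render(t, values) for name, t in docs.items()}
# ... reflected in every document that references the placeholder
```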

Every legal matter is more than a collection of documents — it is a web of entities, relationships, issues, and facts that span across every filing, contract, and communication. The matter management system builds a structured model of each case that makes this complexity navigable.

The Matter Graph maps entity relationships across all documents in the matter. When a contract references a parent company and a deposition names a subsidiary, the system connects them. The Party Index tracks all parties with their roles and relationships — plaintiff, defendant, third-party, guarantor, assignee — across every document. The Provision Index catalogs contract provisions and clauses for quick reference and comparison. The Topic Index organizes legal issues topically, so you can find everything related to "personal jurisdiction" or "breach of fiduciary duty" regardless of which document it appears in.
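A matter graph of this kind is, at its core, a typed edge list with source attribution. A minimal illustrative sketch; the class and relation names are assumptions:

```python
from collections import defaultdict

class MatterGraph:
    """Toy matter graph: nodes are entities, edges are typed relationships,
    each attributed to the document where the connection was found."""
    def __init__(self):
        self.edges = defaultdict(list)

    def link(self, a, relation, b, source):
        self.edges[a].append((relation, b, source))

    def related(self, entity):
        return self.edges[entity]

g = MatterGraph()
g.link("Acme Corp.", "parent_of", "Acme Sub LLC",
       source="Purchase Agreement §2.1")
g.link("Acme Sub LLC", "named_in", "Smith Deposition",
       source="Smith Dep. 44:12")

# Traversing from the contract's parent company reaches the subsidiary
# that appears only in the deposition.
```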

Matter Facts are extracted facts from case documents — the factual building blocks that support legal arguments. Matter Issues are legal issues identified across the matter, linked to the facts and authorities that address them. Matter Clauses are relevant contract clauses pulled from agreements in the case file. And Matter Pulse provides the living synthesis of the entire matter — the always-current overview described above.

Core matter operations include creating, updating, deleting, and duplicating legal matters. Jurisdiction and status tracking keep the matter's procedural posture current. URL-based navigation provides deep linking into any matter. Project activity logging maintains a full audit trail of everything that happens on a case.

After a long research conversation (maybe 40 messages back and forth with the AI about personal jurisdiction, forum selection, and the enforceability of a choice-of-law clause) the valuable findings are scattered. The key quote from Daimler is in message 7. The analysis of long-arm jurisdiction is in message 15. The factual finding about the defendant's contacts is in message 23. Extracting all of this manually means re-reading the entire conversation and copy-pasting into a memo. Curation automates this.

The system scans the conversation, detects the legal issues discussed, and extracts the material that matters, organized by issue with full provenance. There is no synthesis or summarization. The system extracts raw material with attribution, preserving the lawyer's own analysis exactly as it was expressed.

Every extracted item carries provenance metadata identifying where it came from and who said it.
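Provenance metadata of this kind might be shaped as follows; the field names are illustrative, not the platform's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    conversation_id: str
    message_index: int   # which message in the conversation it came from
    author: str          # "user" or "assistant" -- who said it
    issue: str           # the legal issue the item is filed under

@dataclass
class CuratedItem:
    text: str            # raw extracted material, never a paraphrase
    provenance: Provenance

item = CuratedItem(
    text="Daimler requires that the defendant be 'at home' in the forum.",
    provenance=Provenance("conv-88", 7, "assistant", "personal jurisdiction"),
)
```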

Completed reports are rendered in the Curation Notebook with collapsible issue sections, copy-to-clipboard, and a "Discuss in new chat" button that creates a new conversation seeded with the curation report.

Lawyers lose 20-40% of billable time because they forget to record it. They reconstruct their day at 6pm, guessing how long tasks took. The time tracking system solves this in two ways: the AI can proactively create time entries as it works — when it drafts a motion, it logs the time spent on that task — and gap detection identifies periods where the lawyer was active but did not log time, prompting them to fill in the gaps.

Time entries support billable and non-billable classification, project and activity association, and AI-generated billing narratives using present-tense "-ing" convention (the standard at many firms: "Reviewing documents" not "Reviewed documents"). A submit/approve/reject workflow supports firm-level billing review. Gap detection alerts surface unbilled periods, and you can create entries directly from detected gaps. Per-project and global timers persist across navigation. Multiple export formats integrate with external billing systems. AI analyzes your activity and suggests time entries for periods where you were working but did not log time, presented in a smart suggestions panel with accept/reject.
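Gap detection reduces to an interval-coverage scan over the workday. A simplified sketch; the threshold and data shapes are assumptions:

```python
from datetime import datetime, timedelta

def find_gaps(logged, day_start, day_end, min_gap=timedelta(minutes=30)):
    """Return (start, end) windows inside the workday not covered by any
    logged entry and at least min_gap long: candidates for unbilled time."""
    gaps, cursor = [], day_start
    for start, end in sorted(logged):
        if start - cursor >= min_gap:
            gaps.append((cursor, start))
        cursor = max(cursor, end)
    if day_end - cursor >= min_gap:
        gaps.append((cursor, day_end))
    return gaps

d = datetime(2025, 4, 7)
logged = [(d.replace(hour=9), d.replace(hour=11)),
          (d.replace(hour=13), d.replace(hour=17))]
gaps = find_gaps(logged, d.replace(hour=9), d.replace(hour=18))
# Two candidate gaps: 11:00-13:00 and 17:00-18:00
```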

Calendar management is fully integrated with time tracking. Full calendar management includes attendee tracking, RSVP, reminders, and multi-user availability checking. Every calendar event can be classified as billable or non-billable, bridging the gap between "what happened" and "what gets billed." Scheduling a deposition automatically suggests a billable time entry, eliminating the double-entry that plagues traditional practice management. The AI can schedule events as a natural byproduct of case analysis — conversationally, through chat — and every event flows into the billing system.

Deadline tracking provides priority-based deadlines with project association, completion toggling, an upcoming deadlines widget, and a gap alert system for approaching deadlines. Legal deadlines are life-or-death. Missing a statute of limitations or a filing deadline can be malpractice. The AI creates deadlines as a natural byproduct of document analysis ("this complaint must be answered within 21 days — I have created a deadline for April 7th"), and you can query deadlines conversationally ("what deadlines do I have this week?").

The Immediate Action Pad is a full-featured AI chat interface accessible directly from the dashboard — not tied to any project, matter, or document. It is the entire conversational AI system with zero friction: no project to create, no matter to configure, no document to select. Just open and start.

The biggest barrier to AI adoption is friction. If a lawyer needs to create a project, name it, set a jurisdiction, and select a document before they can ask a question, they will not bother — they will just search the web or call a colleague. The Immediate Action Pad eliminates every barrier between "I have a question" and "I am getting an answer." It sits on the dashboard, ready to go, with the full power of the AI behind it.

You get the full chat experience with no setup: streaming AI responses using the same models, the same triage system, the same quality as any project-based conversation. Multiple conversations let you create, switch between, rename, and pin scratchpad conversations. Pinned messages let you bookmark important findings for quick reference. Full-text search works within your scratchpad conversations. Message history provides full scrollback and conversation persistence. When collapsed, the pad shows rotating example prompts (drafting, litigation, legal analysis) for inspiration.

From scratchpad to project: when a scratchpad conversation produces valuable work — real research, a draft worth refining, analysis worth preserving — you can save it to an existing project or create a new project directly from the scratchpad. The entire conversation (messages, AI responses, pinned items) transfers to the project. This means zero work is ever lost: start with a quick question on the scratchpad, and if it turns into a real matter, promote it with one click.

The lawyer's day is full of quick questions. "Is there a statute of limitations issue here?" "What is the standard for TRO in this circuit?" "Draft a quick email to opposing counsel about the deposition schedule." None of these warrant creating a project. All of them benefit from AI. The Immediate Action Pad makes the AI as accessible as a search bar — always there, always ready, no overhead.

In any firm, documents are constantly in flight — sent for review, awaiting approval, stuck with someone who is busy. Lawyers spend significant time chasing down the status of documents they have sent to others, and miss reviews they have been assigned. The My Workflow widget centralizes all of this in one place.

The widget provides three tabs. Awaiting My Review shows documents that have been sent to you for review. It displays the document title, who requested the review, and how long it has been waiting. One-click navigation takes you straight to the document. You can dismiss items that are no longer relevant. A red badge count shows how many reviews are pending — impossible to miss.

Sent for Review shows documents you have sent to others. Each reviewer has a status badge — green checkmark (approved), red alert (changes requested), clock (still pending). Actions include going to the document, reminding the reviewer if they are taking too long, reassigning to someone else, or canceling the review. This is the "where is my document?" tracker that eliminates the "hey, did you get a chance to look at that draft?" emails.

Updates is a project activity feed showing what is happening across your matters — new documents added, analyses completed, reviews finished. You can dismiss items individually or clear all. It keeps you current without checking each matter individually.

Additional features include a project filter dropdown (view all projects or filter to one), a reassign dialog with two-tier display showing matter team collaborators first then the broader firm directory, a share project button for quick access to collaboration settings, and real-time badge counts that update as events occur.

Legal matters are confidential. A firm may have 50 lawyers, but only 3 should see a particular matter. The role-based access control system ensures that access is controlled at the matter level, with full audit trails of who accessed what.

You can add collaborators by inviting team members to a matter with specific permission levels. Permission levels provide granular control over who can view, edit, analyze, and administer each matter. Ownership transfer lets you move matter ownership to another user — essential when an attorney leaves the firm or a matter is reassigned. The invitation system delivers team member invitations with specified access levels.

Permission checking is pervasive: every operation validates permissions before execution. Document access, AI interactions, tool use, and administrative actions all respect the permission model. A viewer cannot trigger an AI analysis. A commenter cannot edit document text. An editor cannot transfer ownership. The system enforces these boundaries at every level, not just the UI.
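Pervasive permission checking can be sketched as a gate every operation passes through before execution. The role hierarchy and operation names below are illustrative:

```python
from enum import IntEnum

class Role(IntEnum):
    VIEWER = 1
    COMMENTER = 2
    EDITOR = 3
    OWNER = 4

# Minimum role required per operation (illustrative mapping).
REQUIRED = {
    "view_document":      Role.VIEWER,
    "add_comment":        Role.COMMENTER,
    "edit_document":      Role.EDITOR,
    "run_ai_analysis":    Role.EDITOR,
    "transfer_ownership": Role.OWNER,
}

def authorize(role, operation):
    """Validate before executing -- enforced in the backend, not just the UI."""
    if role < REQUIRED[operation]:
        raise PermissionError(f"{role.name} may not {operation}")

authorize(Role.EDITOR, "edit_document")       # permitted
# authorize(Role.VIEWER, "run_ai_analysis")   # raises PermissionError
```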

You can remove collaborators when team members leave a matter, immediately revoking their access. The combination of granular permissions, invitation management, ownership transfer, and comprehensive audit trails means that confidentiality is maintained through every stage of a matter's lifecycle — from initial creation through active litigation to final archiving.

Lawyers are constantly resending documents. A partner asks for the latest draft. Co-counsel needs the research summary. The client wants the timeline. Every time, the lawyer digs through files, finds the right version, formats it for the audience, and sends it. This is tedious, error-prone (wrong version sent), and wastes time that could be billed.

The Export Control Panel allows comprehensive export of project materials. Document selection provides multi-select checkboxes for all project documents — export one, some, or all. Chat history export lets you select specific conversations to include, with filtering: include all messages (full transcript), include only AI responses (research findings without the back-and-forth), or include only user messages (just the questions and instructions). Pinned messages can be toggled to include pinned messages from all conversations — the curated highlights without the noise. Output is a professionally formatted PDF via print preview, ready for delivery.
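The transcript filtering modes reduce to a simple role filter over the conversation. An illustrative sketch; the message shape and mode names are assumptions:

```python
def filter_transcript(messages, mode="all"):
    """'all' keeps the full transcript, 'ai_only' keeps the research
    findings, 'user_only' keeps just the questions and instructions."""
    if mode == "ai_only":
        return [m for m in messages if m["role"] == "assistant"]
    if mode == "user_only":
        return [m for m in messages if m["role"] == "user"]
    return list(messages)

messages = [
    {"role": "user", "content": "Is the forum selection clause enforceable?"},
    {"role": "assistant", "content": "Under the governing standard, likely yes..."},
]
findings = filter_transcript(messages, "ai_only")   # research without back-and-forth
```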

When a partner asks "send me everything on the Jones matter," the answer is one export — documents, research conversations, and pinned findings — formatted and ready to send. No digging through email threads or file systems.

The Share Modal provides granular access control for inviting team members. You can invite by email or user identifier. Role-based permissions include Editor (full document editing + chat participation), Commenter (document comments + chat view only), Viewer (document view only + chat view only), and Custom (mix and match permissions — e.g., view documents but fully participate in chat). Permission dimensions cover document access (none / view / edit / comment) crossed with chat access (none / view / participate / pinned_only). Collaborator management lets you add and remove collaborators, update permissions per user, transfer ownership, and leave as owner.
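Because the two permission dimensions are crossed, the named roles are just preset points in a document-access by chat-access space, and Custom is any other point. A sketch; the level names follow the text, while the data shapes are illustrative:

```python
from dataclasses import dataclass

DOC_LEVELS  = ("none", "view", "comment", "edit")
CHAT_LEVELS = ("none", "view", "pinned_only", "participate")

@dataclass(frozen=True)
class Permissions:
    document: str
    chat: str

# Named roles are preset points in the crossed space.
ROLES = {
    "editor":    Permissions(document="edit",    chat="participate"),
    "commenter": Permissions(document="comment", chat="view"),
    "viewer":    Permissions(document="view",    chat="view"),
}

# Custom is any other point: e.g. view documents but fully join the chat.
custom = Permissions(document="view", chat="participate")
```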

Ready to Deploy the Nuclear Suite?

Every plan includes all 38 capabilities. Choose your security posture and work capacity.

View Pricing