Workflow — eDiscovery First-Pass

First-pass review without the review center.

200K–5M document sets, post-TAR — responsiveness, privilege, and hot-doc tagging on your case taxonomy. Tagged production sets and privilege logs delivered into Relativity, Reveal, Everlaw, or Lighthouse. Replaces contract-attorney review at a fraction of the per-hour rate.

$35–$65
Per hour billed for contract-attorney first-pass review
200K–5M
Documents in a typical post-cull review set
60–85%
Volume off the contract-attorney line after cutover
What This Replaces

The Contract-Attorney Review Center on Every Investigation

The work the contract-attorney review center does on every production — and the cost of leaving it there.

The labor

eDiscovery first-pass today moves through ALSP review centers at Epiq, Consilio, KLDiscovery, Cobra Legal Solutions, UnitedLex, Lighthouse, and adjacent firms. Onshore contract attorneys cost $35–$65 per hour billed; offshore reviewers $15–$30. A million-document review at 50–75 documents per attorney per hour runs hundreds of thousands of dollars before the privilege second-pass even starts.
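That arithmetic can be sketched with round numbers. The throughput and rate figures below are the ends of the ranges quoted above, not a quote for any specific matter:

```python
def first_pass_cost(docs: int, docs_per_hour: float, rate_per_hour: float):
    """Estimate attorney-hours and billed cost for a first-pass review."""
    hours = docs / docs_per_hour
    return hours, hours * rate_per_hour

# A 1M-document set at the cheapest corner of the quoted ranges
# (75 docs/hour throughput, $35/hour onshore rate):
hours, cost = first_pass_cost(1_000_000, docs_per_hour=75, rate_per_hour=35)
# roughly 13,333 attorney-hours and roughly $467,000 billed,
# before the privilege second-pass starts
```

Even the most favorable corner of the quoted ranges lands in the high six figures for a million-document set; the expensive corner ($65/hour at 50 documents/hour) works out to $1.3M.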

The cycle time

Typical first-pass timelines run 6–16 weeks at the ALSP, with reviewer ramp, calibration meetings, and quality-control sampling eating into the schedule. Every week the production set isn't tagged is a week the case team can't see the hot-doc clusters, can't refine the responsiveness call, and can't anticipate which documents opposing counsel will hammer on at deposition.

The Workflow

Input · Analysis · Output

What goes in, what we do to it, and what shows up in the review platform.

Input

TAR-prepared document set

  • Post-cull document set in Relativity, Reveal, Everlaw, or Lighthouse
  • Custodian metadata and email-thread structure
  • Case-specific issue taxonomy and seed coding
  • Prior privilege determinations from related matters
  • Search-term reports and TAR validation results
  • Foreign-language documents (English-needed flag)
  • Native files (Word, Excel, PowerPoint, PDF, images)
Analysis

Tag responsiveness, privilege, hot-doc

  • Responsiveness coding per case-specific taxonomy
  • Privilege identification (attorney-client, work-product)
  • Hot-doc identification (case-team-defined criteria)
  • Issue tagging per case-specific issue list
  • Foreign-language responsiveness without full translation
  • Confidence score per coding decision
  • Exceptions to second-pass attorney queue
Output

Tagged set + log into the platform

  • Relativity (REST API and RDOs)
  • Reveal (REST API)
  • Everlaw (REST API)
  • Lighthouse (Smart Workflows)
  • Privilege log draft with metadata fields populated
  • Hot-doc memo for the case team
  • Coding-decision audit trail per document
Side by Side

eDiscovery First-Pass Today vs. With Last Rev

The numbers that matter: cycle time, per-document cost, accuracy, and audit posture.

Dimension · Contract-Attorney Review Center · Last Rev eDiscovery First-Pass

  • Cycle time, set received to first-pass tagged
    Review center: 6–16 weeks · Last Rev: 5–14 days
  • Per-hour or per-document cost
    Review center: $35–$65/hour onshore, $15–$30/hour offshore · Last Rev: per-document, benchmarked at 20–40% of ALSP unit cost
  • Surge handling on rolling productions
    Review center: add reviewers, recalibrate, re-QC · Last Rev: elastic by design — same case taxonomy applied across rolling productions
  • Coding consistency across reviewers
    Review center: variable — judgment drift, reviewer turnover, calibration overhead · Last Rev: deterministic — same taxonomy applied identically across the entire set
  • Privilege identification posture
    Review center: reviewer-by-reviewer, second-pass attorney for confirmation · Last Rev: privilege-aware first-pass plus a structured second-pass queue with metadata pre-populated
  • Review-platform integration
    Review center: manual coding inside Relativity / Reveal / Everlaw · Last Rev: direct via Relativity / Reveal / Everlaw / Lighthouse APIs
  • Renegotiation leverage at next review-vendor renewal
    Review center: none — you're locked in · Last Rev: 60–85% of first-pass volume off the contract
How It Works

From TAR-Prepared Set to Tagged Production

Five steps. Every one logged. Every one reversible if your confidence threshold isn't met.

Submission Lands
TAR-prepared document set in Relativity, Reveal, Everlaw, or Lighthouse — with custodian metadata, email-thread structure, search-term reports, and case-specific issue taxonomy from case counsel.
Extraction & Classification
Responsiveness coding per the case taxonomy. Privilege identification (attorney-client, work-product, common-interest). Hot-doc identification per case-team-defined criteria. Issue-tag application across the set.
Validation Against Case-Team Bar
Coding decisions validated against case-team calibration sample and seed coding. Anything below your confidence threshold per coding decision is routed to a human exception queue — your call which queue, ours or yours.
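The routing rule this step describes is simple to picture. A minimal sketch, assuming an illustrative 0.92 threshold and a simplified decision record; the field names are hypothetical, not the production schema:

```python
from dataclasses import dataclass

@dataclass
class CodingDecision:
    doc_id: str
    tag: str           # e.g. "responsive", "privileged", "hot"
    confidence: float  # model confidence for this single decision

def route(decision: CodingDecision, threshold: float) -> str:
    """Below the case team's confidence threshold: human exception queue.
    At or above it: flows on to the review-platform push."""
    if decision.confidence < threshold:
        return "exception_queue"
    return "platform_push"

route(CodingDecision("DOC-0001", "privileged", 0.87), threshold=0.92)
# -> "exception_queue": a borderline privilege call goes to an attorney
```

The threshold is a per-matter dial set by case counsel, which is why the step above frames the queue choice as "your call which queue, ours or yours."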
Push to Review Platform
Tagged set, privilege log draft, and hot-doc memo delivered into Relativity, Reveal, Everlaw, or Lighthouse via the documented integration. Privilege log metadata pre-populated for the second-pass attorney.
Audit Log Persisted
Every coding decision, privilege determination, and hot-doc tag logged with the source document, model version, prompt, and confidence score. FRCP-defensible chain of custody for clawback events and Rule 26(g) certifications.
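One way to picture a per-decision audit entry. The field set mirrors what this step names (source document, model version, prompt, confidence score), but the exact schema and values below are illustrative assumptions:

```python
import json
from datetime import datetime, timezone

def audit_record(doc_id, decision, basis, model_version, prompt_id, confidence):
    """Build one append-only audit-log entry per coding decision."""
    return {
        "doc_id": doc_id,
        "decision": decision,            # responsiveness / privilege / hot-doc tag
        "basis": basis,                  # e.g. "attorney-client"
        "model_version": model_version,  # pinned so the decision is reproducible
        "prompt_id": prompt_id,
        "confidence": confidence,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

# Serialized as one JSON line per decision in an append-only log:
line = json.dumps(audit_record(
    "DOC-0001", "privileged", "attorney-client",
    "model-2024-06", "priv-prompt-v3", 0.96))
```

An append-only record keyed this way is what lets a clawback event trace from the document back to the basis for the original privilege call.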
Compliance & Defensibility

Built to Meet the Quality Bar Litigation & Investigations Run On

FRCP Rule 26(g) certification posture
Every coding decision is logged with model version, prompt, and confidence score, so a Rule 26(g) certification can rest on a defensible reasonable-inquiry record. The audit trail produces exactly what an opposing counsel's motion to compel would demand.
Privilege and work-product posture
Privilege-aware first-pass review with metadata pre-populated for the second-pass attorney. Clawback events trace back to the document, the page, and the basis for the original privilege determination — cleaner than reconstructing from a contract attorney's notes.
TAR-validated workflow integration
Operates inside the TAR validation framework your case team already uses (Sedona Conference principles, statistical-sampling validation, control-set comparison). The first-pass output goes back into the same QC gate — not a parallel track.
Data residency & investigation confidentiality
Investigation documents contain case-team work product, witness statements, and pre-disclosure information. Deployable in your VPC or our SOC 2 environment. Encryption in transit and at rest; retention policies tied to your matter and post-judgment retention rules.
Common Questions

What Litigation & Investigations Teams Ask About First-Pass Review

How is this different from the AI features built into Relativity, Reveal, Everlaw, or Lighthouse?
Relativity, Reveal, Everlaw, and Lighthouse are the review platforms — and each has improving in-platform AI features. The competitor on this page is the contract-attorney review center on your matter budget — Epiq, Consilio, KLDiscovery, Cobra Legal Solutions, UnitedLex, Lighthouse review services, or a captive offshore center billed at $35–$65 per hour onshore. We undercut that labor cost, integrate directly into your existing review platform, and deliver tagged production sets and privilege log drafts into the system of record. We do not compete head-to-head with the in-platform AI most clients already license; most clients run the two side by side.
How is this different from your privilege review page?
First-pass review is the high-volume responsiveness, privilege, and hot-doc tagging across the entire post-cull set. Privilege review is the focused second-pass workflow — privilege determinations, work-product analysis, and privilege log generation on the documents the first-pass flagged. Different scopes, different deliverables. We built each as a separate page so the workflow stays specific to what the case team buys.
We have a contract-attorney review center on retainer. How does this work alongside that?
Most case teams keep the review-vendor arrangement in place during pilot and early production — we route exceptions, complex privilege calls, and any document genuinely requiring attorney judgment to the team you already have. Volume to the contract-attorney center drops 60–85% on first-pass review once cutover completes. You renegotiate at the next renewal from a much better position, or shift the relationship to higher-complexity work like privilege second-pass or deposition prep.
What's your accuracy bar versus a contract-attorney first-pass reviewer?
Our pilot success threshold is responsiveness, privilege, and hot-doc tagging accuracy at parity with or above your incumbent review center, measured on the same shadow-data sample of documents and validated against the case-team calibration set. Anything below your defined confidence threshold per coding decision is routed to a human exception queue — your call which queue, ours or yours.
How do you handle foreign-language documents in the production set?
We run responsiveness review without full translation for the high-volume foreign-language portion of the set, then surface candidates for translation only on the hot-doc shortlist. Mandarin, Japanese, Korean, German, Portuguese, and Arabic are well covered; less common languages route to a native-speaker queue. See our foreign-language document review page for the dedicated workflow.
Can you actually integrate with Relativity, Reveal, Everlaw, and Lighthouse?
Yes — through the documented integration surface each platform supports. Relativity via the Relativity REST API and RDOs; Reveal, Everlaw, and Lighthouse via their published REST APIs. Your IT team reviews and approves a service account, and we connect through the documented integration. We do not require platform-side custom development.
How long until a pilot is running on a live matter?
eDiscovery pilots typically run 4–6 weeks: 1 week of integration and case-taxonomy mapping with case counsel, 2–3 weeks of shadow-mode running on a constrained subset of the post-cull set with no platform-side coding writes, 1–2 weeks of supervised cutover. Production rollout is staged after the pilot meets your accuracy and SLA bar.
What does pricing look like compared to our current per-document review rate?
We benchmark against your current per-hour or per-document review unit cost; an hourly rate divided by average review velocity gives the per-document figure. Our target is 20–40% of that per-document cost at higher accuracy and faster cycle time. Pricing is structured around volume tiers and outcome SLAs, not hourly billable rates.
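The hourly-to-per-document conversion works out like this, assuming an illustrative $50/hour blended rate and 60 documents per hour of review velocity:

```python
def per_doc_cost(rate_per_hour: float, docs_per_hour: float) -> float:
    """Convert an hourly review rate into a per-document unit cost."""
    return rate_per_hour / docs_per_hour

incumbent = per_doc_cost(50, 60)  # about $0.83 per document
target_low, target_high = incumbent * 0.20, incumbent * 0.40
# the 20-40% target lands between about $0.17 and $0.33 per document
```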

Two Ways to Start

Take the AI assessment for a structured read on first-pass-review feasibility on your typical matters. Or talk to us if you already know which review-vendor line is bleeding the most labor cost.

Other Workflows

More Legal Workflows We Replace

The same approach, applied to the other document-heavy labor lines on your legal-ops or ALSP budget.