Workflow — Foreign Language Review

Responsiveness review without translating everything.

Mandarin, Japanese, Korean, German, Portuguese, Arabic, and beyond — responsiveness review without full translation, with hot-doc translation only on the shortlist. English summaries and tagged production into Relativity, Reveal, Everlaw, or Lighthouse. Replaces native-speaker contract attorneys at a fraction of the per-word and per-hour cost.

$0.18–$0.32
Per word, contract translation at the ALSP
$50–$120
Per hour, native-speaker contract attorney (billed)
60–85%
Volume off the native-speaker line after cutover
What This Replaces

The Native-Speaker Review Center on Every Cross-Border Matter

The work the native-speaker reviewer does on every foreign-language document — and the cost of leaving it there.

The labor

Foreign-language review today moves through native-speaker contract attorneys at Morae, Lighthouse, Consilio, Epiq, KLDiscovery, UnitedLex, and Cobra Legal Solutions. Pricing combines per-word translation rates ($0.18–$0.32 per word for contract-grade translation) with hourly review billing ($50–$120 per hour for native-speaker reviewers). A 100K-document Mandarin or Japanese set with full translation can run into seven figures before the case team has a hot-doc shortlist.
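The seven-figure claim is easy to sanity-check. A minimal back-of-envelope sketch, assuming an average document length of 300 words (an illustrative assumption, not a benchmark from this page):

```python
# Back-of-envelope translation cost for a 100K-document set.
# AVG_WORDS_PER_DOC is an assumed figure for illustration only;
# the per-word rates are the contract-translation range quoted above.
DOC_COUNT = 100_000
AVG_WORDS_PER_DOC = 300
RATE_LOW, RATE_HIGH = 0.18, 0.32

total_words = DOC_COUNT * AVG_WORDS_PER_DOC
low = total_words * RATE_LOW
high = total_words * RATE_HIGH
print(f"Full-set translation: ${low:,.0f} to ${high:,.0f}")
# 30M words lands well into seven figures before any hourly review billing
```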

The cycle time

Cross-border discovery typically runs 8–20 weeks at the native-speaker review center, with native-speaker recruiting, language calibration, and per-language quality control eating into the schedule. Every week the foreign-language portion of the production isn't tagged is a week the case team can't see the cross-border hot-doc clusters and can't refine its theory of the case for international depositions or treaty requests.

The Workflow

Input · Analysis · Output

What goes into the foreign-language review, what we do to it, and what shows up in the review platform.

Input

Foreign-language document set

  • Documents in Mandarin, Japanese, Korean, German, Portuguese, Arabic
  • Mixed-language documents (foreign body + English exhibits)
  • OCR-extracted text from scans and image PDFs
  • Custodian metadata and email-thread structure
  • Case-specific issue taxonomy (English-defined)
  • Native-speaker glossaries for case-specific terms
  • Less common languages flagged for native-speaker queue
Analysis

Triage without full translation

  • Responsiveness coding without full translation
  • Privilege and hot-doc identification in source language
  • Issue tagging per English-defined taxonomy
  • Key-term identification across languages
  • Hot-doc translation queue — only the shortlist
  • English summaries for hot docs and witness exhibits
  • Confidence score per coding decision; exceptions to native-speaker queue
Output

Tagged set + translations into the platform

  • Relativity (REST API and RDOs)
  • Reveal (REST API)
  • Everlaw (REST API)
  • Lighthouse (Smart Workflows)
  • Translated hot docs as English-language production
  • English summary memo for the case team
  • Per-language coding-decision audit trail
Side by Side

Foreign Language Review Today vs. With Last Rev

The numbers that matter: cycle time, per-document cost, accuracy, and audit posture.

| Dimension | Native-Speaker Review Center | Last Rev Foreign Language Review |
| --- | --- | --- |
| Cycle time, set received to first-pass tagged | 8–20 weeks (recruiting + calibration) | 7–14 days |
| Translation cost on the full set | $0.18–$0.32 per word, applied broadly | Translation only on hot-doc shortlist — typically 1–5% of the set |
| Per-hour native-speaker review cost | $50–$120/hour billed | Per-document, benchmarked at 20–40% of native-speaker unit cost |
| Coding consistency across languages | Variable — per-language reviewer pools, calibration drift | Same case taxonomy applied identically across all languages |
| Audit log per coding decision | Reviewer notes in source language, no machine-readable lineage | Source paragraph + coding decision + model version + confidence per language |
| Mixed-language document handling | Routed by primary language, slower QC | Multi-language extraction in a single pass |
| Renegotiation leverage at next review-vendor renewal | None — you're locked in | 60–85% of foreign-language first-pass volume off the contract |
How It Works

From Foreign-Language Set to Tagged Production

Five steps. Every one logged. Every one reversible if your confidence threshold isn't met.

Submission Lands
Foreign-language document set in Relativity, Reveal, Everlaw, or Lighthouse — with custodian metadata, email-thread structure, English-defined case taxonomy, and any case-specific glossaries the team has built.
Extraction & Classification
Responsiveness, privilege, and hot-doc coding applied in the source language without full translation. Issue tags applied per the English-defined taxonomy. Key-term identification across languages. Mixed-language documents handled in a single pass.
Validation Against Case-Team Bar
Coding decisions validated against your case-team's per-language calibration sample. Anything below your confidence threshold per coding decision is routed to a native-speaker exception queue — your call which queue, ours or yours.
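The threshold-and-route step above can be sketched in a few lines. This is an illustrative sketch, not the production implementation; the field names and the 0.90 threshold are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CodingDecision:
    doc_id: str
    language: str      # e.g. "ja", "zh", "de"
    tag: str           # e.g. "responsive", "privileged", "hot-doc"
    confidence: float  # model confidence for this decision

def route(decisions, threshold=0.90):
    """Split decisions into auto-accepted vs. native-speaker exception queue."""
    accepted, exceptions = [], []
    for d in decisions:
        (accepted if d.confidence >= threshold else exceptions).append(d)
    return accepted, exceptions

accepted, exceptions = route([
    CodingDecision("DOC-001", "ja", "responsive", 0.97),
    CodingDecision("DOC-002", "zh", "hot-doc", 0.62),
])
# DOC-001 is auto-accepted; DOC-002 goes to the native-speaker queue
```

The threshold itself is the case team's call, set per language against the calibration sample.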
Push to Review Platform
Tagged production set, translated hot docs, and English summary memo delivered into Relativity, Reveal, Everlaw, or Lighthouse via the documented integration. Translation cost applied only to the hot-doc shortlist — typically 1–5% of the original set.
Audit Log Persisted
Every coding decision, hot-doc tag, and translation request logged with the source paragraph, language, model version, prompt, and confidence score. Defensible chain of custody for cross-border productions and treaty-based document requests.
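What one such audit-log entry might look like, as a hedged sketch (the field names and version labels here are illustrative assumptions, not the actual log schema):

```python
import json
from datetime import datetime, timezone

# Illustrative audit-log record; field names are assumptions, not the real schema.
entry = {
    "doc_id": "DOC-000412",
    "language": "zh",
    "source_paragraph": "<source-language paragraph cited verbatim>",
    "decision": {"tag": "hot-doc", "responsive": True, "privileged": False},
    "model_version": "2025-06-rev3",   # hypothetical version label
    "prompt_id": "fl-review-v12",      # hypothetical prompt identifier
    "confidence": 0.94,
    "timestamp": datetime.now(timezone.utc).isoformat(),
}
print(json.dumps(entry, ensure_ascii=False, indent=2))
```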
Compliance & Defensibility

Built to Meet the Quality Bar Cross-Border Litigation Already Runs On

Cross-border discovery posture
FRCP Rule 26(g) certification supported by a per-language audit trail of coding decisions. Hague Convention, EU Evidence Regulation, and treaty-based document-request workflows accommodated through the same source-paragraph-cited audit log.
Per-language calibration discipline
Each language has its own calibration sample and seed coding maintained by the case team. The audit log records which calibration version applied to which document, so per-language QC remains defensible across the lifecycle of the matter.
Translation chain of custody
Translation requests trace from the source-language hot-doc to the English production version with the model version, prompt, and confidence score recorded. When opposing counsel challenges a translation, the chain of custody produces the basis on demand.
Data residency and PRC / GDPR posture
Cross-border matters often have data-residency constraints (PRC PIPL, GDPR transfer rules, sectoral data laws). Deployable in your VPC or our SOC 2 environment, including in-region deployments where the matter requires it. Encryption in transit and at rest; retention policies tied to your matter and treaty obligations.
Common Questions

What Cross-Border Litigation Teams Ask About Foreign Language Review

How is this different from the multilingual features built into Relativity, Reveal, Everlaw, or Lighthouse?
Each of those review platforms has steadily improving multilingual features, and we don't compete with them head-to-head. The competitor on this page is the native-speaker contract-attorney line on your matter budget — Morae, Lighthouse, Consilio, Epiq, KLDiscovery, UnitedLex, or a captive offshore center charging $0.18–$0.32 per word for translation plus $50–$120 per hour for native-speaker review. We undercut that labor cost, integrate directly into your existing review platform, and deliver tagged production sets and English summaries into the system of record.
How is this different from your eDiscovery first-pass page?
eDiscovery first-pass is the high-volume English-language review across the post-cull set. This page is the focused workflow on the foreign-language portion of the same set — responsiveness coding without full translation, hot-doc translation only on the shortlist, and English summaries for the case team. They share infrastructure but the unit economics, the bar, and the deliverables are different. We built each as a separate page so the workflow stays specific to what cross-border case teams buy.
Which languages do you support and what about less common ones?
Mandarin, Japanese, Korean, German, Portuguese, Arabic, French, Spanish, Italian, Russian, Hindi, and most major commercial languages are well-supported with high-confidence coding. Less common languages (Indonesian, Vietnamese, Turkish, Hebrew, Polish, etc.) route to a native-speaker exception queue — your call which queue, ours or yours. We are honest about which languages perform best and which need native-speaker QC; the audit log makes the per-language confidence visible to the case team.
What's your accuracy bar versus a native-speaker contract attorney?
Our pilot success threshold is responsiveness, privilege, and hot-doc coding accuracy at parity with or above your incumbent native-speaker reviewer, measured on the same shadow-data sample of foreign-language documents and validated against the case-team's per-language calibration set. Anything below your defined confidence threshold per coding decision is routed to a native-speaker exception queue — your call which queue, ours or yours.
How do you handle culturally specific or legally loaded terms that are hard to translate?
Case-specific glossaries are built during onboarding with native-speaker counsel from the matter team. Legally loaded terms (e.g., specific PRC contract terms, German GmbH governance language, Japanese keiretsu relationships) are flagged with the original source paragraph and a glossary-aware English summary, so the case team makes the call on a richer file than standard machine-translation output provides. We don't make the legal call on culturally loaded terms — we surface the original-language evidence and the contextual gloss.
How do you handle data-residency requirements for PRC, EU, or sectoral matters?
Data-residency constraints are accommodated through in-region deployments. PRC PIPL matters can run in PRC infrastructure; EU GDPR transfer constraints can be satisfied with EU-region deployments. Your IT team and outside counsel determine the compliance posture; we deploy to match. Audit log persists per the matter's retention policy and treaty obligations.
How long until a pilot is running on a live matter?
Foreign-language review pilots typically run 4–6 weeks: 1 week of integration, language-specific calibration, and case-glossary build with case counsel and matter-team native speakers; 2–3 weeks of shadow-mode running on a constrained subset of the foreign-language set; 1–2 weeks of supervised cutover. Production rollout is staged after the pilot meets your accuracy and SLA bar per language.
What does pricing look like compared to our current per-word and per-hour rates?
We benchmark against your blended per-word translation cost ($0.18–$0.32) plus per-hour native-speaker review cost ($50–$120) on the historical baseline. Our target is 20–40% of that combined unit cost, with the savings driven primarily by translating only the hot-doc shortlist and applying per-language coding consistently. Pricing structures around volume tiers and outcome SLAs, not hourly billable rates.
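As a worked example under assumed volumes (only the rate ranges come from this page; the set size, document length, and review hours are illustrative assumptions):

```python
# Blended baseline vs. target cost for a hypothetical foreign-language set.
# DOCS, WORDS_PER_DOC, and REVIEW_HOURS are assumptions for illustration.
DOCS, WORDS_PER_DOC, REVIEW_HOURS = 50_000, 250, 2_500
word_rate, hour_rate = 0.25, 85  # midpoints of the $0.18-$0.32 and $50-$120 ranges

baseline = DOCS * WORDS_PER_DOC * word_rate + REVIEW_HOURS * hour_rate
target_low, target_high = 0.20 * baseline, 0.40 * baseline
print(f"Baseline: ${baseline:,.0f}; target: ${target_low:,.0f} to ${target_high:,.0f}")
```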

Two Ways to Start

Take the AI assessment for a structured read on foreign-language-review feasibility on your typical cross-border matters. Or talk to us if you already know which language is bleeding the most native-speaker labor cost.

Other Workflows

More Legal Workflows We Replace

The same approach, applied to the other document-heavy labor lines on your legal-ops or ALSP budget.