The Forensic Media Evaluation (FME) framework is the analytical engine behind every Rhetoric Audit analysis. Rather than assigning a single “bias score,” FME runs your article through a multi-stage pipeline that extracts, annotates, and verifies rhetorical patterns at the span level — tracing every score back to the exact words that produced it. You are not reading a vibe; you are reading a reproducible forensic record.

Documentation Index
Fetch the complete documentation index at: https://www.rhetoricaudit.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Four core principles
FME is built on four commitments that govern how every analysis is conducted.

Forensic, Not Editorial
FME scores what is structurally present in the text — framing, omissions, emotional load, evidence quality — not whether the analysis team agrees with the conclusion. The framework has no preferred ideology.
Multi-Axis, Not Left/Right
A single label flattens complex argumentation. FME decomposes each article into independent rhetorical dimensions — Aristotelian appeals, 24 fallacy types, Plutchik-8 emotions — that you can reason about separately.
Span-Anchored Evidence
Every score traces back to exact character-offset passages inside the article. When you see a Manipulation Risk Score of 72, you can always ask “why?” and receive a quoted span that justifies it.
Reproducible & Auditable
Aggregation is deterministic — no LLM makes the final rollup calculation. Every scan returns a prompt_hash and fme_version so results are comparable across articles, sources, and time.

The six-stage pipeline
FME V19.1 processes each article through six stages. Stages 1.5 and 1.6 run in parallel; all other stages are sequential.

Stage 0 — Preprocessing & Chunking
The article is split into overlapping paragraph windows before any LLM call is made. Each window covers 3 paragraphs with a 1-paragraph overlap, ensuring that rhetorical patterns that span paragraph breaks are never truncated. This stage is fully deterministic — no model is involved.
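The windowing rule above (3-paragraph windows, 1-paragraph overlap) can be sketched as a pure function. This is an illustrative reconstruction, not the actual FME code; the function name and defaults are assumptions, with the window and overlap sizes taken from the description above.

```typescript
// Deterministic paragraph chunking: 3-paragraph windows with a 1-paragraph
// overlap, so a window starts every (windowSize - overlap) paragraphs.
function chunkParagraphs(
  paragraphs: string[],
  windowSize = 3,
  overlap = 1,
): string[][] {
  const stride = windowSize - overlap; // 2 with the default settings
  const chunks: string[][] = [];
  for (let start = 0; start < paragraphs.length; start += stride) {
    chunks.push(paragraphs.slice(start, start + windowSize));
    // Stop once a window reaches the end of the article.
    if (start + windowSize >= paragraphs.length) break;
  }
  return chunks;
}

// Seven paragraphs yield windows [0..2], [2..4], [4..6].
const demo = ["p1", "p2", "p3", "p4", "p5", "p6", "p7"];
console.log(chunkParagraphs(demo).length); // 3 windows
```

Because each window shares one paragraph with its neighbor, a rhetorical pattern straddling a paragraph break always appears whole in at least one window.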
Stage 1 — Span Annotation
Each chunk is passed to the FME span annotation model, which extracts three families of signals anchored to exact character offsets:
- 19 SemEval propaganda techniques — the full inventory from the SemEval-2020 shared task on fine-grained propaganda detection
- Aristotelian appeals — ethos (credibility), pathos (emotion), and logos (logic), scored per paragraph
- Plutchik-8 emotions — fear, anger, joy, sadness, anticipation, surprise, disgust, and trust, each scored independently
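A span annotation from any of the three families reduces to the same shape: a label anchored to character offsets in the raw article. The field names below are assumptions for illustration, not the exact FME schema; only the offset-anchoring behavior is taken from the docs.

```typescript
// Illustrative shape of a Stage 1 span annotation.
type SignalFamily = "propaganda_technique" | "appeal" | "emotion";

interface SpanAnnotation {
  family: SignalFamily;
  label: string;        // e.g. "Loaded_Language", "pathos", "fear"
  startOffset: number;  // inclusive character offset into the article
  endOffset: number;    // exclusive character offset
  score: number;        // 0..1 strength of the signal
}

// Recover the quoted evidence span for any annotation.
function quoteSpan(article: string, a: SpanAnnotation): string {
  return article.slice(a.startOffset, a.endOffset);
}

const article = "Critics call the plan a reckless disaster for families.";
const ann: SpanAnnotation = {
  family: "propaganda_technique",
  label: "Loaded_Language",
  startOffset: 24,
  endOffset: 41,
  score: 0.9,
};
console.log(quoteSpan(article, ann)); // "reckless disaster"
```

Anchoring to offsets rather than paraphrases is what lets any downstream score answer “why?” with a verbatim quote.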
Stage 1.5 — Claim Grounding (parallel)
Factual claims extracted in Stage 1 are verified against three external sources:
- Google Fact Check Tools (FCT) — ClaimReview structured data from major fact-checkers
- Wikidata — structured entity and relationship data
- Wikipedia — encyclopedic context for people, organizations, and events
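One plausible way the three sources could be folded into a single per-claim status is sketched below. The verdict labels and combination rule are assumptions for illustration; the docs specify only which sources are consulted, not how their answers are merged.

```typescript
// Hypothetical per-claim grounding combiner (labels and logic illustrative).
type Verdict = "supported" | "contradicted" | "not_found";

interface GroundingResult {
  source: "fact_check_tools" | "wikidata" | "wikipedia";
  verdict: Verdict;
}

function groundClaim(results: GroundingResult[]): Verdict {
  const found = results.filter((r) => r.verdict !== "not_found");
  if (found.length === 0) return "not_found";
  // An explicit contradiction from any source outranks passive support.
  return found.some((r) => r.verdict === "contradicted")
    ? "contradicted"
    : "supported";
}
```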
Stage 1.6 — Cross-Platform Corroboration (parallel)
While Stage 1.5 runs, the fetch-intelligence edge function queries three real-time platforms in parallel:
- X (Twitter) — social signal and amplification patterns
- News — coverage volume and framing across outlets
- Open Web — broader corroboration from non-news sources
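The parallel fan-out across the three platforms can be sketched with Promise.allSettled, so that one platform failing does not sink the whole stage. The function and fetcher names are illustrative; the actual fetch-intelligence implementation is not shown in these docs.

```typescript
// Hypothetical sketch of Stage 1.6: three platform lookups fired at once,
// with failures isolated per platform rather than failing the stage.
type Platform = "x" | "news" | "open_web";

async function corroborate(
  query: string,
  fetchers: Record<Platform, (q: string) => Promise<number>>,
): Promise<Record<Platform, number | null>> {
  const platforms = Object.keys(fetchers) as Platform[];
  const settled = await Promise.allSettled(
    platforms.map((p) => fetchers[p](query)),
  );
  const out = {} as Record<Platform, number | null>;
  settled.forEach((res, i) => {
    // A failed platform yields null instead of aborting the scan.
    out[platforms[i]] = res.status === "fulfilled" ? res.value : null;
  });
  return out;
}
```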
Stage 2 — Aggregation
A deterministic rollup function combines the outputs of all previous stages. No LLM is involved here; the rollup formula is fixed. It applies only when two or more corroborating signals exist. If fewer signals are available, the system flags the result rather than extrapolating. This design eliminates hallucination at the aggregation layer.
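The gating rule can be sketched as follows. The exact rollup formula is not reproduced in these docs, so a plain mean stands in for it here; the minimum-signal threshold of two is taken from the description above, and all names are illustrative.

```typescript
// Hypothetical sketch of the Stage 2 gate: aggregate only when at least
// two corroborating signals exist, otherwise flag instead of extrapolating.
interface Rollup {
  status: "scored" | "insufficient_signals";
  score: number | null;
}

function aggregate(signals: number[], minSignals = 2): Rollup {
  if (signals.length < minSignals) {
    return { status: "insufficient_signals", score: null };
  }
  // Deterministic combination; a plain mean stands in for the fixed formula.
  const mean = signals.reduce((a, b) => a + b, 0) / signals.length;
  return { status: "scored", score: mean };
}
```

Because the function is pure and fixed, re-running it on the same Stage 1–1.6 outputs always yields the same score, which is what makes scans comparable over time.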
Stage 3 — Validation
The assembled report is validated against a Zod schema before it is returned to you. The validator checks:
- All required fields are present and correctly typed
- Character offsets are within bounds for the submitted article
- The fme_version and prompt_hash are embedded in the response
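The production validator is a Zod schema; the plain-TypeScript stand-in below shows the same three checks. Field names beyond fme_version and prompt_hash are assumptions for illustration.

```typescript
// Minimal stand-in for the Stage 3 validation checks.
interface FmeReport {
  fme_version: string;
  prompt_hash: string;
  spans: { startOffset: number; endOffset: number }[];
}

function validateReport(report: FmeReport, articleLength: number): string[] {
  const errors: string[] = [];
  // Required provenance fields must be present.
  if (!report.fme_version) errors.push("missing fme_version");
  if (!report.prompt_hash) errors.push("missing prompt_hash");
  // Every span must lie within the submitted article.
  for (const s of report.spans) {
    if (s.startOffset < 0 || s.endOffset > articleLength || s.startOffset >= s.endOffset) {
      errors.push(`span out of bounds: ${s.startOffset}-${s.endOffset}`);
    }
  }
  return errors;
}
```

A report that fails any check is rejected before it reaches you, so every delivered report carries valid offsets and provenance fields.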
Benchmark performance
FME V19.1 achieved 100% band accuracy across the 14-article Suite C benchmark (2026-05-02), with a macro Mean Absolute Error of 2.5 (target ≤ 5) and a 0.0% hallucination rate. The full benchmark report is available at rhetoricaudit.com/test-results.
