
The Forensic Media Evaluation (FME) framework is the analytical engine behind every Rhetoric Audit analysis. Rather than assigning a single “bias score,” FME runs your article through a multi-stage pipeline that extracts, annotates, and verifies rhetorical patterns at the span level — tracing every score back to the exact words that produced it. You are not reading a vibe; you are reading a reproducible forensic record.

Four core principles

FME is built on four commitments that govern how every analysis is conducted.

Forensic, Not Editorial

FME scores what is structurally present in the text — framing, omissions, emotional load, evidence quality — not whether the analysis team agrees with the conclusion. The framework has no preferred ideology.

Multi-Axis, Not Left/Right

A single label flattens complex argumentation. FME decomposes each article into independent rhetorical dimensions — Aristotelian appeals, 24 fallacy types, Plutchik-8 emotions — that you can reason about separately.

Span-Anchored Evidence

Every score traces back to exact character-offset passages inside the article. When you see a Manipulation Risk Score of 72, you can always ask “why?” and receive a quoted span that justifies it.

Reproducible & Auditable

Aggregation is deterministic — no LLM makes the final rollup calculation. Every scan returns a prompt_hash and fme_version so results are comparable across articles, sources, and time.

The six-stage pipeline

FME V19.1 processes each article through six stages: 0, 1, 1.5, 1.6, 2, and 3. Stages 1.5 and 1.6 run in parallel; all other stages run sequentially.

Stage 0 — Preprocessing & Chunking

The article is split into overlapping paragraph windows before any LLM call is made. Each window covers 3 paragraphs with a 1-paragraph overlap, ensuring that rhetorical patterns that span paragraph breaks are never truncated. This stage is fully deterministic — no model is involved.
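The sliding window described above can be sketched in a few lines. This is an illustrative implementation, not the production code; the function and parameter names are assumptions. The invariant it demonstrates is that each window repeats the last paragraph of the previous one, so no paragraph boundary is ever a blind spot.

```typescript
// Stage 0 sketch: 3-paragraph windows with a 1-paragraph overlap.
// Deterministic by construction — no model call, no randomness.
function chunkParagraphs(
  paragraphs: string[],
  windowSize = 3,
  overlap = 1,
): string[][] {
  const step = windowSize - overlap; // advance 2 paragraphs per window
  const chunks: string[][] = [];
  for (let start = 0; start < paragraphs.length; start += step) {
    chunks.push(paragraphs.slice(start, start + windowSize));
    if (start + windowSize >= paragraphs.length) break; // final window covers the tail
  }
  return chunks;
}
```

A 5-paragraph article yields two windows, `[p0, p1, p2]` and `[p2, p3, p4]`, with `p2` appearing in both.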

Stage 1 — Span Annotation

Each chunk is passed to the FME span annotation model, which extracts three families of signals anchored to exact character offsets:
  • 19 SemEval propaganda techniques — the full inventory from the SemEval-2020 shared task on fine-grained propaganda detection
  • Aristotelian appeals — ethos (credibility), pathos (emotion), and logos (logic), scored per paragraph
  • Plutchik-8 emotions — fear, anger, joy, sadness, anticipation, surprise, disgust, and trust, each scored independently
Every annotation includes the source span so you can verify it in the original text.
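A span-anchored annotation might look like the sketch below. The field names are assumptions, not the documented response schema; the point is the invariant the text describes: every annotation carries character offsets that reproduce the quoted span from the submitted article.

```typescript
// Illustrative shape of a span-anchored annotation.
interface SpanAnnotation {
  family: "propaganda" | "appeal" | "emotion";
  label: string;        // e.g. "loaded_language", "pathos", "fear"
  start: number;        // inclusive character offset into the article
  end: number;          // exclusive character offset
  quote: string;        // the exact source text between start and end
}

const article = "Officials warn the crisis could spiral out of control.";

const annotation: SpanAnnotation = {
  family: "emotion",
  label: "fear",
  start: 19,
  end: 54,
  quote: article.slice(19, 54),
};

// A reader (or a validator) can always re-derive the quote from the offsets:
const verifiable =
  article.slice(annotation.start, annotation.end) === annotation.quote;
```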

Stage 1.5 — Claim Grounding (parallel)

Factual claims extracted in Stage 1 are verified against three external sources:
  • Google Fact Check Tools (FCT) — ClaimReview structured data from major fact-checkers
  • Wikidata — structured entity and relationship data
  • Wikipedia — encyclopedic context for people, organizations, and events
Results are cached for 30 days to keep latency low and costs predictable. Stage 1.5 contributes 70% of the final grounding signal when two or more verification sources return a signal.
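The 30-day cache can be sketched as a simple TTL lookup. The real system presumably uses persistent storage rather than an in-memory `Map`, and every name below is illustrative; only the 30-day expiry comes from the text.

```typescript
// Minimal sketch of a 30-day verification cache for grounding results.
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

interface CacheEntry<T> { value: T; storedAt: number; }

class GroundingCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  get(key: string, now = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (now - entry.storedAt > THIRTY_DAYS_MS) {
      this.entries.delete(key); // expired: the claim must be re-verified
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now = Date.now()): void {
    this.entries.set(key, { value, storedAt: now });
  }
}
```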

Stage 1.6 — Cross-Platform Corroboration (parallel)

While Stage 1.5 runs, the fetch-intelligence edge function queries three real-time platforms in parallel:
  • X (Twitter) — social signal and amplification patterns
  • News — coverage volume and framing across outlets
  • Open Web — broader corroboration from non-news sources
Stage 1.6 contributes 30% of the final grounding signal when two or more signals are available.

Stage 2 — Aggregation

A deterministic rollup function combines the outputs of all previous stages. No LLM is involved here — the formula is fixed:
FGI = (Stage 1.5 score × 0.70) + (Stage 1.6 score × 0.30)
This formula applies only when two or more corroborating signals exist. If fewer signals are available, the system flags the result rather than extrapolating. This design eliminates hallucination at the aggregation layer.
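The rollup above is small enough to write out in full. The 70/30 weights and the two-signal guard come directly from the text; the function name and return shape are assumptions.

```typescript
// Deterministic Stage 2 rollup: FGI = 0.70 × Stage 1.5 + 0.30 × Stage 1.6,
// but only when both corroborating signals exist. Otherwise the result is
// flagged rather than extrapolated — no model is consulted at this layer.
interface RollupResult {
  fgi: number | null;   // Final Grounding Index, or null when flagged
  flagged: boolean;     // true when fewer than two signals are available
}

function rollupGrounding(
  stage15Score: number | null,  // claim-grounding signal
  stage16Score: number | null,  // cross-platform signal
): RollupResult {
  const signals = [stage15Score, stage16Score].filter((s) => s !== null);
  if (signals.length < 2) {
    return { fgi: null, flagged: true }; // never extrapolate from one signal
  }
  return { fgi: stage15Score! * 0.7 + stage16Score! * 0.3, flagged: false };
}
```

For example, a Stage 1.5 score of 80 and a Stage 1.6 score of 60 roll up to an FGI of 74; if either input is missing, the scan is flagged instead.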

Stage 3 — Validation

The assembled report is validated against a Zod schema before it is returned to you. The validator checks:
  • All required fields are present and correctly typed
  • Character offsets are within bounds for the submitted article
  • The fme_version and prompt_hash are embedded in the response
If validation fails, the scan is rejected rather than returning a partial result.
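The production validator is a Zod schema; the sketch below hand-rolls the same all-or-nothing behavior in plain TypeScript to show the idea, focusing on one rule the text calls out: character offsets must lie within the submitted article. All names here are illustrative.

```typescript
// Validation sketch: throw (reject the scan) rather than return a
// partial result. The real pipeline expresses these rules as a Zod schema.
interface Span { start: number; end: number; }
interface Report {
  fme_version: string;
  prompt_hash: string;
  spans: Span[];
}

function validateReport(report: Report, articleLength: number): void {
  if (!report.fme_version || !report.prompt_hash) {
    throw new Error("missing fme_version or prompt_hash");
  }
  for (const span of report.spans) {
    const inBounds =
      span.start >= 0 && span.end <= articleLength && span.start < span.end;
    if (!inBounds) {
      throw new Error(`span out of bounds: ${span.start}-${span.end}`);
    }
  }
}
```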

Benchmark performance

FME V19.1 achieved 100% band accuracy across the 14-article Suite C benchmark (2026-05-02), with a macro Mean Absolute Error of 2.5 (target ≤ 5) and a 0.0% hallucination rate. The full benchmark report is available at rhetoricaudit.com/test-results.
The benchmark validates five strata of article types — hard news, opinion/editorial, analysis, advocacy, and boundary-straddle pieces — ensuring the pipeline performs consistently across the content types you are most likely to submit.

What FME does not do

FME does not assign a single “true” or “false” verdict to any article. Facts can be accurate while the narrative remains manipulative; FME profiles the rhetorical structure, not the factual truth of individual claims in isolation.
FME avoids binary verdicts where evidence is mixed, flags missing sources rather than fabricating completeness, and preserves source attribution so you can verify the upstream evidence yourself. The output is designed to enrich your judgment — not replace it.