Methodology

How Rhetoric Audit Reads a Story

Rhetoric Audit is a forensic media analysis platform. Instead of stamping a piece as "true" or "biased", we decompose it into structural dimensions a reader, researcher, or analyst can reason about.

Foundation

Analytical Principles

Forensic, not editorial

We score what is structurally present in a text — framing, evidence quality, omissions, emotional load — not whether we agree with the conclusion.

Multi-axis, not left/right

A single ideology label flattens complex argumentation. Our framework decomposes a piece into independent rhetorical dimensions that can be reasoned about separately.

Cross-source triangulation

For event-level questions, no single article is sufficient. We synthesize signals across news, social, video, web, and institutional sources before drawing inferences.

Reproducible & auditable

Every analysis returns a structured report you can compare across articles, time, and topics. Outputs are designed to be defensible — not just persuasive.

What We Analyze

Two Analytical Surfaces

Single article

Rhetorical analysis

A specific article — submitted by URL, raw text, or via the Chrome extension — is evaluated against the FME framework. The output is a structured rhetorical profile of that single piece.

Multi-source

Intelligence Brief

For an event, topic, or entity, the platform collects normalized signals across news, social, video, web, and institutional sources, then synthesizes a brief covering narratives, authenticity, emotion, and risk.

Process

Analysis Pipeline

01

Source acquisition

An article URL, raw text, or a topic + selected source set is captured. For multi-source briefs, normalized signals are gathered across news, social, video, web, and institutional channels.

02

Signal extraction

The text is decomposed into rhetorical features: claims, framing devices, evidence patterns, emotional cues, omitted context, and narrative structure.
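For illustration, the extracted features could be held in a per-article record like the sketch below. The class and field names are hypothetical assumptions for this example, not Rhetoric Audit's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical container for one article's extracted rhetorical features.
# Field names mirror the categories above but are illustrative only.
@dataclass
class RhetoricalSignals:
    claims: list[str] = field(default_factory=list)            # factual assertions found in the text
    framing_devices: list[str] = field(default_factory=list)   # e.g. "crisis framing", "us-vs-them"
    evidence_patterns: list[str] = field(default_factory=list) # citations, anecdotes, statistics
    emotional_cues: list[str] = field(default_factory=list)    # loaded words, appeals to fear/outrage
    omitted_context: list[str] = field(default_factory=list)   # relevant facts the piece leaves out
    narrative_structure: str = ""                               # e.g. "problem -> blame -> call to action"

# Example record for a single (invented) article.
signals = RhetoricalSignals(
    claims=["Unemployment rose in Q3"],
    framing_devices=["crisis framing"],
    emotional_cues=["alarm"],
    narrative_structure="problem -> blame -> call to action",
)
```

Keeping each feature category in its own field is what lets later stages score dimensions independently, per the multi-axis principle above.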

03

Forensic evaluation

An LLM-driven analytical layer applies the FME framework to score each dimension and surface specific, evidence-anchored observations.

04

Synthesis & dissonance check

For multi-source briefs, the system compares social narratives against institutional reality to identify divergence — where public sentiment and verifiable evidence disagree.
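A dissonance check of this kind can be sketched as comparing two stance scores on a shared scale. The function, scale, and threshold below are assumptions made for the example, not the platform's actual logic:

```python
# Illustrative dissonance check: compare a social-narrative stance score
# against an institutional-evidence stance score, both on a [-1, 1] scale.
# The 0.5 threshold is a hypothetical choice for this sketch.
def dissonance(social_stance: float, institutional_stance: float,
               threshold: float = 0.5) -> dict:
    gap = abs(social_stance - institutional_stance)
    return {
        "gap": round(gap, 2),
        "divergent": gap >= threshold,  # flag when sentiment and evidence disagree
    }

# Social narrative is strongly negative; institutional record is mildly positive.
result = dissonance(social_stance=-0.7, institutional_stance=0.2)
```

A large gap does not say which side is wrong; it only marks the topic as one where public sentiment and verifiable evidence point in different directions.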

05

Structured report

Outputs are rendered as a comparable, exportable report — usable across editorial review, research workflows, policy briefings, and UAT-style auditing.
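To make the "comparable, exportable" property concrete, a report could be serialized as stable JSON. The keys and dimension names below are illustrative assumptions, not the real FME output format:

```python
import json

# Minimal sketch of an exportable report. All keys and scores are
# hypothetical examples, not Rhetoric Audit's actual report schema.
report = {
    "source": "https://example.com/article",
    "framework": "FME",
    "dimensions": {  # each axis scored independently, per the multi-axis principle
        "framing": 0.62,
        "evidence_quality": 0.48,
        "emotional_load": 0.71,
        "omitted_context": 0.35,
    },
    "observations": [
        {"dimension": "framing", "evidence": "Headline uses crisis framing"},
    ],
}

# sort_keys gives a stable field order, so reports diff cleanly
# across articles, time, and topics.
exported = json.dumps(report, indent=2, sort_keys=True)
```

Because every report shares the same structure, two analyses can be compared field by field rather than verdict by verdict.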

Quality discipline

We avoid binary verdicts where the underlying evidence is mixed.
We separate opinion-based framing from factually verifiable claims.
We flag rate limits and missing sources rather than fabricate completeness.
We preserve source attribution so readers can verify upstream evidence.
We treat AI outputs as structured analysis — not as authoritative truth.

What we don't do

We do not assign an article a single overall "true" or "false" verdict.
We do not publish or sell user-submitted article text.
We do not target individuals, demographics, or specific publications for ranking.
We do not present model output without the structural framework that produced it.

Read the FME Framework

The dimensions referenced throughout this page are defined in our Forensic Media Evaluation (FME) framework.