
Every Rhetoric Audit analysis returns 15 diagnostic parameters drawn from the FME framework. Together they give you a multi-dimensional profile of an article’s rhetorical structure — covering ideology, manipulation tactics, logical integrity, emotional engineering, and external verification. This page explains each parameter in plain terms so you can read your results with confidence.
What it measures: The underlying worldview the article reasons from — the assumptions about society, institutions, and human nature that shape how the argument is constructed. This is distinct from explicit political claims; it captures the implicit lens.

Output: A descriptive label. Examples include “Progressive Institutionalism,” “Market Libertarianism,” “Anti-Imperialist Internationalism,” and “Nationalist Conservatism.”

How to interpret it: The Philosophical Frame tells you where the author is standing when they make their argument, not just what they are arguing for. Two articles with identical factual claims can differ sharply in framing. Use this parameter alongside the Bias Spectrum to understand both the ideological lean and the philosophical assumptions driving it.
What it measures: The probability that the article is deliberately using propaganda tactics and emotional engineering to bypass your critical thinking rather than inform it. This aggregates emotional loading, logical fallacy density, and intent transparency into a single composite score.

Output: A percentage from 0 to 100. Higher scores indicate greater manipulation risk. Typical thresholds: 0–30 = Low, 31–60 = Moderate, 61–80 = High, 81–100 = Severe.

How to interpret it: A high score does not automatically mean the article’s facts are wrong — it means the rhetorical techniques used are more consistent with persuasion than with neutral reporting. Cross-reference with the Evidence Validity Index and External Claim Grounding to distinguish manipulative framing from factual inaccuracy.
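The typical threshold bands above can be expressed as a small lookup. This is an illustrative sketch — the function name and validation behavior are assumptions, not part of the Rhetoric Audit API:

```python
def manipulation_risk_band(score: int) -> str:
    """Map a 0-100 Manipulation Risk Score to its typical band.

    Illustrative helper; band boundaries follow the thresholds
    described above (0-30 Low, 31-60 Moderate, 61-80 High,
    81-100 Severe).
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 30:
        return "Low"
    if score <= 60:
        return "Moderate"
    if score <= 80:
        return "High"
    return "Severe"
```

For example, an article scoring 45 falls in the Moderate band, which warrants checking the fallacy and evidence parameters before drawing conclusions.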
What it measures: How frequently and severely logical fallacies appear across the article, normalized per unit of text. This captures both the volume of fallacies and how central each is to the article’s core argument.

Output: A score from 0 to 100. A higher number means more fallacies per unit of text and/or fallacies that are load-bearing to the argument rather than incidental.

How to interpret it: A low Fallacy Density Index does not mean the article is unbiased — it means the logical structure is relatively sound. An article can be ideologically extreme while committing few formal fallacies. Use this parameter alongside Logic Fractures (parameter 04) to see which specific fallacies are driving the score.
What it measures: The specific logical fallacies present in the article, with exact quoted text and an impact assessment for each instance. FME detects 24 fallacy types from the recognized inventory, including ad hominem, straw man, false dichotomy, appeal to fear, bandwagon, slippery slope, appeal to authority, and more.

Output: A list of fallacy instances. Each entry includes the fallacy type, the quoted span from the article, and an impact label (Low / Medium / High) reflecting how central the fallacy is to the argument.

How to interpret it: Expand individual fractures in the UI to read the exact passage the system flagged. High-impact fallacies are the ones worth scrutinizing most carefully — they indicate that a key argumentative move in the article rests on flawed reasoning. A large number of low-impact fallacies suggests rhetorical sloppiness rather than deliberate deception.
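The fracture list described above can be modeled as a small record type. The field names here are illustrative assumptions — the actual API schema may differ — but the shape (type, quoted span, impact label) matches the description:

```python
from dataclasses import dataclass


@dataclass
class LogicFracture:
    # Hypothetical record shape; real API field names may differ.
    fallacy_type: str  # e.g. "straw man", "appeal to fear"
    quoted_span: str   # exact passage flagged in the article
    impact: str        # "Low" | "Medium" | "High"


def high_impact(fractures: list[LogicFracture]) -> list[LogicFracture]:
    """Return only the load-bearing fallacies worth scrutinizing first."""
    return [f for f in fractures if f.impact == "High"]
```

Filtering for high-impact fractures first mirrors the reading advice above: scrutinize the flaws that a key argumentative move actually rests on.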
What it measures: The quality of empirical backing for the article’s central claims — how well the article grounds its assertions in verifiable, attributable evidence versus relying on anecdotal accounts, unsourced assertions, or assumed consensus.

Output: A score from 0 to 100. Higher scores indicate stronger evidence. 0–40 = Weak, 41–70 = Moderate, 71–100 = Strong.

How to interpret it: This parameter assesses the type and quality of evidence within the text itself, not whether external fact-checkers have verified the claims. For external verification, see parameter 15 (External Claim Grounding). An article can score high here — meaning it cites many sources — while still scoring poorly on External Claim Grounding if those sources are unreliable.
What it measures: The emotional engineering embedded in the article, broken into three independent dimensions: Fear, Hope, and Urgency. Each dimension captures a distinct psychological lever used to move the reader.

Output: Three independent scores, each from 0 to 100. Example: Fear 71 / Hope 22 / Urgency 64.

How to interpret it: High fear with low hope is a common pattern in threat-construction narratives — the article presents a danger with no constructive path forward, which tends to produce anxiety or paralysis rather than informed action. High urgency scores are often paired with calls to action or with time-sensitive framing. Balanced triads (moderate scores across all three) are typical of analytical or explanatory journalism.
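The triad patterns described above can be sketched as a simple heuristic. The cut-offs here (60 and above as "high", 30 and below as "low") are illustrative assumptions, not thresholds defined by the FME framework:

```python
def describe_triad(fear: int, hope: int, urgency: int) -> str:
    """Heuristic reading of the Fear/Hope/Urgency triad.

    Cut-offs are illustrative assumptions, not FME-defined values.
    """
    if fear >= 60 and hope <= 30:
        return "threat-construction pattern (danger with no constructive path)"
    if urgency >= 60:
        return "urgency-driven framing (often paired with calls to action)"
    if all(30 <= s <= 60 for s in (fear, hope, urgency)):
        return "balanced triad (typical of analytical journalism)"
    return "mixed profile"
```

The example triad from the text, Fear 71 / Hope 22 / Urgency 64, matches the threat-construction branch: high fear and low hope take precedence over the high urgency score.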
What it measures: The story structure the article adopts — the archetypal role assignment that shapes how readers perceive actors in the narrative. This draws on narrative theory to classify the structural frame being used.

Output: A categorical label: Hero Focus, Scapegoat Focus, or Victim Focus, sometimes with a secondary qualifier (e.g., “Victim Focus / Threat Construction”).

How to interpret it: Narrative archetypes are not inherently manipulative — journalism frequently features heroes, victims, and antagonists. The significance is in whether the archetype is presented with proportional complexity. Scapegoat Focus articles, for example, assign blame to a single actor or group in ways that may oversimplify causal chains. Cross-reference with Strategic Silence (parameter 13) to see which perspectives are absent from the narrative.
What it measures: How openly the article discloses its perspective, agenda, or framing rather than presenting opinion as neutral reporting. This is a spectrum from active obfuscation to candid disclosure.

Output: A score from 0 (Obfuscating) to 100 (Informing). Low scores indicate that the article’s persuasive intent is masked; high scores indicate that the author’s perspective is stated openly.

How to interpret it: A low Intent Transparency score is a signal to read the article as an advocacy piece even if it is formatted as news. Note that opinion columns and labeled editorials will naturally score lower here — the flag is most meaningful when an article presents itself as objective news reporting while masking a strong viewpoint.
What it measures: The cognitive complexity of the article using Webb’s Depth of Knowledge (DOK) framework, which was originally developed for educational assessment but applies well to analytical writing.

Output: A DOK level from 1 to 4: DOK-1 (recall / summary), DOK-2 (skill application), DOK-3 (strategic thinking), DOK-4 (extended synthesis). Most journalism falls in the DOK-2 to DOK-3 range.

How to interpret it: DOK level is not a quality judgment — a well-reported breaking news brief is appropriate at DOK-1. The parameter becomes most useful when you are comparing multiple articles on the same topic to understand which ones offer deeper analytical engagement, or when you need to flag that a piece presenting itself as analysis is actually operating at the recall level.
What it measures: An assessment of author credibility based on citation patterns, linguistic tone, and epistemological humility — how the article handles uncertainty, dissenting evidence, and the limits of its own claims.

Output: A score from 0 to 100, where higher scores indicate stronger credibility signals. Supporting indicators are surfaced in the expanded view.

How to interpret it: This parameter evaluates the rhetorical signals of reliability within the text — hedging language, explicit uncertainty, citation quality, and balance of perspectives — rather than looking up the author externally. A high score means the article behaves like credible writing; it does not verify the author’s biographical credentials.
What it measures: How sensitive the covered topic is to societal disruption and market reaction. This captures how the subject matter — irrespective of how the article treats it — sits in the landscape of high-stakes discourse.

Output: A categorical rating: LOW, MOD (Moderate), or HIGH.

How to interpret it: High-volatility topics include financial markets, geopolitical conflict, public health crises, and electoral processes. A HIGH volatility rating does not mean the article is irresponsible — it is a flag that the topic itself warrants extra care in how you distribute or act on the information. Use this alongside the Manipulation Risk Score to assess whether a high-stakes topic is also being treated in a high-risk rhetorical manner.
What it measures: A dense forensic synthesis of the article’s complete rhetorical strategy, written at the level of a senior researcher reviewing the piece. This is a qualitative output, not a numeric score.

Output: A 3–4 sentence paragraph that integrates findings across all other parameters into a coherent rhetorical characterization of the article.

How to interpret it: This is the most condensed single view of the article’s rhetorical architecture. It is useful as a starting point for understanding the overall picture before drilling into individual parameters, or as a quotable summary when briefing colleagues. The synthesis is generated from the structured outputs of all other parameters — it does not introduce new judgments not already supported by the parameter scores.
What it measures: The critical context, counter-evidence, affected stakeholders, or alternative framings that are conspicuously absent from the article — what the author chose not to include, and why that absence matters.

Output: A severity rating (LOW / MODERATE / HIGH) plus a list of specific identified omissions, each described in plain language.

How to interpret it: What a journalist leaves out can be more revealing than what they include. A HIGH Strategic Silence rating means the analysis identified material context that a well-rounded treatment of the topic would normally include. Review the listed omissions to understand which perspectives or facts the article’s framing depends on suppressing. This parameter works well alongside Narrative Archetype — a Scapegoat Focus article, for example, will frequently show HIGH Strategic Silence around the scapegoated party’s counter-narrative.
What it measures: The balance of the three classical rhetorical appeals across the article’s paragraphs: ethos (appeals to credibility and authority), pathos (appeals to emotion), and logos (appeals to reason and evidence).

Output: A per-paragraph decomposition showing the dominant appeal in each section, plus aggregate percentages for the full article.

How to interpret it: Most well-written articles blend all three appeals. An article dominated by pathos with little logos signals emotional argumentation over reasoned argument. An ethos-heavy article relies primarily on the authority of the speaker rather than the strength of the evidence. The per-paragraph view is most useful for locating the specific sections where the rhetorical balance shifts — often the most manipulative passages are concentrated in the opening and closing paragraphs.
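One way the aggregate percentages could be derived from the per-paragraph decomposition is to count dominant appeals. This is a sketch under the assumption that each paragraph carries a single dominant label; the real decomposition may be finer-grained:

```python
from collections import Counter


def appeal_percentages(dominant_per_paragraph: list[str]) -> dict[str, float]:
    """Aggregate per-paragraph dominant appeals into whole-article
    percentages. Labels are assumed to be 'ethos', 'pathos', or 'logos'.
    """
    counts = Counter(dominant_per_paragraph)
    total = len(dominant_per_paragraph)
    return {
        appeal: round(100 * counts.get(appeal, 0) / total, 1)
        for appeal in ("ethos", "pathos", "logos")
    }
```

A pathos share far above the logos share in this aggregate is the emotional-over-reasoned pattern flagged above; the per-paragraph input then tells you where in the article that imbalance sits.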
What it measures: How well the article’s factual claims hold up against automated verification using external databases. This connects to FME Pipeline Stage 1.5, which queries three sources: Google Fact Check Tools (ClaimReview), Wikidata, and Wikipedia.

Output: A grounding score from 0 to 100, plus a list of specific claims that were verified, disputed, or unverifiable, each tagged with the source that returned a result.

How to interpret it: A high grounding score means the article’s claims are largely corroborated by structured external data. A low score, or a list of disputed claims, does not automatically mean the article is wrong — some legitimate claims are simply not yet represented in ClaimReview or Wikidata. Treat disputed claims as items warranting your own primary-source investigation. Verification data is cached for 30 days, so very recent claims may not yet have a fact-check record available.
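The claim list described above can be grouped by verification status to surface the items worth investigating. The field names (`text`, `status`, `source`) are illustrative assumptions, not the documented API schema:

```python
def partition_claims(claims: list[dict]) -> dict[str, list[dict]]:
    """Group externally checked claims by verification status.

    Each claim dict is assumed to carry 'text', 'status'
    ('verified' | 'disputed' | 'unverifiable'), and 'source'
    (e.g. 'ClaimReview', 'Wikidata', 'Wikipedia'). Field names
    are hypothetical.
    """
    groups: dict[str, list[dict]] = {
        "verified": [],
        "disputed": [],
        "unverifiable": [],
    }
    for claim in claims:
        groups[claim["status"]].append(claim)
    return groups
```

Per the interpretation guidance above, the "disputed" bucket is your primary-source reading list, while "unverifiable" claims may simply be too recent or too niche for the external databases.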
Parameters 02, 03, 05, 08, and 15 together form the core integrity assessment of an article. If you are short on time, start with these five before drilling into the others.