HEO · Hybrid Engine Optimization · Scorecard

HEO 90-Day Scorecard

Track all six core HEO metrics — ERS, PCR, CF, CAR, RR, and CFS — across four measurement intervals: Day 0 baseline, Day 30, Day 60, and Day 90. Developed by Jason Todd Wade, BackTier / NinjaAI.

Jason Todd Wade · BackTier / NinjaAI · 6 Metrics · 4 Intervals · 90-Day Cycle · Published May 2026
What This Scorecard Does

One Document. Six Metrics. Four Intervals.

The HEO 90-Day Scorecard is the measurement instrument for a complete HEO engagement cycle. It does not replace the HEO Audit Template — it receives the output of that template at four points in time and converts raw audit data into a trackable progression record. Every cell in the scorecard corresponds to a calculated metric value from a completed 80-query audit run.

The scorecard serves three functions simultaneously. First, it is a diagnostic tool — the pattern of which metrics improve and which stagnate tells you exactly which implementation phases are working and which need additional attention. Second, it is a reporting tool — the delta columns provide the evidence base for client reporting, showing percentage improvement per metric per interval. Third, it is a target-tracking tool — the 90-day target column defines the threshold for declaring an HEO engagement successful.

The six metrics — Entity Representation Score (ERS), Platform Coverage Rate (PCR), Citation Frequency (CF), Citation Accuracy Rate (CAR), Recommendation Rate (RR), and Citation Favorability Score (CFS) — were defined by Jason Todd Wade as the minimum viable measurement set for HEO. Each metric isolates a distinct dimension of AI Visibility that cannot be inferred from the others. A business can have high PCR (present on all platforms) but low ERS (poorly represented on each). A business can have high CF (cited frequently) but low CAR (cited inaccurately). The scorecard makes these distinctions visible and actionable.

The Scorecard

90-Day HEO Metric Progression Table

Record calculated metric values at each interval. The Delta column shows the absolute change from Day 0. The Target column shows the 90-day threshold defined by NinjaAI for each metric.

Metric | Name | Scale | Day 0 Baseline | Day 30 | Day 60 | Day 90 | Delta (D90 − D0) | Target
ERS | Entity Representation Score | 0–5 integer | | | | | | ≥ 3.5
PCR | Platform Coverage Rate | 0–100% | | | | | | 100%
CF | Citation Frequency | 0–20 per platform | | | | | | ≥ 12/20
CAR | Citation Accuracy Rate | 0–100% | | | | | | ≥ 90%
RR | Recommendation Rate | 0–100% | | | | | | ≥ 40%
CFS | Citation Favorability Score | Positive / Neutral / Negative % | | | | | | Positive ≥ 70%

Populate each cell with the calculated metric value from the corresponding 80-query audit run. Use the HEO Audit Template to run each audit and the HEO Metrics Tracker for calculation formulas.
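The scorecard table above maps naturally onto a simple data structure. Below is a minimal sketch in Python: the metric names and targets come from the table, but the function names, the dictionary layout, and the sample values are illustrative assumptions, not part of the NinjaAI tooling.

```python
# Minimal sketch of the scorecard: one row per metric, one slot per interval.
# The recorded values below are illustrative placeholders, not real audit data.
INTERVALS = ("day_0", "day_30", "day_60", "day_90")

def new_scorecard(targets):
    """Build an empty scorecard; every interval starts unmeasured (None)."""
    return {m: {**{i: None for i in INTERVALS}, "target": t}
            for m, t in targets.items()}

def record(scorecard, metric, interval, value):
    """Write one calculated metric value from a completed audit run."""
    scorecard[metric][interval] = value

def delta(scorecard, metric):
    """Delta column: absolute change, Day 90 minus Day 0 baseline."""
    row = scorecard[metric]
    if row["day_0"] is None or row["day_90"] is None:
        return None  # interval not yet measured
    return round(row["day_90"] - row["day_0"], 2)

card = new_scorecard({"ERS": 3.5, "PCR": 100, "CF": 12,
                      "CAR": 90, "RR": 40, "CFS": 70})
record(card, "ERS", "day_0", 1.2)
record(card, "ERS", "day_90", 3.8)
print(delta(card, "ERS"))  # 2.6
```

A spreadsheet row per metric accomplishes the same thing; the point is that every cell is a calculated value from one 80-query audit run, never an estimate.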

Baseline Reference

Typical Day 0 Ranges by Metric

Most businesses beginning an HEO engagement fall within these baseline ranges. Scores below the typical range indicate a more severe AI Displacement condition and may require additional Phase 01 and Phase 02 work before the AEO and GEO layers have a measurable effect.

Metric | Typical Day 0 Range | Below Typical (Severe) | 90-Day Target | Primary Driver
ERS | 0.5–1.5 | 0 (AI Displacement) | ≥ 3.5 | Entity clarity + AEO content
PCR | 0–25% | 0% (absent from all platforms) | 100% | Entity clarity + NAP consistency
CF | 0–3/20 | 0/20 (zero citations) | ≥ 12/20 | AEO content architecture
CAR | 30–60% | Below 30% (critical) | ≥ 90% | Entity data normalization
RR | 0–10% | 0% (never recommended) | ≥ 40% | Reputation signals + outcomes
CFS | Positive: 10–30% | Positive: 0% (all neutral/negative) | Positive ≥ 70% | Review signals + content tone

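The "Below Typical (Severe)" column above can be checked programmatically. A minimal sketch: the lower bounds are copied from the table (CFS uses its positive percentage); the function name and input format are assumptions for illustration.

```python
# Sketch: flag metrics whose Day 0 score falls below the typical baseline
# range in the table above, signaling a severe AI Displacement condition.
TYPICAL_DAY0_LOW = {"ERS": 0.5, "PCR": 0.0, "CF": 0.0,
                    "CAR": 30.0, "RR": 0.0, "CFS": 10.0}  # CFS = positive %

def severe_metrics(day0_scores):
    """Return metrics at or below the 'Below Typical (Severe)' threshold."""
    return [m for m, v in day0_scores.items()
            if v < TYPICAL_DAY0_LOW[m] or v == 0]

print(severe_metrics({"ERS": 0.0, "CAR": 45.0, "CFS": 5.0}))  # ['ERS', 'CFS']
```

Any metric flagged here suggests front-loading Phase 01 and Phase 02 work before expecting AEO or GEO gains.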
Metric Reference

Quick Reference: All Six HEO Metrics

Each metric measures a distinct dimension of AI Visibility. Full definitions, scoring rubrics, and 90-day targets are documented at ninjaai.com/heo-metrics-tracker.

ERS
Entity Representation Score
Measures the overall quality of how AI systems represent your entity — from absent (0) to authoritative and recommended (5).
Formula: Average of 0–5 scores across all platforms
Typical Baseline: 0.5–1.5
90-Day Target: ≥ 3.5

PCR
Platform Coverage Rate
Measures the percentage of the four major AI platforms (ChatGPT, Perplexity, Gemini, Copilot) where the entity appears in at least one response.
Formula: (Platforms with ≥ 1 citation ÷ 4) × 100
Typical Baseline: 0–25%
90-Day Target: 100%

CF
Citation Frequency
Measures how often the entity is cited across a 20-query test set on a single platform. Reported as a fraction (e.g., 8/20) and as a rate (40%).
Formula: Citations per platform ÷ 20 queries
Typical Baseline: 0–3/20
90-Day Target: ≥ 12/20

CAR
Citation Accuracy Rate
Measures the percentage of factual claims in AI citations that are accurate. Requires manual verification of a sample of AI responses against the entity's canonical data.
Formula: (Correct factual claims ÷ total claims checked) × 100
Typical Baseline: 30–60%
90-Day Target: ≥ 90%

RR
Recommendation Rate
Measures the percentage of AI citations that include an active recommendation signal — language indicating the entity is a preferred, top, or best choice.
Formula: (Citations with recommendation signal ÷ total citations) × 100
Typical Baseline: 0–10%
90-Day Target: ≥ 40%

CFS
Citation Favorability Score
Measures the sentiment distribution of AI citations — what percentage are positive (favorable), neutral (factual only), or negative (critical or warning-based).
Formula: (Positive citations ÷ total citations) × 100
Typical Baseline: Positive: 10–30%
90-Day Target: Positive ≥ 70%
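The six formulas above are simple enough to sketch as plain functions. The arithmetic matches the reference; the function names and input shapes (lists of per-platform scores, raw citation counts) are assumptions for illustration, not the NinjaAI Metrics Tracker's actual interface.

```python
# Sketches of the six HEO metric formulas from the reference above.

def ers(platform_scores):
    """Entity Representation Score: average of 0-5 scores across platforms."""
    return sum(platform_scores) / len(platform_scores)

def pcr(citations_per_platform):
    """Platform Coverage Rate: % of the 4 platforms with >= 1 citation."""
    covered = sum(1 for c in citations_per_platform if c >= 1)
    return covered / 4 * 100

def cf(citations, queries=20):
    """Citation Frequency: citations per 20-query set, as a rate (%)."""
    return citations / queries * 100

def car(correct_claims, total_claims):
    """Citation Accuracy Rate: % of checked factual claims that are accurate."""
    return correct_claims / total_claims * 100

def rr(recommending_citations, total_citations):
    """Recommendation Rate: % of citations carrying a recommendation signal."""
    return recommending_citations / total_citations * 100

def cfs(positive_citations, total_citations):
    """Citation Favorability Score: % of citations that are positive."""
    return positive_citations / total_citations * 100

print(ers([4, 3, 4, 3]))  # 3.5
print(pcr([8, 0, 3, 1]))  # 75.0
print(cf(8))              # 40.0 (i.e., 8/20)
```

Note that CF is natively a fraction out of 20 (8/20) and only secondarily a percentage, which is why the scorecard's target is written as ≥ 12/20 rather than ≥ 60%.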
How to Use This Scorecard

Eight-Step Scoring Procedure

The scorecard is a passive instrument — it receives data from completed audit runs. The following procedure ensures that each measurement interval produces comparable, reliable data.

01
Run the Day 0 Baseline Audit
Before recording any scores, run the full 80-query HEO baseline audit using the HEO Audit Template. Record raw query results for all four platforms across all four query categories. Do not begin any HEO implementation work before completing the Day 0 audit — the baseline must reflect the entity's pre-intervention state.
02
Calculate All Six Day 0 Metric Scores
Using the formulas from the HEO Metrics Tracker, calculate ERS (0–5 average), PCR (% of platforms with ≥1 citation), CF (citations ÷ 20 per platform), CAR (correct claims ÷ total claims × 100), RR (recommended citations ÷ total citations × 100), and CFS (positive citations ÷ total citations × 100). Record all six values in the Day 0 column.
03
Execute Phase 01 and Phase 02 of the HEO Checklist
Complete the Entity Audit (Phase 01) and SEO Foundation Build (Phase 02) before the Day 30 measurement. These phases address the root causes of low PCR and low ERS — entity clarity, NAP consistency, and technical SEO infrastructure. No AEO or GEO work will produce measurable results until these phases are complete.
04
Re-Run the Audit at Day 30
At the 30-day mark, re-run the full 80-query audit using the same query set as the baseline. Use the same platforms, the same query text, and the same scoring methodology. Calculate all six metric scores and record them in the Day 30 column.
05
Execute Phase 03 and Phase 04 of the HEO Checklist
Complete the AEO Layer Build (Phase 03) and GEO Layer Build (Phase 04) before the Day 60 measurement. These phases address CF, CAR, and the beginning of RR movement. AEO content architecture is the primary driver of Citation Frequency gains.
06
Re-Run the Audit at Day 60
At the 60-day mark, repeat the audit and record Day 60 scores. By this point, CF and CAR should show measurable improvement. If they have not, diagnose the AEO content gaps before proceeding to the Day 90 measurement.
07
Re-Run the Audit at Day 90
At the 90-day mark, complete the final measurement interval. Compare all six scores against the Day 0 baseline and against the 90-day targets. RR and CFS should show their most significant movement in this interval as reputation signals propagate.
08
Calculate Delta and Assess Against Targets
For each metric, calculate the absolute delta (Day 90 − Day 0) and the percentage change. Compare each final score against the 90-day target. Any metric below target requires an additional implementation cycle — return to the specific phase responsible for that metric and execute the remaining checklist items.
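Step 08 above can be sketched as a short assessment routine. The targets come from the scorecard; the Day 0 and Day 90 scores below are illustrative placeholders, and CFS is represented by its positive percentage only.

```python
# Sketch of Step 08: Day 90 deltas, % change, and pass/fail against targets.
TARGETS = {"ERS": 3.5, "PCR": 100, "CF": 12, "CAR": 90, "RR": 40, "CFS": 70}

def assess(day0, day90):
    """Per metric: absolute delta (D90 - D0), % change, target met or not."""
    report = {}
    for metric, target in TARGETS.items():
        d = day90[metric] - day0[metric]
        pct = (d / day0[metric] * 100) if day0[metric] else float("inf")
        report[metric] = {"delta": d, "pct_change": pct,
                          "met": day90[metric] >= target}
    return report

day0  = {"ERS": 1.0, "PCR": 25, "CF": 2, "CAR": 50, "RR": 5, "CFS": 20}
day90 = {"ERS": 3.8, "PCR": 100, "CF": 13, "CAR": 92, "RR": 35, "CFS": 75}
r = assess(day0, day90)
print([m for m, v in r.items() if not v["met"]])  # ['RR']
```

In this hypothetical run, RR (35% against a 40% target) is the only metric requiring another implementation cycle, pointing back to the reputation-signal work that drives it.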
Frequently Asked Questions

Scorecard FAQ

How often should I run the HEO audit to fill in the scorecard?
The scorecard is designed for four measurement intervals: Day 0 (baseline), Day 30, Day 60, and Day 90. Running the full 80-query audit at each interval provides enough data points to identify trend direction, measure implementation impact, and project whether 90-day targets will be met. Running the audit more frequently than every 30 days is not recommended — AI system response patterns require time to reflect entity changes, and over-measurement creates noise rather than signal.
What is the 90-day target for each HEO metric?
The 90-day targets defined by NinjaAI are: ERS ≥ 3.5 (from a typical baseline of 0.5–1.5), PCR = 100% (all four platforms), CF ≥ 12/20 per platform, CAR ≥ 90%, RR ≥ 40%, CFS Positive ≥ 70%. These targets represent the threshold at which an entity has achieved durable AI Visibility — meaning it is consistently present, accurately represented, and actively recommended across all major AI answer platforms.
Which HEO metric typically improves first?
Platform Coverage Rate (PCR) typically improves first, usually within the first 30 days, because it is the most directly influenced by entity clarity work — fixing NAP consistency, resolving name variants, and establishing the canonical entity record. Citation Frequency (CF) typically improves next, around Day 30–60, as AEO content architecture creates answer-ready content that AI systems can extract. Recommendation Rate (RR) and Citation Favorability Score (CFS) are the last to move, typically showing meaningful improvement between Day 60 and Day 90.
What should I do if a metric is not improving between measurement intervals?
If a metric shows no improvement between intervals, diagnose the specific phase responsible for that metric. PCR stagnation indicates unresolved entity clarity issues. CF stagnation indicates insufficient AEO content. CAR stagnation indicates factual inconsistencies in published content or directory data. RR and CFS stagnation indicate insufficient reputation signals — check for missing case studies, testimonials, or documented specific outcomes.
Can I use this scorecard for multiple clients?
Yes. The scorecard is entity-agnostic — it tracks the six HEO metrics for any business entity. For agency use, create a separate scorecard instance for each client, using the same query categories adapted to the client's specific market, service area, and target queries. The 80-query test suite should be rebuilt for each client using the HEO Audit Template's category framework, not copied directly from another client's audit.
What does an ERS score of 0 mean?
An ERS of 0 means the entity does not appear in AI-generated answers for any of the 20 test queries on a given platform. This is the AI Displacement condition — the AI system is answering queries about the entity's category without mentioning the entity at all. An ERS of 0 is the most urgent HEO condition and requires immediate Phase 01 (Entity Audit) and Phase 02 (SEO Foundation) work before any AEO or GEO layer work will have an effect.
HEO · Hybrid Engine Optimization
Coined by Jason Todd Wade
HEO Definition
Canonical 2,500-word definition of Hybrid Engine Optimization.
ninjaai.com/heo
Implementation Checklist
47-point, five-phase implementation checklist.
ninjaai.com/heo-implementation-checklist
Metrics Tracker
Six metric definitions, formulas, and scoring rubrics.
ninjaai.com/heo-metrics-tracker
Audit Template
80-query test suite and baseline scoring methodology.
ninjaai.com/heo-audit-template
Case Study
90-day documented outcome across all six HEO metrics.
ninjaai.com/heo-case-study
Start Your HEO Engagement
Ready to run your Day 0 baseline audit?
Request Free AI Visibility Audit · View Audit Template · View Checklist