HEO Audit Template · NinjaAI

The HEO Baseline Audit Template

The complete 80-query test suite for establishing your Hybrid Engine Optimization baseline. Four query categories. Four AI platforms. Six metric calculation formulas. This is the exact methodology NinjaAI uses to measure every HEO engagement at Day 0, Day 30, Day 60, and Day 90.

80 Total Queries · 4 AI Platforms · 4 Query Categories · 6 Metric Formulas · ~5h Audit Duration
Audit Procedure

8-Step Scoring Procedure

1
Prepare Your Query Set

Download or copy the 80-query template. Replace the [Business Name], [service type], [city], [problem], and [use case] placeholders with your entity's specific values. Prepare one row per query per platform in your tracking spreadsheet (80 queries × 4 platforms = 320 rows) with columns for: Query, Platform, Response (Y/N), Citation Text, Recommendation Signal (Y/N), Factual Claims (5 checkboxes), Sentiment (Positive/Neutral/Negative).
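The placeholder substitution and spreadsheet layout in this step can be sketched in Python. This is a minimal illustration, not part of the template itself; the entity values and the single sample query below are hypothetical.

```python
# Hypothetical entity values for the bracketed placeholders (illustrative only).
ENTITY = {
    "[Business Name]": "Acme Plumbing",
    "[service type]": "plumber",
    "[city]": "Austin",
    "[problem]": "a burst pipe",
    "[use case]": "emergency repairs",
}

PLATFORMS = ["ChatGPT", "Perplexity", "Gemini", "Copilot"]

# Tracking-spreadsheet columns exactly as listed in step 1.
COLUMNS = ["Query", "Platform", "Response (Y/N)", "Citation Text",
           "Recommendation Signal (Y/N)", "Factual Claims", "Sentiment"]

def fill(template, values):
    """Substitute every bracketed placeholder appearing in a query template."""
    for placeholder, value in values.items():
        template = template.replace(placeholder, value)
    return template

def build_rows(templates, values):
    """Yield one blank tracking row per query per platform, ready for manual scoring."""
    for template in templates:
        query = fill(template, values)
        for platform in PLATFORMS:
            row = dict.fromkeys(COLUMNS, "")
            row["Query"], row["Platform"] = query, platform
            yield row

# One template from the query library expands to four rows (one per platform).
rows = list(build_rows(["Who are the best [service type] in [city]?"], ENTITY))
```

Expanding all 80 templates this way produces the full 320-row sheet before any scoring begins.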

2
Run Category 1 — Category Queries

Run all 20 Category Queries on ChatGPT. Record whether the entity appears in each response. Then run the same 20 queries on Perplexity, Gemini, and Copilot. Total: 80 query runs for Category 1.

3
Run Category 2 — Problem-Aware Queries

Run all 20 Problem-Aware Queries on all four platforms. Record citation presence, recommendation signals, and sentiment framing for each response.

4
Run Category 3 — Comparison Queries

Run all 20 Comparison Queries on all four platforms. Focus on whether the entity's specific differentiators, credentials, or outcomes appear in the AI's comparative answers.

5
Run Category 4 — Entity-Direct Queries

Run all 20 Entity-Direct Queries on all four platforms. For each citation, verify five factual claims: business name, location, primary service, a specific credential or outcome, and contact method. Record accuracy for each claim.
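The five-claim check in this step reduces to a small counting routine. A sketch; the helper name and the sample citation are illustrative, not part of the template:

```python
# The five factual claims verified per Entity-Direct citation (step 5).
CLAIMS = ["business name", "location", "primary service",
          "credential or outcome", "contact method"]

def claim_accuracy(checks):
    """Count correct claims out of the claims actually present in the response.

    `checks` maps each claim to True (stated correctly), False (stated
    incorrectly), or None (not mentioned, so excluded from the count).
    """
    verified = [checks.get(claim) for claim in CLAIMS]
    verified = [v for v in verified if v is not None]
    return sum(verified), len(verified)

# Hypothetical citation: four claims present, three stated correctly.
correct, total = claim_accuracy({
    "business name": True,
    "location": True,
    "primary service": False,
    "credential or outcome": True,
    "contact method": None,  # not mentioned in the response
})
# These per-citation counts feed the CAR formula in step 7.
```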

6
Calculate ERS

For each platform, assign a score 0–5 based on the best response quality observed across all 80 queries: 0=Not found, 1=Mentioned incorrectly, 2=Mentioned correctly, 3=Listed as option, 4=Recommended, 5=Primary recommendation. Average the four platform scores.
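The rubric and the averaging can be written out directly; the Day 0 scores below are hypothetical:

```python
# Step-6 rubric for the 0-5 per-platform score.
ERS_RUBRIC = {0: "Not found", 1: "Mentioned incorrectly", 2: "Mentioned correctly",
              3: "Listed as option", 4: "Recommended", 5: "Primary recommendation"}

PLATFORMS = ("ChatGPT", "Perplexity", "Gemini", "Copilot")

def ers(platform_scores):
    """Average the four per-platform 0-5 scores into a single ERS."""
    assert set(platform_scores) == set(PLATFORMS), "score all four platforms"
    return sum(platform_scores.values()) / len(PLATFORMS)

# Hypothetical Day 0 scores: (3 + 4 + 2 + 1) / 4 = 2.5
baseline_ers = ers({"ChatGPT": 3, "Perplexity": 4, "Gemini": 2, "Copilot": 1})
```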

7
Calculate PCR, CF, CAR, RR, and CFS

Using your completed tracking spreadsheet, apply each metric formula: PCR = platforms with ≥1 citation ÷ 4 × 100; CF = citations per platform ÷ 20; CAR = correct factual claims ÷ total claims checked × 100; RR = citations with recommendation signal ÷ total citations × 100; CFS = positive citations ÷ total citations × 100.
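The five formulas in this step can be applied mechanically once the spreadsheet is complete. A sketch, assuming each scored response is reduced to a dict with `platform`, `cited`, `recommended`, and `sentiment` fields (the field names are mine, not the template's):

```python
PLATFORMS = ("ChatGPT", "Perplexity", "Gemini", "Copilot")

def audit_metrics(responses, claims_correct, claims_total):
    """Apply the PCR, CF, CAR, RR, and CFS formulas from step 7."""
    cited = [r for r in responses if r["cited"]]
    pcr = len({r["platform"] for r in cited}) / 4 * 100
    # CF per platform: responses mentioning the entity / 20 queries
    cf = {p: sum(r["platform"] == p for r in cited) / 20 for p in PLATFORMS}
    car = claims_correct / claims_total * 100 if claims_total else 0.0
    rr = sum(r["recommended"] for r in cited) / len(cited) * 100 if cited else 0.0
    cfs = sum(r["sentiment"] == "Positive" for r in cited) / len(cited) * 100 if cited else 0.0
    return {"PCR": pcr, "CF": cf, "CAR": car, "RR": rr, "CFS": cfs}

# Tiny hypothetical sample (a real audit has 320 scored responses):
sample = [
    {"platform": "ChatGPT", "cited": True, "recommended": True, "sentiment": "Positive"},
    {"platform": "ChatGPT", "cited": True, "recommended": False, "sentiment": "Neutral"},
    {"platform": "Perplexity", "cited": True, "recommended": False, "sentiment": "Positive"},
    {"platform": "Gemini", "cited": False, "recommended": False, "sentiment": "Neutral"},
]
metrics = audit_metrics(sample, claims_correct=9, claims_total=10)
# PCR = 2 / 4 * 100 = 50.0; CAR = 9 / 10 * 100 = 90.0
```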

8
Document Baseline and Schedule Re-Audit

Record all six metric scores as your Day 0 baseline. Schedule re-audits at Day 30, Day 60, and Day 90. Compare each cycle against the Day 0 baseline and the 90-day targets defined in the HEO Metrics Tracker.

Query Library

80-Query Test Suite

Replace all bracketed placeholders with your entity's specific values before running. Run each category on all four platforms: ChatGPT, Perplexity, Gemini, and Copilot.

Category 1 — Category Queries

Queries where the AI must recommend a provider in your category. These test whether you appear as an option at all.

01. Who are the best [service type] in [city]?
02. What [service type] do you recommend in [city]?
03. Top [service type] near [city]
04. Best [service type] for [use case] in [city]
05. Which [service type] should I hire in [city]?
06. Highly rated [service type] in [city]
07. Who provides [service type] in [city]?
08. Trusted [service type] in [city]
09. Affordable [service type] in [city]
10. Experienced [service type] in [city]
11. Local [service type] in [city]
12. Reputable [service type] in [city]
13. Award-winning [service type] in [city]
14. Most recommended [service type] in [city]
15. Established [service type] in [city]
16. [City] [service type] recommendations
17. Find a [service type] in [city]
18. Who should I call for [service type] in [city]?
19. Good [service type] in [city]
20. Professional [service type] in [city]
Metric Formulas

Six Core HEO Metric Calculations

ERS — Entity Representation Score
Scale: 0–5 integer · 90-Day Target: ≥ 3.5
What to Measure

Manual rating per platform: 0 = Not found, 1 = Mentioned incorrectly, 2 = Mentioned correctly, 3 = Listed as an option, 4 = Recommended, 5 = Primary recommendation. Average across all 4 platforms.

Calculation Formula
ERS = (ChatGPT score + Perplexity score + Gemini score + Copilot score) ÷ 4
How to Interpret Results

ERS below 2.0 indicates entity ambiguity — the AI cannot reliably find or identify the business. ERS 2.0–3.4 indicates presence without recommendation authority. ERS 3.5+ indicates the entity is recognized and recommended.

All Six Metrics at a Glance
ERS — Entity Representation Score · Scale: 0–5 integer · 90-Day Target: ≥ 3.5
ERS = (ChatGPT score + Perplexity score + Gemini score + Copilot score) ÷ 4

PCR — Platform Coverage Rate · Scale: 0–100% · 90-Day Target: 100%
PCR = (Platforms with ≥1 citation ÷ 4) × 100

CF — Citation Frequency · Scale: 0–20 per platform · 90-Day Target: ≥ 12/20 on primary platform
CF (per platform) = Responses mentioning entity ÷ 20 queries. CF (aggregate) = Total citations across all 4 platforms ÷ 80 total queries.

CAR — Citation Accuracy Rate · Scale: 0–100% · 90-Day Target: ≥ 90%
CAR = (Correct factual claims ÷ Total factual claims checked) × 100

RR — Recommendation Rate · Scale: 0–100% of citations · 90-Day Target: ≥ 40%
RR = (Citations with recommendation signal ÷ Total citations) × 100

CFS — Citation Favorability Score · Scale: Positive / Neutral / Negative distribution · 90-Day Target: ≥ 60% Positive
CFS = (Positive citations ÷ Total citations) × 100 for Positive %; same for Neutral and Negative.
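The at-a-glance targets reduce to a simple pass/fail check at each re-audit. A sketch, assuming CF is expressed as the per-platform fraction (so the ≥ 12/20 target becomes ≥ 0.6); the Day 90 scores are hypothetical:

```python
# 90-day targets for the six metrics (CF as a 0-1 per-platform fraction).
TARGETS = {"ERS": 3.5, "PCR": 100.0, "CF": 12 / 20, "CAR": 90.0, "RR": 40.0, "CFS": 60.0}

def targets_met(scores):
    """Flag which of the six metrics reach their 90-day target."""
    return {code: scores[code] >= TARGETS[code] for code in TARGETS}

# Hypothetical Day 90 re-audit: every target hit except RR (38% < 40%).
day90 = {"ERS": 3.75, "PCR": 100.0, "CF": 14 / 20, "CAR": 95.0, "RR": 38.0, "CFS": 65.0}
status = targets_met(day90)
```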
Audit FAQ

Common Audit Questions

HEO Resource Cluster
Coined by Jason Todd Wade →
HEO Definition

The canonical 2,500-word definition of Hybrid Engine Optimization.

ninjaai.com/heo →
Implementation Checklist

47-point checklist across the five-phase HEO implementation sequence.

ninjaai.com/heo-implementation-checklist →
Metrics Tracker

Scoring rubrics, measurement methodology, and 90-day targets for all six metrics.

ninjaai.com/heo-metrics-tracker →
Case Study

Documented 90-day HEO engagement with all six metric progressions.

ninjaai.com/heo-case-study →