Proof Dossier / CogniPaws
Living Document · updates with every rescan
Material connection: CogniPaws and UCPScore share an operator. Disclosed per FTC 16 CFR § 255.5 and AICPA Ethics Interpretation 101-1. The receipt chain is the trust, not the disclaimer.

37 → 100

From invisible to machine-readable in 21 days.

77 fixes across structure, trust, parsing, and theme. Verified by receipt SHA afbd702. Reproducible against the public rubric. Updates with every rescan.

Case
CogniPaws · Shopify Dawn · 3 SKUs
Method
Scan → Fix → Rescan
Control
Same SHA · same scanner · same window
Result
Deterministic · afbd702

Receipt-pinned output

✓ Receipt afbd702

Score chain · 8 scans

37
40
73
77
86
95
97
100

Timeline

21 days

Fixes

77

Same store state + same rubric SHA = same score. Re-run the scanner — you get afbd702.
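A minimal sketch of what that determinism contract implies, assuming a hypothetical scanner shape (the dimension weights and the SHA-1 receipt derivation here are illustrative, not the production code path): the score must be a pure function of store state and rubric SHA, so two runs over identical inputs must agree byte for byte.

```python
import hashlib
import json

RUBRIC_SHA = "afbd702"  # short form of the pinned rubric SHA

def scan(store_state: dict, rubric_sha: str) -> dict:
    """Deterministic scoring sketch: the output is a pure function of
    (store_state, rubric_sha) -- no clock, no randomness, no network."""
    score = sum(store_state["dimensions"].values())
    # The receipt pins the exact inputs that produced the score.
    payload = json.dumps({"state": store_state, "rubric": rubric_sha},
                         sort_keys=True).encode()
    return {"score": score, "receipt": hashlib.sha1(payload).hexdigest()[:7]}

# Day 1 per-dimension breakdown from the scan #1 diagnostic.
day1 = {"dimensions": {"structured": 8, "trust": 6, "parser": 4, "theme": 8}}
assert scan(day1, RUBRIC_SHA) == scan(day1, RUBRIC_SHA)
```

Any hidden input (a clock, a cache, a network fetch) would break the equality and, with it, the receipt chain.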

Core Finding

The store did not win because it was bigger. It won because AI systems could understand it better.

Day 1

Invisible to inference.

The store was live. But AI systems couldn’t extract, trust, or recommend it.

Day 1 score

37/100

Measured, not selected. First scan against stock Dawn theme — no prior tuning.

1,741-store benchmark · May 2026

62.1/100 avg

CogniPaws Day 1 sat 25 points below the 1,741-store benchmark average. Margin of error ±0.3 at 95% CI.

Below all 4 DTC peers

Allbirds 49 · Skims 42 · Gymshark 41 · Bombas 38

Even the lowest-scoring named comparison brand outscored the Day 1 baseline.

Scan #1 — Failure state

Machine readability: 23%
Day 1 surface · Tier-1 ceiling

Structured

8/40

No schema markup. Products unparseable as entities.

Trust

6/20

No byline, citations, or credibility signals to cite.

Parser

4/20

JS-required rendering blocks AI extraction.

Theme

8/20

Fake template sections drown signal in boilerplate.

Scan #1 conclusion

This was not a bad store.

It was an unreadable one.

AI systems could not interpret it.

Falsification line

Re-run scan #1 against the same store-state at rubric SHA afbd702 — you should get 37 ± 0. If you don’t, the receipt chain is broken.

Scan #1 of 8 · pinned to receipt chain afbd702 · per-scan record at audit-trail.json#/scans/1. Reproducible against the public rubric. 37 was the scanner’s output, not a starting point chosen for narrative effect — selection-bias counter-signal per Pannucci 2010.
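The falsification line above can be mechanized. A sketch, assuming a plausible shape for the per-scan records (the field names here are guesses at the audit-trail schema, not its published structure):

```python
# Hypothetical audit-trail shape; the real file pins one record
# per scan at audit-trail.json#/scans/N.
audit_trail = {
    "scans": [
        {"n": 1, "score": 37, "rubric_sha": "afbd702"},
        {"n": 2, "score": 40, "rubric_sha": "afbd702"},
    ]
}

def chain_holds(rerun_score: int, scan_n: int, trail: dict) -> bool:
    """True iff a re-run reproduced the pinned score exactly (+/- 0)."""
    pinned = next(s for s in trail["scans"] if s["n"] == scan_n)
    return rerun_score == pinned["score"]

assert chain_holds(37, 1, audit_trail)      # receipt chain holds
assert not chain_holds(38, 1, audit_trail)  # any drift breaks it
```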

Score progression

Eight scans. One proof trail.

Each scan ran the same scanner against the same store at the same rubric SHA. One delta is honest instrumentation; the rest are causal.

Legend: store-side fix · instrumentation · Tier-1 ceiling · third-party control. Y-axis bands follow rubric tiers.

Scan 1 · 0–49

37

Invisible

AI cannot extract product entities.

Scan 3 · 50–79

73

Readable

Entities extractable. Instrumentation-corrected.

Scan 6 · 80–94 → 95+

95

Recommendation-ready

Crossed the threshold via store-side fixes.

Scan 8 · 95+

100

Fully legible

Deterministic output · receipt afbd702.

The first jump came from instrumentation.

Every gain after that came from structural fixes.

Falsification line

Re-run scan #6 against the same store-state at rubric SHA afbd702 — you should get 95 ± 0. If you don’t, the receipt chain is broken.

6 of 7 scan-to-scan deltas are causal store-side fixes; 1 is honest instrumentation (Firecrawl maxAge:0 cache bypass) — labeled, not laundered. Per Bradford Hill criteria · pinned to receipt chain afbd702. Third-party control case (anonymized affiliate, not UCPScore-operated) pre-registered at 61/100 on 2026-04-27 with 77 prioritized fixes queued before any work was authored — selection-bias counter-signal per Wald / Pannucci 2010. Full record in audit-trail.json.
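The delta attribution above is checkable arithmetically against the score chain. A sketch; treating the first jump as the instrumentation delta is an inference from "the first jump came from instrumentation," not a field read from the audit trail:

```python
scores = [37, 40, 73, 77, 86, 95, 97, 100]  # the 8-scan chain
deltas = [b - a for a, b in zip(scores, scores[1:])]

# 1 instrumentation delta (Firecrawl maxAge:0 cache bypass),
# 6 causal store-side deltas -- labeled, not laundered.
labels = ["instrumentation"] + ["store-side fix"] * 6

assert len(deltas) == 7
assert sum(deltas) == 63                    # 37 -> 100 end to end
assert labels.count("store-side fix") == 6  # 6 of 7 deltas are causal
```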

Fix system

77 fixes. One machine-readable system.

Every fix mapped to how AI extracts, trusts, parses, and ranks the store. Four lanes operating in parallel — counts from audit-trail.json, deltas pinned to the receipt chain.

Structured data

8

40

/40

27 fixes

before 20% · after 100%

Product, organization, FAQ, and labeled attributes became extractable entities.

Fixed: No schema markup · products unparseable as entities

Trust signals

6

20

/20

12 fixes

before 30% · after 100%

Warnings, disclosure, byline, and credibility markers became citable.

Fixed: No byline · no citations · no credibility signals

Parser / scanning

4

20

/20

15 fixes

before 20% · after 100%

Server-rendered evidence, cache bypass, and parsing discipline made the score measurable.

Fixed: JS-required rendering · AI extraction blocked

Theme cleanup

8

20

/20

23 fixes

before 40% · after 100%

Stock-Dawn placeholder sections removed; signal density increased.

Fixed: Fake template sections · boilerplate drowns signal

No single fix moved the score.

The system became readable when all four lanes aligned.

Total system load

77 fixes

Per-lane counts (Structured 27 · Trust 12 · Parser 15 · Theme 23 = 77) pulled from audit-trail.json#/fixesApplied. Each lane’s before/after delta is reproducible against the deterministic scanner at rubric SHA afbd702. Each lane’s “Fixed:” line points back to the failure mode named in the Day 1 diagnostic — fix system answers diagnostic, dimension by dimension.
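The arithmetic that footnote cites reduces to two sums; a sketch, with the lane keys as assumptions about the actual field names under `fixesApplied`:

```python
# Per-lane fix counts and post-fix dimension scores as published above.
fixes_applied = {"structured": 27, "trust": 12, "parser": 15, "theme": 23}
after = {"structured": 40, "trust": 20, "parser": 20, "theme": 20}

assert sum(fixes_applied.values()) == 77  # total system load
assert sum(after.values()) == 100         # all four lanes at ceiling
```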

Benchmark

Bigger brands lost.

Not because they were worse —
because AI couldn’t read them.

At the benchmark scan window, CogniPaws (3 SKUs, stock Dawn, no outside capital) sat 28 points above the next-highest DTC brand on machine-readability. The four DTC brands had raised $1.2B+ in cumulative venture capital between them.

Methodology

Same scanner code path · same rubric SHA afbd702 · same scan window 2026-04-22. CogniPaws shown at scan 4 (mid-progression) for apples-to-apples comparison; final score 100 reached by scan 8.

Same-week comparison

62.1 = InsightPath baseline · Phase 2 median 2026-05-11

CogniPaws · 77 at window → 100 final · same-week dominance
Allbirds · 49 · below corpus
Skims · 42 · below corpus
Gymshark · 41 · below corpus
Bombas · 38 · below corpus
Corpus avg · 62.1

Visibility did not win.

Legibility did.

Comparison data pulled from audit-trail.json#/competitorBenchmarks · scan window 2026-04-22 · rubric SHA afbd702. The four DTC brands were scanned with the same code path on the same week against the same gap registry SHA — selection-bias mitigation per Pannucci 2010. CogniPaws is shown at scan 4 (the same-week measurement), not its final scan-8 score, to keep the comparison apples-to-apples per Bradford Hill criteria.

Signal Stack

Same proof. Four AI-readable layers.

Slide Ledger · visual proof story

Podcast Briefing · listenable summary

Video Walkthrough · human walkthrough

Receipt Chain · machine proof

{
  "receipt":  "afbd702",
  "score":    100,
  "result":   "GO",
  "deviations": []
}
Open audit-trail.json
→ Verify yourself
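A minimal verifier sketch for the receipt object shown above; the GO criteria used here (perfect score, empty deviation list, matching short SHA) are assumptions about the contract, not its published definition:

```python
import json

# The published receipt object, as shown above.
receipt = json.loads("""{
  "receipt":  "afbd702",
  "score":    100,
  "result":   "GO",
  "deviations": []
}""")

def verify(r: dict) -> bool:
    """Assumed pass criteria for a GO receipt; the real check would
    also recompute the SHA against audit-trail.json."""
    return (r["result"] == "GO"
            and r["score"] == 100
            and r["deviations"] == []
            and r["receipt"] == "afbd702")

assert verify(receipt)
```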

AI extraction surface

What an AI agent consumes from this page.

Not a summary card. The literal payload an extractor returns — and an honest audit of the schemas this page currently emits.

Extracted entity payload

application/json
{
  "entity": "CogniPaws",
  "platform": "Shopify",
  "theme": "Dawn",
  "shopify_handle": "4qhyzx-ut.myshopify.com",

  "core_claim": "Legibility > Visibility",

  "outcome": {
    "baseline": 37,
    "final": 100,
    "delta": 63,
    "dimension": "ucpscore_v1"
  },

  "telemetry": {
    "timeline_days": 21,
    "scans": 8,
    "fixes": 77
  },

  "method": ["Scan", "Fix", "Rescan"],

  "proof": {
    "receipt_sha": "$afbd702d6a2e4b44ebe7a67043f18b9b7c1668b1",
    "audit_trail": "https://ucpscore.ai/case-study/cognipaws/audit-trail.json",
    "rubric_sha": "afbd702d6a2e4b44ebe7a67043f18b9b7c1668b1",
    "reproducible": true
  },

  "bias_disclosures": [
    "industry_sponsorship",
    "n_equals_1",
    "self_grading",
    "founder_operator",
    "survivorship",
    "reproducibility_layer_4",
    "selection",
    "causal_vs_correlational"
  ],

  "same_week_comparison": {
    "scan_window": "2026-04-22",
    "rubric_sha": "afbd702",
    "cognipaws_at_window": 77,
    "competitors": {
      "Allbirds": 49,
      "Skims": 42,
      "Gymshark": 41,
      "Bombas": 38
    }
  }
}
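Two consistency checks an extractor could run on that payload before trusting it; a sketch over the published numbers only:

```python
payload = {
    "outcome": {"baseline": 37, "final": 100, "delta": 63},
    "same_week_comparison": {
        "cognipaws_at_window": 77,
        "competitors": {"Allbirds": 49, "Skims": 42,
                        "Gymshark": 41, "Bombas": 38},
    },
}

outcome = payload["outcome"]
assert outcome["final"] - outcome["baseline"] == outcome["delta"]

comp = payload["same_week_comparison"]
lead = comp["cognipaws_at_window"] - max(comp["competitors"].values())
assert lead == 28  # the 28-point gap cited in the benchmark section
```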

Schema surface audit

Honest inventory of what the page emits today vs. what’s wired next. 8 live · 2 queued.

  • Article (JSON-LD)

    ✓ Live

    Author = UCPScore Intelligence Desk · publisher = UCPScore · mentions + citation + isBasedOn linked to audit-trail.

  • Organization (JSON-LD)

    ✓ Live

    UCPScore publisher node with @id stable identifier · knowsAbout entity graph.

  • BreadcrumbList

    ✓ Live

    UCPScore → Intelligence Desk → CogniPaws Case Study.

  • SpeakableSpecification

    ✓ Live

    Voice-agent lift targets on h1, h2, and [data-speakable] selectors.

  • reviewedScans (custom cluster)

    ✓ Live

    Article hasPart Claim[] · 8 scans pinned to audit-trail.json#/scans/N · 0 of 12 authority sites use this in production. The negative result is the moat.

  • Dataset

    ✓ Live

    audit-trail.json — distribution: JSON + CSV · variableMeasured + biasDisclosures fully populated.

  • OpenGraph + Twitter card

    ✓ Live

    Title, description, canonical URL via Next.js metadata.

  • HTML5 article semantics

    ✓ Live

    <main>, <section>, <article>-grade structure — server-rendered, no JS gate.

  • FAQPage (JSON-LD)

    ⏳ Queued

    Auto-emitted from Q:/A: markdown when FAQ block ships in body.

  • VideoObject + AudioObject + PodcastEpisode

    ⏳ Queued

    Multimedia schema with full chapter + transcript wiring — separate session.
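A sketch of the Article node the audit describes, built as plain Python and serialized to JSON-LD; the property names follow schema.org, but the values and nesting here are assumptions, not the page's actual emission:

```python
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "author": {"@type": "Organization", "name": "UCPScore Intelligence Desk"},
    "publisher": {"@type": "Organization", "name": "UCPScore"},
    "isBasedOn": "https://ucpscore.ai/case-study/cognipaws/audit-trail.json",
    "speakable": {
        "@type": "SpeakableSpecification",
        "cssSelector": ["h1", "h2", "[data-speakable]"],
    },
}

jsonld = json.dumps(article, indent=2)
assert '"@type": "SpeakableSpecification"' in jsonld
```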

This is no longer content.

It is structured, extractable truth.

Payload fields all reproduce against audit-trail.json · receipt SHA afbd702. The schema audit reflects the page's current emission state per the verify-first doctrine: the queued schemas are the next implementation push, not claims this page already makes. A case study about machine-readability that overstated its own schema surface would commit the exact failure it exists to refute.

Run your score

You don’t have to trust this case study.

Run the same scanner against your own store.

60-second scan · same deterministic rubric SHA afbd702 · public methodology · your own receipt. The scan is free because the receipt is the trust mechanism — and the trust mechanism is the product.

Same proof contract. Different store.

What you get

  • 60-second scan

    No credit card · no email gate · no signup wall.

  • Same scoring path

    Same deterministic scanner you just verified above. Public rubric.

  • Your own receipt SHA

    Cryptographic proof of state — yours to keep, audit, or republish.

  • Audit-trail downloadable

    JSON + CSV · variableMeasured + biasDisclosures fully populated.

The trust inversion

Other case studies ask you to trust the result. UCPScore asks you to verify it — and gives you the receipt to do it.

Free scan · public rubric registry · receipt SHA pinned per scan · CFA doctrine. Comparison data, methodology, and reproduction instructions in audit-trail.json.