
Methodology + Deliverables

The VERDICT™ Framework
+ PROOF™ Reports

Seven pillars. One outcome: your law firm cited by name when a future client asks ChatGPT, Gemini, Perplexity, or Google AI Overviews "who's the best lawyer for…" The C — Compliance — is the structural moat: no generalist agency offers ABA plus state-bar compliance review. We do, on every engagement, on every page.

Why this exists

Why a framework, and why this one.

Generative engine optimization didn't exist as a discipline three years ago. Agencies that bolted GEO onto traditional SEO are playing catch-up; their methodologies are legacy patterns retrofitted with AI vocabulary. WTT Digital was built AI-search-first. The VERDICT Framework is the methodology — designed from scratch to get law firms named in AI-generated answers.

Each of the seven pillars maps to a specific deliverable. Each has a measurable outcome on monthly reporting. And the ordering is intentional: Visibility before Entity, Entity before Relevance, Relevance before Digital PR. Skipping a pillar (or running them out of order) is the mistake that has already cost dozens of agencies — and their clients — meaningful outcomes.

The seventh pillar — Compliance — is the part nobody else offers. Law firm marketing is regulated by ABA Model Rules 7.1 through 7.5 plus state-specific advertising rules that vary widely. Outputs from generalist GEO agencies routinely violate these rules: "best lawyer" claims, unsubstantiated specialization, predictive language. WTT reviews every output against the rules in the firm's specific jurisdiction. Built in, not bolted on.

Methodology + Deliverables

How the work reaches you.

VERDICT is the methodology — the seven-pillar operating system that runs underneath every engagement. It's how we do the work.

PROOF™ is how you see the work. Every deliverable we ship — every monthly retainer report, every quarterly research paper, every engagement-start audit — follows the PROOF format. Five sections, the same five sections every time, so you always know where to look for what.

VERDICT runs under the hood: Visibility, Entity authority, Relevance engineering, Digital PR for AI, Intent-tuned content, Compliance, and Tracking. The seven pillars below explain each one. PROOF is what lands on your desk: Position, Receipts, Outcomes, Oversight, Forward. Five sections. Two formats. Detailed at the bottom of this page.

Most agencies tell you about their methodology and then ship an unstructured monthly slide deck. We separate the two on purpose. The methodology is what you're hiring. The deliverables are how you'll know it's working.

VERDICT™ — the engine

Seven pillars. Visibility · Entity · Relevance · Digital PR · Intent · Compliance · Tracking. Detailed below.

PROOF™ — the dashboard

Five sections. Position · Receipts · Outcomes · Oversight · Forward. Detailed at bottom.

V

Pillar 1 of 7

Visibility Audit

Baseline test of how the firm is currently cited (or absent) across ChatGPT, Gemini, Perplexity, Google AI Overviews, Copilot, and Claude — by practice area, by city, by buyer-intent question.

Baseline first. Without a citation baseline, you cannot measure lift, and the engagement is flying blind. Visibility runs on day one of any retainer and recurs monthly.

We test 50–100 buyer-intent prompts tuned to the firm's practice mix. We log which engines cite the firm vs. competitors, in what position, and with what framing. The output is a citation share-of-voice score and a 90-day quick-win plan.

Sample output

Sample output: Visibility report showing your firm cited in 3/12 PI prompts in your target city, vs. competitor A cited in 9/12. Recommendations: 4 quick wins to close the gap within 60 days.
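A citation share-of-voice score of the kind described above could be computed along these lines — a minimal sketch assuming a simple prompt-log format; field names, engines, and firm names are illustrative, not WTT's actual schema:

```python
from collections import defaultdict

# Each logged test records which engine answered a buyer-intent prompt
# and which firms it cited. All values below are illustrative.
prompt_logs = [
    {"engine": "chatgpt", "prompt": "best PI lawyer in Irvine", "cited": ["Competitor A", "Our Firm"]},
    {"engine": "chatgpt", "prompt": "car accident attorney Irvine", "cited": ["Competitor A"]},
    {"engine": "perplexity", "prompt": "best PI lawyer in Irvine", "cited": ["Our Firm", "Competitor B"]},
]

def share_of_voice(logs, firm):
    """Fraction of tested prompts, per engine, in which the firm was cited at all."""
    totals, hits = defaultdict(int), defaultdict(int)
    for row in logs:
        totals[row["engine"]] += 1
        if firm in row["cited"]:
            hits[row["engine"]] += 1
    return {engine: hits[engine] / totals[engine] for engine in totals}

print(share_of_voice(prompt_logs, "Our Firm"))
# e.g. {'chatgpt': 0.5, 'perplexity': 1.0}
```

The same log also supports position and framing analysis (where in the `cited` list the firm appears), which is what the competitor comparison in the sample output draws on.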

E

Pillar 2 of 7

Entity Authority

Schema markup, knowledge-graph optimization, attorney/firm entity work in directories (Avvo, Justia, FindLaw, Martindale-Hubbell), JD Supra, Law360 placements.

AI engines build entity graphs. Your firm is either an entity they recognize (with attorneys, practice areas, locations, languages, accreditations connected) or it's a string they sometimes encounter. Entities get cited; strings get glossed over.

We do the schema markup (LegalService, Person, FAQPage, BreadcrumbList), the directory hygiene, the knowledge-panel claim, and the cross-platform consistency work. Then we verify the entity is recognized across the major engines.

Sample output

Sample output: Schema/entity audit identifying 14 missing schema fields, 6 directory inconsistencies, and 2 attorney entity merges. Implementation completed by end of month 2.
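For illustration, minimal LegalService and Person markup of the kind this pillar describes might look like the sketch below — firm details and directory URLs are placeholders, emitted as JSON-LD from Python:

```python
import json

# Placeholder entity data for a hypothetical firm; the schema.org types
# (LegalService, Person) are the ones named in this pillar.
firm_schema = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Example Law Firm LLP",
    "areaServed": {"@type": "City", "name": "Irvine"},
    "knowsAbout": ["Personal Injury", "Immigration Law"],
    "employee": [{
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Attorney",
        "knowsLanguage": ["en", "ko"],
    }],
    # sameAs ties the entity to its directory profiles so engines can
    # reconcile the firm as one node, not several strings (placeholder URLs).
    "sameAs": [
        "https://www.avvo.com/attorneys/example",
        "https://www.justia.com/lawyers/example",
    ],
}

# Ready to embed in a <script type="application/ld+json"> tag.
print(json.dumps(firm_schema, indent=2))
```

The cross-platform consistency work is then verifying that the name, address, and attorney roster in this markup match what every directory in the `sameAs` list says.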

R

Pillar 3 of 7

Relevance Engineering

Content rewritten and engineered for AI retrieval — clear answers, structured Q&A, passage-level optimization, citation-friendly formatting.

AI engines pull from passages, not pages. A practice-area page that takes six paragraphs to answer "how much does an immigration consultation cost" will lose to a competitor that answers in one sentence followed by structured detail.

We re-engineer the firm's existing content for passage-level retrieval: clear question-answer pairs, structured lists, attorney-attributed statements, schema-tagged FAQs. Each piece is built to get cited verbatim or paraphrased cleanly.
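As a sketch of the schema-tagged FAQ piece, question-answer pairs structured for passage-level retrieval can be wrapped in FAQPage markup — the questions and answers below are illustrative, not client copy:

```python
import json

# Illustrative Q&A pairs: a direct one-sentence answer first, detail after.
faqs = [
    ("How much does an immigration consultation cost?",
     "Fees vary by firm; many offer a flat-rate initial consultation, "
     "often credited toward representation if you retain the firm."),
    ("How long does a family-based green card take?",
     "Processing time depends on the visa category and service-center "
     "backlog; an attorney can estimate yours from current USCIS data."),
]

def faq_page(pairs):
    """Wrap question/answer pairs in schema.org FAQPage markup."""
    return {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in pairs
        ],
    }

print(json.dumps(faq_page(faqs), indent=2))
```

Each `Question`/`Answer` pair is an independently retrievable passage, which is the point: the engine can lift the one-sentence answer verbatim without parsing the page.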

D

Pillar 4 of 7

Digital PR for AI

Citation acquisition from sources AI models actually pull from: legal directories, JD Supra, Law360, Above the Law, state-bar publications, Reddit legal subs, Quora.

Off-page authority for AI search isn't backlinks — it's citations from corpora the models trained on or retrieve from at runtime. JD Supra placements, Law360 commentary, state-bar publications, well-positioned Reddit and Quora answers. These are the surfaces AI engines disproportionately pull from for legal queries.

We pitch and place. We don't buy links. Every placement is editorial, attributed to a named attorney at the firm, and structured to be quotable by an AI engine.

I

Pillar 5 of 7

Intent-tuned Content

Practice-area-specific content engineered against the buyer's actual AI-search journey (Who/What/How/Why questions per practice area).

PI buyers ask different questions than immigration buyers, family law buyers than corporate. We map each practice area's actual buyer-intent prompts — observed across ChatGPT, Gemini, and Perplexity — and produce content tuned to those specific questions.

The output isn't a generic blog. It's a content set engineered against a documented prompt taxonomy: Who (e.g., "Who's the best PI lawyer in [city]"), What ("What's a wrongful-termination case worth"), How ("How do I file an immigration appeal"), Why ("Why was my visa denied"). Each piece targets a specific prompt cluster.
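The Who/What/How/Why taxonomy above can be represented as a simple data structure that feeds the tracked-prompt list — the prompts here are illustrative examples, not an observed corpus:

```python
# Illustrative prompt taxonomy for one practice area, following the
# Who/What/How/Why clusters described above.
PROMPT_TAXONOMY = {
    "immigration": {
        "who": ["Who is the best immigration lawyer in Irvine?"],
        "what": ["What does an immigration lawyer cost?"],
        "how": ["How do I file an immigration appeal?"],
        "why": ["Why was my visa denied?"],
    },
}

def prompts_for(practice_area):
    """Flatten a practice area's clusters into the tracked-prompt list."""
    clusters = PROMPT_TAXONOMY.get(practice_area, {})
    return [p for cluster in clusters.values() for p in cluster]

print(len(prompts_for("immigration")))  # prints 4
```

In practice each cluster holds a dozen or more observed prompt variants, and each content piece is written against exactly one cluster.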

C

Pillar 6 of 7

Compliance Layer

Every output reviewed against ABA Model Rules and the relevant state-bar advertising rules. No misleading specialization claims, no testimonial violations, no rule-breaking comparisons.

This is the structural moat. No generalist GEO agency offers ABA-rule review as part of the deliverable. Most don't even know it's required.

We review every public-facing output — hero copy, practice-area pages, FAQ answers, social-pitched commentary, attorney bio language — against ABA Model Rules 7.1 through 7.5 and the firm's state-specific advertising rules. We flag (and rewrite) language that risks violation: "best," "top-rated," predictive language, testimonial-handling, fee-quote phrasing, results-based marketing.

Two outcomes: the firm doesn't get a state-bar inquiry, and the content actually gets cited (because compliant content is also more accurate, which is what AI engines reward).

Sample output

Sample output: Pre-publish review of 12 practice-area FAQ answers; 3 flagged for revision under California Rule 7.1; revised, re-reviewed, published.
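A first-pass automated screen for the flag categories this pillar describes might look like the sketch below. The patterns are illustrative only: every hit (and every miss) still goes to a human reviewer checking the firm's actual jurisdiction, because rule-by-rule review can't be reduced to regexes.

```python
import re

# Naive screen for phrases ABA Model Rule 7.1 commonly implicates:
# superlatives and predictive outcome language. Patterns are illustrative.
RISK_PATTERNS = {
    "superlative": re.compile(r"\b(best|top[- ]rated|#1)\b", re.IGNORECASE),
    "predictive": re.compile(r"\b(guarantee[ds]?|will win|assured outcome)\b", re.IGNORECASE),
}

def flag_copy(text):
    """Return the risk categories a draft trips, for human review."""
    return [name for name, pattern in RISK_PATTERNS.items() if pattern.search(text)]

print(flag_copy("The best PI firm in town; we guarantee results."))
# ['superlative', 'predictive']
```

The value of the screen is triage speed, not judgment: it routes risky drafts to the reviewer faster, it never clears them on its own.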

T

Pillar 7 of 7

Tracking & Reporting

Monthly dashboards covering AI citation share-of-voice, question coverage, qualified-lead attribution, consultation-booked attribution, case-value attribution.

Tracking ties the engagement to a business outcome the managing partner cares about. Citation lift is the input; consultations booked and case value are the output.

Monthly dashboards. First-Friday delivery. Always include: citation share-of-voice trend by engine, prompt coverage trend, attribution to consultation requests, attribution to closed cases (where measurable). Annual review covers full pipeline impact.
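The share-of-voice trend line on the dashboard reduces to month-over-month deltas per engine — a minimal sketch with illustrative numbers:

```python
# Monthly citation share-of-voice per engine; values are illustrative
# fractions of tracked prompts in which the firm was cited.
history = {
    "chatgpt":    {"2026-02": 0.04, "2026-03": 0.12, "2026-04": 0.18},
    "perplexity": {"2026-02": 0.12, "2026-03": 0.33, "2026-04": 0.41},
}

def monthly_deltas(series):
    """Point change between consecutive months, as plotted on the dashboard."""
    months = sorted(series)  # ISO month keys sort chronologically
    return {m2: round(series[m2] - series[m1], 2)
            for m1, m2 in zip(months, months[1:])}

for engine, series in history.items():
    print(engine, monthly_deltas(series))
```

Attribution to consultations and case value rides on top of this: each AI-referred session is tagged at intake, so the trend and the revenue line come from the same period boundaries.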

PROOF™ — the deliverable format

PROOF — how you see the work.

PROOF is the format every deliverable follows. Five sections, every time. Two formats, depending on context.

Recurring · Monthly · For retainer clients

PROOF Reports

The format every retainer client gets, the same way every month. Built so a managing partner can read it in ten minutes and trust the work without needing to translate marketing language.

P

Position

Where your firm stands in AI search today. Citation share by engine, coverage on tracked prompts, entity-recognition status, branded vs. unbranded query performance.

Example line

Cited in 12% of tracked immigration prompts on ChatGPT (up from 4% at baseline). 31% on Perplexity. 0% on Google AI Overviews — entity disambiguation in progress.

R

Receipts

Every asset published, every schema deployment, every mention earned, every citation placed during the period. No black box. The work shown.

Example line

This period: 4 practice-area pages published, 7 schema enhancements deployed, 3 third-party citations earned, 11 entity-graph improvements.

O

Outcomes

What changed in the metrics and which prompts drove the change. Citation lift, engine coverage delta, AI-attributed traffic, AI-attributed consultations.

Example line

+47% citation lift on "best immigration lawyer Irvine." New entity recognition on Perplexity. Two consultations attributed to AI-search referral this period.

O

Oversight

Compliance review log. Every output reviewed against ABA Model Rule 7.1 and the relevant state-bar advertising rules. What was flagged, how it was resolved, who reviewed it (named, by hand). The structural difference between WTT and a generalist agency.

Example line

Compliance review: 14 outputs reviewed by Shawn Lai. Two flagged for California Rule 7.1 (testimonial language); revised before publication. Zero outputs published without review.

F

Forward

Hypothesis-driven plan for the next period. What we'll test, why, what we expect to move, what risk flags we're watching.

Example line

Next 30 days: testing entity disambiguation against the 6 firms with similar names in the target metro. Expected: 15–25% lift in branded-query citations. Risk flag: GBP listing pending verification.

Quarterly · Public · Peer-style research

PROOF Series

The format every research paper we publish follows. Empirical, falsifiable, written so other practitioners can reproduce the methodology and check the math. The work that earns inclusion in AI-engine training and citation corpora — and the work that distinguishes WTT from agencies that publish opinions instead of evidence.

P

Premise

The hypothesis being tested. What the paper claims and why it matters. Falsifiable from the first page.

R

Research

The methodology. Sample size, prompt corpus, time window, control conditions, data sources. Reproducible by anyone with the same tools.

O

Observation

The raw data. Charts, citation tables, prompt-by-prompt results. What was actually measured, before interpretation.

O

Outcome

The interpretation, with confidence intervals and dissenting takes. What the data does — and doesn't — support.

F

Finding

The bounded, actionable conclusion. What practitioners should do with this. What the next experiment should test.

PROOF Series № 1 · forthcoming · Q3 2026

Cal Bar Rule 7.1 Compliance and AI-Search Citation Rates

A quantitative study of 200 California law firms across 4 AI engines, 60 days, 50 buyer-intent prompts.

Premise (publishing): California firms whose public-facing content materially complies with California State Bar Rule 7.1 are cited more frequently in AI-search answers than non-compliant firms — controlling for domain authority, content velocity, and practice-area mix. We test the claim against 200 firms (100 compliant, 100 with documented 7.1 violations per public bar records) across ChatGPT, Perplexity, Claude, and Google AI Overviews.

Why we publish either way: If the hypothesis holds, compliance becomes a quantifiable AI-search lever, not just a regulatory cost. If it doesn't, that's also publishable — and it changes how we'd advise firms. Either result is useful. Both are publishable. That's the discipline.

Sample PROOF Report

What every retainer client receives, every month.

Most agencies guard their reports behind a sales call. We publish a real one. Client name and identifying details redacted with permission. Numbers, prompts, and observations are unaltered.

PROOF Report — Period: April 2026

Mid-market California immigration firm

P

Position

  • Citation share — ChatGPT (immigration prompts, Irvine + LA): 18% (baseline 4%, Δ +14 pts)
  • Citation share — Perplexity: 41% (baseline 12%, Δ +29 pts)
  • Citation share — Google AI Overviews: 6% (baseline 0%, first appearance this period)
  • Branded-query recognition: confirmed across ChatGPT, Perplexity, Claude. Pending: Gemini.
  • Tracked prompts: 50. Engines: 4. Snapshots: weekly via Profound.
R

Receipts

  • Practice-area pages published: 3 (EB-5, asylum, family-based AOS)
  • Schema deployments: LegalService updated on 8 pages; FAQPage added on 12; Person markup added for 2 attorneys
  • Editorial placements earned: 1 (JD Supra, EB-5 commentary, attorney-attributed)
  • Reddit/Quora attributed answers placed: 3 (r/immigration, r/USCIS — all attorney-named, all compliant)
  • Directory hygiene: Avvo, Justia, FindLaw entity merges completed (3 stale duplicates removed)
O

Outcomes

  • Largest mover: "best EB-5 immigration lawyer Orange County" — cited in 4/4 engines, prior period 1/4
  • AI-attributed website sessions: 312 (Q1 monthly avg: 47)
  • AI-attributed consultation requests: 7 confirmed, 2 booked (avg engagement value: $14,500)
  • Negative finding: Gemini did not pick up new entity recognition — investigating Knowledge Graph claim status next period
O

Oversight

  • Outputs reviewed: 23 (3 pages, 12 FAQ answers, 4 social posts, 3 Reddit/Quora answers, 1 JD Supra placement)
  • Reviewer: Shawn Lai, on every output, prior to publication
  • Cal Bar Rule 7.1 flags raised: 4 (testimonial-implication language, "best" superlative, predictive outcome phrasing, fee-quote phrasing)
  • Resolution: all 4 revised pre-publication. 0 outputs published without review.
  • Audit log retained for client record.
F

Forward

  • Hypothesis to test: Knowledge Graph claim → Gemini entity recognition lift within 30 days
  • Planned: 4 additional practice-area pages (asylum, U-visa, citizenship, deportation defense), Korean-language placements (primary client demographic)
  • Expected: branded-query share-of-voice on Gemini moves from 0% to 25–40% by month-end
  • Risk flag: state-bar 7.1 enforcement update circulating — monitoring for revisions
Reviewed and signed: Shawn Lai, Founder & AI/GEO Strategist, WTT Digital. Compliance review log retained on file.

Why two formats, one acronym

A monthly client report and a peer-reviewed research paper are different artifacts with different audiences — but they answer the same five questions in the same order. Where do things stand. What was done. What changed. Was it sound. What's next. We use one acronym for both because the discipline is the same: don't write a section the reader has to translate, and don't ship a deliverable without all five.

FAQ

Questions about VERDICT and PROOF.

What's the difference between VERDICT™ and PROOF™?

VERDICT is the methodology — the seven-pillar operating system that runs underneath every engagement. It's how we do the work. PROOF is the deliverable format. Every monthly client report follows the PROOF format (Position, Receipts, Outcomes, Oversight, Forward); every quarterly research paper follows the PROOF format (Premise, Research, Observation, Outcome, Finding). Different audiences, same five-section discipline. VERDICT is the engine; PROOF is the dashboard.

Are VERDICT™ and PROOF™ separate trademarks?

VERDICT™ and PROOF™ are filed as separate trademarks of WTT Consulting LLC, doing business as WTT Digital. They protect distinct assets — VERDICT covers the seven-pillar methodology; PROOF covers the deliverable format. Independent USPTO clearance for both marks is in progress.

Why seven pillars instead of five?

Each pillar maps to a discrete deliverable with a separate measurement layer. Compressing them dilutes accountability. Five-pillar frameworks (common in generalist agencies) typically merge Compliance into a footnote or skip it entirely — that's exactly the gap WTT Digital was built to close.

Can I see a sample PROOF Report?

Yes. A redacted, anonymized sample PROOF Report is published openly on this page — same format every retainer client receives monthly. We publish it because the brand is built on receipts, not promises.

Can I start with just the Visibility audit?

Yes — the Visibility pillar is also available as a standalone Diagnostic Audit ($2,500 fixed, 30-page deliverable, 60-min walkthrough). The audit is delivered in the PROOF format. It's a common way firms start the relationship before committing to a full retainer.

Does GEO replace traditional SEO?

It complements rather than replaces. Roughly 60% of the technical work (schema, entity, content engineering) overlaps with modern technical SEO. The 40% that's new is AI-search-specific: prompt simulation, AI-citation tracking, citation-source acquisition tuned to AI corpora, compliance review on AI-generated outputs.


Ready when you are

Find out how AI search sees your firm.

Free AI Visibility Audit. 60 seconds. See your citation gap before your competitors do.

Get my free audit Book a strategy call