Methodology + Deliverables
Seven pillars. One outcome: your law firm cited by name when a future client asks ChatGPT, Gemini, Perplexity, or Google AI Overviews "who's the best lawyer for…" The C (Compliance) is the structural moat: no generalist agency offers ABA + state-bar compliance review. We do, on every engagement, on every page.
Why this exists
Generative engine optimization didn't exist as a discipline three years ago. Agencies that bolted GEO onto traditional SEO are playing catch-up; their methodologies are legacy patterns retrofitted with AI vocabulary. WTT Digital was built AI-search-first. The VERDICT Framework is the methodology — designed from scratch to get law firms named in AI-generated answers.
Each of the seven pillars maps to a specific deliverable. Each has a measurable outcome on monthly reporting. And the ordering is intentional: Visibility before Entity, Entity before Relevance, Relevance before Digital PR. Skipping a pillar, or running the pillars out of order, is the mistake that has already cost dozens of agencies the client outcomes they were hired to deliver.
The seventh pillar — Compliance — is the part nobody else offers. Law firm marketing is regulated by ABA Model Rules 7.1 through 7.5 plus state-specific advertising rules that vary widely. Outputs from generalist GEO agencies routinely violate these rules: "best lawyer" claims, unsubstantiated specialization, predictive language. WTT reviews every output against the rules in the firm's specific jurisdiction. Built in, not bolted on.
VERDICT is the methodology — the seven-pillar operating system that runs underneath every engagement. It's how we do the work.
PROOF™ is how you see the work. Every deliverable we ship — every monthly retainer report, every quarterly research paper, every engagement-start audit — follows the PROOF format. Five sections, the same five sections every time, so you always know where to look for what.
VERDICT runs under the hood: Visibility, Entity authority, Relevance engineering, Digital PR for AI, Intent-tuned content, Compliance, and Tracking. The seven pillars below explain each one. PROOF is what lands on your desk: Position, Receipts, Outcomes, Oversight, Forward. Five sections. Two formats. Detailed at the bottom of this page.
Most agencies tell you about their methodology and then ship an unstructured monthly slide deck. We separate the two on purpose. The methodology is what you're hiring. The deliverables are how you'll know it's working.
VERDICT™ — the engine
Seven pillars. Visibility · Entity · Relevance · Digital PR · Intent · Compliance · Tracking. Detailed below.
PROOF™ — the dashboard
Five sections. Position · Receipts · Outcomes · Oversight · Forward. Detailed at bottom.
Pillar 1 of 7 · Visibility
Baseline test of how the firm is currently cited (or absent) across ChatGPT, Gemini, Perplexity, Google AI Overviews, Copilot, and Claude — by practice area, by city, by buyer-intent question.
Baseline first. Without a citation baseline, you cannot measure lift, and the engagement is flying blind. Visibility runs on day one of any retainer and recurs monthly.
We test 50–100 buyer-intent prompts tuned to the firm's practice mix. We log which engines cite the firm vs. competitors, in what position, and with what framing. The output is a citation share-of-voice score and a 90-day quick-win plan.
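To make the mechanics concrete, here is a minimal sketch of that baseline loop in Python. Everything named here is a placeholder: the prompt list and engine identifiers are illustrative, run_prompt() stands in for the real per-engine clients, and real name-matching needs more than a substring check.

from collections import defaultdict

PROMPTS = [
    "who's the best personal injury lawyer in Irvine",
    "how much does an immigration consultation cost in Irvine",
    # ...50-100 buyer-intent prompts tuned to the firm's practice mix
]
ENGINES = ["chatgpt", "gemini", "perplexity", "ai_overviews", "copilot", "claude"]

def run_prompt(engine: str, prompt: str) -> str:
    """Placeholder: returns the engine's answer text for one prompt."""
    raise NotImplementedError("wire up the real client for each engine")

def citation_share(firm: str, competitors: list[str]) -> dict[str, float]:
    """Fraction of engine-prompt tests in which each entity is named."""
    cited: dict[str, int] = defaultdict(int)
    total = 0
    for engine in ENGINES:
        for prompt in PROMPTS:
            answer = run_prompt(engine, prompt).lower()
            total += 1
            for entity in [firm, *competitors]:
                if entity.lower() in answer:
                    cited[entity] += 1
    return {entity: count / total for entity, count in cited.items()}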
Sample output
Sample output: Visibility report showing your firm cited in 3/12 PI prompts in your target city, vs. competitor A cited in 9/12. Recommendations: 4 quick wins to close the gap within 60 days.
Pillar 2 of 7 · Entity authority
Schema markup, knowledge-graph optimization, attorney/firm entity work in directories (Avvo, Justia, FindLaw, Martindale-Hubbell), JD Supra and Law360 placements.
AI engines build entity graphs. Your firm is either an entity they recognize (with attorneys, practice areas, locations, languages, accreditations connected) or it's a string they sometimes encounter. Entities get cited; strings get glossed over.
We do the schema markup (LegalService, Person, FAQPage, BreadcrumbList), the directory hygiene, the knowledge-panel claim, and the cross-platform consistency work. Then we verify the entity is recognized across the major engines.
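As an illustration of what ships, here is a minimal Python sketch that emits the kind of JSON-LD this pillar deploys. The firm, attorney, and directory URLs are placeholders; the schema.org types and properties (LegalService, Person, sameAs) are the real ones.

import json

# Placeholder firm and attorney; swap in the real entity data.
firm = {
    "@context": "https://schema.org",
    "@type": "LegalService",
    "name": "Example Law Firm LLP",
    "url": "https://example-firm.com",
    "areaServed": {"@type": "City", "name": "Irvine"},
    "knowsAbout": ["Personal injury law", "Immigration law"],
    "employee": [{
        "@type": "Person",
        "name": "Jane Doe",
        "jobTitle": "Attorney",
        "sameAs": [
            # Directory profiles kept consistent across platforms (placeholders)
            "https://www.avvo.com/attorneys/example",
            "https://www.justia.com/lawyers/example",
        ],
    }],
}

print(f'<script type="application/ld+json">{json.dumps(firm, indent=2)}</script>')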
Sample output
Sample output: Schema/entity audit identifying 14 missing schema fields, 6 directory inconsistencies, and 2 attorney entity merges. Implementation completed by end of month 2.
Pillar 3 of 7 · Relevance engineering
Content rewritten and engineered for AI retrieval — clear answers, structured Q&A, passage-level optimization, citation-friendly formatting.
AI engines pull from passages, not pages. A practice-area page that takes six paragraphs to answer "how much does an immigration consultation cost" will lose to a competitor that answers in one sentence followed by structured detail.
We re-engineer the firm's existing content for passage-level retrieval: clear question-answer pairs, structured lists, attorney-attributed statements, schema-tagged FAQs. Each piece is built to get cited verbatim or paraphrased cleanly.
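As a sketch of the shape that work produces, here is a schema-tagged FAQ built the retrieval-friendly way: a direct answer in the first sentence, structured detail after. The question and answer text are illustrative placeholders, not legal advice.

import json

faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "How much does an immigration consultation cost?",
        "acceptedAnswer": {
            "@type": "Answer",
            # One-sentence direct answer leads; structured detail follows.
            "text": (
                "An initial immigration consultation typically runs 30 to 60 "
                "minutes and is billed as a flat fee that varies by case type. "
                "Family-based petitions usually sit at the low end of a firm's "
                "range; removal defense at the high end."
            ),
        },
    }],
}
print(json.dumps(faq, indent=2))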
Pillar 4 of 7 · Digital PR for AI
Citation acquisition from sources AI models actually pull from: legal directories, JD Supra, Law360, Above the Law, state-bar publications, Reddit legal subs, Quora.
Off-page authority for AI search isn't backlinks — it's citations from corpora the models trained on or retrieve from at runtime. JD Supra placements, Law360 commentary, state-bar publications, well-positioned Reddit and Quora answers. These are the surfaces AI engines disproportionately pull from for legal queries.
We pitch and place. We don't buy links. Every placement is editorial, attributed to a named attorney at the firm, and structured to be quotable by an AI engine.
Pillar 5 of 7 · Intent-tuned content
Practice-area-specific content engineered against the buyer's actual AI-search journey (Who/What/How/Why questions per practice area).
PI buyers ask different questions than immigration buyers, and family-law buyers different questions than corporate clients. We map each practice area's actual buyer-intent prompts (observed across ChatGPT, Gemini, and Perplexity) and produce content tuned to those specific questions.
The output isn't a generic blog. It's a content set engineered against a documented prompt taxonomy: Who (e.g., "Who's the best PI lawyer in [city]"), What ("What's a wrongful-termination case worth"), How ("How do I file an immigration appeal"), Why ("Why was my visa denied"). Each piece targets a specific prompt cluster.
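A documented prompt taxonomy is concrete enough to put in code. Here is a minimal sketch of the data structure; the practice areas and prompt strings are illustrative, since the real taxonomy comes from observed prompts, not invented ones.

# Each practice area maps Who/What/How/Why clusters to observed prompts.
PROMPT_TAXONOMY = {
    "personal_injury": {
        "who":  ["who's the best PI lawyer in {city}"],
        "what": ["what's my car-accident claim worth"],
        "how":  ["how long do I have to file a PI claim in {state}"],
        "why":  ["why is the insurer lowballing my settlement"],
    },
    "immigration": {
        "who":  ["who handles visa appeals in {city}"],
        "what": ["what does an immigration consultation cost"],
        "how":  ["how do I file an immigration appeal"],
        "why":  ["why was my visa denied"],
    },
}

def prompts_for(area: str, city: str, state: str) -> list[str]:
    """Expand one practice area's clusters into testable prompts."""
    return [p.format(city=city, state=state)
            for cluster in PROMPT_TAXONOMY[area].values()
            for p in cluster]

print(prompts_for("personal_injury", city="Irvine", state="California"))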
Pillar 6 of 7 · Compliance
Every output reviewed against ABA Model Rules and the relevant state-bar advertising rules. No misleading specialization claims, no testimonial violations, no rule-breaking comparisons.
This is the structural moat. No generalist GEO agency offers ABA-rule review as part of the deliverable. Most don't even know it's required.
We review every public-facing output — hero copy, practice-area pages, FAQ answers, social-pitched commentary, attorney bio language — against ABA Model Rules 7.1 through 7.5 and the firm's state-specific advertising rules. We flag (and rewrite) language that risks violation: "best," "top-rated," predictive language, testimonial-handling, fee-quote phrasing, results-based marketing.
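For a sense of how a first pass can be automated, here is a minimal Python sketch of a risk-phrase triage. A pattern list like this only surfaces candidates for the named-attorney review described above; it is not the compliance control itself, and the patterns are illustrative.

import re

# Illustrative risk patterns keyed to the language categories above.
RISK_PATTERNS = {
    "superlative claim":    r"\b(best|top[- ]rated|leading)\b|#1",
    "predictive language":  r"\b(guarantee[ds]?|will win|assured outcome)\b",
    "specialization claim": r"\bspecialists?\b|\bspecializes? in\b",
}

def flag_copy(text: str) -> list[tuple[str, str]]:
    """Return (risk-category, matched-phrase) pairs for human review."""
    hits = []
    for label, pattern in RISK_PATTERNS.items():
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append((label, match.group(0)))
    return hits

print(flag_copy("The top-rated firm that guarantees results."))
# [('superlative claim', 'top-rated'), ('predictive language', 'guarantees')]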
Two outcomes: the firm doesn't get a state-bar inquiry, and the content actually gets cited (because compliant content is also more accurate, which is what AI engines reward).
Sample output
Sample output: Pre-publish review of 12 practice-area FAQ answers; 3 flagged for revision under California Rule 7.1; revised, re-reviewed, published.
Pillar 7 of 7 · Tracking
Monthly dashboards covering AI citation share-of-voice, question coverage, qualified-lead attribution, consultation-booked attribution, case-value attribution.
Tracking ties the engagement to a business outcome the managing partner cares about. Citation lift is the input; consultations booked and case value are the output.
Monthly dashboards. First-Friday delivery. Always include: citation share-of-voice trend by engine, prompt coverage trend, attribution to consultation requests, attribution to closed cases (where measurable). Annual review covers full pipeline impact.
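A minimal sketch of the share-of-voice trend computation behind that dashboard, fed by the same citation logs Pillar 1 produces each month. The log rows and figures are placeholders.

from collections import defaultdict

# One row per engine-prompt test: (month, engine, firm_was_cited)
log = [
    ("2026-01", "chatgpt", False), ("2026-01", "perplexity", True),
    ("2026-02", "chatgpt", True),  ("2026-02", "perplexity", True),
]

def sov_trend(rows):
    """Month -> engine -> fraction of tracked prompts citing the firm."""
    counts = defaultdict(lambda: defaultdict(lambda: [0, 0]))
    for month, engine, cited in rows:
        tally = counts[month][engine]
        tally[0] += int(cited)   # answers citing the firm
        tally[1] += 1            # prompts tested
    return {month: {engine: hits / total
                    for engine, (hits, total) in engines.items()}
            for month, engines in counts.items()}

print(sov_trend(log))
# {'2026-01': {'chatgpt': 0.0, 'perplexity': 1.0},
#  '2026-02': {'chatgpt': 1.0, 'perplexity': 1.0}}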
PROOF™ — the deliverable format
PROOF is the format every deliverable follows. Five sections, every time. Two formats, depending on context.
Recurring · Monthly · For retainer clients
The format every retainer client gets, the same way every month. Built so a managing partner can read it in ten minutes and trust the work without needing to translate marketing language.
Position: where your firm stands in AI search today. Citation share by engine, coverage on tracked prompts, entity-recognition status, branded vs. unbranded query performance.
Example line
Cited in 12% of tracked immigration prompts on ChatGPT (up from 4% at baseline). 31% on Perplexity. 0% on Google AI Overviews — entity disambiguation in progress.
Receipts: every asset published, every schema deployment, every mention earned, every citation placed during the period. No black box. The work shown.
Example line
This period: 4 practice-area pages published, 7 schema enhancements deployed, 3 third-party citations earned, 11 entity-graph improvements.
Outcomes: what changed in the metrics and which prompts drove the change. Citation lift, engine coverage delta, AI-attributed traffic, AI-attributed consultations.
Example line
+47% citation lift on "best immigration lawyer Irvine." New entity recognition on Perplexity. Two consultations attributed to AI-search referral this period.
Oversight: the compliance review log. Every output reviewed against ABA Model Rules 7.1 through 7.5 and the relevant state-bar advertising rules. What was flagged, how it was resolved, who reviewed it (named, by hand). The structural difference between WTT and a generalist agency.
Example line
Compliance review: 14 outputs reviewed by Shawn Lai. Two flagged for California Rule 7.1 (testimonial language); revised before publication. Zero outputs published without review.
Forward: the hypothesis-driven plan for the next period. What we'll test, why, what we expect to move, what risk flags we're watching.
Example line
Next 30 days: testing entity disambiguation against the 6 firms with similar names in the target metro. Expected: 15-25% lift in branded-query citations. Risk flag: GBP listing pending verification.
Quarterly · Public · Peer-style research
The format every research paper we publish follows. Empirical, falsifiable, written so other practitioners can reproduce the methodology and check the math. The work that earns inclusion in AI-engine training and citation corpora — and the work that distinguishes WTT from agencies that publish opinions instead of evidence.
Position: the hypothesis being tested. What the paper claims and why it matters. Falsifiable from the first page.
Receipts: the methodology. Sample size, prompt corpus, time window, control conditions, data sources. Reproducible by anyone with the same tools.
Outcomes: the raw data. Charts, citation tables, prompt-by-prompt results. What was actually measured, before interpretation.
Oversight: the interpretation, with confidence intervals and dissenting takes. What the data supports, and what it doesn't.
Forward: the bounded, actionable conclusion. What practitioners should do with this. What the next experiment should test.
PROOF Series № 1 · forthcoming · Q3 2026
A quantitative study of 200 California law firms across 4 AI engines, 60 days, 50 buyer-intent prompts.
Premise: California firms whose public-facing content materially complies with California State Bar Rule 7.1 are cited more frequently in AI-search answers than non-compliant firms — controlling for domain authority, content velocity, and practice-area mix. We test the claim against 200 firms (100 compliant, 100 with documented 7.1 violations per public bar records) across ChatGPT, Perplexity, Claude, and Google AI Overviews.
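For readers who want to check the math, here is the simplest version of the core comparison in Python: a two-proportion z-test on firm-level citation rates, before the covariate controls (domain authority, content velocity, practice-area mix) the full design adds. The counts below are placeholders, not study results.

from math import sqrt, erf

def two_proportion_ztest(cited_a, n_a, cited_b, n_b):
    """H0: both cohorts are cited at the same rate."""
    p_a, p_b = cited_a / n_a, cited_b / n_b
    pooled = (cited_a + cited_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # 2 * (1 - Phi(|z|))
    return z, p_two_sided

# Placeholder counts: 62/100 compliant firms cited in at least one tracked
# answer vs. 45/100 non-compliant firms.
z, p = two_proportion_ztest(62, 100, 45, 100)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")  # z = 2.41, p = 0.016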
Why we publish either way: if the hypothesis holds, compliance becomes a quantifiable AI-search lever, not just a regulatory cost. If it doesn't, that changes how we'd advise firms. Either result is useful. Both are publishable. That's the discipline.
Sample PROOF Report
Most agencies guard their reports behind a sales call. We publish a real one. Client name and identifying details redacted with permission. Numbers, prompts, and observations are unaltered.
Why two formats, one acronym
A monthly client report and a peer-reviewed research paper are different artifacts with different audiences — but they answer the same five questions in the same order. Where do things stand. What was done. What changed. Was it sound. What's next. We use one acronym for both because the discipline is the same: don't write a section the reader has to translate, and don't ship a deliverable without all five.
FAQ
Are VERDICT™ and PROOF™ trademarked?
Both are filed trademarks of WTT Consulting LLC (DBA WTT Digital), with USPTO clearance in progress. They are filed and used as separate marks because they protect distinct assets: VERDICT covers the methodology, PROOF covers the deliverable format.
Ready when you are
Free AI Visibility Audit. 60 seconds. See your citation gap before your competitors do.