When someone asks ChatGPT, Claude, or Perplexity for "the best personal injury lawyer in Chicago," which firm gets recommended isn't random. It's not purely about who has the biggest verdicts or the most Super Lawyers awards. And it's definitely not about who has the best-looking website.
It's about which firm's website is technically readable by AI systems.
We know this because we tested it. In April 2026, Faneros conducted a comprehensive technical AI readiness audit of 10 of the most competitive personal injury and accident law firms in the Chicago market. We queried 7 major AI platforms with 17 real client questions, counted every mention, and then crawled every firm's live website to diagnose why some firms dominated AI recommendations while others — including some of the most prestigious firms in the state — were nearly invisible.
The findings challenge a fundamental assumption about online visibility: that authority and reputation are enough.
Methodology
We selected 10 Chicago-based personal injury and accident law firms that appear repeatedly in AI responses for high-intent client queries. These are the firms that prospective clients encounter when they ask AI platforms questions like "best car accident lawyer Chicago," "what should I do after a truck crash in Illinois," or "Chicago nursing home negligence attorney."
We tested 17 queries of this type across 7 major AI platforms: ChatGPT, Claude, Perplexity, Gemini, Grok, Copilot, and Google AI Overview. This produced 119 total platform responses, and we recorded every firm mention in every response.
Then we conducted a full technical inspection of each firm's website. No marketing creative or branding elements were evaluated — only how the websites communicate with AI systems. Specifically, we audited:
- Presence and quality of llms.txt — the lightweight file that helps LLMs parse site content
- Robots.txt rules for AI user agents (GPTBot, ClaudeBot, PerplexityBot, etc.)
- Schema markup validity — FAQPage, Article/BlogPosting, Organization, Attorney/Person, LegalService — including parse errors and duplicate blocks
- Content extractability — whether AI crawlers can reach the actual content or get blocked by JavaScript fragments
- Indexable page count and content depth
Every finding was verified by crawling the live site and validating output programmatically. All data points are independently reproducible.
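The programmatic validation step can be sketched in a few lines. This is an illustrative approach, not the audit's actual tooling: the regex-based extraction is deliberately simplified, and a production audit would use a proper HTML parser.

```python
import json
import re

def validate_jsonld(html: str) -> list[dict]:
    """Extract every JSON-LD block from an HTML page and report whether it parses."""
    results = []
    # Simplified extraction for illustration; real audits should parse the DOM.
    pattern = r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>'
    for block in re.findall(pattern, html, flags=re.DOTALL | re.IGNORECASE):
        try:
            data = json.loads(block)
            results.append({
                "valid": True,
                "type": data.get("@type") if isinstance(data, dict) else None,
            })
        except json.JSONDecodeError as exc:
            results.append({"valid": False, "error": str(exc)})
    return results

page = '''<html><head>
<script type="application/ld+json">{"@context": "https://schema.org", "@type": "LegalService", "name": "Example Firm"}</script>
</head></html>'''
print(validate_jsonld(page))  # one valid LegalService block
```

Running a check like this against every page of a site is what makes findings such as "zero parse errors" or "3+ parse errors" independently reproducible.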
The Full Competitive Comparison
The following table represents the complete dataset. Total AI mentions ranged from a high of 82 to a low of 12 — a nearly 7× gap between firms competing in the same market for the same clients.
| Firm | AI Mentions | llms.txt | AI Bot Rules | FAQ Schema | Article Schema | Schema Errors | Content URLs | Authority Signal |
|---|---|---|---|---|---|---|---|---|
| Salvi, Schostok & Pritchard | 82 | ✓ | Standard | ✓ 17 Qs | ✗ | 0 | 1,780 | $363M verdict, Top 3 IL |
| Clifford Law Offices | 34 | ✗ | Ultra-open | ✗ | ✗ | 0 | ~500+ | #1 IL lawyer 13 yrs |
| Malman Law | 31 | ✓ | Explicit AI | ✓ 3 Qs | ✓ | 0 | 1,172 | Moderate |
| Power Rogers | 29 | ✓ | Standard | ✗ | ✗ | 0 | ~400+ | Strong verdicts |
| Levin & Perconti | 22 | ✗ | Standard | ✗ | ✗ | 0 | ~300+ | Nursing home leader |
| Ankin Law | 16 | ✗ | Standard | ✗ | ✗ | 0 | ~500+ | Workers comp focus |
| Corboy & Demetrio | 13 | ✗ | Honeypot | ✗ | ✗ | 0 | ~200+ | #1 IL 15+ years |
| Romanucci & Blandin | 13 | ✗ | Honeypot | ✗ | ✗ | 0 | ~300+ | #2 IL, Tier 1 national |
| Staver Accident Injury | 13 | ✓ | Standard | ✗ | ✗ | 1 | 1,430 | Moderate |
| Disparti Law Group | 12 | ✗ | Standard | BROKEN | ✗ | 3+ | 1,359 | $2B recoveries |
Look at the bottom of this table. Corboy & Demetrio — whose lead attorney Thomas Demetrio has been ranked the #1 lawyer in Illinois for over 15 consecutive years, whose firm has recovered billions in settlements — has 13 AI mentions. The same as Staver, a firm with a fraction of their prestige. Disparti Law Group, with $2 billion in total recoveries and 1,359 indexable pages, has 12.
Now look at the top. Salvi leads with 82 mentions, driven by a combination of genuine courtroom authority ($363 million Sterigenics verdict, ITLA presidency) and clean technical infrastructure — llms.txt, 17 validated FAQ questions in schema, page-specific structured data.
The question this data forces is: what explains the gap?
The Core Correlations
| Technical Factor | Firms With | Avg. Mentions | Firms Without | Avg. Mentions | Multiplier |
|---|---|---|---|---|---|
| llms.txt file present | 4 | 38.75 | 6 | 18.3 | 2.1× |
| Working FAQPage schema | 2 | 56.5 | 8 | ~20 | ~2.8× |
| Zero schema parse errors | 8 | ~31 | 2 | 12.5 | 2.5× |
| Explicit AI bot allow rules | 1 | 31 | 9 | ~22 | 1.4× |
| Article/BlogPosting schema | 1 | 31 | 9 | ~23 | 1.3× |
The presence of an llms.txt file showed the clearest and most consistent correlation with higher AI visibility. Firms that provided this dedicated instruction file for LLMs received more than double the AI mentions of firms without one.
Working FAQ schema produced the highest multiplier (~2.8×), though with only two firms in the sample, the signal is strong but narrow. Schema errors produced the clearest penalty: the two firms with parse errors averaged just 12.5 mentions — the bottom of the entire ranking.
These factors compound. Malman Law, which combined llms.txt + explicit AI bot rules + Article schema + FAQ schema + zero errors, achieved 31 mentions despite moderate authority. It outperformed firms with objectively stronger courtroom credentials.
Case Study: Malman Law — How Technical Optimization Beats Authority
Malman Law vs. Disparti Law Group: Malman — 1,172 pages, moderate authority; Disparti — 1,359 pages, $2B recoveries.
Malman Law is the single most important data point in this audit. Disparti Law Group has 16% more indexable content, a $2 billion recovery track record, a recognizable consumer brand (#LARRYWINS), and broader practice area coverage. By every traditional authority measure, Disparti should outperform Malman.
Instead, Malman receives 2.5× more AI mentions. The difference is entirely technical.
What Malman does that Disparti does not:
- A fully implemented llms.txt file with content summaries for every blog post, generated by their SEO plugin
- Explicit AI bot allow rules in robots.txt — individually naming GPTBot, ClaudeBot, ChatGPT-User, and CCBot
- Article schema with named author attribution ("Steven J. Malman") on every blog post
- Working FAQPage schema on the homepage
- Clean Organization schema with multiple office locations
- Zero schema parse errors across all pages tested
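Explicit allow rules of this kind take only a few lines of robots.txt. The sketch below names the agents the audit checked for; the sitemap URL is a placeholder, and exact directives will vary by site.

```text
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: ChatGPT-User
Allow: /

User-agent: CCBot
Allow: /

Sitemap: https://www.example.com/sitemap.xml
```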
What's blocking Disparti:
- Two FAQPage schema blocks on the homepage that both fail to parse due to invalid control characters
- A leading space character in the "@type": " LegalService" field that repeats globally
- Identical schema blocks on every page — no page-specific structured data
- No Article schema on any of their 556 blog posts
- No llms.txt
- A keyword-stuffed Organization name field
- The first extractable "paragraph" on key practice pages is JavaScript code from a mobile menu toggle
Malman doesn't have a $2 billion track record. What they have is a website that tells AI exactly who they are, what they do, who writes their content, and what questions they can answer — in every format AI systems check.
Case Study: Corboy & Demetrio — World-Class Authority, Bottom-Tier Visibility
If authority alone drove AI visibility, Corboy would dominate every ranking. They don't.
Thomas Demetrio has been ranked the #1 lawyer in Illinois for over 15 consecutive years. Corboy & Demetrio has recovered billions in total settlements. Their courtroom record is arguably unmatched in Chicago. If E-E-A-T — Experience, Expertise, Authoritativeness, Trustworthiness — were the only factor in AI recommendations, this firm would appear at the top of every response.
They received 13 AI mentions. The same as Staver, a firm with a fraction of their prestige.
What's blocking them:
- Zero schema blocks on their medical malpractice practice area page
- Only 2 meaningful paragraphs of content on that page
- Honeypot security folders (gazebo17, trellis19, balustrade37) that may confuse legitimate AI crawlers
- All PDFs blocked except those containing "media" in the filename — hiding case results and press releases from AI training data
- No llms.txt file
- No FAQ schema anywhere on the site
- No explicit AI bot rules
Corboy & Demetrio is living proof that authority alone is not sufficient for AI visibility. Their prestige is real but technically invisible. This is the future that any authority-rich, technically neglected brand faces.
The Framework: E-E-A-T vs. Machine Readability
There's an ongoing debate in the GEO industry about what drives AI visibility. One school argues that AI platforms prioritize E-E-A-T signals above all else: peer-reviewed awards, record verdicts, institutional leadership, third-party validation. The other school argues that machine readability — clean schema, llms.txt, AI bot rules, extractable content — is the primary driver.
The data from this audit resolves the debate. Both matter, but they operate on different axes.
E-E-A-T determines your maximum potential visibility. Salvi's 82 mentions are driven by a $363 million Sterigenics verdict, a $148 million O'Hare verdict, the ITLA presidency, 20+ Super Lawyers designations in a single year, and consistent placement in the Top 3 Illinois lawyers. No amount of technical optimization alone would produce 82 mentions for a firm without that authority foundation.
Machine readability determines how much of your existing authority actually gets surfaced. Corboy has world-class authority but only 13 mentions because their site is technically opaque to AI. Malman has moderate authority but 31 mentions because their site is optimized for every AI signal that matters.
The practical implication is clear: if you already have genuine authority — real credentials, real results, real expertise — and your AI visibility doesn't reflect it, the problem is almost certainly technical. The content exists. The reputation exists. The wrapper is broken.
Enterprise Validation: The Adobe Precedent
While this audit examined a localized 10-firm sample, the underlying mechanics scale to the highest levels of enterprise technology.
In late 2025, Adobe used its own generative optimization tooling internally on Adobe.com — one of the top 100 most-visited websites globally with 18 billion annual page views. By resolving technical visibility gaps — ensuring AI could access product descriptions, schema, and reviews — Adobe achieved a 5× increase in AI citations for Adobe Firefly within one week, a 200% increase in LLM visibility for Adobe Acrobat, and a 41% increase in LLM-referred traffic to Adobe.com pages.
Adobe's internal finding was striking: 80% of their early-access enterprise customers had critical content visibility gaps preventing AI systems from accessing key information. The content existed. It was technically hidden from the machines.
The micro-data from this legal sector audit mirrors that macro-level enterprise reality precisely. The same class of problems — broken schema, missing structured data, crawler-unfriendly infrastructure — produces the same class of outcomes: genuine authority that AI platforms cannot see.
Common Technical Failures Blocking AI Visibility
Across the 10 firms audited, several technical patterns emerged that consistently correlated with lower visibility:
Broken Schema (Parse Errors)
High-value Q&A content written clearly in HTML but wrapped in broken JSON-LD — invalid control characters, unterminated strings, malformed objects. When an AI parser encounters corrupted structured data, it abandons the extraction entirely. One firm had 20 question-format headings on a practice area page — excellent content that AI could cite — but none of it was wrapped in working FAQ schema. The content exists. The wrapper fails.
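This failure mode is easy to reproduce. The sketch below shows how a single unescaped newline, one of the invalid control characters named above, makes an otherwise valid FAQPage block unparseable, and how escaping it restores the identical content.

```python
import json

# A raw (unescaped) newline inside a JSON string is an invalid control
# character: the same class of error behind the audit's broken blocks.
broken = '{"@type": "FAQPage", "name": "What should I do after a crash?\n"}'
try:
    json.loads(broken)
    parsed = True
except json.JSONDecodeError as exc:
    parsed = False
    print(exc)  # reports an invalid control character

# Escaping the control character makes the identical content parse cleanly.
fixed = broken.replace("\n", "\\n")
assert json.loads(fixed)["@type"] == "FAQPage"
```

Note that a strict parser rejects the whole block, not just the bad field, which is why a one-character defect can hide an entire Q&A set from AI systems.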
Duplicate Schema Across All Pages
Sites deploying identical Organization, LegalService, and Attorney schema on every page — homepage, practice areas, blog posts — without any page-specific markup. The car accident page has the same schema as the wrongful death page, which has the same schema as a blog post about nursing homes. AI systems cannot differentiate the pages. Compare this with Salvi, which serves different schema on different pages: FAQPage with 17 case-specific questions on the car accident page, Review schema on practice pages, VideoObject schema where video content exists.
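For contrast, page-specific FAQPage markup of the kind Salvi serves looks roughly like the sketch below. The question and answer text here are illustrative, not taken from any firm's site.

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What should I do after a car accident in Chicago?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Seek medical attention, document the scene, and contact an attorney before speaking with insurers."
      }
    }
  ]
}
```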
Content Buried Behind JavaScript
When an AI crawler reads a practice area page and attempts to extract the first meaningful paragraph, the first content block it encounters is JavaScript code from a mobile menu toggle function. The actual helpful content only appears after the crawler navigates past multiple script fragments. This directly reduces the page's extractability score for AI citation.
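A rough illustration of the extraction problem: a simplified crawler that skips script and style content, as citation-focused extractors effectively do, finds the helpful paragraph only because it ignores the menu-toggle JavaScript in front of it. This is a sketch, not any platform's actual extractor.

```python
from html.parser import HTMLParser

class FirstParagraphExtractor(HTMLParser):
    """Capture the first meaningful text block, skipping <script>/<style> content."""
    def __init__(self):
        super().__init__()
        self.skip_depth = 0
        self.first_text = None

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        text = data.strip()
        # Require a minimum length so stray whitespace and labels are ignored.
        if text and not self.skip_depth and self.first_text is None and len(text) > 40:
            self.first_text = text

page = """<body>
<script>function toggleMenu(){document.body.classList.toggle('open');}</script>
<p>After a truck crash in Illinois, the first 48 hours determine what evidence survives.</p>
</body>"""
parser = FirstParagraphExtractor()
parser.feed(page)
print(parser.first_text)
```

A page whose markup front-loads script fragments forces every extractor to do this filtering correctly; a page that leads with real content does not gamble on it.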
Missing Author Attribution
Blog posts without Article or BlogPosting schema — no author name, no publish date, no headline in structured data. When AI encounters these pages, it sees anonymous web content rather than authored legal articles. Contrast this with Malman, where every post has "author": "Steven J. Malman" and headline markup. This tells AI: this is an authoritative article written by a named attorney.
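Minimal BlogPosting markup with the attribution the audit describes might look like the sketch below. The headline and date are illustrative; the author name is the one the audit found on Malman's posts.

```json
{
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  "headline": "What to Do After a Slip and Fall in Chicago",
  "author": {
    "@type": "Person",
    "name": "Steven J. Malman"
  },
  "datePublished": "2026-03-01"
}
```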
Entity Identity Confusion
Attorney profile pages where the schema "name" field contains the firm name instead of the person's name. Organization schema where the "name" field contains a keyword-stuffed page title instead of the actual company name. When AI tries to build an entity profile, it gets noise instead of signal.
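The fix is to keep the entities cleanly separated: a Person schema on the profile page with the attorney's actual name, linked to the firm via worksFor. All names in this sketch are hypothetical.

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "jobTitle": "Partner",
  "worksFor": {
    "@type": "LegalService",
    "name": "Example Law Firm"
  }
}
```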
What You Should Do
Based on the correlations in this audit, organizations seeking to improve AI visibility should prioritize these actions roughly in order of impact:
1. Deploy an llms.txt file at the root of your domain with clear content summaries, key pages, and topic overviews. This showed the strongest and most consistent correlation with higher AI mentions (2.1×). It can be implemented in an afternoon. For more on why the AI platforms themselves use llms.txt — even while their chatbots downplay it — see our companion research: OpenAI Crawls llms.txt Every 15 Minutes. ChatGPT Says It Doesn't Matter.
2. Audit and repair schema markup. Validate every JSON-LD block. Deploy page-specific FAQPage schema on high-intent pages with real Q&A content. Add Article or BlogPosting schema with named authors and dates on every content piece. Replace duplicate site-wide blocks with page-appropriate structured data. Eliminate parse errors — the penalty for broken schema (2.5× fewer mentions) is severe.
3. Update robots.txt to explicitly name and allow major AI user agents — GPTBot, ClaudeBot, OAI-SearchBot, ChatGPT-User, PerplexityBot, CCBot. This is a direct signal to AI platforms that your site welcomes AI crawling. In this audit, the single firm that did this (Malman) dramatically outperformed its authority class.
4. Improve content extractability. Lead with concise, question-based copy. Don't bury core content behind JavaScript fragments. When an AI crawler hits your page, the first readable paragraph should be your most valuable one.
5. Build authority signal density. Ensure About pages and practice area pages contain specific, structured authority signals — verdict amounts, awards, rankings, leadership positions, bar memberships. AI platforms weight these signals when deciding which entity to recommend. Salvi's About page contains 9 distinct authority signals. Some bottom-tier firms have one.
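The llms.txt recommendation in step 1 is concrete enough to sketch. Under the commonly used llms.txt convention (an H1 site name, a blockquote summary, then linked sections), a minimal file might look like this; every name and URL below is illustrative.

```markdown
# Example Law Firm

> Chicago personal injury firm. Practice areas: car accidents, truck crashes,
> nursing home negligence. Free consultations; no fee unless we win.

## Practice Areas

- [Car Accidents](https://www.example.com/car-accidents/): What to do after a crash, liability, damages
- [Nursing Home Negligence](https://www.example.com/nursing-home/): Signs of abuse, reporting, claims

## Key Pages

- [Case Results](https://www.example.com/results/): Verdicts and settlements
- [Attorneys](https://www.example.com/attorneys/): Named attorney profiles and credentials
```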
Most of these fixes take days, not months. Some take minutes. The gap between 12 mentions and 31 mentions is not a branding overhaul or a multi-year content strategy. It's a technical cleanup.
Limitations and Future Research
This study examined a focused sample of 10 firms within one practice area and one geographic market. The sample size is intentionally tight to control variables — all 10 firms compete for the same queries in the same market, which allows us to isolate technical factors from geographic or practice-area differences.
While the 2.1× correlation for llms.txt is clear and consistent within this dataset, broader cross-industry and cross-geography studies are needed to establish generalizability and to separate correlation from causation. Ongoing monitoring of AI platforms will help quantify actual traffic lift from technical changes.
We intend to publish follow-up research as more data becomes available.
Conclusion
AI search is not a future channel. It is the present. The firms — and businesses of any kind — that treat their websites as infrastructure for both human visitors and AI systems are pulling ahead, even when starting from a position of moderate authority.
The presence of an llms.txt file — a simple, low-effort file that takes an afternoon to deploy — emerged as the single strongest predictor of AI visibility in this audit. Combined with clean schema, explicit crawler permissions, and structured content, it turns existing authority into actual AI recommendations.
The message from this data is straightforward: optimize the machine layer first. The content and reputation already exist. Technical readiness is what makes them findable in the AI era.