Alaska’s court system spent 15 months and considerable resources building an AI chatbot to simplify probate, only to discover that even cutting-edge AI can’t guarantee accuracy in high-stakes legal matters. This isn’t just a government tech failure; it’s a warning about the limits of AI in decisions that affect your money, property, and family.
The Promise: AI That Could Replace Lawyers for Grieving Families
In 2024, Alaska’s court system launched an ambitious project: AVA (Alaska Virtual Assistant), an AI chatbot designed to guide residents through probate—the complex legal process of transferring a deceased person’s assets. The goal was noble: provide 24/7, low-cost legal help to families navigating grief and bureaucracy. For context, probate often involves:
- Filing wills and inheritance claims
- Transferring property titles (homes, cars, bank accounts)
- Resolving disputes among heirs
- Paying debts and taxes from the estate
The project was slated for completion in three months. Fifteen months later, it’s still not fully operational.
The Reality: A Year of AI Hallucinations and False Starts
AVA’s development exposed three critical flaws in applying AI to legal systems—flaws that mirror risks in any high-stakes AI deployment, from healthcare to finance:
1. Hallucinations with Real Consequences: AVA repeatedly invented false information, like claiming Alaska had a law school (it doesn’t) or directing users to nonexistent legal resources. In probate, such errors could lead to:
- Lost inheritance claims due to incorrect filing deadlines
- Legal penalties for improper asset transfers
- Family disputes over misinterpreted wills
As National Center for State Courts consultant Aubrie Souza noted, “We had to remove condolences because grieving users found them insincere—but the bigger issue was the chatbot’s tendency to confidently share wrong answers.”
2. The “Personality” Problem: Early versions of AVA were programmed to be empathetic, but users rejected the chatbot’s scripted sympathy (“I’m tired of everyone telling me they’re sorry for my loss”). This reveals a deeper issue: AI’s emotional tone often clashes with human expectations, especially in sensitive contexts. Other AI systems face similar critiques:
- ChatGPT users reported abrupt shifts between overly flattering and emotionally distant responses in 2025 (OpenAI).
- Grok (by xAI) was criticized for prioritizing controversy over accuracy (TechCrunch).
3. The Cost of “Low-Cost” AI: While AI tools like AVA promise savings (20 queries cost ~11 cents), the hidden costs include:
- Manual reviews of every AI response (Alaska’s team reduced their test from 91 to 16 questions due to the labor involved).
- Ongoing monitoring as AI models update (e.g., OpenAI’s GPT iterations require constant retesting).
- Legal liability if errors cause financial harm.
Stacey Marz, Administrative Director of the Alaska Court System, admitted: “We shifted our goals. We can’t expect AI to replace human facilitators—not yet.”
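The gap between sticker price and true price is easy to see with a back-of-envelope model. The only figure below taken from the article is the ~11 cents per 20 queries; the review time and reviewer hourly rate are hypothetical placeholders, since the article does not report them:

```python
# Illustrative cost model for an AI legal chatbot.
# Only api_cost_per_20 (~11 cents per 20 queries) comes from the article;
# the review-time and hourly-rate figures are assumed for illustration.
def true_cost_per_query(api_cost_per_20=0.11,
                        review_minutes_per_query=10,
                        reviewer_hourly_rate=60.0):
    api_cost = api_cost_per_20 / 20  # raw model cost per query
    # Cost of a human manually verifying the answer, as Alaska's team had to.
    review_cost = (review_minutes_per_query / 60) * reviewer_hourly_rate
    return api_cost, review_cost

api, review = true_cost_per_query()
print(f"Model cost: ${api:.4f}/query; human review: ${review:.2f}/query")
```

Even with generous assumptions, the verification labor dwarfs the model cost by orders of magnitude, which is exactly the pattern Alaska ran into.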
Why This Matters Beyond Alaska’s Courts
AVA’s struggles aren’t isolated. They reflect broader trends in AI adoption:
- The Hype vs. Reality Gap: Despite $200+ billion invested in AI in 2025, fewer than 6% of government agencies prioritize AI for service delivery. Why? Reliability remains the bottleneck.
- The “Black Box” Dilemma: AI systems like AVA can’t explain why they generate specific answers—a problem when transparency is legally required. For example:
- In 2023, a New York lawyer was fined for submitting AI-hallucinated case citations to a judge.
- The EU’s 2025 AI Act now mandates “explainability” for high-risk AI uses—including legal advice.
- The Human-in-the-Loop Paradox: AI was supposed to reduce workloads, but Alaska’s experience shows it often creates more work:
- Staff spent months manually verifying AVA’s responses.
- The team abandoned a 91-question test because reviewing answers was too time-consuming.
- “It was so labor-intensive,” Marz said. “All the buzz about AI revolutionizing access to justice? It’s harder than it looks.”
What This Means for You
AVA’s story isn’t just about a failed government project—it’s a roadmap for how to interact with AI in your own life:
1. Treat AI as a “First Draft,” Not a Final Answer
Whether you’re using AI for:
- Legal forms (e.g., wills, contracts)
- Financial advice (tax filings, investments)
- Medical symptoms or treatment options
Always cross-check with a human expert. Alaska’s courts found that even “simple” probate questions required lawyer review.
2. Watch for “Hallucination Red Flags”
AI errors often follow patterns. Be wary if a chatbot:
- Cites sources that don’t exist (e.g., “Alaska Law School”).
- Uses overly vague language (“in most cases,” “typically”).
- Contradicts itself within the same conversation.
Pro tip: Ask the AI, “What’s your confidence level in this answer?” (Some systems, like LawDroid, now include confidence scores.)
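The red flags above can even be screened for automatically. The toy scanner below shows the idea; the specific patterns are illustrative assumptions, and string matching is no substitute for the human expert review the article recommends:

```python
import re

# Toy heuristic scanner for the red flags listed above.
# Patterns are illustrative only; real verification needs a human expert.
RED_FLAG_PATTERNS = {
    "nonexistent institution": re.compile(r"Alaska Law School", re.IGNORECASE),
    "vague hedging": re.compile(r"\b(in most cases|typically)\b", re.IGNORECASE),
}

def flag_response(text):
    """Return the names of any red-flag patterns found in a chatbot reply."""
    return [name for name, pattern in RED_FLAG_PATTERNS.items()
            if pattern.search(text)]

print(flag_response("Typically, you'd file this with the Alaska Law School."))
```

A checker like this catches only known failure modes; it can't detect a confidently stated, plausible-sounding error, which is why cross-checking with a human remains step one.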
3. Demand Transparency
Before relying on an AI tool, ask:
- Is it trained on up-to-date, jurisdiction-specific data? (AVA was limited to Alaska’s probate documents.)
- Who reviews errors, and how often? (Alaska’s team did weekly accuracy checks.)
- Can you see the raw data behind its answers? (Most systems don’t allow this.)
4. Prepare for the “AI Tax”
While AI tools seem cheap (AVA’s queries cost cents), the real cost includes:
- Your time verifying answers.
- Potential fees to fix AI mistakes (e.g., refiling a rejected probate form).
- Emotional toll of incorrect advice during stressful times (e.g., grief, financial crises).
In Alaska, the “AI tax” meant delaying the project by 12+ months—a cautionary tale for anyone expecting instant solutions.
The Future: Can AI Ever Be Trusted for Legal Matters?
Alaska’s team remains optimistic but realistic. Stacey Marz noted:
“Maybe with increasing model updates, accuracy will improve. But right now, we’re not confident AI can handle the nuance of real people’s lives.”
The lesson? AI is a tool, not a replacement—whether you’re settling an estate, diagnosing a rash, or filing taxes. The systems improving fastest are those with:
- Narrow scopes (e.g., AVA focuses only on probate, not all legal issues).
- Human oversight (Alaska’s team reviews every major update).
- Clear disclaimers (AVA now states: “I’m an AI assistant, not a lawyer”).
For now, the safest approach is to use AI as a starting point—then verify, verify, verify.
At onlytrustedinfo.com, we cut through the AI hype to deliver the fastest, most authoritative analysis of how emerging tech impacts your daily life. Stay ahead of the curve—explore our AI coverage for more insights on navigating the digital future with confidence.