7 AI initiatives that failed and mistakes to avoid (case studies)

Recent data shows that nearly 95% of AI initiatives fail to reach full-scale production or show a positive return on investment (ROI). AI is a multiplier, not a miracle worker. Mistakes happen in any initiative, but with AI, mistakes multiply if left uncorrected. Here are seven examples of AI initiatives that failed, along with the mistakes to avoid.

1. Zillow: “Data drift” results in an $881 million mistake

Zillow launched “Zillow Offers,” using AI to buy and flip homes. The problem? The model was trained on historical data that didn’t account for the rapid, unpredictable market shifts of the post-pandemic era. The AI couldn’t sense real-world friction: it kept buying houses at high prices even as the market cooled. The algorithmic “iBuying” model failed to predict market volatility, leading to an $881 million write-down, a 25% staff layoff, and the closure of the division.

2. IBM Watson Health: Bad diagnosis data leads to a $4 billion loss

IBM touted Watson as the “future of oncology,” promising it could diagnose cancer better than doctors. However, Watson struggled to “read” messy, unstructured clinical notes because it was trained on hypothetical patients rather than real-world longitudinal data. Doctors found it clunky and redundant, and after billions in investment, IBM failed to revolutionize oncology, selling the assets to Francisco Partners at a “fire sale” price and a $4 billion loss.

3. Amazon: “Learned bias” halts recruitment

Amazon built an AI tool to rank job applicants. Because the tech industry had been male-dominated for decades, the AI “learned” that being male was a success factor. This wasn’t a hallucination so much as learned bias: the model concluded that female candidates were less relevant. The system began penalizing resumes that included the word “women’s” (e.g., “women’s chess club captain”). Amazon couldn’t guarantee the AI wouldn’t discriminate, so it scrapped the project before it could deliver any ROI.

4. MD Anderson Cancer Center: “Scope Creep” costs $62 million

MD Anderson Cancer Center partnered with IBM Watson. Initially, the project had a $5 million budget for leukemia research. The scope expanded seven times to cover other diseases without proper IT oversight. After spending $62 million, an audit found the system hadn’t treated a single patient, and the project was shuttered due to administrative and financial mismanagement. The AI itself could have supported clinical research; it was humans who bypassed IT governance and let the scope creep.

5. Hertz: Outsourcing decision leads to $32 million lawsuit

Hertz hired Accenture to transform its online business, with AI-driven booking features as a key component. The project was plagued by delays and technical debt. Hertz treated AI as a “plugin” it could buy rather than an architectural shift. It spent millions on a system that never went live, eventually suing Accenture for $32 million in a messy legal battle.

6. Google: Lack of transparency loses trust

Google Duplex was designed to make phone calls (like booking hair appointments) using a hyper-realistic AI voice. Originally marketed as a fully autonomous AI caller, it drew backlash over “deceptive” AI, and the need for mandatory disclosures slowed the rollout. Following the ethical backlash and technical limitations, Google had to invest heavily in “human-in-the-loop” verification and transparency features. ROI was negative.

7. Apple: Siri AI customer experience is too complex

Apple invested heavily in “Shortcuts” to make Siri more “intelligent” through user-defined workflows, but average users found the interface too complex to set up. AI that requires users to be “prompt engineers” or “workflow architects” rarely sees mass adoption; the complexity led to low engagement and wasted development resources. Apple is now reinvesting heavily in “Apple Intelligence” to simplify these workflows and restore the feature’s utility.

These big-brand AI failures share common root causes: poor data, project mismanagement, a lack of ethical transparency, bad customer experience, and inadequate piloting and testing. The same pitfalls threaten any business using AI. Get an assessment of your AI initiative to avoid these traps before you begin.

