Case Study 2: GE's Predix — The AI Strategy That Failed
The Ambition
In 2015, Jeffrey Immelt, CEO of General Electric, made a bold declaration: GE would become a "top 10 software company" by 2020. The vehicle for this transformation was Predix, an industrial Internet of Things (IoT) platform that would use AI, machine learning, and big data analytics to optimize the performance of industrial equipment -- jet engines, gas turbines, locomotives, medical imaging devices, and oil drilling equipment.
The vision was seductive. GE had installed over $1 trillion worth of industrial equipment worldwide. That equipment generated enormous volumes of sensor data -- temperature, pressure, vibration, performance metrics -- that was largely unused. If GE could collect that data, analyze it with AI, and offer predictive insights to its customers ("your turbine will need maintenance in 14 days"), it could create a new, high-margin, recurring-revenue business on top of its existing hardware empire.
Immelt described this as "the Industrial Internet" -- a term GE coined -- and Predix as the "operating system for the Industrial Internet." The company invested approximately $4 billion in GE Digital, the business unit that housed Predix and related software initiatives. It hired thousands of software engineers and data scientists. It established a software development center in San Ramon, California -- far from GE's industrial heartland in Schenectady and Cincinnati -- and recruited talent from Silicon Valley.
The bet was enormous, the rhetoric was grandiose, and the failure was spectacular. By 2018, GE Digital was hemorrhaging money, Predix had gained minimal market traction, Immelt had been replaced as CEO, and GE's stock price had lost over 50 percent of its value. GE's AI strategy became one of the most studied corporate failures of the decade.
What went wrong?
The Strategic Errors
Error 1: Technology-Driven Strategy, Not Problem-Driven Strategy
GE's Predix strategy started with the technology -- an IoT platform powered by AI and analytics -- and then searched for problems to solve. This is the technology-driven strategy pitfall described in Section 31.11, and GE committed it on a massive scale.
The core strategic question should have been: "What are the most valuable problems our industrial customers face, and can AI solve them?" Instead, the question was: "We have access to enormous amounts of industrial data. What can we build with it?"
The distinction matters enormously. Customer problems are specific: "My turbine's unplanned downtime costs me $200,000 per incident, and I have 12 incidents per year." Technology capabilities are general: "We can analyze sensor data at scale." The gap between a general capability and a specific, valuable solution is where most platform strategies die.
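The arithmetic behind that specificity is worth making explicit. A minimal sketch, using only the illustrative figures from the paragraph above (not real customer data, and with an assumed effectiveness rate added purely for illustration):

```python
# Illustrative downtime economics, using the hypothetical figures above.
cost_per_incident = 200_000   # USD per unplanned-downtime incident
incidents_per_year = 12

annual_downtime_cost = cost_per_incident * incidents_per_year  # $2.4M/year

# Assume (for illustration only) a predictive-maintenance product prevents
# half of these incidents. That puts a ceiling on what the customer would
# rationally pay -- a specific, priceable value proposition that the general
# capability "we can analyze sensor data at scale" is not.
prevented_fraction = 0.5
value_ceiling = annual_downtime_cost * prevented_fraction  # $1.2M/year

print(f"Annual downtime cost: ${annual_downtime_cost:,}")
print(f"Value ceiling of a 50%-effective solution: ${value_ceiling:,.0f}")
```

The point of the exercise is not the numbers, which are hypothetical, but the form of the answer: a specific problem yields a dollar figure a customer can compare against a price, while a general capability yields nothing to compare.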
GE's customers -- airlines, utilities, oil companies, hospitals -- did not wake up each morning wishing for an industrial IoT platform. They had specific operational problems: reducing unplanned downtime, optimizing fuel consumption, extending equipment life, improving maintenance scheduling. Some of these problems could be solved with AI. Many could not, or could be solved more cheaply with simpler approaches (better maintenance procedures, more experienced technicians, upgraded components).
By building a horizontal platform rather than solving specific problems, GE created a solution in search of customers -- the strategic equivalent of Tom Kowalski's pricing engine from Chapter 6, but at a $4 billion scale.
Connection to Chapter 31. GE's Predix strategy is the canonical example of the technology-driven strategy pitfall. The company invested billions in building AI capabilities before confirming that its target customers valued those capabilities enough to pay for them. Compare this to Ping An's approach (Case Study 1), where every AI investment was tied to a specific competitive or customer problem.
Error 2: The Moonshot Trap
GE did not attempt a measured, portfolio-based approach to industrial AI. It attempted a single, massive bet: build the dominant horizontal platform for the entire Industrial Internet. This is the AI Moonshot trap described in Section 31.11.
The Predix platform was designed to be everything to every industrial company: a data collection layer, a storage and processing layer, an analytics and AI layer, an application development layer, and a marketplace for third-party industrial applications. It would support jet engines and gas turbines and locomotives and medical devices and oil rigs. The scope was staggering.
This breadth created three fatal problems:
Engineering complexity. Building a horizontal platform that supports dozens of industrial verticals, each with different data formats, safety requirements, regulatory constraints, and customer expectations, is orders of magnitude harder than building a vertical solution for a single industry. GE's software teams were overwhelmed by the engineering challenge.
Customer confusion. Potential customers could not understand what Predix did. Was it a predictive maintenance tool? A data visualization platform? An application development environment? A marketplace? The answer was "all of the above," which is the same as "nothing in particular." Customers who needed a predictive maintenance solution for gas turbines did not want to learn a general-purpose platform -- they wanted a turnkey product that solved their specific problem.
Resource dilution. By pursuing every industrial vertical simultaneously, GE spread its resources thinly across too many fronts. No single vertical received the focused investment needed to develop a truly compelling, market-leading product. The result was a platform that was mediocre at many things and excellent at none.
A portfolio approach -- starting with one or two verticals where GE had the deepest customer relationships and the strongest data position, building focused solutions, proving value, and then expanding -- would have been dramatically lower risk. But portfolio discipline is not as exciting as platform ambition, and GE's leadership chose ambition over discipline.
Error 3: Organizational Misalignment
GE's decision to locate GE Digital in San Ramon, California, reflected a belief that building a software company required Silicon Valley culture, talent, and practices. The choice was understandable but created a profound organizational rift.
Cultural disconnect. GE's industrial businesses operated with long sales cycles (months to years), rigorous safety standards, and deep customer relationships built on trust and reliability. San Ramon's software culture emphasized rapid iteration, "move fast and break things," and venture-capital-style metrics (user growth, platform adoption). These cultures clashed violently. Industrial customers did not want their jet engine analytics platform to "move fast and break things."
Talent mismatch. The software engineers hired in San Ramon had experience building consumer and enterprise software, not industrial AI. They understood web applications and cloud infrastructure but not the physics of gas turbines, the regulatory requirements of aviation, or the operational realities of oil drilling. The domain expertise that makes industrial AI valuable -- the ability to translate sensor data into actionable operational insights -- was in Cincinnati and Schenectady, not San Ramon.
Sales channel conflict. GE's existing sales force sold hardware: turbines, engines, medical imaging equipment. They were not equipped to sell software subscriptions. GE Digital created its own sales team, but this team lacked the deep customer relationships that GE's industrial sales force had built over decades. The two sales channels competed rather than cooperating, confusing customers and creating internal friction.
Leadership churn. GE Digital went through multiple leadership changes in its first few years. Each new leader brought a different vision, different priorities, and different organizational restructuring. The instability made it impossible to execute a coherent strategy.
Connection to Chapter 31. GE's organizational design for AI violated the principle of business alignment. By isolating its AI capability in a separate business unit with its own culture, talent pool, and sales channel, GE ensured that its AI efforts would be disconnected from the industrial businesses they were supposed to serve. The hub-and-spoke model -- with AI talent embedded in industrial business units but connected to a central platform team -- would have been far more effective.
Error 4: Skipping the Capability Ladder
Ping An's success was built on sequential capability building: optimize, then differentiate, then innovate, then transform. GE attempted to skip directly to transformation -- building a platform business -- without first demonstrating that AI could optimize its existing industrial operations.
Before building Predix, GE could have:
- Deployed predictive maintenance for its own equipment. GE operated and serviced enormous fleets of its own equipment under long-term service agreements. Using AI to reduce unplanned downtime for GE-serviced equipment would have generated direct cost savings, proven the technology, and built case studies for future customers.
- Built vertical solutions for specific customer segments. Rather than a horizontal platform, GE could have built a focused predictive maintenance product for airline customers, leveraging its deep relationships and domain expertise in aviation. A successful vertical product could have been expanded to other industries over time.
- Invested in data infrastructure. Much of GE's installed equipment lacked the sensors and connectivity needed to generate the data that AI requires. Upgrading that equipment to generate data -- the equivalent of building data readiness before building models -- should have preceded the platform play.
By skipping these foundational steps, GE built a platform on top of capabilities that did not yet exist. The AI models lacked the training data they needed (because many machines were not instrumented). The customers lacked the data infrastructure to benefit from the platform. And GE's own organization lacked the experience to know which AI applications created real value and which were technically interesting but commercially irrelevant.
Error 5: Ignoring Competitive Reality
GE's Predix strategy assumed that GE's position as the world's largest industrial equipment manufacturer gave it a natural advantage in industrial AI. This assumption proved wrong for several reasons.
Cloud platforms were better platforms. Amazon Web Services, Microsoft Azure, and Google Cloud Platform were spending tens of billions of dollars building world-class cloud and AI platforms. GE could not match their investment in platform infrastructure. Customers who wanted to build industrial AI applications increasingly preferred to use AWS or Azure -- which were proven, well-documented, and supported by large developer ecosystems -- rather than Predix, which was unproven, poorly documented, and supported by a small community.
Startups were more focused. Specialized startups like Uptake, C3.ai, and SparkCognition built focused industrial AI products for specific use cases -- predictive maintenance for specific equipment types, energy optimization for specific industries. These startups moved faster than GE, charged less, and offered products that were easier to deploy.
Customers distrusted vendor lock-in. Airlines, utilities, and oil companies operated equipment from multiple manufacturers -- not just GE. A GE-built platform created uncomfortable lock-in: would an airline such as United, which operated both GE and Rolls-Royce engines, want its analytics platform controlled by one of its engine suppliers? The answer was generally no. Customers preferred vendor-neutral platforms that could analyze equipment from multiple manufacturers.
GE failed to conduct the competitive analysis that a sound AI strategy requires. It assumed that domain expertise in industrial equipment would translate into competitive advantage in industrial software. It did not. The competencies required to build great hardware (metallurgy, aerodynamics, precision manufacturing) are different from the competencies required to build great software (user experience, platform architecture, developer ecosystem management, rapid iteration).
The Financial Reckoning
The financial consequences of GE's Predix failure were severe:
- $4 billion invested in GE Digital between 2015 and 2018, with minimal revenue return
- GE Digital revenue peaked at approximately $1.2 billion in 2017 -- well short of the $15 billion target Immelt had projected for 2020
- Massive write-downs as GE acknowledged the impairment of goodwill and assets related to its digital investments
- CEO replacement. Immelt retired in 2017 and was replaced by John Flannery, who was himself replaced by Larry Culp in 2018. Culp dramatically scaled back GE Digital and sold parts of it.
- Stock price decline. GE's stock price fell from approximately $30 in early 2016 to under $7 in late 2018, destroying over $200 billion in market capitalization. While GE's decline had many causes beyond Predix (including problems in its power and financial services businesses), the failed digital strategy was a significant contributor.
By 2019, GE Digital had been restructured into a much smaller, more focused entity. Predix was repositioned as a set of specific industrial applications rather than a horizontal platform. The "top 10 software company" aspiration was quietly abandoned.
Lessons for AI Strategy
Lesson 1: Start with the Problem, Not the Platform
GE built a platform and hoped customers would find problems to solve with it. The inverse approach -- identify specific, high-value customer problems, solve them with focused AI applications, and gradually build platform capabilities -- is dramatically lower risk and more likely to succeed.
Application. Before investing in AI infrastructure, catalog the specific problems your customers or operations face. Rank them by value and solvability. Build solutions for the top three. Let the platform emerge from the solutions, not vice versa.
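The triage described above can be sketched mechanically. In this minimal illustration, the problems, dollar values, and solvability scores are hypothetical placeholders (not GE data), and "expected value" is simply value times solvability -- a deliberately crude proxy that is still enough to demote a moonshot:

```python
# Hypothetical problem catalog: (problem, annual value if solved in USD,
# solvability estimate from 0 to 1). All figures are illustrative.
problems = [
    ("turbine unplanned downtime", 2_400_000, 0.7),
    ("fuel-burn optimization",     1_500_000, 0.5),
    ("maintenance scheduling",       800_000, 0.9),
    ("horizontal IoT platform",   50_000_000, 0.01),  # moonshot: huge value, low odds
]

# Rank by expected value = value x solvability, not by raw ambition.
ranked = sorted(problems, key=lambda p: p[1] * p[2], reverse=True)

# "Build solutions for the top three."
for name, value, solvability in ranked[:3]:
    print(f"{name}: expected value ${value * solvability:,.0f}")
```

Under these assumed scores, the three focused problems outrank the platform, which is the section's point: a platform's headline value means little until it is multiplied by an honest estimate of the odds of delivering it.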
Lesson 2: Beware the Moonshot
GE's Predix was a $4 billion moonshot that bypassed the sequential capability building that successful AI transformations require. A portfolio approach -- multiple smaller bets across different time horizons, with rigorous portfolio governance -- would have limited the downside while preserving the upside.
Application. Resist the urge to make a single, bet-the-company AI investment. Build a portfolio. Start with Horizon 1 projects that generate quick wins and build capability. Graduate to Horizon 2 and 3 as capabilities mature.
Lesson 3: Organizational Design Matters as Much as Technology Design
GE's decision to isolate its AI capability in a separate business unit, with a different culture, different talent, and different sales channel, ensured disconnection from the industrial businesses it was supposed to serve. AI capability must be tightly coupled with business context.
Application. Embed AI talent in business units, or use a hub-and-spoke model that maintains both technical consistency and business alignment. Never isolate AI capability in a separate organization unless you are genuinely building a standalone business.
Lesson 4: Competitive Analysis Is Not Optional
GE assumed its domain expertise gave it a competitive advantage in industrial software. It did not assess the competitive strength of cloud platforms (AWS, Azure), focused startups (Uptake, C3.ai), or the customer preference for vendor-neutral solutions. This blind spot was fatal.
Application. Conduct rigorous competitive analysis before committing to an AI strategy. Ask: Who else is solving this problem? What are their advantages? Why will customers choose us? If the answers are not compelling, revise the strategy before investing.
Lesson 5: Honest Assessment Beats Ambitious Narrative
GE's internal and external narrative about Predix was consistently more optimistic than reality. Immelt's public declarations ("top 10 software company by 2020") created expectations that the organization could not meet, generating both internal pressure and external skepticism.
Application. Communicate AI strategy with disciplined specificity, not grandiose ambition. Set expectations that can be met and exceeded. The antidote to hype is not pessimism -- it is honesty about what has been achieved, what remains uncertain, and what the realistic timeline looks like.
The Contrast with Ping An
Comparing GE and Ping An illuminates the principles of effective AI strategy:
| Dimension | GE (Predix) | Ping An |
|---|---|---|
| Starting point | Technology ("We have data; let's build a platform") | Competitive threat ("How do we survive platform disruption?") |
| Scope | Horizontal platform for all industrial verticals | Sequential: optimize core, then differentiate, then innovate |
| Organizational model | Isolated business unit (San Ramon) | AI integrated into business units with central coordination |
| Capability building | Skipped foundational phases; went directly to platform | Built capabilities sequentially over a decade |
| Investment discipline | $4B concentrated bet on a single platform | $15B invested sequentially, with each phase funded by the previous phase's returns |
| Competitive analysis | Assumed domain expertise = competitive advantage | Continuously assessed and responded to competitive threats |
| Outcome | Failure; billions in write-downs | Transformation; new businesses worth tens of billions |
Both companies invested billions in AI. The difference was not the size of the investment -- Ping An invested far more. The difference was the strategic discipline with which the investment was made.
Discussion Questions
- Technology-driven vs. problem-driven. Could GE have succeeded with Predix if it had started by solving specific problems for specific industries, rather than building a horizontal platform? Design an alternative AI strategy for GE's industrial business that follows the problem-driven approach.
- Organizational design. GE located its software organization in San Ramon, far from its industrial operations. Was this decision inherently flawed, or could it have worked with different leadership and integration? What organizational model would you recommend for a company trying to build AI capability in an industrial context?
- CEO role. Immelt's public commitment to the "top 10 software company" narrative created enormous internal and external pressure. How should a CEO communicate an ambitious AI strategy without creating expectations that constrain strategic flexibility?
- Portfolio vs. moonshot. Some argue that GE's mistake was not the Predix concept but the execution -- that a more competent team and better technology could have made it work. Others argue that the strategy itself was flawed. Which position do you find more persuasive, and why?
- Fast-follower opportunity. After GE's failure, companies like Siemens, Honeywell, and several cloud providers built industrial AI solutions that gained traction. What did they learn from GE's failure? Did GE's investment in "customer education" about industrial AI actually benefit its competitors as a fast-follower advantage?
- Parallels to Athena. In what ways does Athena Retail Group's AI strategy (as presented to the board in Section 31.13) avoid the mistakes GE made? Are there any elements of Athena's strategy that carry similar risks?
GE's Predix failure is not a story about AI being overrated. The value of industrial AI has been validated by companies that executed more disciplined strategies. It is a story about what happens when a company confuses technology ambition with strategic clarity -- when it builds a platform before it solves a problem, when it invests billions before it validates demand, and when it organizes for Silicon Valley culture rather than industrial reality. The lesson is not "don't be ambitious." The lesson is "be ambitious about the right things, in the right sequence, with the right organizational design."