OpenAI has crossed a threshold that no benchmark or capability demonstration could capture: it now wants access to your medical records. On January 7, 2026, the company announced ChatGPT Health, a dedicated experience within its flagship chatbot that allows users to connect their electronic health records, wellness apps, and wearable data to receive personalized health insights. The announcement came alongside ChatGPT for Healthcare, an enterprise product rolling out to major hospital systems including Boston Children’s Hospital, Cedars-Sinai Medical Center, Memorial Sloan Kettering Cancer Center, and HCA Healthcare. Five days later, OpenAI acquired Torch, a healthcare startup building “unified medical memory,” for a reported $60 million. The company that defined how hundreds of millions of people interact with artificial intelligence is now positioning itself as the hub for how those same people manage their health.
The scale of existing demand validates OpenAI’s ambition. According to the company’s analysis of deidentified conversations, more than 230 million people ask health and wellness questions on ChatGPT every week—one in four of the platform’s 800 million regular users. More than 40 million turn to ChatGPT every day with healthcare questions, representing over 5% of all global messages on the platform. These users are not waiting for permission or FDA clearance; they are already treating an AI chatbot as their first line of medical guidance. ChatGPT Health transforms that organic behavior into a structured product, adding data connectivity and purpose-built privacy protections to conversations that were happening regardless of whether OpenAI built dedicated infrastructure for them.
The healthcare industry represents one of the largest addressable markets for AI technology. According to Grand View Research, the global AI in healthcare market is expected to reach $110.61 billion by 2030, growing at a compound annual growth rate of 38.6% from 2025. The U.S. market alone is projected to expand from $11.57 billion in 2025 to $194.88 billion by 2034. OpenAI is not merely launching a feature—it is staking a claim in a market where the stakes involve not just revenue and market share but the fundamental question of who mediates between individuals and their most personal data.
The implications extend far beyond OpenAI’s business model. ChatGPT Health represents the first major consumer AI product designed explicitly to aggregate sensitive health information outside traditional healthcare regulatory frameworks. It tests assumptions about privacy, trust, and the appropriate role of technology companies in medical decision-making. It positions AI as a potential equalizer in a healthcare system plagued by cost and access barriers—or as a new vector for misinformation and harm. The future of healthcare AI is being written in real time, and OpenAI has appointed itself as chief author.
The architecture of your AI medical companion
ChatGPT Health arrives as a dedicated tab within the existing ChatGPT interface, architecturally separated from regular conversations to prevent health context from bleeding into unrelated interactions. The separation is more than organizational—it reflects OpenAI’s recognition that health data requires distinct handling. Information and memories from ChatGPT Health will not flow outside of that protected space, and those conversations will not be used to train OpenAI’s foundation models. Users can revoke medical record access at any time, with multi-factor authentication available for additional security. These layered protections represent OpenAI’s attempt to build trust infrastructure for a category of data more sensitive than anything the company has previously managed.
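How such separation might work is easy to sketch. The following Python sketch is entirely hypothetical rather than a description of OpenAI's implementation, but it models the properties described above: a per-user health space whose contents carry a no-training flag and whose data connections can be revoked at any time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HealthDataGrant:
    """One connected data source (hypothetical model, not OpenAI's)."""
    source: str                          # e.g. "apple_health", "ehr_via_bwell"
    granted_at: datetime
    revoked_at: datetime | None = None

    @property
    def active(self) -> bool:
        return self.revoked_at is None

@dataclass
class HealthSpace:
    """A walled-off store: health chats stay here and never train models."""
    user_id: str
    exclude_from_training: bool = True   # health conversations are never training data
    grants: list[HealthDataGrant] = field(default_factory=list)
    messages: list[str] = field(default_factory=list)  # separate from general chats

    def revoke(self, source: str) -> None:
        """Revoke a connector so its data stops informing new responses."""
        for grant in self.grants:
            if grant.source == source and grant.active:
                grant.revoked_at = datetime.now(timezone.utc)

space = HealthSpace(user_id="u_123")
space.grants.append(HealthDataGrant("ehr_via_bwell", datetime.now(timezone.utc)))
space.revoke("ehr_via_bwell")
assert not space.grants[0].active and space.exclude_from_training
```

The point of the sketch is the structure, not the code: the health space is a distinct object with its own lifecycle, rather than a flag on ordinary conversation history.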
The feature’s connectivity layer integrates with an impressive roster of health platforms. Users can link Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton to receive tailored responses incorporating their activity levels, nutrition patterns, fitness routines, and biometric data. The medical records integration runs deeper: OpenAI partnered with b.well, a health data connectivity platform, to enable users to share electronic health records from participating healthcare systems. Lab results, medication lists, visit histories, and clinical notes can flow into ChatGPT Health, creating a comprehensive view that most patients have never had access to in a single interface.
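Connectivity platforms like b.well typically move records over HL7 FHIR, the standard API for electronic health data. As a rough illustration of what a lab-results pull looks like in that world, the sketch below queries a FHIR R4 endpoint for laboratory Observations; the base URL, token, and patient ID are placeholders, not documented OpenAI or b.well interfaces.

```python
import requests

FHIR_BASE = "https://fhir.example-connector.com/r4"  # hypothetical endpoint
TOKEN = "user-scoped-oauth-token"                    # issued via a patient consent flow

def fetch_lab_results(patient_id: str) -> list[dict]:
    """Pull laboratory Observations for one patient and flatten the essentials."""
    resp = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory", "_count": 50},
        headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # a FHIR Bundle of Observation resources
    return [
        {
            "test": obs.get("code", {}).get("text"),
            "value": obs.get("valueQuantity", {}).get("value"),
            "unit": obs.get("valueQuantity", {}).get("unit"),
            "date": obs.get("effectiveDateTime"),
        }
        for entry in bundle.get("entry", [])
        if (obs := entry["resource"]).get("resourceType") == "Observation"
    ]
```

Each flattened record (test name, value, unit, date) is exactly the kind of structured context a model can fold into an answer about a user's latest bloodwork.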
The intended use cases span the full spectrum of health engagement. OpenAI positions ChatGPT Health as a tool to help users understand test results, prepare for doctor visits, identify patterns in their health data, and navigate questions related to diet, exercise, and insurance options. Fidji Simo, CEO of Applications at OpenAI, described the product as addressing healthcare’s structural problems: “cost and access barriers, overbooked doctors, and a lack of continuity in care.” Simo shared a personal anecdote about ChatGPT flagging a potentially dangerous antibiotic interaction in her own medical history, demonstrating the kind of value proposition the product aims to deliver.
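The interaction-flagging anecdote is, at its core, a simple algorithm: intersect a patient's medication list with a table of known interacting pairs. The toy sketch below shows the idea with a two-entry stub table; real checkers use curated pharmacology databases, and nothing here reflects how ChatGPT Health actually performs the check.

```python
# Stub interaction table (hypothetical; production systems use curated databases).
INTERACTIONS = {
    frozenset({"ciprofloxacin", "tizanidine"}): "contraindicated combination",
    frozenset({"ciprofloxacin", "warfarin"}): "increased bleeding risk",
}

def flag_interactions(medications: list[str]) -> list[str]:
    """Return a warning for each interacting pair present in the med list."""
    meds = {m.lower() for m in medications}
    warnings = []
    for pair, risk in INTERACTIONS.items():
        if pair <= meds:  # both drugs in the pair are being taken
            drug_a, drug_b = sorted(pair)
            warnings.append(f"{drug_a} + {drug_b}: {risk}")
    return warnings

print(flag_interactions(["Warfarin", "Ciprofloxacin", "Metformin"]))
# ['ciprofloxacin + warfarin: increased bleeding risk']
```

The hard part in practice is not the lookup but the inputs: normalizing brand and generic names, parsing free-text medication histories, and knowing when the record is incomplete.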
The development process reflects a seriousness of intent that distinguishes ChatGPT Health from OpenAI’s more rapid feature releases. The company collaborated with more than 260 physicians across 60 countries and dozens of medical specialties over a two-year period. Those clinicians provided feedback on model outputs more than 600,000 times, shaping how ChatGPT responds to health queries. In May 2025, OpenAI unveiled HealthBench, a benchmark developed with physician input to evaluate AI systems’ performance in real-world clinical scenarios. The Torch acquisition adds another layer of capability: the startup had built infrastructure to unify lab results, medications, and visit recordings into a coherent medical memory, precisely the kind of data architecture ChatGPT Health needs to deliver on its promise.
The enterprise companion product, ChatGPT for Healthcare, takes a different but complementary approach. Designed as a workspace for researchers, clinicians, and administrators, it is powered by GPT-5 models that went through additional doctor-led testing. Early adopters already rolling out the enterprise product include AdventHealth, Baylor Scott & White Health, Boston Children’s Hospital, Cedars-Sinai Medical Center, HCA Healthcare, Memorial Sloan Kettering Cancer Center, Stanford Medicine Children’s Health, and UCSF. For healthcare organizations, the enterprise tier offers patient data controls including data residency options, audit logs, customer-managed encryption keys, and a Business Associate Agreement with OpenAI to support HIPAA-compliant use. Content shared with ChatGPT for Healthcare is not used to train models. This dual-track strategy—consumer health management and enterprise clinical support—positions OpenAI to capture value on both sides of the healthcare relationship.
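The enterprise controls read like a standard healthcare-deployment checklist. The profile below is purely illustrative, with assumed field names rather than OpenAI's actual configuration schema, but it shows how those controls are typically expressed in practice.

```python
# Hypothetical workspace profile; field names are illustrative, not OpenAI's API.
HEALTHCARE_WORKSPACE = {
    "baa_signed": True,                  # Business Associate Agreement with the vendor
    "train_on_content": False,           # shared content excluded from model training
    "data_residency": "us",              # keep data within a chosen region
    "encryption": {
        "customer_managed_keys": True,   # the health system, not the vendor, holds keys
        "kms_key_id": "example-key-id",  # placeholder
    },
    "audit_logging": {
        "enabled": True,                 # who accessed what, and when
        "retention_days": 2190,          # six years, HIPAA's documentation retention floor
    },
}
```

Each entry maps onto a control named above: the BAA, training exclusion, data residency, customer-managed encryption keys, and audit logs.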
By OpenAI’s recent standards, the rollout is a measured one. ChatGPT Health launched initially via waitlist for early users, with full web and iOS availability expected within weeks. The feature is available to users on ChatGPT Free, Go, Plus, and Pro plans outside the European Economic Area, Switzerland, and the United Kingdom—regulatory constraints that reflect Europe’s more restrictive approach to health data and AI. EHR integrations are limited to U.S. markets where b.well has connectivity. The geographic constraints underscore that ChatGPT Health is not a global product launch but a controlled expansion into markets where OpenAI can manage regulatory and operational complexity.
The trillion-dollar race for healthcare’s AI layer
OpenAI’s healthcare push does not occur in a vacuum. The company is entering a competitive landscape where Google, Microsoft, and a growing ecosystem of startups have been building healthcare AI capabilities for years. The difference is that OpenAI has a distribution advantage none of its competitors can match: 800 million people already use ChatGPT regularly, and 230 million of them ask health questions every week. Converting that existing behavior into a structured product creates a pathway to market dominance that pure technology advantages cannot replicate.
The competitive response is already materializing. Google announced a partnership with b.well in October 2025—the same health data connectivity platform OpenAI partnered with—potentially setting the stage for health-specific features in Gemini. Google’s vast data resources from Fitbit and other wearables provide raw material for a healthcare AI offering that could rival or exceed ChatGPT Health’s capabilities. Microsoft, OpenAI’s primary partner and infrastructure provider, has expanded its own healthcare AI offerings through Nuance, the clinical documentation company it acquired for $19.7 billion in 2022. The two companies increasingly find themselves in a complex “co-opetition” as Microsoft builds competing healthcare products on top of OpenAI’s foundation models.
The strategic logic behind OpenAI’s healthcare investment becomes clearer when viewed through the lens of M&A activity and leadership hires. In 2025, OpenAI hired Nate Gross, co-founder of Doximity, to lead healthcare strategy, followed by Ashley Alexander, former co-head of product at Instagram, to run health products. Karan Singhal leads health AI development internally. The Torch acquisition brings domain-specific data expertise, regulatory knowledge, and a consumer-facing product that accelerates ChatGPT Health’s capabilities. According to reporting from The Information, the Torch deal was valued at approximately $100 million in equity, though CNBC reported a lower figure of $60 million. Either way, Torch team members including CEO Ilya Abyzov are joining OpenAI to build out the health product vision.
The healthcare AI market’s growth trajectory explains why every major technology company is fighting for position. AI investment in healthcare exceeded $18 billion in 2025, with AI-focused companies capturing 46% of total healthcare funding. M&A activity hit 195 deals in 2025, a 61% increase over 2024, driven by both growth-stage acquisitions and distressed exits. The market research varies in its specific projections—estimates range from $110 billion by 2030 to $543 billion by 2035—but the projected compound annual growth rates consistently exceed 35%. Healthcare represents perhaps the largest vertical opportunity in AI, combining massive existing spending, clear pain points amenable to automation, and a regulatory environment that, while complex, has not prohibited innovation.
OpenAI’s pharmaceutical partnerships extend the healthcare strategy beyond consumer products. The company has partnered with Eli Lilly and Sanofi on drug discovery and development, with ongoing collaborations with Retro Biosciences, Amgen, Thermo Fisher, and Moderna leveraging AI to accelerate scientific discovery and precision medicine initiatives. In global health, OpenAI partnered with Penda Health, a primary care provider in Nairobi, Kenya, to develop an AI-powered clinical copilot providing real-time recommendations during patient consultations. Consulting partnerships with Boston Consulting Group, Bain, McKinsey, and Accenture help healthcare organizations develop AI strategies and implementation roadmaps. The portfolio approach suggests OpenAI views healthcare not as a single product opportunity but as an entire industry vertical to dominate across consumer, enterprise, pharmaceutical, and clinical segments.
The enterprise healthcare traction provides validation that extends beyond consumer adoption. When institutions like Memorial Sloan Kettering Cancer Center and Stanford Medicine Children’s Health deploy ChatGPT for Healthcare, they signal to the broader market that OpenAI’s technology meets the scrutiny of organizations with existential stakes in getting AI right. These reference accounts matter enormously for enterprise sales cycles where trust and credibility often outweigh technical benchmarks. OpenAI’s $12.7 billion projected revenue for 2025 depends increasingly on enterprise deals; landing elite healthcare customers accelerates negotiations with every other health system evaluating AI partnerships.
When the diagnosis might be a hallucination
For all its promise, ChatGPT Health operates in the shadow of a fundamental limitation: large language models do not know what is true. They predict statistically likely responses to prompts, which means they can generate plausible-sounding medical advice that is completely wrong. The phenomenon, known as hallucination, is not a bug to be fixed but an architectural property of how these systems work. OpenAI’s own disclaimer states explicitly that ChatGPT Health is “not intended for use in the diagnosis or treatment of any health condition.” The question is whether the people behind 230 million weekly health queries will respect that distinction.
Research on AI hallucinations in medical contexts suggests the risks are not hypothetical. A study published in npj Digital Medicine analyzing 12,999 clinician-annotated sentences from AI-generated clinical summaries found a 1.47% hallucination rate and a 3.45% omission rate. A pilot study prompting GPT-4o and Llama-3 to create summaries of detailed medical notes found that GPT-4o had 21 summaries with incorrect information out of 50, while Llama-3 had 19. Studies estimate hallucination rates in clinical decision support systems range from 8% to 20%, depending on model complexity and training data quality. These error rates might be acceptable in low-stakes applications, but medicine operates in a domain where a single wrong answer can cause harm or death.
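Those numbers also carry sampling uncertainty that summary statistics hide. A quick back-of-the-envelope calculation (our arithmetic, not the studies') puts a 95% Wilson confidence interval around the GPT-4o pilot result of 21 flawed summaries out of 50:

```python
import math

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for an observed error proportion."""
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(21, 50)  # GPT-4o: 21 of 50 summaries had incorrect information
print(f"observed 42.0%, 95% CI roughly {lo:.1%} to {hi:.1%}")
# observed 42.0%, 95% CI roughly 29.4% to 55.8%
```

An observed 42% error rate on 50 samples is consistent with a true rate anywhere from about 29% to 56%, which is one reason small pilot studies should be read as directional rather than definitive.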
Dr. Danielle Bitterman, a physician researcher who has studied AI accuracy in medical contexts, found that AI models prioritize being helpful rather than medically accurate. The models “always supply an answer,” even when information proves inaccurate. In one example, models that had been trained on the fact that acetaminophen and Tylenol are the same drug still produced unsafe answers when prompts treated them as different medications. The research highlights a troubling dynamic: the same conversational fluency that makes ChatGPT engaging also makes it overconfident. Users receive responses that sound authoritative regardless of whether the underlying information is correct.
Data quality compounds the hallucination problem. Dr. Bitterman notes that incomplete medical records increase hallucination risks—and the fragmented U.S. healthcare system means most records lack completeness. A patient who has seen multiple specialists across different health systems may have records scattered across incompatible EHR platforms, none of which capture the full picture. When ChatGPT Health receives partial data, it fills gaps with probabilistic inference rather than acknowledging uncertainty. The result is that users with the most complex medical histories—precisely those who would benefit most from an AI health companion—face the highest risks of receiving misleading guidance.
The privacy implications add another layer of concern. Unlike healthcare providers, OpenAI is not covered by HIPAA. The Health Insurance Portability and Accountability Act establishes protections for health information held by covered entities—hospitals, doctors, insurers—but technology companies operating consumer products fall outside its scope. As Bradley Malin of Vanderbilt University explained to TIME, the arrangement between users and OpenAI is “a contractual agreement between the individual and OpenAI.” Users have no specific HIPAA rights if a data breach occurs. Potential recourse exists through FTC intervention or direct lawsuits, but protections remain unclear. Dr. Bitterman advises users to “assume that any information you upload into these tools…will no longer be private.”
The accountability question remains unresolved. When an LLM suggests a flawed diagnosis that a physician follows, who bears responsibility? When a patient acts on ChatGPT Health’s advice without consulting a doctor, and that advice proves harmful, what recourse exists? Professional guidelines and regulatory frameworks are beginning to adapt, but the process remains in its infancy. The current FDA draft guidance on AI-enabled medical devices does not explicitly reference the term “hallucination,” though it acknowledges that erroneous AI outputs can compromise device reliability and user trust. ChatGPT Health occupies a gray zone—neither a regulated medical device nor a mere search engine—that existing frameworks were not designed to govern.
Physician perspectives on AI health tools reflect cautious engagement rather than wholesale rejection. Dr. Robert Wachter, a prominent voice in healthcare technology, expresses qualified trust based on OpenAI’s brand interest in security, yet recommends withholding sensitive information like prior drug use from uploads. Dr. Adam Rodman emphasizes that users should avoid having AI “make autonomous medical decisions” while supporting supplementary health understanding. The consensus among clinicians who have studied these tools is that ChatGPT Health can be valuable as a complement to professional care but dangerous as a substitute for it. Whether 230 million weekly users will maintain that distinction is an open question.
Navigating the new terrain of AI-mediated healthcare
The launch of ChatGPT Health marks an inflection point that healthcare executives, clinicians, regulators, and patients cannot ignore. OpenAI has demonstrated that the market for AI health guidance is not speculative—it already exists at a scale of hundreds of millions of weekly queries. The question is not whether AI will play a role in healthcare but who will control that role and under what safeguards.
For healthcare organizations evaluating AI partnerships, the decision matrix has grown more complex. OpenAI’s enterprise offering provides HIPAA-compliant infrastructure with a Business Associate Agreement, but adopting ChatGPT for Healthcare means accepting OpenAI as a strategic dependency in clinical workflows. The reference accounts—Memorial Sloan Kettering, Stanford Children’s, HCA Healthcare—suggest the technology is mature enough for elite institutions, but each organization must evaluate whether the benefits of GPT-5-powered clinical support outweigh the risks of vendor lock-in and the unknown trajectory of OpenAI’s business model. Competitors including Google, Microsoft, and specialized healthcare AI startups offer alternatives that may provide better fit for specific use cases or risk profiles.
Regulatory frameworks will need to evolve rapidly to address the novel challenges ChatGPT Health presents. The current patchwork of federal and state laws was designed for a healthcare system where data flowed through covered entities bound by HIPAA. Consumer AI health products operating outside that framework require new thinking about consent, data protection, liability, and disclosure. New York’s recent AI transparency bills represent early steps, but comprehensive federal legislation remains absent. The No Fakes Act championed by SAG-AFTRA addresses synthetic media but not the health-specific concerns that ChatGPT Health raises. Healthcare regulators, patient advocates, and policymakers face a choice: allow consumer AI health products to proliferate in a regulatory vacuum or develop frameworks that protect patients while enabling innovation.
The competitive dynamics of the AI healthcare market will intensify throughout 2026 and beyond. Anthropic, Google, and Microsoft will likely pursue their own healthcare acquisitions to match OpenAI’s capabilities, potentially driving up valuations for the remaining healthcare AI startups. Google’s Fitbit data assets and Microsoft’s Nuance capabilities provide raw materials for competing offerings that could challenge ChatGPT Health’s early mover advantage. Chinese AI companies including ByteDance and Tencent may launch localized health products for their massive user bases. The market for AI health guidance is global, and OpenAI’s geographic restrictions—excluding the EEA, Switzerland, and the UK—create opportunities for competitors to establish positions in markets OpenAI cannot currently serve.
For individual users, the calculus is personal and consequential. ChatGPT Health offers genuine value: the ability to understand lab results, track health patterns, prepare for appointments, and receive guidance that was previously available only to those with medical training or expensive concierge care. The 40 million people asking ChatGPT health questions daily are demonstrating demand for accessible health information that the traditional healthcare system has failed to provide. But the technology’s limitations are real. Users must approach ChatGPT Health as a supplement to professional care, not a substitute. Sensitive information shared with the platform may not remain private. Advice that sounds authoritative may be statistically plausible but medically wrong. The burden of critical evaluation falls on users who may lack the expertise to recognize when AI guidance diverges from reality.
Healthcare operators considering ChatGPT Health integration should establish clear guardrails. Clinicians interfacing with patients who arrive with AI-generated health interpretations need frameworks for validating or correcting that information without dismissing patient engagement. Health systems should develop policies on whether and how to incorporate consumer AI health products into care workflows. IT and security teams must evaluate the data flows and privacy implications of patients connecting their EHR data to third-party AI platforms. Legal and compliance functions need to assess liability exposure when AI-mediated health decisions lead to adverse outcomes. The operational complexity of AI healthcare extends far beyond deploying a chatbot.
The future OpenAI is building—where an AI system holds a unified view of your medical records, wellness data, fitness patterns, and nutrition habits—represents either the democratization of health information or its concentration in the hands of a single technology company. Perhaps both simultaneously. ChatGPT Health could empower millions of people to take greater control of their health, understanding conditions and navigating systems that have historically excluded them. It could also create new dependencies on a company whose priorities, business model, and long-term trajectory remain uncertain. Sam Altman himself has stated that he would not trust AI with his “medical fate” without involving real doctors. The 230 million people asking health questions every week are making their own calculations about how much trust to extend. OpenAI has built the product. The judgment calls belong to everyone who uses it.