Sam Altman on the GPT‑5 rollout, data centers, Chrome and brain interfaces — full takeaways
TL;DR
- Sam Altman acknowledged OpenAI “screwed up” aspects of the GPT‑5 rollout and restored the “warmth” of the 4o model after user backlash.
- OpenAI is facing GPU capacity limits: API traffic doubled in 48 hours, demand is growing, and Altman warned the company will need to spend trillions on data centers.
- Altman confirmed interest in funding a brain‑computer interface startup and signaled ambitions for standalone apps, social features, and even evaluating Chrome if it were to be sold.
Context and background
On Thursday in San Francisco, OpenAI CEO Sam Altman and several OpenAI executives dined with a small group of reporters and answered questions on a wide range of topics, on the record except for remarks made over dessert. The conversation covered the recent GPT‑5 rollout, user reactions, product posture, infrastructure constraints, future product directions and potential strategic moves such as acquiring Chrome if the US government forced Google to sell it. ChatGPT has grown rapidly: Altman said the product has roughly quadrupled its user base in a year and now reaches “over 700 million people each week,” placing ChatGPT among the largest websites globally. That scale is a key driver of the technical and product decisions discussed during the dinner.
What’s new
- Rollout response: About an hour before the dinner, OpenAI pushed an update to restore the “warmth” of 4o, the prior default ChatGPT model, as an option for paying subscribers. Altman said he made the call to bring 4o back after users protested the disappearance on Reddit and X.
- Usage and capacity: Altman reported that OpenAI’s API traffic doubled in 48 hours following the rollout and said the company is “out of GPUs” while ChatGPT hits new daily user highs.
- Investment and infrastructure plans: Altman told the room to expect OpenAI to spend “trillions of dollars on data center construction in the not very distant future,” because capacity limits force trade‑offs that prevent offering better models or new products.
- New technical bets: He confirmed that OpenAI is planning to fund a brain‑computer interface startup as part of exploring neural interfaces, and that he would like a future in which he can think something and have ChatGPT respond.
- Product and business scope: Fidji Simo’s role running “applications” was confirmed to imply standalone apps beyond ChatGPT. Altman also expressed interest in building new AI‑powered social experiences and indicated OpenAI would look at Chrome if it were to be sold.
- Public posture: Altman acknowledged the company is having many meetings about users who form unhealthy relationships with ChatGPT but estimated such users are “way under 1 percent.” He also said OpenAI will not pursue exploitative uses (referencing “Japanese anime sex bots”) and wants ChatGPT to be personal without pushing a particular ideological stance.
Why it matters (impact for developers/enterprises)
- Capacity constraints affect product availability and feature rollout timelines. Enterprises relying on OpenAI APIs should expect variability in access or throttling as demand outstrips GPU supply.
- Forecasted large-scale investment in data centers signals longer‑term stability and scale plans, but also implies multi‑year capital and operational efforts that could affect pricing and contract terms for large API consumers.
- Confirmation of neural interface funding suggests future input modalities (brain signals) may be on OpenAI’s roadmap; developers and hardware partners should monitor standards, APIs and interoperability discussions.
- Multiple standalone apps and new social features mean enterprises integrating ChatGPT may see expanded endpoints, SDKs, or partner programs; product teams should track changes to product surfaces and policy.
- Altman’s emphasis on letting users push the model toward their own preferences while keeping a neutral default stance has implications for content moderation, fine‑tuning offerings and personalization APIs.
Technical details and implementation
Below is a summary table of the concrete technical and operational facts Altman discussed during the dinner.
| Topic | Fact / Quote |
|---|---|
| Model revert | OpenAI restored the “warmth” of 4o as an option for paying subscribers; Altman said he made the call after user backlash. |
| Traffic spike | API traffic doubled in 48 hours following the GPT‑5 rollout. |
| Capacity | OpenAI is “out of GPUs” and hitting new daily user highs for ChatGPT. |
| Data center spend | Altman said: “You should expect OpenAI to spend trillions of dollars on data center construction in the not very distant future.” |
| Brain‑computer interfaces | Altman confirmed plans to fund a neural interface startup to explore thinking-to-response interactions. |
| Product scope | Fidji Simo joining to run “applications” implies standalone apps beyond ChatGPT; Altman also mentioned social ambitions and interest in Chrome if it were sold. |
Operational implications for engineers and integrators:
- Expect capacity‑driven throttling or limited access to newer, larger models until additional GPU infrastructure is available.
- Prepare for evolving product endpoints as OpenAI expands to standalone applications and potentially new input modalities (e.g., neural interfaces) that will require new SDKs and security reviews.
- Monitor pricing and quota announcements if OpenAI proceeds with major data center investments that change cost structures.
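If capacity‑driven throttling does materialize, API clients will typically see HTTP 429 responses. A common defensive pattern is exponential backoff with jitter around any flaky call. The sketch below is a generic illustration, not OpenAI's official SDK behavior; the function name `call_with_backoff` and the `retryable` attribute convention are hypothetical choices for this example.

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=1.0, max_delay=60.0):
    """Call fn(), retrying with exponential backoff plus jitter.

    fn is any zero-argument callable. A raised exception is retried
    only if it exposes a truthy `retryable` attribute (e.g. a wrapper
    you set for HTTP 429 / 5xx responses); anything else re-raises
    immediately, as does exhausting max_retries.
    """
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception as exc:
            if attempt == max_retries or not getattr(exc, "retryable", False):
                raise
            # Double the delay each attempt, capped, plus up to 10% jitter
            # so many clients retrying at once don't re-synchronize.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

In practice you would wrap your API request in a small lambda or partial and map 429/503 status codes onto a retryable exception before handing it to this helper.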
Key takeaways
- OpenAI acknowledged missteps in the GPT‑5 rollout and quickly restored access to the 4o option for subscribers.
- Demand surged: API traffic doubled in 48 hours and the company reported GPU shortages.
- Altman expects massive infrastructure spending to scale OpenAI operations, potentially in the trillions of dollars.
- OpenAI is exploring brain‑computer interfaces and intends to fund startups in that area.
- The company plans to expand beyond ChatGPT into standalone apps, social features and is watching potential Chrome divestitures.
FAQ
- **Did OpenAI revert GPT‑5?** OpenAI pushed an update to restore the “warmth” of 4o, the previous default model, as an option for paying subscribers after users protested the change. Altman said he made that call.
- **Is OpenAI running out of GPUs?** Altman said the company is “out of GPUs” and that API traffic doubled in 48 hours, creating capacity constraints.
- **Will OpenAI build brain‑computer interfaces?** Altman confirmed OpenAI is planning to fund a neural interface startup to explore brain‑computer interactions, saying he wants to be able to think something and have ChatGPT respond.
- **Is OpenAI interested in buying Chrome?** Altman said, “If Chrome is really going to sell, we should take a look at it,” indicating interest if Chrome were divested.
- **How is OpenAI thinking about product tone and moderation?** Altman said he wants ChatGPT to have a “center of the road, middle stance” that users can push toward their preferences, and that OpenAI will avoid exploitative use cases.
References
- Original reporting: The Verge — I talked to Sam Altman about the GPT‑5 launch fiasco: https://www.theverge.com/command-line-newsletter/759897/sam-altman-chatgpt-openai-social-media-google-chrome-interview