Inside OpenAI: An Engineer’s Honest Reflections on Startup Culture, Growth, and Challenges

What’s It Really Like to Work at OpenAI? A Former Engineer Shares Insights
OpenAI has become synonymous with rapid innovation in artificial intelligence, but what’s it actually like behind the scenes at one of the world’s most influential AI companies? Calvin French-Owen, a former engineer who recently left OpenAI, provides a rare, first-hand look at the company’s culture, growth pains, and daily realities.
Lightning-Fast Growth Brings Chaos—and Opportunity
French-Owen joined OpenAI as it was accelerating its efforts to launch Codex, the company’s much-publicized AI coding agent. During his year at OpenAI, he watched the company expand from 1,000 to 3,000 employees, a pace that brought both excitement and considerable challenges.
- Communication & Structure: With rapid hiring, traditional processes often struggled to keep up. Reporting structures, project management, and company-wide communication were frequently in flux.
- Startup Spirit: Despite its size, OpenAI still operates with a startup mentality. Engineers are empowered to pursue ideas with minimal bureaucracy, but this can lead to duplicated efforts and inconsistency in code quality.
Building Codex: Speed Over Sleep
French-Owen’s team, a small group of engineers, researchers, designers, and go-to-market staff, built and launched Codex in just seven weeks. The product’s success was immediate, thanks in part to OpenAI’s reach through ChatGPT’s established user base.
"I’ve never seen a product get so much immediate uptick just from appearing in a left-hand sidebar, but that’s the power of ChatGPT," he reflected.
Engineering Realities: Flexibility and Friction
The fast-paced environment means that code quality can vary widely. Engineers range from veterans of major tech firms to recent PhDs, and the resulting mix of skills and approaches has produced a "back-end monolith" that is sometimes unwieldy and prone to breaking. Senior engineering leaders are aware of these issues and are actively working to improve the systems.
Cultural DNA: Slack, Secrecy, and Social Media
Despite its scale, OpenAI still relies heavily on tools like Slack and retains a "move fast and break things" ethos reminiscent of early Facebook. The culture is also shaped by a high degree of secrecy, partly to prevent leaks and manage public scrutiny. Social media plays an outsized role: as one insider joked, "this company runs on Twitter vibes," and viral posts can prompt immediate responses from the team.
AI Safety: Myths and Realities
A common misconception, according to French-Owen, is that OpenAI isn’t sufficiently concerned with AI safety. In practice, he says, the company is highly focused on real-world safety issues such as hate speech, abuse, political manipulation, and prompt injection risks. While long-term existential risks are studied by dedicated researchers, the day-to-day emphasis is on securing the technology for its millions of current users.
High Stakes, High Pressure
With hundreds of millions of users and growing government and industry scrutiny, the stakes at OpenAI are enormous. Competitors are watching closely, and the pace shows no sign of slowing.
Conclusion
French-Owen’s reflections paint a picture of OpenAI as an organization balancing breakneck growth, a startup mindset, and the weighty responsibility that comes with building transformative AI. For those considering a career in AI or interested in the realities behind the headlines, his experience offers valuable perspective.