Former OpenAI engineer on the culture at the ChatGPT-maker

In the ever-evolving world of artificial intelligence, OpenAI stands out as a leader and innovator. From releasing ChatGPT to pushing the boundaries of machine learning, the company has changed how we interact with technology. But what is it really like to work inside this AI powerhouse? A former senior engineer, Calvin French-Owen, recently shared a rare, first-hand account of his year at OpenAI — lifting the curtain on the culture that powers the ChatGPT-maker.
A Year of Growth and Chaos
Calvin joined OpenAI in May 2024 and left in June 2025. In that single year, the company exploded in size, growing from about 1,000 employees to over 3,000. This massive expansion created challenges across the board — from team structure and communication to software infrastructure and hiring.
According to Calvin, the culture retained a strong “startup feel” despite the ballooning headcount. Employees had incredible freedom to pursue ideas, experiment, and move fast. But that same freedom often led to chaos: multiple teams built the same tools without coordination, and software libraries overlapped unnecessarily. There was no central playbook. Everyone was moving fast, but not always in sync.
No Email. All Slack.
Perhaps the most surprising detail was how little traditional communication was used inside OpenAI. Calvin estimates he received fewer than 10 emails in an entire year. Instead, everything — from technical debates to team meetings — happened in Slack.
This Slack-heavy approach had pros and cons. On the one hand, it allowed for faster communication and collaboration. On the other, the constant stream of messages could be overwhelming — Calvin describes it as “incredibly distracting,” though manageable with careful filtering of channels. For an engineer, staying focused in such an environment was a daily challenge.
Extreme Speed, High Expectations
The most intense period of Calvin’s time at OpenAI came when he helped lead the team launching Codex — OpenAI’s AI coding agent (distinct from the earlier Codex model that powered GitHub Copilot). They shipped the product in just seven weeks. The effort demanded long nights, early mornings, and weekend work. One team member even returned early from paternity leave to meet the deadline.
Calvin admits it was one of the most demanding stretches in his decade-long career. There were days when he’d wake up early to care for his newborn child, then head straight into 14-hour workdays. Yet, despite the burnout, he found the pace thrilling. “You get hooked,” he wrote. “Everything feels important. Urgent. World-changing.”
This fast-paced, high-impact culture is not uncommon in top tech companies. But at OpenAI, it’s intensified by the sense of working on something deeply important — something with global implications.
A Culture of Secrecy and Sensitivity
Inside OpenAI, information is tightly controlled. Calvin notes that teams often don’t know what others are working on. Financial details, future product plans, and research directions are rarely shared beyond the leadership circle. This secrecy helps protect security and intellectual property, but it also fosters silos and internal confusion.
OpenAI is also highly sensitive to public perception. According to Calvin, the company keeps a close eye on viral social media posts — especially on Twitter (now X). A popular tweet or thread can trigger immediate internal discussion or changes. In some cases, product teams respond within hours to public criticism.
This kind of real-time responsiveness keeps OpenAI agile, but it also means engineers and researchers are under constant scrutiny. Every update, every experiment, every bug can become the subject of viral debate.
Safety: Not Just Science Fiction
Much has been written about AI safety, especially at OpenAI. Critics often accuse the company of focusing too much on hypothetical threats, like superintelligent machines or AI takeovers. But Calvin pushes back on that narrative.
From his perspective, OpenAI does take safety seriously — but in a more grounded way. Teams work daily on addressing misuse risks, such as:
- Hate speech and toxic content
- Political manipulation
- Prompt injections and jailbreaks
- Self-harm and mental health responses
- Biological and chemical weapon generation
These are not abstract concerns — they are real challenges OpenAI engineers face every day. Calvin makes it clear that while “existential risk” is discussed, it doesn’t dominate the day-to-day work. The company is more focused on keeping today’s models safe, fair, and reliable.
A Mixed Engineering Ecosystem
OpenAI’s engineering culture, according to Calvin, is a mix of brilliance and messiness. Some of the world’s smartest minds work there — many with deep academic backgrounds or experience at Meta, Google, or top AI labs. But not everyone is prepared for the scrappy, startup-like environment.
New hires, especially fresh PhDs, sometimes struggle with the practical demands of building products. Calvin describes the main codebase as a “dumping ground,” where conflicting versions of tools, libraries, and utilities make development difficult. Documentation is often lacking. Yet despite these issues, the engineering team continues to ship high-impact products at record speed.
The Takeaway: A Place Like No Other
Working at OpenAI isn’t for everyone. It demands speed, stamina, and a tolerance for ambiguity. The culture is built on intensity, autonomy, and ambition. You might go days without a meeting but still feel the pressure of delivering world-changing tools.
For Calvin, the experience was both exhausting and exhilarating. He left after a year, but not with regret. Instead, he offered gratitude for the lessons learned, the people he worked with, and the mission he supported — building safe and beneficial AI.
Final Thoughts
The insights from this former OpenAI engineer offer a rare, behind-the-scenes look into a company that’s shaping the future of technology. OpenAI may be driven by grand visions of artificial general intelligence (AGI), but its day-to-day operations reflect the messy, human reality of building products at scale.
It’s a place where chaos meets genius — and where the future is built, one Slack message at a time.