TwinMind Raises $5.7M to Launch AI 'Second Brain' App

Three ex-Google X researchers have set out to help users capture, remember, and organize their daily lives—without the need for a brain implant. Their startup, TwinMind, has just secured $5.7 million in seed funding and launched its AI-powered app for both Android and iPhone users.
What Is TwinMind?
TwinMind is an AI assistant that runs quietly in the background, recording ambient speech (with user permission) and turning your spoken words, whether from meetings, lectures, or daily conversations, into organized notes, tasks, and answers. Unlike tools that only capture meeting notes, TwinMind continuously transcribes and processes audio directly on your device, prioritizing privacy and efficiency (a minimal sketch of that on-device approach follows the feature list below).
- Real-time voice transcription—even offline, with up to 17 hours of continuous capture without draining your phone’s battery.
- Personal knowledge graph—automatically organizes your conversations and thoughts.
- Supports 100+ languages—with real-time translation capabilities.
- Data privacy—audio recordings are automatically deleted after transcription, and backups are optional.
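
To make the on-device claim concrete, here is a minimal Swift sketch built on Apple's public Speech framework. It is not TwinMind's code (the company's Ear models and pipeline are proprietary), but it illustrates the pattern the feature list implies: audio is captured and recognized entirely on the phone, and only text comes out of the recognizer.

```swift
import AVFoundation
import Speech

/// Minimal sketch of always-on, on-device transcription using Apple's public
/// Speech framework. NOT TwinMind's code; it only shows the general pattern of
/// keeping audio on the phone and emitting text locally. Requires speech and
/// microphone permissions (NSSpeechRecognitionUsageDescription and
/// NSMicrophoneUsageDescription in Info.plist).
final class LocalTranscriber {
    enum TranscriberError: Error { case onDeviceRecognitionUnavailable }

    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private var task: SFSpeechRecognitionTask?

    /// Streams microphone audio into an on-device recognizer and delivers
    /// partial transcripts as they arrive.
    func start(onText: @escaping (String) -> Void) throws {
        guard let recognizer = recognizer, recognizer.supportsOnDeviceRecognition else {
            throw TranscriberError.onDeviceRecognitionUnavailable
        }

        let request = SFSpeechAudioBufferRecognitionRequest()
        request.requiresOnDeviceRecognition = true  // audio never leaves the phone
        request.shouldReportPartialResults = true   // stream text as it is recognized

        // Feed microphone buffers into the recognition request.
        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        task = recognizer.recognitionTask(with: request) { result, _ in
            if let result = result {
                onText(result.bestTranscription.formattedString)
            }
        }

        audioEngine.prepare()
        try audioEngine.start()
    }

    func stop() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        task?.cancel()
    }
}
```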
How TwinMind Is Different
Most AI note-taking tools, such as Otter or Fireflies, rely on cloud processing and can't run all day in the background, especially on iOS because of Apple's restrictions. TwinMind's technical team, led by co-founders Daniel George, Sunny Tang, and Mahi Karim, built the core audio engine natively (pure Swift on the iPhone), allowing background operation on both iOS and Android. This makes TwinMind a seamless, always-on companion for users who want to passively capture information throughout their day.
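
For readers wondering what all-day background capture involves on iOS, the sketch below shows the standard AVAudioSession setup and Info.plist declarations any background-recording app relies on. TwinMind's own Swift engine is certainly more involved; these are simply the public building blocks such an app would sit on.

```swift
import AVFoundation

/// Illustrative only (not TwinMind's code): the standard iOS configuration an
/// app needs before audio capture can continue while it is backgrounded.
///
/// Info.plist must also declare:
///   UIBackgroundModes            -> ["audio"]
///   NSMicrophoneUsageDescription -> user-facing reason for recording
func configureBackgroundRecording() throws {
    let session = AVAudioSession.sharedInstance()

    // The .record category on an active session is what allows capture to
    // keep running after the user switches apps or locks the screen.
    try session.setCategory(.record, mode: .measurement, options: [.allowBluetooth])
    try session.setActive(true)
}
```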

Beyond Mobile: Chrome Extension and Vision AI
TwinMind also offers a Chrome extension that uses vision AI to capture and interpret content across browser tabs, email, Slack, Notion, and more. This builds richer context for work or study, and the team even used it to shortlist candidates from more than 850 intern applications this summer.
AI Model Innovations: Meet Ear-3
The company has debuted its new Ear-3 AI speech model, supporting over 140 languages with a word error rate of just 5.26%. It can distinguish different speakers in a conversation and will soon be available via API, priced at $0.23 per hour. While Ear-3 runs in the cloud, the app automatically switches to the on-device Ear-2 model if offline, ensuring continuous transcription for users.
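
Because the Ear-3 API has not shipped yet, the routing sketch below is hypothetical: the endpoint URL, type names, and the Ear-2 wrapper are all invented for illustration. It only shows the cloud-first, offline-fallback behavior the app reportedly uses, with a network monitor deciding which engine handles each request.

```swift
import Foundation
import Network

/// Hypothetical sketch of cloud-first transcription with an on-device fallback.
/// The Ear-3 API is not public yet, so every name and URL here is invented.
protocol Transcriber {
    func transcribe(_ audio: Data) async throws -> String
}

struct Ear3CloudTranscriber: Transcriber {
    func transcribe(_ audio: Data) async throws -> String {
        // POST the audio to a placeholder endpoint standing in for a hosted model.
        var request = URLRequest(url: URL(string: "https://api.example.com/ear3/transcribe")!)
        request.httpMethod = "POST"
        request.httpBody = audio
        let (data, _) = try await URLSession.shared.data(for: request)
        return String(decoding: data, as: UTF8.self)
    }
}

struct Ear2LocalTranscriber: Transcriber {
    func transcribe(_ audio: Data) async throws -> String {
        // Placeholder for an on-device model such as Ear-2.
        return "(local transcription)"
    }
}

/// Picks the cloud model when the network is reachable, otherwise stays local,
/// so transcription never stops while offline. (Synchronization omitted for brevity.)
final class TranscriptionRouter {
    private let monitor = NWPathMonitor()
    private var online = false

    init() {
        monitor.pathUpdateHandler = { [weak self] path in
            self?.online = (path.status == .satisfied)
        }
        monitor.start(queue: DispatchQueue(label: "net.monitor"))
    }

    func transcribe(_ audio: Data) async throws -> String {
        let engine: Transcriber = online ? Ear3CloudTranscriber() : Ear2LocalTranscriber()
        return try await engine.transcribe(audio)
    }
}
```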

Who’s Using TwinMind?
TwinMind has already attracted over 30,000 users worldwide, with a strong presence in the US, India, Brazil, and several other countries. Its audience is mainly professionals (50–60%), students (25%), and users with personal projects (20–25%). Notably, one user is even writing their autobiography by dictating to TwinMind.
Privacy by Design
User privacy is a core promise: TwinMind does not use user data to train its models, does not store audio recordings, and keeps only transcribed text locally. Users can opt out of data backups entirely. This approach addresses common concerns around AI and personal data security.
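
The "transcribe, then delete" flow described here is straightforward to picture. The sketch below is illustrative only; transcribeLocally is a made-up placeholder rather than a TwinMind API, and the point is simply that the text is persisted while the raw audio file is removed right after transcription.

```swift
import Foundation

/// Illustrative-only sketch of the "keep the text, drop the audio" pattern.
/// `transcribeLocally` stands in for whatever on-device model produces the
/// transcript; it is not a real TwinMind API.
func processRecording(at audioURL: URL, notesDirectory: URL) throws {
    let transcript = try transcribeLocally(audioURL)   // hypothetical helper

    // Persist only the transcribed text on the device...
    let noteName = audioURL.deletingPathExtension().lastPathComponent + ".txt"
    let noteURL = notesDirectory.appendingPathComponent(noteName)
    try transcript.write(to: noteURL, atomically: true, encoding: .utf8)

    // ...then delete the raw audio so no recording is retained.
    try FileManager.default.removeItem(at: audioURL)
}

func transcribeLocally(_ url: URL) throws -> String {
    // Placeholder: a real app would call an on-device speech model here.
    return "(transcript)"
}
```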
From Google X to Startup
TwinMind's founders bring deep experience from Google X, where they worked on innovative projects such as AI-powered earbuds. Their background enabled them to prototype and launch TwinMind quickly, supported by investors like Streamlined Ventures, Sequoia Capital, and Stephen Wolfram—the latter making his first-ever startup investment here.

What’s Next?
The 11-person team plans to expand its design and business development functions and grow the API business. TwinMind now offers a Pro subscription at $15/month (with a larger context window and premium support), but the free version retains all the core features, including unlimited transcription and on-device recognition.