Samsung Invests in Memories.ai: Revolutionizing Long-Form Video Analysis with AI

As the volume of video data explodes, businesses face a major challenge: how to efficiently analyze and extract value from thousands, even millions, of hours of footage. Most existing AI tools can summarize a single video or answer questions about short clips, but they struggle to handle multi-hour or multi-source content. The limitation is especially acute for security and marketing companies that need to review vast amounts of footage across multiple cameras or campaigns.
Memories.ai: Pushing the Boundaries of Video Intelligence
Memories.ai is tackling this challenge head-on. The startup has developed an AI platform capable of processing up to 10 million hours of video, giving businesses a contextual layer over their footage with searchable indexing, tagging, segmentation, and aggregation.
The founding team brings deep AI expertise: Dr. Shawn Shen, formerly of Meta’s Reality Labs, and Enmin (Ben) Zhou, a machine learning engineer who also worked at Meta. They recognized a gap in how existing AI models understand long-form video. As Dr. Shen explains, traditional models struggle to extract meaning from more than an hour or two of footage, whereas humans naturally sift through large amounts of visual data to build context and memories.

$8 Million Seed Round Led by Susa Ventures and Samsung Next
Memories.ai recently raised $8 million in a seed funding round, led by Susa Ventures and backed by Samsung Next, Fusion Fund, Crane Ventures, Seedcamp, and Creator Ventures. The round was oversubscribed, reflecting strong investor interest in the startup’s unique approach to long-context video AI.
Misha Gordon-Rowe, partner at Susa Ventures, noted the company’s potential to unlock valuable first-party visual intelligence data for enterprises. Samsung Next’s Sam Campbell highlighted the platform’s ability to perform on-device video analysis, which enhances privacy for security-conscious consumers by eliminating the need to send footage to the cloud.
How Memories.ai Works
Memories.ai uses its own proprietary tech stack and models for video analysis. The system first filters noise out of the footage, then compresses and stores only the most relevant data. An indexing layer makes the video searchable with natural language queries, while segmentation and aggregation features help users generate detailed reports and summaries from their video libraries. Two customer groups show how this works in practice (a conceptual sketch of the pipeline follows the list):
- Security companies use the platform to detect unusual or dangerous behaviors across extensive surveillance archives.
- Marketing teams analyze social media trends and previous campaigns, helping them decide what type of video content to create next.
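To make the pipeline described above concrete, here is a minimal, illustrative sketch of the same four stages: filter out noise, keep only relevant segments, index them for natural-language search, and aggregate matches into a report. Every class, field, and scoring rule below is an assumption for illustration only and does not reflect Memories.ai’s actual API or models.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a long-form video pipeline: noise filtering,
# selective storage, a searchable index, and report aggregation.
# Names and logic are illustrative assumptions, not Memories.ai's API.

@dataclass
class Segment:
    video_id: str
    start_s: float
    end_s: float
    tags: list[str]          # e.g. labels produced by a vision model
    activity_score: float    # proxy for "is anything happening here?"

@dataclass
class VideoIndex:
    segments: list[Segment] = field(default_factory=list)

    def ingest(self, raw_segments: list[Segment], noise_threshold: float = 0.2) -> None:
        """Drop low-activity (static or noisy) segments and store the rest."""
        self.segments.extend(s for s in raw_segments if s.activity_score >= noise_threshold)

    def search(self, query: str) -> list[Segment]:
        """Toy natural-language search: match query words against segment tags."""
        words = set(query.lower().split())
        return [s for s in self.segments if words & {t.lower() for t in s.tags}]

    def report(self, query: str) -> str:
        """Aggregate matching segments into one summary line per video."""
        totals: dict[str, float] = {}
        for s in self.search(query):
            totals[s.video_id] = totals.get(s.video_id, 0.0) + (s.end_s - s.start_s)
        return "\n".join(f"{vid}: {secs:.0f}s of matching footage" for vid, secs in totals.items())

if __name__ == "__main__":
    index = VideoIndex()
    index.ingest([
        Segment("cam_01", 0, 30, ["person", "loitering"], 0.8),
        Segment("cam_01", 30, 90, ["empty hallway"], 0.05),   # dropped as noise
        Segment("cam_02", 10, 45, ["person", "running"], 0.9),
    ])
    print(index.report("person running"))
```

In a production system, the keyword matching would presumably be replaced by learned text and video embeddings, which is what allows queries in plain language rather than exact tag matches.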

Future Vision: Smarter Video Queries and AI Assistants
Currently, companies upload their video libraries to Memories.ai’s platform for analysis. Looking ahead, the goal is to enable seamless syncing via shared drives and empower clients to ask complex questions like, “Tell me all about people I interviewed in the last week.”
Dr. Shen envisions AI assistants that gain context from users’ visual data—whether through photos, smart glasses, or even by training humanoid robots and self-driving cars with memory of past experiences.
Competition and Market Outlook
While competing startups such as mem0 and Letta are building memory layers for AI models, their video capabilities remain limited. Google and video-understanding specialists like TwelveLabs are also working in this space, but Memories.ai aims to be the most flexible, horizontal option for enterprises that need deep, long-context video insights.
With a 15-person team and fresh funding, Memories.ai is set to expand its engineering and search capabilities, helping more businesses turn massive video archives into actionable intelligence.