Introducing Memories.ai: An AI Research Lab Pioneering the World’s First Large Visual Memory Model
Having already indexed 1 million hours of video, Memories.ai is providing 100x more video memory than previously possible to companies that include Samsung, Aosu, PixVerse and Viggle
Memories.ai, an AI research lab building visual memories for AI, today announced $8 million in seed funding led by Susa Ventures, along with Samsung Next, Crane Venture Partners, Fusion Fund, Seedcamp and Creator Ventures. Founded by former Meta Reality Labs researchers Dr. Shawn Shen and Ben Zhou, Memories.ai has pioneered the world’s first Large Visual Memory Model (LVMM), which is driving advancements across physical security, media production, marketing, and robotics.
AI’s Problem: Short- and Long-Term Memory Loss with Video
Most AI systems today can analyze short clips in real time but quickly forget what they’ve seen beyond a 15- to 60-minute window. This lack of visual memory makes it impossible for AI to understand context, spot recurring patterns, or track changes over time. As a result, even the most advanced models can’t answer simple questions like “Have we seen this before?” or “What’s changed since yesterday?” This limits their usefulness in video-rich industries like security, retail, media, and consumer tech, where long-term visual understanding is essential. Without memory, AI can’t truly learn from video; it can only react to it.
“Large-scale video understanding has become essential in today’s fast-evolving social media landscape. Memories.ai’s technology has provided us with valuable insights, from surfacing emerging trends and identifying key topics to analyzing long-tail conversations across TikTok and other video platforms. The precision and depth of their video analysis capabilities offer important support for maintaining our competitive edge,” said Jaden Xie, co-founder of PixVerse.
Memories.ai Solution: Giving AI Unlimited Visual Memories
Memories.ai solves this problem by giving AI a memory layer for video with the world’s first Large Visual Memory Model (LVMM). Instead of processing clips in isolation, Memories.ai captures, stores, and structures visual data over time, allowing AI models to retain context, recognize patterns, and compare new footage against past events. Its platform turns raw video into a searchable, contextual database that AI systems can continuously learn from. Memories.ai gives AI the foundation it needs to understand video across time the way humans do, and it grows more powerful as it is integrated into larger video libraries.
Key applications include:
- Security & Safety: Search through months of surveillance footage in seconds
- Media & Entertainment: Instantly find scenes or visual elements across decades of content
- Marketing Analytics: Analyze sentiment and mentions across millions of social videos
- Consumer Devices: Bring visual memory capabilities to next-gen mobile experiences, starting with Samsung
“Human intelligence isn’t just about processing information. It’s about building a rich, interconnected web of visual memories that inform every decision we make,” said Dr. Shawn Shen, co-founder and CEO of Memories.ai. “Our mission is to bring that level of contextual awareness to AI to help build a safer and smarter world.”
“The market opportunity for temporal video intelligence is massive, touching everything from robotics and enterprise software to consumer applications, self-driving cars, and eventually AGI,” said Chad Byers, General Partner at Susa Ventures. “As AI systems move from static analysis to dynamic decision-making, understanding video over time becomes foundational. Memories.ai is building the critical infrastructure to power that future.”
Memories.ai’s technology is currently available via API, as well as through a chatbot web app in which users can upload videos or connect their own video libraries.
Founders and companies working with video can learn more at https://memories.ai/.
About Memories.ai
Memories.ai is the developer of the world’s first Large Visual Memory Model (LVMM). Its technology builds human-like visual memories for AI, enabling machines to see, understand, and recall visual experiences across unlimited timeframes. Its platform delivers persistent, searchable memory for video at scale. Founded in 2024 by former Meta researchers, the company is backed by Susa Ventures, Samsung Next, Crane Venture Partners, Fusion Fund, Seedcamp and Creator Ventures. Learn more at https://memories.ai/.
(Press Release Image: https://photos.webwire.com/prmedia/81468/341611/341611-1.jpg)
Contact Information
- Riley Munks
- PR Advisor
- Activate PR - https://www.activate-pr.com/
- riley@activate-pr.com