Former Google engineers Aza Kai and Hiraku Yanagita have founded InfiniMind, a startup building AI infrastructure to transform petabytes of unused video and audio data into structured, queryable insights for enterprises.
The Tokyo-based company, now relocating its headquarters to the U.S., addresses the massive "dark data" challenge posed by sources such as store cameras, broadcast archives, and production footage that businesses struggle to analyze.
Revolutionizing Video AI with Proven Products
InfiniMind's first product, TV Pulse, launched in Japan in April 2025 and provides real-time analysis of television content to track brand exposure, customer sentiment, and PR impact for media and retail firms.
The flagship DeepFrame platform processes up to 200 hours of long-form video to pinpoint specific scenes, speakers, or events using advanced vision-language models.
$5.8 Million Seed Funding Fuels Growth
InfiniMind recently secured $5.8 million in seed funding led by UTEC, with participation from CX2, Headline Asia, Chiba Dojo, and an a16z AI researcher.
This capital will accelerate DeepFrame's development, expand engineering teams, and acquire more customers in Japan and the U.S.
Founders' Decade at Google Drives Innovation
Kai and Yanagita, who collaborated for nearly a decade at Google Japan on cloud, machine learning, ad systems, and video recommendations, spotted the video AI inflection point years ago.
Key enablers include rapid progress in vision-language models from 2021-2023, falling GPU costs, and consistent 15-20% annual performance gains over the past decade.
Unlike fragmented competitors, InfiniMind offers a no-code, cost-efficient solution that integrates audio and visual analysis and handles videos of unlimited length, serving enterprise monitoring, security, and content intelligence use cases.
With a DeepFrame beta slated for March 2026 and a full launch in April, the company is eyeing global expansion and views video understanding as a step toward AGI systems that better comprehend reality.