Signal Hub

Machine Learning news and articles


VentureBeat
~9 min read · May 6, 2026

The app store for robots has arrived: Hugging Face launches open-source Reachy Mini App Store with 200+ apps

There's an app for nearly every imaginable user and use case these days, but one thing they all have in common is that they're centered around one device: the smartphone. That changes today as Hugging Face, the 10-year-old New York City startup best known for being the go-to place online to host and use cutting-edge, open-source AI models, agents and applications, launches a new App Store for Reachy Mini, its low-cost ($299) open-source physical robot that debuted back in July 2025 (itself the fruit of Hugging Face's acquisition of another startup, Pollen Robotics).

The new Hugging Face Reachy Mini App Store already hosts a library of over 200 community-built applications, and Reachy Mini owners can download any of them free of charge to start (unlike smartphone apps, there's no monetization option for app creators on this store — yet). The store will also offer Reachy Mini owners — around 10,000 units have been sold since last year — an easy means of building their own custom apps for the tiny, stationary desktop robot, which has built-in camera eyes, a speaker, and a microphone, via Hugging Face's existing AI-powered agent, "ML Intern."

The significance lies not just in the hardware, but in the removal of the "roboticist" barrier: for the first time, individuals without a background in engineering or coding are shipping functional robotics software in under an hour.

"Anyone can build the apps," said Clément Delangue, CEO and co-founder of Hugging Face, in a video interview with VentureBeat. "My intuition is that more and more [AI] model builders will release on Reachy Mini as a way to test the robotics ability of new models."

Make robots as accessible to laypeople as PCs and smartphones

The technical bottleneck in robotics has historically been the scarcity of high-quality training data.
While large language models (LLMs) have mastered general-purpose coding by training on massive repositories like Microsoft's GitHub, the volume of code specific to robotics remains "tiny" by comparison (though GitHub likely contains the largest publicly accessible library of robotics code to date, with more than 17,000 repositories, or "repos," dedicated to the field). This lack of data has meant that, until now, AI agents were relatively poor at understanding the physical abstractions and firmware requirements of hardware.

Hugging Face's solution is an agentic toolkit that acts as an intermediary. Rather than forcing a user to learn a specific robotics SDK or master the nuances of a robot's firmware, the toolkit allows a user to describe a desired behavior in plain English — for instance, "wave when someone says good morning." An AI agent then handles the heavy lifting: it writes the code, tests it against the robot's specific constraints, and ships the final package.

"Historically, it's been extremely hard," Delangue told VentureBeat of building robotics applications. "But we've worked really hard on the topic with a mix of open sourcing everything we do, working on the right abstractions for robotics, and making it easier for agents to understand and use it."

The platform is model-agnostic, supporting a wide range of leading intelligence engines. Users can build apps using Hugging Face's own ML Intern agent or leverage external models including GPT-5.5, Claude Opus 4.6, Kimmy 2.6, Mini Max GM5, and Deep Sig V4 Pro. For real-time interaction, the official conversation apps utilize OpenAI Realtime and Gemini Live. By providing these high-level abstractions, Hugging Face has collapsed the traditional "integration weeks" of robotics work into a process that takes minutes.
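To make the "wave when someone says good morning" example concrete, here is a minimal, self-contained Python sketch of the kind of app such a prompt might yield. The RobotStub class and on_speech handler are invented for this illustration; the article does not document the real Reachy Mini SDK's API.

```python
# Hypothetical sketch only: a stand-in for a Reachy Mini robot object.
# The real SDK's interface is not described in the article.
class RobotStub:
    def __init__(self):
        self.actions = []

    def wave(self):
        # On real hardware this would drive the robot's motors.
        self.actions.append("wave")


def on_speech(robot, transcript):
    # The agent-generated rule: wave back when the robot hears
    # "good morning" in a speech transcript.
    if "good morning" in transcript.lower():
        robot.wave()


robot = RobotStub()
on_speech(robot, "Good morning, Reachy!")
print(robot.actions)  # → ['wave']
```

The point of the platform, per the article, is that a user never writes a handler like this by hand: the agent produces it, tests it against the robot's constraints, and ships it.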
Low-cost Reachy Mini is a hit

To take advantage of the new Hugging Face Reachy Mini App Store, users are encouraged to purchase Reachy Mini, a cute desktop robot Hugging Face launched back in July 2025 as an affordable, open-source alternative to existing, commercially available robots from the likes of Boston Dynamics, whose famous Spot robot dog retails for around $70,000. Even Chinese competitors start at $1,900 and up. In contrast, the Reachy Mini is accessibly priced for hobbyists and developers. It comes in two variants:

- Reachy Mini Lite ($299 plus shipping): A tethered version that connects via USB and uses an external computer for processing.
- Reachy Mini Wireless ($449 plus shipping): A standalone version featuring an on-board Raspberry Pi CM4 and Wi-Fi connectivity.

Delangue said that of the 10,000 Reachy Mini units sold so far, 3,000 were sold in just the past two weeks. Hugging Face expects to ship another 1,000 units within the next 30 days.

Even those who don't own a Reachy Mini can still develop apps for it, using the Reachy Mini App Store and the Reachy App, which contains a 3D simulation of the robot and its responses. The App Store itself is hosted on the Hugging Face Hub. It functions much like a standard software repository, but for hardware behaviors:

- Search and install: Users can find apps, click a button, and install them directly to their robot.
- Forkability: Every app is "forkable," meaning a user can duplicate an existing app and ask an AI agent to modify it (e.g., "make it answer in French").
- Simulation mode: Crucially, the store includes a browser-based simulator. This allows users who do not own a physical Reachy Mini to build, test, and play with the catalog in a virtual environment.
Both are part of Hugging Face's ongoing "LeRobot" effort — a project that began in 2024, with Hugging Face researchers specializing in robotics and AI developing and publishing on the web their own open-source code, tutorials, and hardware to make robotics development accessible to a wider audience. And unlike GitHub, which is designed for a developer audience, the Hugging Face Reachy Mini App Store is designed for robot owners and users who may have no technical experience or training whatsoever.

Continuing with the open-source ethos and practice

Hugging Face's strategy is rooted in the belief that closed-source hardware and software are "almost impossible" to build for at scale. Delangue notes that closed systems prevent the training of agents and limit the ability of the community to innovate. Consequently, the entire Reachy Mini platform is open-source. This open licensing model has two primary implications for the ecosystem:

- Accelerated development: Because the code is public and integrated with the Hugging Face ecosystem via "Spaces" (Hugging Face's feature for hosting AI-powered web apps, launched in 2021), agents can more easily learn how to interact with the hardware.
- Community sovereignty: Apps are not locked behind a proprietary wall. Currently, all 200+ apps on the store are free, though the platform's foundation on Spaces gives creators the flexibility to potentially monetize their work in the future.

"For the moment, all the apps are free," Delangue noted. "It's flexible, it's built on [Hugging Face] Spaces, so at some point maybe people are going to make them paid."

Robotics enters its accessible hobbyist era

Hugging Face's Reachy Mini App Store is launching with 200 apps already available. So who built them, and how did they do it before this platform existed? Delangue told VentureBeat that more than 150 different creators have contributed to the store, most of whom had never written a line of robotics code before.
Yet they have been able to do so thanks to Hugging Face's ML Intern and GitHub. The new Hugging Face Reachy Mini App Store now puts the tools and existing apps in one place for easier access.

Delangue was keen to highlight one of the early Reachy app developers in particular to VentureBeat: Joel Cohen, a 78-year-old retired marketing executive. Cohen, who is colorblind and has no technical background, spent two weeks assembling his Reachy Mini Lite (a task that usually takes three hours). Despite these hurdles, he used an AI agent to build a "VP of Future Thinking" facilitator for his Zoom-based CEO peer groups. The app enables the robot to:

- Greet 29 members by name.
- Fact-check discussions in real time.
- Summarize key themes and push back on surface-level answers.

"I built this by describing what I needed in plain English," Cohen stated in a press release provided to VentureBeat ahead of the launch. "No SDK. No robotics background. No developer experience."

Other community-driven applications include:

- Emotional Damage Chess: A robot that plays chess and mocks the user's blunders.
- Reachy Phone Home: An anti-procrastination tool that detects when a user picks up their phone and tells them to get back to work.
- Language Tutor: A physical companion that listens to speech and corrects accents.
- F1 Race Commentator: A desk companion that calls Formula 1 races live as they happen.

Delangue himself related to VentureBeat that in only a few hours, he built an app for his own Reachy Mini robot at the Hugging Face Miami office to have the robot act as a receptionist. "It basically does face recognition to detect when you arrive in the office, and then it looks at you and onboards you," Delangue related. "It says, 'Hey, welcome to the office. Who are you here to see?' Then it sends me a message: 'Carl just arrived at the office.
He's here to meet you, and for these reasons.' It works a little bit as my welcoming booth at the office, and it took me less than two hours to build that."

Even for an experienced founder and developer such as Delangue, building apps for a robot was out of the question until the combination of Reachy Mini and ML Intern. "For me, it would have been impossible," the Hugging Face CEO said. "If you weren't a robotics developer, it probably would have been impossible, or it would have taken a few months."

Democratizing robotics

The launch of the agentic App Store signals a fundamental shift in how we interact with machines. For sixty years, the field was gated by the requirement for deep technical expertise. By combining low-cost open hardware with the reasoning capabilities of modern AI agents, Hugging Face is moving toward a future where the hardware is a commodity and the behavior is limited only by what a user can describe.

As Delangue noted during the launch, the goal was to provide a platform for people who "want to get into robotics but don't have the hardware or the skills." With nearly 10,000 robots now "in the wild" and a burgeoning store of agent-written apps, the Reachy Mini has become the most widely deployed open-source desktop robot in history. The question is no longer how to build a robot, but what we will ask them to do now that the gate is open.

VentureBeat
~7 min read · May 5, 2026

AI agents are missing all the discussions your team is having. SageOx has an answer: agentic context infrastructure

As AI model providers increasingly move downstream, launching products and agents for specific enterprise applications and sectors like finance, one big question remains: how will those AI agents be equipped with the proper context surrounding a task — who assigned it, which other stakeholders are involved, what data or discussions have taken place about it, and how it should be done? This practice of "context engineering" remains one of the great unsolved problems of the AI era.

But SageOx, a Seattle-based startup founded by veterans who built the original AWS EC2 and EBS infrastructure, believes it has the answer: a new systems layer it calls "agentic context infrastructure." Using a combination of small hardware recording devices, the existing applications enterprises already rely on — Slack, email, documents, files — and new, open-source frameworks and instructions atop it all, SageOx has developed a system by which enterprises can keep agents as "in the loop" and as updated on the enterprise's tasks as their human employees are, and prevent them from "drifting" off their assigned tasks and the firm's larger goals.

"We are capturing all of this context where it happens," said Ajit Banerjee, founder and CEO of SageOx and a former Hugging Face, Meta, Amazon and Apple engineer, in a recent video call interview with VentureBeat. "Product development is a team sport, and the context doesn't just come from people typing on a keyboard. It happens in conversations."

By capturing the "why" behind the "what" — the intent that lives in Slack threads, whiteboarding sessions, and water-cooler conversations — SageOx aims to provide a "hivemind" that ensures agents don't drift and humans stay in flow. "The way people have to work is not old-school coordination, where I write down an issue and then it goes through a sequence. It has to be almost like playing jazz," Banerjee added.
Today, the company emerged from stealth to announce a $15 million seed round led by Canaan, with participation from A.Capital, Pioneer Square Labs, and Founders' Co-op.

The architecture of team memory

Today's AI agents operate in isolated sessions, lacking a shared memory of prior decisions or architectural intent. Every task effectively starts from scratch, forcing developers to manually recap context — a process that undermines the very speed agents are meant to provide. SageOx addresses this through a multi-surface product suite designed to capture context wherever it naturally occurs.

At the center of this ecosystem is the Ox Dot, a customized hardware device designed for the shared office. The Dot captures meetings, standups, and design reviews with a single touch. Its most distinctive feature is "Auto Rewind" — a fail-safe for the spontaneous brilliance of a team. If a breakthrough happens during an unrecorded conversation, Auto Rewind allows the team to "go back" and capture the discussion after the fact. This audio is transcribed, speaker-identified, and distilled into team memory, where it becomes accessible to both humans and agents.

For the developer, the open-source, MIT-licensed Ox CLI provides the bridge. Commands like "ox agent prime" allow coding assistants — including Claude Code and Codex — to consult the team's shared history before writing code. This ensures that if a team decided in a meeting to use a specific authentication pattern, the agent knows it without being explicitly told in a prompt. As Dr. Rupak Majumdar, Scientific Director of the Max Planck Institute for Software Systems, noted after seeing the team's development speed, they are effectively "treating code like assembler."

Agentic engineering: moving beyond "clean" code

The shift to an agent-first workflow has forced the SageOx team to reconsider nearly every principle of modern software management.
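The "ox agent prime" idea described above can be sketched in a few lines of Python: prepend distilled team memory to an agent's task prompt so prior decisions travel with the request. The prime function and the memory entry below are invented for illustration; the article does not document the Ox CLI's actual behavior or format.

```python
# Hypothetical sketch only: inject distilled "team memory" into a task
# prompt so a coding agent sees prior decisions without being told.
def prime(prompt: str, memory: list[str]) -> str:
    context = "\n".join(f"- {note}" for note in memory)
    return f"Team context:\n{context}\n\nTask:\n{prompt}"


# Invented memory entry, in the spirit of the article's example of a
# meeting where the team settled on an authentication pattern.
memory = ["2026-04-30 design review: all new endpoints use token-based auth"]
print(prime("Add a login endpoint", memory))
```

With the memory entry in the prompt, an agent asked to "Add a login endpoint" would know to use the agreed-upon auth pattern without the developer repeating it.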
SageOx CTO Ryan Snodgrass, formerly of Amazon, notes in a blog post transcript that traditional branch management and "clean" commit histories are often "bad for the agents." In the old world, humans preferred large PRs that were easy to read during a single code review. In the agentic era, 10,000-line PRs spread across the codebase make it impossible for an agent to reason about intent. Instead, SageOx advocates smaller, high-volume, and highly focused commits. This "agent-readable" history allows the machine to look back and understand exactly why a specific change was made.

The team is even re-evaluating repo structures. While they currently use a monorepo for their 750,000 lines of code, they are exploring a future where agents manage a constellation of micro-repos, since agents can "get lost" when a codebase grows too large for their context window.

This philosophy of speed over stasis allowed the team to build their own firmware for the Ox Dot in less than two weeks, despite having no recent hardware experience. By feeding technical PDFs and documentation into AI models, they bypassed months of traditional research. Banerjee calls this the "unlearning" of old habits — realizing that the "undifferentiated heavy lifting" of knowledge work can now be offloaded to a system that remembers everything the team knows.

Radical transparency: beyond open source to an "open work" model

Perhaps as significant as the technology is SageOx's commitment to "Open Work." Moving beyond traditional open-source software, the company is practicing a form of radical transparency in an effort to accelerate development across the entire open-source community and any enterprises that wish to learn from the way they work. SageOx's team openly shares their internal prompts, their planning sessions, and even their unfiltered internal debates with the public. Users can sign in to the SageOx console and watch the team build SageOx in real time.
This "open kimono" approach was an intentional decision to lead by example. Banerjee argues that since they are asking teams to change how they work, they must be willing to show the "WTF" moments and the course corrections as they happen. "The revolution is not going to be televised," Banerjee says. "It's going to be SageOxed." This transparency is intended to prove that a small, lean team — "yoking up lean" — can outpace massive organizations by leveraging a shared context layer.

As for how SageOx plans to monetize and become profitable, Banerjee said the revenue path is modeled on the AWS EC2 playbook: start with early adopters, especially small AI-native startups, then expand toward enterprises as the need becomes obvious.

The pedigree of infrastructure

The technical foundation of SageOx is rooted in the early days of cloud infrastructure. Banerjee was an original member of the AWS EC2 team, and Snodgrass was one of Amazon's first engineers, leading the transition from monolithic architectures to microservices. This background is reflected in the company's name: the "Ox" represents the yeoman work they aim to do — a dependable animal that handles the heavy lifting of data and context so the team can move forward.

The SageOx vision is one where humans are no longer the manual assemblers of context. Instead, they act as the directors of a "parallel processing" engine. In a recent demonstration, a feature request moved from a verbal discussion to a completed implementation in under seven minutes. By priming coding agents with the recorded context of the original discussion, the team bypassed the need for formal specs or Jira tickets.

The new way of work

SageOx is currently focusing its efforts on "AI-native" startups — teams that operate primarily through prompts and rely heavily on agentic coworkers. Their suite of tools, from the open-source Ox CLI to the hardware-enabled Ox Dot, is designed to solve the immediate problem of alignment drift.
As AI moves from being a tool to a teammate, the most valuable asset a company possesses is no longer its raw source code, but its shared context. SageOx suggests that the way forward is not to hoard information behind "private fences," but to create a communal ground where intent is visible to every teammate—human or machine. In this new epoch, the teams that win will be the ones that can remember as fast as they can execute.