UG Labs raises $7M to infuse conversational AI into games for kids – VentureBeat


UG Labs has raised $7 million in funding to infuse conversational AI and voice interactivity into games for kids.



Tel Aviv, Israel-based UG Labs has raised the round led by MoreVC, with participation from deep tech VCs Amiti Ventures and MediaTek, along with private angels.

UG Labs has introduced a novel approach to game design, incorporating voice interactivity and conversational AI using proprietary algorithms and data.

The technology, embedded in UG’s game engine, enables open-ended conversations that dynamically affect gameplay, offering unscripted and on-the-fly exploration with non-player characters. The primary focus is on younger users, providing them with a unique and engaging gaming experience while allowing deeper immersion for the entire family.


The core technology developed by UG Labs revolves around automatic speech recognition (ASR) and large language models (LLMs).

“We created the speech recognition solution that learns you over time and becomes better at understanding how you speak. We believe the most interesting place for this to happen is when you’re talking about children,” said CEO Ariel Leventhal, in an interview with GamesBeat.

With younger kids, the speech patterns aren’t set yet and the command of language isn’t perfect.

“By personalizing the speech recognition model, we can do a significantly better job at understanding kids. And this is in its essence a form of inclusivity for kids that have less access to speech recognition models. Those who are less understood will now be better understood,” Leventhal said.

The company’s system recognizes curse words, sexism and racism, and it tries to derive what a conversation is about to reach what it calls “narrative safety.”
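The article does not say how this detection works under the hood. A minimal sketch of the idea (a deny-list plus a topic check; the names, word lists, and topics below are all invented for illustration) might look like this:

```python
# Hypothetical sketch of a "narrative safety" check: block flagged terms
# and verify the conversation stays on an allowed topic. Real systems would
# use trained classifiers, not word lists.
BLOCKED_TERMS = {"badword", "slur"}                # placeholder deny-list
ALLOWED_TOPICS = {"space", "animals", "pirates"}  # placeholder topic list

def is_narratively_safe(utterance: str, detected_topic: str) -> bool:
    # Normalize tokens before comparing against the deny-list.
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & BLOCKED_TERMS:   # any blocked word present?
        return False
    return detected_topic in ALLOWED_TOPICS

print(is_narratively_safe("Tell me about animals!", "animals"))  # True
print(is_narratively_safe("you badword", "animals"))             # False
```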


UG Labs CEO Ariel Leventhal (right) and CTO Shachar Mendelowitz.

Founded in 2022 by CEO Ariel Leventhal and CTO Shachar Mendelowitz, UG Labs aims to inject original, unscripted, and age-appropriate conversation and personality into games, primarily those for kids.

Leventhal is a former general manager and VP of R&D at K Health. Before K Health, Leventhal was head of R&D at Fraud Sciences, leading the development of its AI system until its acquisition by PayPal for $169 million.

Mendelowitz was previously a cofounder at Digital Healthcare, head of data science at TowerSec Automotive Cybersecurity (acquired by Harman, which is owned by Samsung), and a first-wave senior engineer on the founding team of SpaceIL, a privately held Israeli team that participated in the Google Lunar X-Prize.

“We were looking at what was happening with the field of AI. And in the beginning of 2023, you could already see where this was going. And we were excited about conversational AI,” said Leventhal in an interview with GamesBeat. “We started seeing similar things happening in the field of speech recognition. Both Facebook and Google were coming with new exciting technologies.”

Leventhal added, “But what all of them had in common was that while they were good, they were not adaptive in the sense that they didn’t learn how to speak with you. That means your Alexa, your Siri, your Google Home — they might be good at understanding you. Or they might be challenged, if your accent is not perfect or your English is not perfect. But they don’t necessarily become better over time.”

Elinor Schops joined the company a year ago for business development after hearing about the technology and what it could mean for children’s video games.

“It was very clear to me that we should start integrating it in video games,” Schops said. “To be able to integrate their SDK into video games and be able to see kids immediately get more engaged is great. The retention is longer because they’re actually interested in exploring the different opportunities. Every session looks completely different. Every conversation leads to a new location in the game. So the abilities are endless. And when we started talking to game publishers, we saw that they’re hooked.”

The company has 20 people and it has raised $23 million to date.

How it works

UG Labs is creating LLMs and ASR for kids’ games.

The ASR and LLM models are trained on data collected by UG, allowing the engine to understand children of varying literacy levels, accents, and impediments. The ASR models are designed to scaffold conversations, providing assistance to users during predefined moments and ensuring the conversation’s originality by referencing past interactions.
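How the adaptation works internally isn’t disclosed. As a toy illustration of the “learns how you speak over time” idea, a recognizer could cache per-user corrections and reapply them on future sessions (the class and method names here are hypothetical, and real adaptive ASR operates at the model level rather than on token strings):

```python
# Hypothetical sketch of per-user ASR adaptation: remember how a child's
# past mispronunciations were corrected, and apply those corrections when
# new transcripts come in. Illustration only.
from collections import defaultdict

class AdaptiveRecognizer:
    def __init__(self) -> None:
        # Per-user map of raw recognized token -> corrected token.
        self.corrections: dict[str, dict[str, str]] = defaultdict(dict)

    def learn(self, user_id: str, heard: str, meant: str) -> None:
        # Record a correction observed during a session.
        self.corrections[user_id][heard] = meant

    def transcribe(self, user_id: str, raw_tokens: list[str]) -> list[str]:
        # Apply that user's known corrections to a raw transcript.
        fixes = self.corrections[user_id]
        return [fixes.get(tok, tok) for tok in raw_tokens]

asr = AdaptiveRecognizer()
asr.learn("kid-1", "wabbit", "rabbit")  # first session: correction observed
print(asr.transcribe("kid-1", ["a", "wabbit", "hops"]))
# ['a', 'rabbit', 'hops']
```

Note that corrections are kept per user, so another child’s transcript is untouched.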

The voice interaction and flexibility enhance user engagement and retention, making each session unique and putting the player at the center. The content is tailored to fit the user’s language abilities, age, vocabulary, and tastes. This core technology can also be integrated into connected toys, breathing life and personality into playsets, dolls, and robots.

“With the emergence of additional technologies around conversational AI, we started understanding that it’s not just true for speech recognition. It’s true for everything. Kids play differently and interact differently and think differently, and have a different way of expressing themselves,” Leventhal said. “And we went from ‘let’s build a speech recognition solution for children’ to ‘let’s work on all the fronts that are needed to create what we call full interactivity for children.’”

He added, “So let’s build the safety solution, let’s build a conversational solution. Let’s build a safe speech recognition solution. Let’s build the LLM solution. And it brought us to a place where we don’t just understand kids significantly better, we also know how to interact with them and how to create a conversation.”

Advancing game conversations through safety

UG Labs: A new game design paradigm

One of the challenges in implementing voice interactivity has been the quality of conversation, speech recognition, and voice generation.

UG Labs addresses these challenges by combining technical solutions to create a safe and engaging environment for users. Game conversations adhere to predefined safety guardrails set by the game publisher (a list of do’s and don’ts), allowing the conversation to continue naturally without being scripted on either the player’s side or the game engine’s.
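UG’s actual guardrail format isn’t published. A hedged sketch of the concept (a publisher-defined check applied to every candidate reply before it reaches the player, with invented rule names and limits) could look like:

```python
# Hypothetical sketch of publisher-defined guardrails: rules that every
# candidate reply from the conversation engine must pass. These field
# names and rules are illustrative, not UG's actual API.
GUARDRAILS = {
    "forbidden_themes": ["violence", "shopping"],  # publisher "don'ts"
    "max_reply_words": 30,                         # keep replies short for kids
}

def passes_guardrails(reply: str, rails: dict) -> bool:
    lower = reply.lower()
    # Reject any reply that touches a forbidden theme.
    if any(term in lower for term in rails["forbidden_themes"]):
        return False
    # Keep replies within the publisher's length budget.
    return len(reply.split()) <= rails["max_reply_words"]

print(passes_guardrails("Let's find the hidden treasure together!", GUARDRAILS))  # True
print(passes_guardrails("Let's talk about violence.", GUARDRAILS))                # False
```

A failing reply would simply be regenerated or replaced with a neutral fallback, so the conversation continues without exposing the child to the rejected content.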

UG’s ASR technology not only understands children but also supports progression between game levels, maintaining a conversation’s memory and delivering authentic responses based on the user’s past input.

Putting voice conversations at the core of games shifts gameplay from purely touch-based interaction to a richer, more active experience. While the UG tech can be integrated into many existing categories, the UG platform will create a new genre of mobile games where voice dominates the gameplay. An adaptive and inclusive voice solution opens a whole category of new opportunities for pre-literacy and early-literacy users.


ASR that leads to better conversation design

UG’s core technology is based on proprietary ASR, trained on data collected by UG to improve quality and make it production-ready. The younger the kids are, the harder it is to understand them.

Using UG’s trained ASR models, the engine understands children and overcomes “objective” speech imperfections such as literacy level, impediments, accent, etc. This allows UG to converse with the user at any level.

The engine is also trained to “scaffold” the conversation, i.e., to lend a helping hand by repeating questions and cues and helping the child resume the conversation. It’s like a non-intrusive FTUE (first-time user experience), set to come into play during predefined moments where the user gets stuck. Scaffolding also guarantees originality in the conversation: the user’s verbal reaction leads to an authentic and unscripted response from the engine, referencing what the user said beforehand. This also applies to progression between game levels, where the conversation has memory and can echo the user’s previous answers.
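As an illustration only (the real engine presumably works on audio and model state, not plain strings, and the function below is invented), scaffolding logic along these lines could repeat a cue and reference the conversation’s memory when a child gets stuck:

```python
# Hypothetical sketch of conversational "scaffolding": if the child gives
# an empty answer at a predefined moment, repeat the cue and weave in
# something the child said earlier. Purely illustrative.
def scaffold(child_reply: str, current_cue: str, memory: list[str]) -> str:
    if child_reply.strip():
        # Child answered: remember it and move the conversation forward.
        memory.append(child_reply)
        return "Great! " + current_cue
    # Child is stuck: repeat the cue, referencing a past answer if any.
    if memory:
        return f"Earlier you said '{memory[-1]}'. {current_cue}"
    return "Let me ask again: " + current_cue

memory: list[str] = []
print(scaffold("A dragon!", "What color is it?", memory))
print(scaffold("", "What color is it?", memory))
# "Great! What color is it?" then
# "Earlier you said 'A dragon!'. What color is it?"
```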

UG’s Kids Labs and ongoing iterations

UG Labs’ anonymized solutions.

In 2022, UG launched Kids Labs in the U.S., where it tested voice-led conversational designs and experiences with young users. The insights gained from this initiative have been crucial for ongoing iterations and improvements to UG’s ASR models. The company emphasizes the importance of real-user interactions in training and optimizing models for better conversation designs.

These types of iterations are the foundation of how UG plans to continue developing new conversation designs and experiences: All models need to meet real users as soon as possible to train and optimize the models.

Kids can speak in a natural way, change their minds in the middle of a sentence, and UG Labs will meet them where their minds are. To do that, they set up a user experience research group in the United States to consult with families on the best way to use the technology with children and to train its models in a safe and private way.

The company did not retain the data from the kids, but it used it during training to create better and better models.

Leventhal said, “We built our first set of conversational models, from understanding to talking to generating text and images. That took us to the next step of using this technology and this platform to power up the next generation of games and of toys.”

“If we have the ability to not just understand kids but to adapt to their level, meet them at their conversational ability, and also to maintain a conversation with them,” then the company can create content that works for them and is personalized for them, Leventhal said.

“Let’s use that not just for ourselves, but also to help game designers and toy designers to build the next generation of interactive toys and games,” Leventhal said.

It works for people who speak English as a second language, and for those whose voices are less clear. The company is working with kidSAFE, an FTC-approved safe-harbor program, so it can be compliant with COPPA, the U.S. law that protects kids in online communications, Mendelowitz said.

Language model playgrounds and personalization

UG Labs is introducing Language Model Playgrounds as a second component of its engine. These LLM playgrounds serve as safe spaces for kids, filtering out unsuitable topics and behaviors. The technology allows UG to inject personality into the machine, defining the AI’s temperament, characteristics, tone of voice, prosody, and support level based on the user’s age and understanding.
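The article doesn’t show how a personality is actually defined. One plausible sketch, with entirely invented field names and phrasing, is composing persona traits and an age-based support level into the instructions handed to the language model:

```python
# Hypothetical sketch of "injecting personality": persona traits and an
# age-based support level composed into a model's instructions. The
# parameters and wording are invented for illustration.
def build_persona_prompt(name: str, temperament: str, tone: str, age: int) -> str:
    # Younger children get more hand-holding from the character.
    support = "lots of short, encouraging hints" if age < 7 else "occasional hints"
    return (
        f"You are {name}, a {temperament} character. "
        f"Speak in a {tone} tone, use simple words for a {age}-year-old, "
        f"and offer {support}."
    )

print(build_persona_prompt("Pip", "cheerful", "playful", 5))
```

Because the persona lives in configuration rather than in scripted dialogue, the same character can hold an unscripted conversation while staying in voice.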

UG’s LLM playgrounds not only contribute to a more engaging and personalized experience but also enable the company to avoid dependency on third-party AI tools, ensuring scalability, better service level agreements (SLAs), and lower costs.

For a character using the LLM, UG Labs can control the personality and how the character speaks in a game. It can know some of the context of the world in the game.

“We define all the guard rails,” Mendelowitz said.

UG showed a demonstration of the difference between an external LLM, set for a general audience of ages 13 and up, and UG’s playground LLM, which is suitable for users of all ages.

“Most of us have young kids and we wanted to create ways to shift them from a position where they’re passive consumers of content to a place where they are leading the game themselves or where they’re navigating it in a way that requires more creativity, more agency, using a conversational interface that allows the child to take the game to non-scripted places but in a way that is safe. It changes the relationship between the child and the game,” Leventhal said.

“We have a lot of focus on not using IP content and branded content. That way we know we don’t harm the intellectual property of other brands, large or small. To do that, you have to build your own LLM,” Leventhal said.

Mendelowitz acknowledged there is competition out there, and UG Labs worked with Nvidia’s Inception program for AI startups. The startup will work with Nvidia on solutions for toys or games. But there aren’t as many competitors focused on large language models and speech recognition for children. That requires special attention and focus, as it’s harder to recognize the words of younger kids.

“We are mobile first. We’re aiming at a mixed audience. And the very core is built on our own data generation, created with our language models that, in essence, are different from general-purpose language models,” Mendelowitz said.


