Think outside the bot: Emotive AI will come to redefine the customer experience


Emotive AI (or Emotion AI) is a branch of GenAI that learns to sense, and eventually to prompt, human emotions using tools such as voice AI, facial motion detection and analysis, natural language processing and sentiment analysis. Emotive AI narrows the interaction gap between humans and machines. By doing this, companies hope to build AI that can respond to customer emotions effectively – and, in time, prompt them.
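To make one of those building blocks concrete, the sketch below runs text sentiment analysis with the open-source Hugging Face transformers pipeline. It is a minimal illustration only: the model choice and sample customer messages are assumptions, not anything drawn from the companies featured in this article.

```python
# Minimal sketch of the text-sentiment component of Emotive AI, using the
# Hugging Face `transformers` pipeline. Model choice and sample messages
# are illustrative assumptions, not taken from the article.
from transformers import pipeline

# Load a general-purpose English sentiment classifier.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

messages = [
    "I've been on hold for 40 minutes and still have no answer.",
    "Thanks, that sorted my tax question in seconds!",
]

for text, result in zip(messages, classifier(messages)):
    # Each result carries a label (POSITIVE/NEGATIVE) and a confidence score,
    # which a customer-service system could use to adjust tone or escalate.
    print(f"{result['label']:>8}  {result['score']:.2f}  {text}")
```

In a full Emotive AI stack, a classifier like this would sit alongside voice prosody and facial analysis models, with their signals combined before a response is generated.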


Nayan Jain, Executive Director of AI at digital design studio ustwo, explains how developments in GenAI are reframing expectations of Emotive AI and vice versa.

“The first thing that we see a lot of clients ask for is around a chatbot or conversational AI. They want to know how to make it more approachable, more usable. How do they increase adoption? It’s scary to some employees, especially where they think that it might replace what they’re doing, and replace them. Internally, we believe that it’s going to replace the work and free up individuals to be more creative and spend time on higher-order tasks.”

Nayan Jain, ustwo

Jain also thinks that GPT-4o is a gamechanger in terms of emotional impact.  

“There are a few subtle differences between GPT-4 and GPT-4o that have a big impact on making the model more capable, conversational and emotive,” he says. “The latency has dropped. It can respond, on average, in around 300 milliseconds, which is similar to a typical human conversation. This is 17 times faster than GPT-4.

“Translation has become a first-class feature, so it can understand multiple languages and multiple speakers. The model has moved away from turn-by-turn question and response to be more dialogue-based. It can also differentiate between multiple speakers and join a much larger conversation and really be multimodal.”
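As a rough, hedged illustration of the latency point in the quote above, the sketch below times a single text round trip to GPT-4o using OpenAI's Python client. Real voice-mode latency is measured end to end on audio streams, so this only approximates the idea; the prompt is invented and an OPENAI_API_KEY environment variable is assumed.

```python
# Rough sketch: timing one text round trip to GPT-4o with the OpenAI
# Python client. Voice-mode latency is measured end to end on audio,
# so this only approximates the point made in the quote above.
# Assumes OPENAI_API_KEY is set in the environment.
import time
from openai import OpenAI

client = OpenAI()

start = time.perf_counter()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "In one sentence, what is emotive AI?"}],
)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Round trip took ~{elapsed_ms:.0f} ms")
print(response.choices[0].message.content)
```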

Jain also makes the point that as AI becomes more mobile, it becomes more emotive.

“Even just giving the AI agent a name makes more of a connection. I don’t mean a name like Siri or Alexa, where it’s tied to a device. This can show up on any device as you would talk to any person. I can text someone from my computer, my phone, my wallet and I can do the same now with AI as well. The AI itself is becoming more mobile.”

Intuit case study in Emotive AI 

Intuit has been in the AI and data space for more than a decade now. Joe Preston, VP of Product and Design, explains how the breakthroughs in GenAI two years ago opened up a whole new world of possibilities.

“In late 2022, when we started to see large-scale releases of some of these applications, like GPT-3.5, we saw this as a once-in-a-generation moment to reimagine our customer experiences in a completely new way, where the old interaction paradigms could be broken.”

It’s precisely because Intuit has been in the data space for such a long time that it was able to launch a digital assistant, Intuit Assist, across many of its products last year, including TurboTax, Credit Karma, QuickBooks and Mailchimp. Assist was developed using Intuit’s proprietary generative AI operating system, GenOS. Emotive AI is embedded in the concept because, from the outset, the goal was for users of Intuit products not to realise that the assistant answering their tax questions or helping them with accounting queries is powered by AI.

Joe Preston, Intuit

“I spent the majority of last year being part of a mission team to launch Intuit’s version of a copilot across all of our products simultaneously in six months. That was quite the ride. But in the short window of time since, the progression has been incredible.”

What Intuit’s treasure trove of data has enabled it to do is provide a hyper-personalised experience. So personalised, in fact, that it doesn’t always feel like you’re interacting with an AI.

“It’s used in various contexts to solve different use cases,” Preston explains. “For consumer finance products like Credit Karma, it’s largely used to recommend how to improve your credit score, for example, in a very personalised manner. It gets to know you. It knows your data, if you’ve agreed to share it, and it learns your preferences, so it’s very much a personalised experience.

“In the TurboTax product the personalisation is different, because people doing their taxes are full of fear, uncertainty and doubt moments, and financial and tax literacy vary greatly from one user to another, so the context there is really different. It’s super advantageous for us to have this ability now to tailor information and questions personally. The same answer can be spoken to different people in a completely different manner. We use voice, and we’re using all the modalities available on all the licensable models on the market.”

Governance and ethics 

Any organisation building a GenAI model has a responsibility to consider not just the privacy of its customers’ data but also the quality of the data feeding the model and the ethics of likely outcomes. If your goal is to build an AI so emotive that users don’t realise they’re interacting with AI, that responsibility is even greater.

How does Intuit bake ethics and integrity into its LLMs?

“We have a cross-functional team for responsible AI practices and governance,” Preston says. “We have a set of principles, and we had to train all 18,000 employees on them. If you want access to use the latest ChatGPT model in a customer-facing experiment, even at the smallest scale, you have to go through a lengthy approval process. To scale beyond that, you have to do it repeatedly. It’s quite cumbersome.

“But at the same time, we all acknowledge there’s a need there. Hallucinations were the first challenge, and then latency was almost a deal-breaker. With every millisecond of latency, you lose the customer’s confidence.

“We have a long history of data and privacy stewardship. I have to go through required training on data and privacy every year, and every single employee does. It’s really important to our company. If I think about the single biggest foundational element of our brand, it is that.

“We do spend a considerable amount of time cleaning the data, then pre-training and training even some of these off-the-shelf models with our own data, and then we have to go through a series of fine-tuning [iterations]. It’s a lengthy process, and it’s a big investment.”
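Intuit’s GenOS pipeline is proprietary, so the sketch below is only a generic illustration of the kind of workflow Preston describes: uploading cleaned, in-house examples and running a fine-tuning job against an off-the-shelf model via OpenAI’s fine-tuning API. The file name, base model and data are placeholders.

```python
# Generic illustration of fine-tuning an off-the-shelf model on cleaned,
# in-house examples via OpenAI's fine-tuning API. This is not Intuit's
# GenOS pipeline; file name, base model and data are placeholders.
from openai import OpenAI

client = OpenAI()

# 1. Upload a cleaned JSONL file of prompt/response training examples.
training_file = client.files.create(
    file=open("cleaned_support_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tuning job against a base model.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4o-mini-2024-07-18",
)

# 3. Check progress; in practice each iteration is evaluated and repeated,
#    which is what makes the process lengthy and expensive.
print(client.fine_tuning.jobs.retrieve(job.id).status)
```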

What Intuit has already done in terms of integrating Intuit Assist is impressive, and although not all users love it (Preston acknowledges mixed reviews from the US tax season), the majority of the feedback has been broadly positive. However, he emphasises that these models are still at an early stage.

“We’re not touching things anywhere near what you’re seeing on the bleeding edge of this stuff,” he says. “The spaces that we’re in are highly regulated in many cases by the government. We’re dealing with personal or business finances so we don’t have a lot of room to negotiate in terms of accuracy. Even in terms of emotion we’re trying to use it where we think it’s practical to remove someone’s fear, uncertainty or doubt moments in helping them make a financial decision.”

The early-days, experimental nature of Emotive AI is part of what worries regulators and policymakers. Companies of all sorts are deploying models that they acknowledge are very much works in progress. They will improve by means of user feedback, but until they do, who pays the price for inaccuracies communicated in interactions that will feel progressively more human-like?

It’s worth remembering that when Sam Altman approached Scarlett Johansson for permission to reproduce her voice for the GPT-4o launch, one of the reasons he gave her was that he thought her voice would be trusted by potential consumers unsure of whether they wanted AI-enabled devices in their homes and lives. Altman was trying to use Emotive AI to prompt trust without having first earned it.

That sorry episode highlighted the importance of safety and governance. How can we ensure that these considerations are paramount in the race for competitive gain? Nayan Jain thinks brands like Intuit will be reluctant to take risks with their reputations. After all, trust arrives on foot and leaves on horseback. 

“While regulation catches up to the pace of innovation — a major challenge for governments and regulatory bodies — the way brands approach governance and safety is another point of differentiation. Brands want to stay out of headlines for things like hallucinating agents and failed AI implementations, which point to poor testing and lack of an internal company optimisation function for AI.”

As the development of Emotive AI speeds up, and regulation invariably moves at its traditional glacial pace, it looks like brand reputation might be the most efficient means of protecting people from some of GenAI’s less appealing outcomes. 
