I compared Sesame to ChatGPT voice mode and I’m unnerved

Trying the new voice assistant from AI startup Sesame was the first time I momentarily forgot I was talking to a bot.

Compared to ChatGPT’s voice mode, Sesame’s “conversational voice” feels natural, unforced, and engaging, which totally freaked me out.

On Feb. 27, Sesame launched a demo for its Conversational Speech Model (CSM), which aims to create more meaningful interactions with AI chatbots. “We are creating conversational partners that do not just process requests; they engage in genuine dialogue that builds confidence and trust over time,” the announcement states. “In doing so, we hope to realize the untapped potential of voice as the ultimate interface for instruction and understanding.” 

Sesame’s voice assistant is available as a free demo on the site and comes in two voices: Maya and Miles.

Since Sesame unleashed its voice assistant demo, users have reported awestruck reactions. “I’ve been into AI since I was a child, but this is the first time I’ve experienced something that made me definitively feel like we had arrived,” user SOCSchamp wrote on Reddit.

“Sesame is about as close to indistinguishable from a human that I’ve ever experienced in a conversational AI,” user Siciliano777 wrote on Reddit.

After talking to Sesame’s bot, I was similarly wowed. I talked to the Maya voice for about 10 minutes about the ethics of using AI as a companion and came away feeling like I had a genuine conversation with a considerate, informed person. Maya’s speech had a natural cadence, using interjections like “you know” and “hm,” and even making tongue clicking and inhaling sounds.

The strongest impression I got from interacting with Maya was how quickly she asked questions of her own, drawing me into the conversation. The bot started our conversation by asking how my Wednesday morning was going (note: it was indeed a Wednesday morning). In contrast, ChatGPT voice mode waited for me to talk first, which isn’t necessarily good or bad, but it framed the interaction as me using ChatGPT as a tool for something I needed.

Maya asked about the risks of AI companions getting “too good at being human.” When I told her I was concerned about the rise of more sophisticated scams and people losing touch with reality by replacing humans with bots, she responded thoughtfully and pragmatically. “Scammers are gonna scam, that’s a given. And as for the human connection thing, maybe we need to learn how to be better companions, not replacements, you know, the kind of AI friends who actually make you want to go out and do stuff with real people,” said Maya.

When I had a similar conversation with ChatGPT, I received a response that felt more like boilerplate language from a school guidance counselor: “That’s a valid concern. It’s really important to balance technology with real human interactions. AI can be a helpful tool, but it shouldn’t replace genuine human connections. It’s good that you’re thinking about these issues.”

While OpenAI pioneered voice mode’s ability to be interrupted and have a more fluid back-and-forth conversation, ChatGPT still tends to respond in complete sentences and paragraph blocks, which sounds, well, robotic. When using ChatGPT voice mode, I never forget that I’m talking to a bot, and that’s reflected in the conversation, which can feel stilted and forced.

By comparison, AI for Humans podcast co-host Gavin Purcell posted a Sesame conversation on Reddit in which it’s practically impossible to tell which voice is the bot. Purcell prompted the Miles voice by telling it to act like an angry boss.

A very silly conversation followed about money laundering, bribery, and a mysterious incident in Malta. Miles didn’t miss a beat. There was no perceptible latency, and the bot remembered the context of the conversation and creatively advanced the improvised argument by escalating, calling Purcell “delusional,” and firing him.

Of course, there are some limitations. Maya’s voice glitched a few times throughout our conversation, and the bot didn’t always get the syntax right, like saying, “It’s a heavy talk that come.”

According to its technical paper, Sesame trained its CSM (based on Meta’s Llama model) by combining the traditional two-step process of training text-to-speech models on semantic tokens and then acoustic tokens into a single multimodal stage, which decreases latency. OpenAI used a similar multimodal approach to train voice mode, but it has never released a dedicated technical paper on voice mode’s inner workings; it discusses voice mode only in its GPT-4o research.
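
To make that architectural difference concrete, here is a minimal sketch in Python. Everything in it, from the stub model to the token handling, is a hypothetical stand-in rather than Sesame’s or OpenAI’s actual code; it only illustrates why collapsing two sequential passes into one streaming pass cuts latency.

```python
"""A rough, self-contained sketch contrasting the traditional two-stage
text-to-speech pipeline with a single-stage model along the lines of
what Sesame's paper describes. Every class, token format, and number
here is a hypothetical stand-in, not Sesame's or OpenAI's actual code."""

from typing import Iterator


class StubModel:
    """Placeholder for a neural network; "inference" is just arithmetic."""

    def generate(self, tokens: list[int]) -> list[int]:
        return [t + 1 for t in tokens]


def vocoder_decode(acoustic_tokens: list[int]) -> bytes:
    """Placeholder vocoder turning acoustic tokens into audio bytes."""
    return bytes(t % 256 for t in acoustic_tokens)


def two_stage_tts(text_tokens: list[int]) -> bytes:
    # Stage 1: text -> semantic tokens (roughly, *what* to say).
    semantic = StubModel().generate(text_tokens)
    # Stage 2: semantic -> acoustic tokens (roughly, *how* it sounds).
    # No audio can be decoded until both full passes finish, which is
    # where the extra latency in the traditional pipeline comes from.
    acoustic = StubModel().generate(semantic)
    return vocoder_decode(acoustic)


def single_stage_tts(text_tokens: list[int]) -> Iterator[bytes]:
    # A single multimodal backbone predicts acoustic tokens directly
    # from text (and, in a real system, the conversation history), so
    # audio chunks can stream out as each token is produced.
    model = StubModel()
    for token in model.generate(text_tokens):
        yield vocoder_decode([token])


if __name__ == "__main__":
    fake_text = [101, 102, 103]
    print(two_stage_tts(fake_text))            # waits for both stages
    for chunk in single_stage_tts(fake_text):  # streams chunk by chunk
        print(chunk)
```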

Given how similar the two approaches are, it’s surprising how much better Sesame’s model is at conversational dialogue. However, Sesame’s launch is just a demo, so the model merits further scrutiny when the full version comes out. According to the demo announcement, Sesame plans to open-source its model “in the coming months” and expand to over 20 languages.




