Converse with ChatGPT: The AI that now speaks your language

Users experimenting with the new ChatGPT voice assistant are left amused and concerned by its capabilities. Picture: Freepik


Published Oct 1, 2024


Four months after OpenAI first announced the feature at a product launch event, the company has now released the highly anticipated ChatGPT voice assistant to all its subscribed users.

ChatGPT Plus subscribers and users of its business tier, ChatGPT Team, can now access the ‘advanced voice mode’ feature, which allows users to have a natural voice conversation with the artificial intelligence tool.

This new feature means you can now use your voice to hold a back-and-forth conversation with the AI assistant, and it will quickly reply to your prompts aloud as well, creating a human-like interaction.

According to OpenAI, the ChatGPT voice feature is better at understanding accents across 50 different languages, and the company says its conversations are smoother and faster as well.

Users can ask the ChatGPT voice assistant a question and even interrupt it while it’s answering, creating “real-time” responsiveness. OpenAI says the model can even pick up on emotions in a user’s voice, and respond in “a range of different emotive styles”.

“We know that these models are getting more and more complex, but we want the experience of interaction to actually become more natural, easy, and for you not to focus on the UI at all, but just focus on the collaboration with ChatGPT,” said OpenAI CTO, Mira Murati.

The artificial intelligence company also said that it has included five new voices in its rollout, namely Arbor, Maple, Sol, Spruce, and Vale, bringing the total number of voices to nine, alongside the existing Breeze, Juniper, Cove, and Ember.

“We collaborated with professional voice actors to create each of the voices. We also use Whisper, our open-source speech recognition system, to transcribe your spoken words into text,” says OpenAI.
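For readers curious about what that transcription step looks like, here is a minimal sketch using the open-source whisper Python package OpenAI refers to; the model size ("base") and the audio file name are illustrative assumptions, not details of how OpenAI's hosted voice mode is wired up.

```python
# Minimal sketch: speech-to-text with the open-source Whisper package.
# Assumes `pip install openai-whisper` and a local audio file "voice_note.mp3"
# (both are illustrative assumptions, not part of OpenAI's hosted service).
import whisper

model = whisper.load_model("base")           # small general-purpose model
result = model.transcribe("voice_note.mp3")  # detects the language and transcribes
print(result["text"])                        # the spoken words as plain text
```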

After first teasing the feature in May this year, OpenAI delayed its launch, saying it needed to work through potential safety issues. Since then, the company says it has added new filters to ensure the software can spot and refuse certain requests to generate music or other forms of copyrighted audio.

As usual, some users have taken to social media to share their experiences with the new ChatGPT feature, asking the chatbot to speak in different languages and dialects. It has left many amused and slightly concerned about the evolving capabilities of AI.

@damipepe: I don’t know whose pidgin is worse, me or Cove 😂. But I was shocked! I didn’t wake up this morning expecting to have a conversation with AI in Pidgin English 😂. ChatGPT has done it again! #minivlog #chitchatwithdamipe #chatgpt #pidgin
@justinemfulama: 😂😂😂 #chatgpt #lingala #congolaise🇨🇩 #congo

Users who have updated to the latest version of the ChatGPT app will know they have been given access to the new feature when a pop-up message appears next to the Voice Mode option within the app.

A week after OpenAI unveiled this feature in May, its competitor Google announced a similar conversational voice assistant called Gemini Live.

Google made Gemini Live available for free to all Android users earlier this month, so if you have an Android device and want to experience this type of AI assistant, you may not need to subscribe to ChatGPT Plus.

IOL