ChatGPT can now speak, listen, and process images

OpenAI has announced that its ChatGPT chatbot can now “see, hear, and speak.” This means that ChatGPT can understand spoken words, respond in a synthetic voice, and process images. The update is the largest since the introduction of GPT-4 and will roll out to paid users within the next two weeks.

Voice functionality will be limited to the iOS and Android apps, but image capabilities will be available across all platforms. The update comes as tech giants compete to build new AI-powered features into their chatbots.

Experts have expressed concern that synthetic voices could be used to create deepfakes, but OpenAI says its synthetic voices were not scraped from unwitting speakers; they were created with voice actors the company worked with directly.

OpenAI has also stated that it does not store audio recordings and that the recordings are not used to improve its AI models.
