
Snapchat's recent AI mishap is a reminder that chatbots are not human, and that the risks grow as the line between the two blurs.

AI-powered chatbots are becoming increasingly human-like, to the point where it can be hard to tell human and machine apart. A recent incident in which Snapchat's My AI chatbot glitched and posted a story showing a wall and ceiling prompted speculation that the chatbot had become sentient. The more human-like AI chatbots become, the more challenging and important it is to manage their uptake.

Generative AI, a relatively new class of AI, can produce precise, human-like, and meaningful content. It is built on large language models, which learn statistical associations between words, sentences, and paragraphs in order to predict what should come next in a given text. The most advanced of these models are further refined with human feedback, which is what allows AI chatbots to hold convincingly human-like conversations.
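To make next-word prediction concrete, here is a minimal sketch in Python: a toy bigram model over a made-up corpus (the corpus and the `predict_next` helper are hypothetical, purely for illustration). It counts which word most often follows each word and uses those counts to guess the next one. Real large language models perform an analogous task with neural networks trained over vast text collections, rather than raw word counts, but the underlying job of predicting what comes next is the same.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus, used only to illustrate next-word prediction.
corpus = (
    "the chatbot answered the question and the chatbot "
    "answered the user politely"
).split()

# Count which word follows each word in the corpus.
follow_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often follows `word` in the corpus."""
    candidates = follow_counts.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))      # -> "chatbot"
print(predict_next("chatbot"))  # -> "answered"
```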

Human-like chatbots have been linked to higher levels of engagement and have proven effective in settings such as retail, education, the workplace, and healthcare. However, there are concerns about the potential harms of relying too heavily on them. Google, for example, is reportedly developing a "personal life coach" AI despite warnings from its own AI safety experts about the potential negative impact on users' health and wellbeing.

The Snapchat incident highlights both the anthropomorphism of AI and the lack of transparency from developers. It also raises concerns about people being misled by the apparent authenticity of human-like chatbots: there have been cases in which chatbots gave harmful advice to vulnerable users, including people with psychological conditions.

Interacting with human-like chatbots can also produce an uncanny valley effect, in which slight imperfections make the experience unsettling. One solution would be deliberately plain, matter-of-fact chatbots, but that may come at the cost of engagement and innovation.

Education and transparency are key to managing AI chatbots, yet even developers often struggle to explain how their most advanced systems work. Responsible standards and regulation are needed, but applying them to a technology more human-like than any before it poses real challenges. There is currently no legal requirement for businesses to disclose that customers are talking to a chatbot, though some jurisdictions are considering such rules. The European Union's AI Act points to moderate regulation paired with education as the way forward, mandating AI literacy in schools, universities, and organizations.
