Welcome to an intriguing episode of Digital Dominoes, titled “AI Chatbots: What Can Developers Do to Protect Users Emotionally?” Our host, Angeline Corvaglia, has a sensitive and timely conversation about the emotional attachments people form with AI chatbots. Accompanying her on this journey is Adam Bolas, the mind behind Tell Sid, a chatbot dedicated to the safety of children and adolescents online.
Delving into Emotional Attachment to AI
As the episode unfolds, Angeline expresses her concerns about people developing emotional connections with chatbots. Adam, who has firsthand experience with AI through the development of Tell Sid, offers valuable insights. A playful exchange ensues when they experiment with the latest voice functionalities from OpenAI, showcasing the potential and perils of these chatbots.
Adam elaborates on the capabilities and intentions behind chatbots, emphasizing the importance of setting boundaries for healthy interactions. The conversation touches upon key issues like emotional over-attachment and the role of AI in providing companionship, especially to isolated youths.
The Dangers of Personification
Angeline and Adam venture into a crucial discussion on the human tendency to personify AI, evidenced by their own inadvertent use of the pronouns “he” and “she” for these machines. This, they argue, can reinforce the illusion of humanity in AI, making emotional attachments more probable and potentially more harmful. They advocate for conscious awareness: a mindset that acknowledges chatbots for what they are, sophisticated programs lacking emotions or consciousness.
AI: A Double-Edged Sword
Throughout the conversation, Adam shares insights from his experience deploying chatbots like Tell Sid, aimed at promoting safe and responsible AI interactions. He highlights the complexity and responsibility entailed in crafting AI that respects boundaries and fosters well-being without crossing emotional lines.
Angeline voices her apprehensions about AI companies’ intentions, distinguishing between those like Adam’s that prioritize societal good, and others that might exploit emotional attachments for profit. The episode underscores the need for ethical considerations and societal awareness as we navigate these digital waters.
Hope from Within the System
The episode wraps up with a reflective dialogue on the potential of AI as a tool to counteract its own risks. By incorporating safeguards and educating users about potential pitfalls, AI can be harnessed to protect and empower rather than harm. Angeline acknowledges the essential role of ethical AI companies and individuals who strive to address these challenges.
Conclusion: A Call for Tough Love
Angeline and Adam leave listeners with a powerful message: in the evolving digital landscape, tough love is necessary. We must call out and address practices that steer people toward unhealthy attachments, especially as these technologies become more pervasive. With responsible guidance and informed choices about the bonds people form with AI chatbots, we can leverage AI’s capabilities while mitigating its risks, ensuring a safer and more constructive digital future for all.
Join us for more insightful discussions as we continue to explore the multifaceted world of AI in upcoming episodes. Stay curious and keep learning!
More materials on these topics
Follow Adam Bolas on LinkedIn, and get more information about Tell Sid and Data Girl and Friends.
List of Data Girl and Friends materials about some of the topics covered:
- Article for young teens about the topics discussed in this episode: https://data-girl-and-friends.com/in-school-how-to-make-sure-ai-assists-not-replaces/
- Educational workbook, What is AI?
- Educational workbook, Algorithm Insights Adventure
- Educational video, What does AI understand?
Educational materials about various aspects of digital citizenship:
- Stories: Podcast Bytes of Digital Adventures
- Educational videos for teens: Digital Navigators
- Educational videos for younger children: Discovery Squad