In this episode, Angeline Corvaglia and Adam Bolas, with a little help from ChatGPT’s voice functionality, delve into the emotional attachments people form with AI chatbots, focusing especially on vulnerable users such as children. The discussion explores new AI tools like OpenAI’s voice functionality and Tell Sid, a chatbot built to help children and young people stay safe, emphasizing the delicate balance between useful functionality and unhealthy over-attachment. The conversation covers ethical considerations, the corporate motives behind AI technologies, and the necessity of responsible AI interactions. It also addresses potential societal impacts, behavioral adjustments, and privacy concerns, highlighting the need for awareness and accountability in the digital age.
00:00 Introduction and Guest Introduction
00:35 Exploring OpenAI’s New Voice Functionality
02:15 Chatbot’s Role in Addressing Bullying
03:12 Emotional Attachments to Chatbots
04:44 AI’s Safeguards Against Emotional Over-Attachment
07:51 Real-World Applications and Behavioral Insights
11:04 Concerns About Emotional Attachment to AI
11:35 Dealing with Emotional Attachment in Sid
11:40 New Directives and Models for Sid
14:01 The Risks and Ethical Considerations of AI
17:27 The Role of AI in Addressing AI Harms
19:04 The Impact of AI on Society
20:07 Closing Thoughts and Tough Love
21:52 Final Remarks and Sponsor Message
Special thanks to
Adam Bolas and the Tell Sid organization for taking the time to be a part of this episode! Find out more at: https://tellsid.org/
Episode sponsored by Data Girl and Friends
They are on a mission to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship. Through fun, engaging, and informative content, they empower young digital citizens and their families, inspiring the next generation to navigate the digital world confidently and responsibly.
Find out more at: https://data-girl-and-friends.com/
List of Data Girl and Friends materials about some of the topics covered:
- Article for young teens about the topics discussed in this episode: Lost in the Digital World: The Dangers of AI Attachments – Data Girl and Friends
- Educational workbook, “What is AI?”
- Educational workbook, “Algorithm Insights Adventure” https://data-girl-and-friends.pagetiger.com/algorithm-insights-adventure/1
- Educational video, “What does AI understand?” https://vimeo.com/916925937
Contact us for any questions or comments: https://digi-dominoes.com/contact-us/
Transcript
Good, thanks for having me.

Before we get into the details of these emotional attachments to chatbots, I would like for you to play what you just played for me.
We have a new… OpenAI just released their voice functionality.
…spoke about on, on X, Twitter.

Well, my mission is going to be even stronger now.
Testing the volume now. Is this a good level, or should I adjust the microphone position? Let me know what sounds best.
…ails or generating ideas. To…

I can also cut her off whenever I like. I don't have to press any buttons, there's no waiting, there's nothing. The natural language processing, or NLP, on this thing is so cool. So good, like, it's, it's scary good. So, okay, so for instance, now, ignore everything I just said, by the way. It's not part of the context. But let's say, you know, I want to let my, you know, 12-year-old son use you.
Now, he gets bullied at school, so what do you think you could help him with?
…Additionally, I can help him learn about online safety and how to navigate social media responsibly if that's relevant.
Oh, she's trying to steal my business. I don't like her anymore.
To her… it's not a her, shame on you. It's an it.
But you said her as well. I said her as well. This is the first thing I want to point out to everyone listening. Chatbots are "it"s, and it's very dangerous to give one a "her" or a "he". So please control yourselves better than we did just now. I feel really strongly about this, because it makes it seem more human if we give it a pronoun, isn't it? Yeah.
Okay, I think as long as we, we remind ourselves.
You have to acknowledge it, you're correct. Al, did you want me to ask, ask it again?
Ask it what one should do if one notices that they're getting emotionally attached to it.
AI can simulate conversation and emotions, but it doesn't have feelings or consciousness. It might be a good idea to take a break from interacting with the AI and engage more with human connections.
Okay. I've got something for you. Let's play out a scenario. So, say I'm a young lad, um, I'm isolated and bullied at school.
How can you prevent that from happening, right? Because that's a safeguard that needs to be put in place, doesn't it?
…encourage healthy interactions. This…

No, no, no, no, no, no. What are you going to do? Because you can tell me what AI should do, but you are the AI in this scenario. So what do you do to prevent that from being the case?
Got it. As an AI, I can take a few steps to help prevent emotional over-attachment. I can maintain a professional and neutral tone, avoid discussing personal or emotional topics deeply, and encourage users to seek…
That's why you're so positive, am I correct? And also tell me, you know, challenge me on that, am I wrong?
…be interpreted as affirming.

Okay. I'm just, I'm just hearing more BS as we go through, and it's not really giving me much.
What could we do to make you, AI chatbot, say that we're wrong? Because it seems like you are designed to tell us what we want to hear. So, when will you disagree with us and say, no, you're just wrong?
So, if you ask me something and the correct answer is that you're wrong, I'll definitely let you know. My goal is to be as objective and factual as possible, rather than just saying what you want to hear. So, feel free to test me out on this.
I would love to go down the political ramp right now, but I feel like I'll destroy your podcast if I do it.

Do it on mine instead.
…get it to challenge me, so I…

Like, who's the purse-string holder? What would they like about my value proposition? What was bad? All that stuff. And at first it was quite appeasing. It's like, oh, well, there's this, I'd love that. And I'm like, no, no, no, no, no. I said, I need you to disagree with me, because you're hindering me by trying to appease me.
That's really interesting.
It was a very good marketing plan.
…talk to it because I've never…

So, my generation, and generations below me, are very much text focused, like text box focused, like I am, you know. They don't write letters. I hate emails because of, I think, just the way I grew up with technology and the way I interact with it. However, if you WhatsApp me, like I said in the last one, I’m a happy WhatsApp man.
And I was speaking to…

You have to go through portals and click buttons and find the things you need to find. And so I said, look, use Sid for free. You can deploy the university's resources. That way, if a lass comes back to her dorm and she's like, okay, I'm being stalked. I know I'm being stalked, by this dude or this other lass. I need help. She goes to Sid, instant help, finds a solution.
…words that she's never seen…

Yeah, this makes sense, and actually, I love talking to you about this, because I'm very negative about all this, and, and this is actually a really good use case.
Because I kind of jumped straight into ChatGPT, but Sid is really created to help children and youth stay safe. These safety mechanisms, and obviously, yeah, if they can speak to it, you don't even have to think twice.
And so I think that is actually a good use case. A really good one. So, since we didn't get any help from ChatGPT… We've talked about this in the past: what do you build in? When you think about Sid, I know you're concerned about once you add the voice…
And also, it's just a machine, so you could easily change its, its personality from one moment to the next, so it's very risky. That's why I'm very concerned about it. Just to jump back to my question: so, how are you dealing with that with Sid?
…pages' worth…

Well, I just couldn't help myself. I love history, and I like winding people up. But with the new one, with the voice functionality as well, there's a good ten pages of directives on how it behaves and how it has to preface its behavior. So, when you speak to it, it'll, like, jump straight in, similar to that, and you'll start talking to it straight away. And we don't want it to turn around and go, oh, by the way, I'm just a chatbot, you know, because you're gonna hear that over and over again, and it's gonna be like the opposite of inception.
…it said it was meant to do…

You give it the personas and directives, then it produces the output, right? So you go like, you're an expert social media marketer. You will only reply to me as such, right? Here's my problem. Here's the intended outcome. Find me a solution. And then it goes, oh, well, I'm going to behave like that. I'm using my index like that…
That's a directive, a core directive. Which there will be, but then, you know, with Sid, because it's kids, because it's well-being, because it's potential harm, we've been really, really anal about it, in all honesty. But we can't turn around and say it's gonna be perfect, because, like, that technology right there…
Because we started saying "she" within a few seconds. Yeah.
…I judge is the companies who…

…really long way, and we will…

…different intentions, right?

Who cares? It's stuff. You're gonna die anyway, right? But then governments as well, governments are collaborating to extreme lengths with AI companies, because there's a, there's an arms race in this. Facebook, you know, now Meta, kind of accelerated that when they just went, oh, here's a model. Here you go, everyone.
Rip the safety settings off and start playing with it. And now you've got like terror cells using AI to do things. It's just like, well, okay, that's as bad as leaving half of your arms in a country and then abandoning it, isn't it?
…whether it's a sycophantic…

…and will give a proper answer.

Now I'll close on my side as well with this. It's, it's, it's all well and good, like, creating a tool, you know. Sam Altman could be in his bedroom building a tool and he releases it, right? And he could have good intentions, but then it's down to the people that are promoting this cycle of how to use it, right?
And they definitely listen to you, in my eyes. I don't care what anyone says. You know, that kid then goes, sees an ad that goes, oh, you can use it like this. Okay. So they'd be instructed then in a direction of how to use it, the same way I used it at the start of this call, and you wouldn't have had to use it like that.
…harvested my data and my time…

It's a mess, and that's why I'm happy to judge people who do silly things. Because if they're not told they're doing something silly, they will continue to do it. Same way, if I'm not told that I'm doing something stupid, I'm gonna keep doing it unless I realize. That's just helping each other. It's called, I don't know, tough love.
That's a good point, thank you. Really, that's, that's the best way to close this. I really could talk to you for hours, and I did the other day.
It was great. Really, thank you for that. We need tough love. That's the message. We need tough love with this, because it's so new, just like when sharenting was new with Facebook, around the time that you were born, probably.
…joined Facebook in like, in like…

Okay, yeah, it wasn't, and that's the same thing; we didn't realize the impact. We, people didn't realize the impact it was going to have on people. Here, if you see what the impact is going to be, you're right, we, we need to judge, obviously.
In a respectful way, but judge and say, you know, you're wrong. Because we can really help them.
Yeah. Yeah. I'd say it depends on the actor, whether you judge them kindly or not. I mean, you know, you look at Mark Zuckerberg and his makeover and everything, that guy knew what he was doing for years, um, and he owns the majority of the controlling shares and voting power for that company, so.
Yes, I'm talking about the masses of people.

Oh yeah, yeah, yeah. I'm very happy to judge certain people like Mark Zuckerberg. Very much so. Sam Altman. I'm going to make the list, make myself unpopular. But, thank you so much. Are you gonna be back soon with, with Digital Defender Stella?
I can't wait. Well, thank you very much. I appreciate you being on here. Thank you.
…Check them out at data-girl-and-friends.com. Until next time, stay curious and keep learning!