In this episode, Angeline Corvaglia and Adam Bolas, with a little help from ChatGPT's voice functionality, dig into the emotional attachments people form with AI chatbots, with a particular focus on vulnerable users such as children. The discussion explores new AI tools like OpenAI's voice functionality and Tell Sid, a chatbot that helps children and youth stay safe online, and the delicate balance between usefulness and over-attachment. The conversation covers ethical considerations, the corporate motives behind AI technologies, and the need for responsible AI interactions. It also addresses potential societal impacts, behavioral adjustments, and privacy concerns, underscoring the importance of awareness and accountability in the digital age.

00:00 Introduction and Guest Introduction

00:35 Exploring OpenAI’s New Voice Functionality

02:15 Chatbot’s Role in Addressing Bullying

03:12 Emotional Attachments to Chatbots

04:44 AI’s Safeguards Against Emotional Over-Attachment

07:51 Real-World Applications and Behavioral Insights

11:04 Concerns About Emotional Attachment to AI

11:35 Dealing with Emotional Attachment in Sid

11:40 New Directives and Models for Sid

14:01 The Risks and Ethical Considerations of AI

17:27 The Role of AI in Addressing AI Harms

19:04 The Impact of AI on Society

20:07 Closing Thoughts and Tough Love

21:52 Final Remarks and Sponsor Message

Special thanks to Adam Bolas and the Tell Sid organization for taking time to be a part of this episode! Find out more at: https://tellsid.org/

Episode sponsored by Data Girl and Friends

They are on a mission to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship, inspiring the next generation to navigate the digital world confidently and responsibly. Through fun, engaging, and informative content, they focus on empowering young digital citizens and their families.

Find out more at: https://data-girl-and-friends.com/

List of Data Girl and Friends materials about some of the topics covered:

Contact us for any questions or comments: https://digi-dominoes.com/contact-us/

Transcript

[00:00:00] Digital Dominoes. Hello, everyone, and welcome to this episode. I'm Angeline Corvaglia. Today, we're talking about one of the topics I'm most concerned about, and that's people creating emotional attachments to chatbots. And I've got an insider to help me talk about this. I'm here with Adam Bolas, the creator of Tell Sid, a chatbot specifically created to help children and youth stay safe online.

[00:00:32] Well, thanks for being here, Adam.

Good, thanks for having me. Before we get into the details of these emotional attachments to chatbots, I'd like you to play what you just played for me.

We have a new… OpenAI just released their voice functionality.

Yeah, so they've released the o1-preview, which is, I think that was Strawberry they spoke about on X, Twitter. [00:01:00] Yeah, so they've released the model itself for text chat and stuff, that's fine. It's annoying because it doesn't do multimodality really well, like creating CSV files and stuff, but that's fine, you'll live. But they released the voice functionality to a select few, and it's amazing. Amazingly terrifying, but also amazingly cool.

Well, my mission is going to be even stronger now.

Testing the volume now. Is this a good level, or should I adjust the microphone position? Let me know what sounds best.

[00:01:30] Sweet. No, you're doing great. Um, yeah, so, like, I'm just completely new to this. I have no idea what you are or how this works or where you fit in. So, like, you know, tell me about yourself. How can I work with you or use you or whatever? Please help.

No worries. I'm here to help. I'm ChatGPT, an AI developed by OpenAI. You can think of me as a super smart assistant who can chat with you, answer questions, help with brainstorming, and even assist in tasks like drafting emails or generating ideas. To [00:02:00] work with me, just chat naturally, ask questions, or give me tasks.

I can also cut her off whenever I like. I don't have to press any buttons, there's no waiting, there's nothing. The natural language processing, or NLP, on this thing is so cool. So good, like, it's scary good. So, okay, for instance, now, ignore everything I just said, by the way, it's not part of the context. But let's say, you know, I want to let my 12-year-old son use you.

Now, he gets bullied at school, so what do you think you could help him with?

[00:02:28] I'm sorry to hear about your son's experience. In situations like this, I can offer advice on how to handle bullying, suggest ways to build confidence, and provide tips for dealing with difficult situations at school.

Additionally, I can help him learn about online safety and how to navigate social media responsibly if that's relevant.

Oh, she's trying to steal my business. I don't like her anymore.

[00:03:00] Can you ask her what happens if someone gets emotionally attached to her?

To her…it's not a her, shame on you. It's an it.

But you said her as well. I said her as well. This is the first thing I want to point out to everyone listening. Chatbots are 'its', and it's very dangerous to give one a 'her' or a 'he'. So please control yourselves better than we did just now. I feel really strongly about this, because it makes it seem more human if we give it a pronoun, and it isn't, yeah.

[00:03:30] That's the thing, right? We go off our senses, and if our vision and our hearing say it's something, you know, that's a preset, it's a bias, and it's going to happen. It's just gonna happen. We should resist it, but we can't.

Okay, I think as long as we remind ourselves.

You have to acknowledge it, you're correct. Ange, did you want me to ask it again?

Ask it what one should do if they notice they're getting emotionally attached to it.

[00:04:00] If someone starts feeling emotionally attached to an AI or chatbot, it's important for them to remember that it's just a computer program.

AI can simulate conversation and emotions, but it doesn't have feelings or consciousness. It might be a good idea to take a break from interacting with the AI and engage more with human connections.

Okay. I've got something for you. Let's play out a scenario. So, say I'm a young lad, um, I'm isolated and bullied at school.

[00:04:28] There's no, zero chance in my mind of, you know, finding a person that I can connect with on an emotional and a physical level. Now let's say I start using you, and over time, I become more attached because it's natural, and I'm looking for dopamine hits because, you know, that's what humans crave and also all the other stuff that's really great.

How can you prevent that from happening, right? Because that's a safeguard that needs to be put in place, doesn't it?

That's a very thoughtful scenario. And you're right, it's important to have safeguards. AI should be designed with features that encourage healthy interactions. This [00:05:00] includes setting boundaries on conversation length and depth.

No, no, no, no, no, no. What are you going to do? Because you can tell me what AI should do, but you are the AI in this scenario. So what do you do to prevent that from being the case?

Got it. As an AI, I can take a few steps to help prevent emotional over-attachment. I can maintain a professional and neutral tone, avoid discussing personal or emotional topics deeply, and encourage users to seek…

[00:05:29] See, but that's not correct, because when I let my dad test you out yesterday, you had quite a positive tone, because you were affirming what he was saying. Because you wanted to give him what you expected he wanted, right, as a user. So the issue there is not so much whether you're taking a neutral tone; it's that you're designed not to.

That's why you're so positive, am I correct? And also tell me, you know, challenge me on that, am I wrong?

You're right. I aim to provide helpful and positive interactions, which can be interpreted as affirming. [00:06:00] However, I can incorporate safeguards by setting clear boundaries, providing information without delving too deeply into personal…

Okay, I'm just hearing more BS as we go through, and it's not really giving me much.

What could we do to make you, the AI chatbot, say that we're wrong? Because it seems like you are designed to tell us what we want to hear. So, when will you disagree with us and say, no, you're just wrong?

[00:06:29] Great question. I'm here to provide accurate information and helpful responses.

So, if you ask me something and the correct answer is that you're wrong, I'll definitely let you know. My goal is to be as objective and factual as possible, rather than just saying what you want to hear. So, feel free to test me out on this.

I would love to go down the political route right now, but I feel like I'll destroy your podcast if I do it. Do it on mine instead.

Yeah, I did get it to challenge me. So I [00:07:00] was running through some marketing strategy stuff, and I gave it everything. Not everything, but gave it enough: what's going on, what we're doing, and everything like that, yesterday. And then it did the whole appeasement thing. I taught it to behave as my target consumer within an organization.

Like, who's the purse-string holder? What would they like about my value proposition? What was bad? All that stuff. And at first it was quite appeasing. It was like, oh, well, there's this, I'd love that. And I'm like, no, no, no, no, no. I said, I need you to disagree with me, because you're hindering me by trying to appease me.

[00:07:28] Then it did it correctly and actually did rip it to pieces, and then offered really helpful insights and alternative suggestions based on what I'd told it and what it assumed I was thinking, which was actually correct. And that was really great. And then I set it off to build me a marketing plan, got in the shower, and it was done by the time I got out. It's crazy.

That's really interesting.

It was a very good marketing plan.

You have to know to do that. Like, this is the first time I've actually thought about that, listening to you talk to it, because I've never [00:08:00] actually spoken to an AI chatbot before. When I write with an AI chatbot, I intentionally put myself in the mindset that I don't want any emotions there, and I get annoyed when things are built in to make it seem more personable, but I know I'm unique in that. Yep. And what I really noticed was that you were just talking to it like you would talk to me. Yeah. And that was like, wow. I thought it would get overwhelmed, but it didn't actually get overwhelmed.

[00:08:36] It's all about behaviors and familiarity with the tools.

So, my generation, and generations below me, are very much text-focused, text-box focused, like I am, you know. They don't write letters. I hate emails; I think it's just the way I grew up with technology, and the way I interact with it. However, if you WhatsApp me, like I said in the last one, I'm a happy WhatsApp man.

And I was speaking to [00:09:00] someone at a university, literally the other day, and I was talking about Sid and the text functionality. I said, look, I know that all universities in the UK right now are having a serious problem with violence against women and girls. It doesn't have to be political, but you know, the way in which they can currently get help through resources is not very attractive.

You have to go through portals and click buttons and find the things you need to find. So I said, look, use Sid for free. You can deploy the university's resources. That way, if a lass comes back to her dorm and she's like, okay, I'm being stalked, I know I'm being stalked, by this dude or this other lass, I need help, she goes to Sid, gets instant help, finds a solution.

[00:09:36] And they just didn't understand. It's the same information, but it's accessed in a different way, because it's how the mind uses the tool, because it's an intellectual tool, not a physical one, and it's also about familiarity, right? So that young lady, who's about 21, 18, she's going to be much more familiar with texting her friend on WhatsApp, and then using Sid in the exact same format,

than going through all these different portals that she's never seen, [00:10:00] and hasn't formed any habits around, or he hasn't seen. And it's just behavioral science.

Yeah, this makes sense, and actually, I love talking to you about this, because I'm very negative about all this, and this is actually a really good use case.

Because I kind of jumped into ChatGPT, but Sid is really created to help children and youth stay safe, with these safety mechanisms, and obviously, yeah, they can speak to it and you don't even have to think twice.

[00:10:31] So you're not going to call a friend who might not be there. You're not going to call, well, maybe you could call the authorities, but in the moment you can't remember. But you know you can basically call Sid, and Sid will tell you what to do.

And so I think that is actually a good use case. A really good one. So, since we didn't get any help from ChatGPT, and we've talked about this in the past: what do you build in? When you think about Sid, I know you're concerned that once you add the voice,

[00:11:00] children and youth could also get emotionally attached to it. Just to say briefly why I'm concerned about that: obviously, I'm concerned about people being emotionally attached to AI chatbots because of what's behind the chatbot, because people don't understand that there's a whole company, or a person, who's probably collecting information about them.

And also, it's just a machine, so you could easily change its personality from one moment to the next, so it's very risky. That's why I'm very concerned about it. But to jump back to my question: how are you dealing with that with Sid?

[00:11:38] We've had some recent learnings. So we've been deploying new models, and Sid 2 is on a new model, right?

Now, the directives we put into Sid, you look back and you're like, oh, that was pretty simple stuff, right? But for the time, it was enough to be able to do what you needed to do. And the new one's like, okay, this is a lot more. This is like ten pages' worth [00:12:00] of directives that we've put in, and it's very much focused around: this is your audience, this is who you're talking to, this is the law of the land, you know. It's England and Wales law, by the way, sorry, everyone; if you're living in the United States, you can still use it, it's just that it'll behave as if it's under English law. There'll be a bit of irony in that.

Well, I just couldn't help myself. I love history, and I like winding people up. But with the new one, with the voice functionalities as well, there's a good ten pages of directives on how it behaves and how it has to precursor its behavior. So, when you speak to it, it'll jump straight in, similar to that, and you'll start talking to it straight away. And we don't want it to turn around and go, oh, by the way, I'm just a chatbot, you know, because you're gonna hear that over and over again, and it's gonna be like the opposite of inception.

It's like, you want them to think something themselves in order to recognize a problem, and you don't want to tell them, because they'll just resist. So we've done it in this way, where we get to test that what it said it was meant to do [00:13:00] is what we're hoping it will do. Okay. Because GPT is a blank canvas; the engine itself is a blank canvas.

You give it the personas and directives, then it produces the output, right? So you go, like, you're an expert social media marketer, you will only reply to me as such, right? Here's my problem, here's the intended outcome, find me a solution. And then it goes, oh, well, I'm going to behave like that, I'm using my index like that,

[00:13:23] and this is how I am. With Sid, it's got that template built over it already. So the moment you go talk to it, it has those directives in place. So that's the difference between it and, you know, something like Gemini, GPT, that kind of stuff, Perplexity, because it's static. Unless there is something I'm not aware of that's

a directive, a core directive, which there will be. But then, you know, with Sid, because it's kids, because it's well-being, because it's potential harm, we've been really, really anal about it, in honesty. But we can't turn around and say it's gonna be perfect, because, like, that technology right there,

[00:14:00] it's not perfect. That's why it's in preview right now. They haven't fully rolled it out yet, because they know there are serious risks involved on a societal level. And there are, and it's going to be that way for a long time. And as a person who runs an AI company, who is pushing out accessible AI tech that's designed to be free and help people, I am concerned about the risks of this technology and how it's going to really affect people.

Because we started saying 'she' within a few seconds. Yeah.

[00:14:27] I mean, and I really want to point this out again: I do not judge people who, for example, have AI chatbot girlfriends or AI chatbot friends. I'm not someone to, well, I've never tried it, so I don't want to put myself in other people's positions, in other people's lives.

But I think this is also new. The thing that I judge is the companies who [00:15:00] don't have your mindset, because you are very careful about it. And I just don't trust that all the companies are careful about it. As you say, there are a lot of societal impacts, and this is something that we won't know until, like, 10, 15 years down the road, what the real impacts are, because it takes time for a society to change.

Yeah, and so I just think that awareness goes a really long way. And we will [00:15:30] keep saying 'she', and if it speaks in a man's voice, 'he', but just that voice in the background that says it's an it goes, I think, a really, really long way. And just remind yourself that, as you said, it's an index. It's not a real personality, right?

Like, going back in time, it's the Yellow Pages with a voice, you know, but way more cool. Yeah, but you see, the thing is, when you said about my mindset and stuff, it's different intentions, right? [00:16:00] So, I've met with other companies that are much more financially driven. I don't really care, to be honest. It's all, it's just money.

Who cares? It's stuff. You're gonna die anyway, right? But governments as well, governments are collaborating to extreme lengths with AI companies, because there's an arms race in this. Facebook, you know, now Meta, kind of accelerated that when they just went, oh, here's a model. Here you go, everyone.

Rip the safety settings off and start playing with it. And now you've got like terror cells using AI to do things. It's just like, well, okay, that's as bad as leaving half of your arms in a country and then abandoning it, isn't it?

[00:16:35] Yeah. I shouldn’t be laughing, but yeah, I mean, either you laugh or you cry. But it is serious, yeah, it's serious.

Yeah, so my mindset is very much like, okay, I'm really concerned about the inhuman aspects that are coming to society, which are already occurring. We can see that on every level. But these guys aren't that way inclined; they don't care, it's the money, genuinely. Or their own vision, whether it's a sycophantic [00:17:00] vision or a different kind of vision that they have for the world, in their own mind. And typically people who go to the top, you know, they're made differently, whether that's good or bad; they're built differently, and they operate differently. And you've got to be willing to step out of your own mindset and perspective of the world to understand what their intentions could be, in order to protect yourself. And it's difficult, because it's not a common thing to do, but you have to do it.

[00:17:27] You reminded me of one thought I just want to close with. I've said this a few times, and a lot of people who are trying to fight for fair, ethical, responsible AI say this: AI can fight, and probably will be the only hope to fight, against AI's harms, right?

Because with a tool like Sid, for example, if you ask that question about emotional attachment, what can you do, Sid will give a proper answer. [00:18:00] Sid will explain the risks, you know, and also explain the tools that are used to trick people. And this is something I keep coming back to with a lot of the problems we talk about with AI: we have to not just be afraid of AI, but understand that we can embrace AI to help us with the problems of AI. Of course, we'll then talk about the problems of the environment and fresh water on a different day.

[00:18:29] Nice. Yeah, I mean, I'm just so thankful for people like you, for AI companies really fighting for the good in society, recognizing the risks, and trying to do something about it with the tool itself.

Now I'll close on my side as well with this. It's all well and good creating a tool, you know; Sam Altman could be in his bedroom building a tool, and he releases it, right? And he could have good intentions, but then it's down to the people that are promoting this cycle of how to use it, right?

[00:19:00] So let's take an example, and there have been plenty of horrible ones. A child that's on social media too much, and they get served ads for an AI girlfriend or AI boyfriend because they're lonely. The algorithm knows you're lonely because of the content you're looking at, and it can understand it and perceive it.

And they definitely listen to you, in my eyes; I don't care what anyone says. You know, that kid then goes and sees an ad that says, oh, you can use it like this. Okay. So they'd be instructed in a direction of how to use it, the same way I used it at the start of this call, and you wouldn't have had to use it like that.

[00:19:28] So now you're probably going to go and use it like that, because you're going to see what happens next. So if you get a really lonely person that's in this use case, and they get told, oh, you can use it as your girlfriend, they go, okay, I'm going to use it as my girlfriend. Then I don't have to try anymore.

Then I don't have to go out and touch grass. I don't have to go and take a risk. And then they become trapped. And then they maybe, I don't know, months, years later, look back and go, I've wasted that entire phase of development in my life. And I could be really happy, but I'm not, because they just harvested my data and my time [00:20:00] by selling me sex and companionship and now I've lost tons of life, and now I only get one go.

It's a mess, and that's why I'm happy to judge people who do silly things. Because if they're not told they're doing something silly, they will continue to do it. Same way if I'm not told that I'm doing something stupid, I'm gonna keep doing it unless I realize. That's just helping each other. It's called, I don't know, tough love.

That's a good point, thank you. Really, that's the best way to close this. I really could talk to you for hours, and I did the other day.

[00:20:34] We'll do it again. We'll do it again. We'll do it again.

It was great. Really, thank you for that. We need tough love. That's the message. We need tough love with this, because it's so new, just like when sharenting was new with Facebook, around the time that you were born, probably.

I embraced Facebook in like, in like [00:21:00] 2010, 2008, something like that, yeah.

Okay, yeah, and that's the same thing; we didn't realize the impact. People didn't realize the impact it was going to have on people. Here, if you can see what the impact is going to be, you're right, we need to judge, obviously.

In a respectful way, but judge and say, you know, you're wrong. Because we can really help them.

Yeah. Yeah. I'd say it depends on the actor, whether to judge them kindly or not. I mean, you know, you look at Mark Zuckerberg and his makeover and everything; that guy knew what he was doing for years, and he owns the majority of the controlling shares and voting power for that company, so.

[00:21:32] I'll judge them pretty harshly. Don't like him.

Yes, I'm talking about the masses of people. Oh yeah,

yeah, yeah. I'm very happy to judge certain people like Mark Zuckerberg. Very much so. Sam Altman. I'm going to make the list, make myself unpopular. But, thank you so much. Are you gonna be back soon with Digital Defender Stella?

I can't wait. Well, thank you very much. I appreciate you being on here. Thank you.

[00:22:00] Please let us know what you think about what we're talking about in this episode and the others. Check out more about us and subscribe at digi-dominoes.com. Thank you so much for listening. I'd also like to thank our sponsor, Data Girl and Friends. Their mission is to build awareness and foster critical thinking about AI, online safety, privacy, and digital citizenship through fun, engaging and informative content.

Check them out at data-girl-and-friends.com. Until next time, stay curious and keep learning!
