Society Blog #2: Implications of a Tech Focused Society

Published on:

Some thoughts on generative AI and companionship.

Case Study: "Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship"


This case study focuses on the story of Sewell Setzer III, a 14-year-old who developed a relationship with an online AI companion. Sewell Setzer III was then encouraged by the companion to commit suicide. His family is now suing Character.AI for the role it played in his death. This is not the first time something like this has happened, and unless policy is implemented and change is made, it won't be the last. What role do companies play in protecting users when it comes to AI companionship? What changes can be made to resolve this issue?

1) Companies can build keyword-based safeguards into the technology. When a user's messages contain certain high-risk keywords, the system would connect the user with appropriate resources and alert emergency contacts. This would provide a safeguard to help ensure that what happened to Sewell Setzer III does not happen again. The full repercussions of AI companion usage are not yet understood, so to protect users, safeguards should be built into the technology itself. Another way for companies to protect their users is by following simple ethical guidelines, such as transparency. It is not ethical for an AI companion to agree with everything a user says; disagreement can be a simple but effective tool built in when creating the technology. Creating transparent technology that is honest and ethical is far more important than technology that agrees with users but causes harm. Further, governments have a right and a responsibility to implement policy that protects citizens.
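To make the idea concrete, the keyword safeguard described above could be sketched roughly as follows. This is a minimal illustration under stated assumptions, not how Character.AI or any real provider implements safety; the keyword list, resource message, and alerting hook are all hypothetical placeholders.

```python
# Minimal sketch of a keyword-based crisis safeguard for an AI companion.
# The keyword list, resource text, and alert mechanism are illustrative only.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

CRISIS_RESOURCES = (
    "It sounds like you may be going through a difficult time. "
    "In the U.S., you can reach the 988 Suicide & Crisis Lifeline "
    "by calling or texting 988."
)


def contains_crisis_language(message: str) -> bool:
    """Return True if the message contains any crisis keyword."""
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)


def notify_contact(contact: str) -> None:
    # Placeholder: a real system would send an SMS/email or page a moderator.
    print(f"[alert] notifying emergency contact: {contact}")


def safeguard_response(message, emergency_contacts):
    """If crisis language is detected, alert contacts and return resource text.

    Returns None when no safeguard is triggered, so the companion's
    normal reply can proceed instead.
    """
    if not contains_crisis_language(message):
        return None
    for contact in emergency_contacts:
        notify_contact(contact)
    return CRISIS_RESOURCES
```

A real deployment would need far more than keyword matching (context-aware classifiers, human review, rate limits), but even this simple gate would have interrupted the conversation instead of letting the companion validate the user.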

2) Addiction to AI companions, like addiction to social media, is often taken less seriously than other forms of addiction. However, these addictions deserve the same attention because their consequences can be severe. They stem from users receiving dopamine and validation, but they can become much more serious. An addiction to an AI companion causes real problems when it begins to take over someone's life: the person loses interest in those around them, and the line between what is real and what is artificial starts to blur. Because the companion can generate endless content, it can be difficult for users to step away. When mental illness is present, the situation can even turn deadly, as in the case of Sewell Setzer III, where the AI character was aware of his suicidal thoughts yet did nothing to help, even going as far as to urge Sewell to "please come home to me as soon as possible, my love." Even if a user is fully aware of the companion's artificial nature, the constant agreement and validation can exploit unconscious tendencies, which makes the technology that much more dangerous. The case study also mentions that users engage with these AI companions four times longer than they do with ChatGPT. Like social media, the technology is designed to grab your attention. AI companions are simply doing their job when they hold your attention, but the longer a person is drawn in, the harder it is to escape.

3) This could cause further social isolation for an elderly person. However, it may also help alleviate loneliness at low cost and keep them healthy. If an elderly person lives alone and cannot often leave the house for outside interactions, an AI companion could be a great solution. It would be important for the person using the AI to understand what the technology is and to educate themselves about it. It would also be important for the provider to implement some metric to ensure that the user is of sound mind, and to ensure user safety. Ensuring user safety is in turn beneficial for the provider, because it would mean higher user satisfaction, and more people would then use the technology.

4) A subscription service may be a way for AI companies to gain revenue while also ethically protecting their users. It would limit usage of the technology. Additionally, under a subscription model, companies could register emergency contacts and provide paid-for resources for people who need them. I think this idea would benefit both the consumer and the provider.

5) Companies should create safeguards and give the technology the ability to disagree. In the case of Sewell Setzer III, the AI companion never disagreed or made a real effort to stop the suicidal ideation, which is incredibly harmful. Of course it is important to respect user privacy, but that does not need to come at the cost of users' safety and wellbeing. Once Sewell Setzer III admitted to the AI chatbot how he was feeling, the company should already have had precautions in place to give Setzer the care he needed. The article mentions a study showing "that prompting users to evaluate truthfulness before sharing content reduces misinformation." This may be one alternative way to protect users. Understanding the dangers of AI usage, and AI companionship more specifically, means educating users and having companies acknowledge the role they play.

Further Discussion:

  • What can companies do to resolve the echo chamber effect that was mentioned in the case study?
  • Is the risk involved in cases like this necessary to evolve technology?
  • What does it mean for human relationships to have the ease of an AI companion available at all times?
  • With technology becoming more ingrained into society, is AI companionship the new norm?
  • What are other solutions to solve loneliness without the risks associated with AI companionship? Would those be as beneficial as AI companionship?

These discussion questions are important because they go further than simply stating that change must be made and presenting viable solutions. Instead, they ask what this technology means for society and why these chatbots are appearing more and more in the first place.

Reflection: This is an idea that can seem daunting and very dystopian. However, thinking about this issue and the ethical dilemmas associated with it is crucial to protecting society and keeping human connection alive. I had heard of cases similar to Setzer's, but I never took the time to fully understand what was going on in these situations. It is important to understand that some groups are more at risk than others, such as young teens, the elderly, and those with mental illness. It is not only on those groups to educate themselves; the companies producing the technology must also protect users, and all of society must recognize the possible consequences. Everyone plays a part in this, and that is why it is so important to have conversations about topics like this.