Algorithm Blog #2: How Generative AI Works and How It Fails
Published on:
Some thoughts on Generative AI.
Case Study
What is generative artificial intelligence (AI)? Generative AI is artificial intelligence used to create something new: it is trained on massive amounts of data to learn patterns and analyze relationships, in the hope of generating novel output. It can be used to generate content such as text, video, art, audio, and more.
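To make "learning patterns from data" concrete, here is a minimal sketch of next-word prediction, one of the examples discussed below. This is a toy bigram model in Python with a made-up twelve-word corpus standing in for the massive datasets real systems train on; it is an illustration of the idea, not how production models actually work.

```python
import random
from collections import defaultdict

# Tiny invented corpus standing in for the massive training data real systems use.
corpus = "the cat sat on the mat and the cat saw the dog".split()

# "Learning patterns": record which words follow which in the training data.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length, seed=0):
    """Generate 'new' text by repeatedly sampling a word seen after the current one."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:
            break  # no pattern was learned for this word, so stop
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

Everything this toy model can ever say comes from patterns in its training data, which is exactly why the quality and bias of that data matter so much.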
To determine how society should handle generative AI, and whether it is beneficial on balance, the pros and cons must first be examined. Much depends on what the machine is trained on and what kind of data is used. As a general rule of thumb, if AI is fed bad data, the results will be similarly bad: if the training data is biased, the algorithm will produce biased results. One everyday example of generative AI is text prediction, which is trained on articles and other text found online. Even this example raises ethical concerns. Companies such as Google and Meta can and do use user data to train their AI models. They do not explicitly tell users that their data is being used this way, though the practice is disclosed within their privacy policies and terms and conditions. Does that make it right for companies to use their users' data to train these generative AI tools, even if the tools benefit the users in the end?
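The "bad data in, bad results out" point can be shown with the same kind of toy counting. In this hypothetical corpus (invented for illustration, not real training data), "doctor" only ever appears near "he", so the association the model learns simply mirrors the skew in what it was fed.

```python
from collections import Counter, defaultdict

# Hypothetical skewed training sentences (invented for illustration).
sentences = [
    "the doctor said he was busy",
    "the doctor said he would call",
    "the nurse said she was busy",
]

# Count which pronoun co-occurs with each profession in the training data.
assoc = defaultdict(Counter)
for sentence in sentences:
    words = sentence.split()
    for role in ("doctor", "nurse"):
        if role in words:
            for word in words:
                if word in {"he", "she"}:
                    assoc[role][word] += 1

# The "most likely" pronoun per role is just the bias baked into the data.
print(assoc["doctor"].most_common(1))
```

Nothing in the code is prejudiced; the skew comes entirely from the training sentences, which is the sense in which a biased dataset produces a biased algorithm.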
Generative AI can do more than text prediction; it can also create images and videos. This is where the term deepfake comes into play. The most common type of deepfake puts one person's face onto another person's body in a video, but these videos can also replicate a person's speech, facial expressions, and mannerisms. Deepfakes can be harmful and hurtful no matter who is affected, and they can have especially disastrous outcomes when used against people in elected office or positions of power. This is a clear ethical violation.
Generative AI is also often built on the output of journalists, writers, photographers, artists, and many more. It takes a skilled and well-rounded team of individuals to create a generative AI system, and these workers, especially those in creative disciplines, often go uncredited. Their work may be used without their consent, and they may never be compensated. This raises clear ethical concerns. Generative AI systems can create art and video very quickly, and as they become more popular, these workers risk losing their jobs to machines that generate art faster than they can. AI taking over creative jobs is not the only ethical concern here. Every artist, journalist, and other creator deserves to be rightfully credited and compensated for their work; using it without either hurts the creator financially, may hurt their reputation, and disregards the labor behind the end result. Disrespecting something as innately human as art, which connects people and evokes raw emotion, creates a cycle that ends with a more disconnected and further polarized society. To prevent this, policies protecting artists' integrity should be implemented, and companies should give artists, journalists, and other workers rightful credit.
But how does this issue get solved, and how can we ensure that those who deserve credit receive it? I think that while the development of generative AI is beneficial to mankind and should continue, the public's access to it should be slowed down. Because the consequences are not yet fully known, it is important to slow down and consider the full impact generative AI may have on society. Considering the environmental impact and the ethical violations, generative AI is not worth the harm it could do. Until it is known that the benefits outweigh the consequences, it is best to let researchers continue furthering their understanding while limiting the ways users can interact with it. For the time being, creating policy around AI and around the general public's access to it is the best course of action. Copyright law and other laws that protect artists must be taken seriously and updated so that their definitions cover artificial intelligence. Change can seem daunting, but laying out reasonable steps for all the actors involved can allow real change to occur.
This reading sparked many questions. One question I had after reading the article was: where is the line drawn for when generative AI is too harmful to society? While there are benefits to the use of AI, it can also be dangerous, and the consequences are not yet fully known. When will it be time to implement policies that protect the general public, and what level of harm would mean generative AI should no longer be used at all? Another question was about the creation and implementation of AI policy: does creating policy around AI, or restricting the general public's use of it, impinge on a freedom owed to citizens of the United States? I then had questions about human bias and its impact on these systems. When humans are prone to bias, how can generative AI be built so that it does not inherit that bias? Does having a bias within the machine make it more human-like, and at what cost? Lastly, I had a question regarding the companies building AI systems: is it a violation for companies building and training generative AI to use their users' data to further those systems? I had read an article in The New York Times titled "Worried About Meta Using Your Instagram to Train Its A.I.? Here's What to Know," which discussed companies such as Google and Meta using user data to train their own AI models. Reading the article made me curious about whether or not it is ethical for a company to use its users' personal data this way.
This assignment really got me to think deeply about the impact generative AI is having on society. As this technology becomes more and more ingrained in daily life, it is that much more vital that its possible consequences be understood. The article reminded me of a previous article I had read about companies using their users' personal data to train their AI; I connected the two because both touch on ethical issues raised by the increasing use of artificial intelligence. I think that to address this overall issue while continuing to reap the benefits of AI, it is important to stay fully educated. Understanding the pros and cons of AI use can ensure that it is used safely by more people.
