Algorithm Blog #2: How Generative AI Works and How It Fails

Published on:

Some thoughts on Generative AI.



What is generative artificial intelligence (AI)? Generative AI refers to systems that create something new: a model is trained on massive amounts of data to learn patterns and analyze relationships, in the hope of generating novel output. It can be used to generate content such as text, video, art, audio, and more.

To determine how society should handle generative artificial intelligence, and whether it is ultimately beneficial, the pros and cons must first be examined. Much depends on what the machine is trained on, that is, what kind of data is used. As a general rule of thumb, if AI is fed bad data, the result will be similarly bad: if the training data is biased, the algorithm will produce biased results. One familiar example of generative AI is text prediction, which is trained on articles and other text found online. This example raises ethical concerns. Companies such as Google and Meta can and do use user data to train their AI models. They do not explicitly tell users that their data is being used this way, though they do not hide it either; the practice is disclosed in their privacy policies and terms and conditions. Does that make it right for companies to use their users' data to train these generative AI tools, even if it benefits the users in the end?
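The "bad data in, bad output out" point can be made concrete with a toy sketch. Real text-prediction systems use large neural networks, but the following minimal bigram counter (a simplified stand-in, not any company's actual method) shows the core idea: the model can only ever echo the patterns present in whatever text it was trained on, so skewed training text yields skewed suggestions.

```python
from collections import defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def predict_next(model, word):
    """Suggest the continuation seen most often in training."""
    followers = model.get(word)
    if not followers:
        return None  # the model knows nothing it was not trained on
    return max(set(followers), key=followers.count)

# Toy training data: the model's "worldview" is exactly this text.
corpus = "the cat sat on the mat the cat chased the dog"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" (most frequent follower)
```

If the training corpus over-represents one association, the model will reproduce it every time; it has no way to correct for what it was fed. That is the mechanism behind biased outputs, just at a vastly larger scale.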

Generative AI can be used for more than text prediction; it can also create images and videos. This is where the term deepfake comes into play. The most common type of deepfake puts one person's face onto another person's body in a video, but these videos can also replicate a person's speech, facial expressions, and mannerisms. Deepfakes can be harmful and hurtful no matter who is affected, and they can have especially disastrous outcomes when used against people in elected office or positions of power. This is a clear ethical violation.

Generative AI is also often built from the output of journalists, writers, photographers, artists, and many more. It takes a skilled and well-rounded team to create a generative AI system, yet these workers, especially those in creative disciplines, often go uncredited. Their work may be used without their consent, and they may never be compensated. This raises clear ethical concerns. Generative AI can produce art and video very quickly, and as these tools become more popular, workers risk losing their jobs to machines that generate art faster than they can. AI taking over creative jobs is not the only ethical concern here. Every artist, journalist, and creator deserves to be rightfully credited and compensated for their work; using it without acknowledgment hurts the creator financially, may damage their reputation, and disregards the labor behind the result. Disrespecting something as innately human as art, which connects people and evokes raw emotion, creates a cycle that ends in a more disconnected and further polarized society. To prevent this, policies protecting artists' integrity should be implemented, and companies should give artists, journalists, and other workers rightful credit.

How can those who want to change the system go about doing so? Can the market solve the problem, such as through licensing agreements between publishers and AI companies? What about copyright law, either by interpreting existing law or by updating it? What other policy interventions might help? Overall, I think that while the development of generative AI is beneficial to mankind and should continue, the public's access to it should be slowed. When the consequences are not yet fully known, it is better to slow down and consider the full impact generative AI may have on society. Given its environmental impact and ethical violations, generative AI is not worth the downfall of mankind. Until it is known that the benefits outweigh the consequences, it is best to let researchers continue deepening their understanding while limiting the ways users can interact with these tools. For the time being, creating policy around AI and the public's access to it is the best course of action. Copyright law and other laws that protect artists must be taken seriously and updated so that their definitions cover artificial intelligence. Change can seem daunting, but laying out reasonable steps for all actors involved can allow real change to occur.

Further things to consider when thinking about generative artificial intelligence:

Where is the line drawn for when generative AI is too harmful to society?

Does creating policy around AI, or restricting the general public's use of it, infringe on a freedom owed to citizens of the United States?

When humans themselves are prone to bias, how can generative AI be built without it? Does having a bias make the machine more human-like, and at what cost?

Is it a violation for companies building and training generative AI systems to use their users' data to further those systems?