Research Brief

AI from AI: a Future of Generic and Biased Online Content?

Sharing the technology’s weak work product negatively influences the quality of its output going forward

Generative AI often seems incapable of producing exactly what users want it to produce — in text, code or image generation. 

Give a chatbot an assignment. It spits out an answer that’s based on what it’s learned from datasets and the internet. The answer may be technically right, but it doesn’t have the intended nuance or precision.


Users can revise the instructions to the AI, but that takes extra time and effort. If the AI system’s first response is good enough, especially if you’re in a hurry, you may just go with it.

And your bad AI becomes, quite possibly, everyone’s bad AI. 

The dumbing down of AI-generated content is of growing concern to researchers. The software that powers the likes of ChatGPT is beginning to fill the internet with content that’s bland and generic. As generative AI systems increasingly derive their knowledge from scraping the web, some researchers predict they will increasingly copy earlier AI output, leading to widespread homogenization of information.

Slowing the Homogenization Process

A working paper by UCLA Anderson’s Francisco Castro and Jian Gao, a Ph.D. student, and Northwestern’s Sébastien Martin takes a human-centered approach to the problems of content homogenization and bias. Their research examines the ways in which people interact with generative AI systems and evaluates the societal consequences of these choices. The research also suggests that design changes that simplify the interactions between users and AI could help to slow the homogenization process.

The researchers employ a statistical model to analyze the exchange of information between users and AI and to examine how individual-level decisions and AI training may lead to societal challenges. According to their analysis, any output produced with the help of AI is less precise than output produced without it, because it doesn’t reflect the style, tastes, preferences and personal idiosyncrasies (you might call it creativity) of the user. Yes, users are usually given a chance to revise and rewrite their prompts to an AI system or edit a response to generate results closer to what they’d intended. But will they?

Much of the promise of generative AI — and much of what appears to drive investor interest — is the technology’s ability to boost productivity. ChatGPT, for instance, has been credited in one experiment with significantly cutting the time it takes to perform professional writing tasks.

But fiddling with an AI prompt flies in the face of productivity. Crafting an AI prompt that elicits the exact kind of response a user wants can get so complicated that it’s given rise to a hot new tech specialty called prompt engineering.

Low Fidelity

AI users are often faced with a trade-off between what Castro, Gao and Martin refer to as “fidelity” (how precisely the AI’s output reflects what a user intends) and “communication cost” (the time and effort it takes to revise a prompt to get a better output). 

The way users react to the trade-off depends on their uniqueness, according to the study. Some choose to take extra time to interact with the AI and refine their prompts to get output closer to what they’d intended. Other users — those described as “most unique” — forgo AI altogether, taking a do-it-yourself approach to create content that exactly matches their intentions. Users with more common preferences accept the first response generated by AI, avoiding extra time and effort if the default response is adequate. This is especially true of those who value productivity more than fidelity.
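To make the trade-off concrete, here is a minimal sketch (not the authors’ actual model; the cost values and the assumption that revising halves the gap are purely illustrative) of how a user’s uniqueness can tip the decision among accepting the default, revising the prompt or skipping the AI entirely.

```python
# Toy illustration of the fidelity vs. communication-cost trade-off.
# Each user picks whichever option carries the lowest total "cost," where the
# fidelity loss is the distance between the content and the user's ideal.
# All parameter names and values are illustrative assumptions, not the paper's.

def choose_strategy(uniqueness, revision_cost=0.3, diy_cost=1.0):
    """uniqueness: distance between the user's ideal content and the
    population-average content the AI returns by default."""
    accept_default = uniqueness                         # no effort, but the output misses the intent
    revise_prompt = 0.5 * uniqueness + revision_cost    # assume revising halves the gap, at a cost
    do_it_yourself = diy_cost                           # perfect fidelity, full human effort
    options = {"accept the default": accept_default,
               "revise the prompt": revise_prompt,
               "write it yourself": do_it_yourself}
    return min(options, key=options.get)

for u in (0.1, 0.5, 0.9, 1.5):
    print(f"uniqueness={u:.1f} -> {choose_strategy(u)}")
```

Under these made-up numbers, users close to the population mean accept the default, moderately unique users pay the cost of revising, and the most unique users skip the AI, which mirrors the behavior the study describes.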

The less unique the content generated through human-AI interactions, the more it may contribute to content homogenization and bias, the study suggests. AI systems are inherently biased, intentionally or not, by the people who build them and the choice of databases they’re trained on. Furthermore, because large language models train on large public datasets, they know population-scale preferences.

When responding to a prompt, the AI system may default toward those common preferences, the study suggests. In that case, almost any response generated by AI may be closer to the population’s mean preferences than a user’s actual preference. This flattening out of content may be exacerbated if AI-generated content is used to train the next generation of AI, according to the study.

As large language models train on data produced by their predecessors, they may become susceptible to a degenerative process in which AI-generated data pollutes the training set, according to the authors of a 2023 paper.

As a result, content generated by newer generations of AI would reflect less and less of the variance of the original data, making them less and less useful — a state the researchers call “model collapse.”
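A toy simulation helps show why the variance shrinks; the shrink factor below is an assumption standing in for the model’s pull toward common content, not a parameter taken from the 2023 paper.

```python
# Minimal sketch of "model collapse": each generation is fit to samples of the
# previous generation's output, so sampling noise and a pull toward the mean
# steadily shrink the variance of what the model can produce.
import random

def next_generation(mean, std, n_samples=200, shrink=0.9):
    """Fit the next model to draws from the current one; `shrink` is an
    illustrative stand-in for the AI's preference for common, near-mean content."""
    samples = [random.gauss(mean, std * shrink) for _ in range(n_samples)]
    m = sum(samples) / n_samples
    var = sum((x - m) ** 2 for x in samples) / n_samples
    return m, var ** 0.5

mean, std = 0.0, 1.0   # generation 0: trained on diverse, human-produced data
for gen in range(1, 11):
    mean, std = next_generation(mean, std)
    print(f"generation {gen}: spread of generated content ≈ {std:.3f}")
```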

Death Spiral

In that scenario, writing effective prompts would become ever more difficult, forcing users to accept more generic, default responses generated by AI.

Their analysis indicates that human and technical factors might mutually reinforce each other, exacerbating the homogenization problem. “The next AI generation would be even more likely to return homogenized outputs, resulting in a ‘death spiral’ of homogenization, a dreadful outcome for human preference diversity,” the authors write.
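The feedback loop itself can be sketched in a few lines; the expressible spread, fixed revision cost and decision rule below are illustrative assumptions rather than the authors’ formal model, but they show how user behavior and retraining can reinforce each other.

```python
# Hedged sketch of the homogenization "death spiral": the model can only express
# content within some spread around the population mean; users revise the default
# only when the fidelity gain outweighs the communication cost; and the next model
# is trained on whatever actually got published, so its spread keeps shrinking.
import random

def next_spread(spread, n_users=5000, revision_cost=0.4):
    published = []
    for _ in range(n_users):
        ideal = random.gauss(0.0, 1.0)                     # the user's true preference
        default = 0.0                                      # the AI's generic, mean-like answer
        best_reachable = max(-spread, min(spread, ideal))  # closest output the model can express
        fidelity_gain = abs(ideal - default) - abs(ideal - best_reachable)
        published.append(best_reachable if fidelity_gain > revision_cost else default)
    mean = sum(published) / n_users
    return (sum((x - mean) ** 2 for x in published) / n_users) ** 0.5

spread = 1.0   # generation 0 can still express a wide range of preferences
for gen in range(1, 9):
    spread = next_spread(spread)
    print(f"generation {gen}: expressible spread ≈ {spread:.3f}")
```

Each round, fewer users find revising worth the effort, more generic content gets published, and the next model’s expressible range narrows further.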

Castro, Gao and Martin also studied the effects of AI bias that can occur when an AI model doesn’t accurately reflect a population’s preferences. Their analysis indicates that censoring-type bias (such as the bias against unique preferences) may reduce the usefulness of AI-generated content, especially for those with uncommon preferences. Directional bias (for example, content that leans slightly right or left) might lead to societal bias by influencing users’ output. Their research indicates that giving people tools that make it easier to interact with AI can significantly limit the risk of homogenization and bias.

Featured Faculty

  • Francisco Castro

    Assistant Professor of Decisions, Operations and Technology Management

  • Jian Gao

    Ph.D. Candidate, Decisions, Operations and Technology Management

About the Research

Castro, F., Gao, J., & Martin, S. (2023). Human-AI Interactions and Societal Pitfalls. Working paper.
