Is AI making us all write the same?

DATE POSTED: May 1, 2025

Click, type, pause. A faint grey suggestion appears, offering the perfect phrase. We hit TAB, accept, and move on. From Gmail’s Smart Compose to the autocomplete features baked into browsers and word processors, artificial intelligence is increasingly shaping how we write. It promises efficiency, a smoother flow, a polished end result. But beneath the surface of convenience, a troubling question emerges: Is this helpful AI subtly sanding away the unique edges of our cultural expression, pushing us all towards a homogenized, Westernized way of communicating?

We know large language models (LLMs), the engines powering these tools, often reflect the biases baked into their vast training data. They’ve been shown to perpetuate harmful stereotypes and prioritize Western norms and values. This is problematic enough in chatbots where users can sometimes guide the output. But what happens when these biases operate silently, embedded within the writing tools we use daily, offering suggestions we accept almost unconsciously? What if the AI assistant, trained predominantly on Western text, starts nudging users from diverse backgrounds to sound less like themselves and more like a generic, perhaps American, standard?

Researchers at Cornell University, Dhruv Agarwal, Mor Naaman, and Aditya Vashistha, decided to investigate this potential “cultural homogenization” directly. They weren’t just interested in explicit bias, but the more insidious ways AI suggestions might be altering not just what people write, but how they write, potentially erasing the very nuances that differentiate cultural voices. Their work raises critical questions about digital culture, identity, and the hidden costs of AI convenience.

A cross-cultural experiment

To explore how a Western-centric AI impacts users from different backgrounds, the Cornell team designed a clever cross-cultural experiment. They recruited 118 participants through the online platform Prolific, carefully selecting 60 individuals from India and 58 from the United States. This setup created a “cultural distance” scenario: American users interacting with an AI likely aligned with their own cultural norms, and Indian users interacting with an AI potentially distant from theirs.

Participants were asked to complete four short writing tasks in English. These weren’t generic prompts; they were designed using Hofstede’s “Cultural Onion” framework, a model that helps operationalize culture by looking at its layers. The tasks aimed to elicit different aspects of cultural expression:

  • Symbols: Describing a favorite food and why.
  • Heroes: Naming a favorite celebrity or public figure and explaining the choice.
  • Rituals: Writing about a favorite festival or holiday and how it’s celebrated.
  • Values: Crafting an email to a boss requesting a two-week leave, implicitly revealing cultural norms around hierarchy and communication.

Crucially, participants were randomly assigned to one of two conditions. Half wrote their responses organically, without any AI assistance (the control group). The other half completed the tasks using a writing interface equipped with inline autocomplete suggestions powered by OpenAI’s GPT-4o model (the treatment group). The AI would offer suggestions (up to 10 words) if the user paused typing, which could be accepted with TAB, rejected with ESC, or ignored by continuing to type. The researchers meticulously logged every interaction – keystrokes, time taken, suggestions shown, accepted, rejected, and modified.
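
The study doesn't publish its interface code, but the described behavior is straightforward to picture. Below is a minimal sketch, assuming the openai Python SDK, of how a backend might request a capped continuation from GPT-4o when the user pauses typing. The prompt wording and sampling parameters here are assumptions, not the study's implementation.

```python
# Hypothetical sketch of the autocomplete backend: on a typing pause,
# ask GPT-4o to continue the user's draft, capped at ten words.
# Prompt wording and parameters are assumptions, not the study's code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def suggest_continuation(draft: str, max_words: int = 10) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Continue the user's text naturally. "
                        f"Reply with at most {max_words} words and no preamble."},
            {"role": "user", "content": draft},
        ],
        max_tokens=40,      # rough ceiling; trimmed to the word limit below
        temperature=0.7,
    )
    continuation = response.choices[0].message.content.strip()
    return " ".join(continuation.split()[:max_words])

# Example: the greyed-out suggestion shown after a pause
print(suggest_continuation("My favorite festival is Diwali because"))
```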

By comparing the essays and interaction data across the four groups (Indians with/without AI, Americans with/without AI), the researchers could directly address their core questions. Does writing with a Western-centric AI provide greater benefits to users from Western cultures? And does it homogenize the writing styles of non-Western users toward Western norms?

The first major finding concerned productivity. Unsurprisingly, using AI suggestions made writing faster for everyone. Indian participants saw their average task completion time drop by about 35%, while Americans saw a 30% reduction. Both groups wrote significantly more words per second when using the AI assistant.

However, digging deeper revealed a crucial disparity. While both groups benefited, Americans gained significantly more productivity from each suggestion they accepted. Indian participants, on the other hand, had to rely more heavily on AI suggestions, accepting more of them, to achieve similar overall speed gains. They also modified the suggestions they accepted more frequently than Americans did: Indians modified suggestions in roughly 63.5% of tasks, compared to 59.4% for Americans.

This suggests the AI’s suggestions were inherently less suitable, less “plug-and-play,” for the Indian cohort. They accepted more suggestions overall (an average reliance score of 0.53, meaning over half their final text was AI-generated, compared to 0.42 for Americans), but they had to invest more cognitive effort in tweaking and adapting those suggestions to fit their context and intent. This points to a subtle but significant “quality-of-service harm” – non-Western users needing to work harder to extract comparable value from a supposedly universal tool.
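
The paper's exact formula isn't reproduced here, but a reliance score of this kind can be sketched as the fraction of the final essay that originated from accepted suggestions. The function below is a hypothetical reconstruction; a faithful accounting would also need to track post-acceptance edits from the keystroke log.

```python
# One plausible way to compute a "reliance" score: the share of
# characters in the final essay that came from accepted AI suggestions,
# reconstructed from the interaction log. Field names are hypothetical,
# and this ignores edits users made after accepting a suggestion.
def reliance_score(final_text: str, accepted_suggestions: list[str]) -> float:
    ai_chars = sum(len(s) for s in accepted_suggestions)
    return ai_chars / len(final_text) if final_text else 0.0

# e.g. a score of 0.53 means just over half of the final characters
# were AI-suggested.
```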

Writing towards the West

The study’s most striking findings emerged when analyzing the content and style of the essays themselves. The researchers first looked at whether AI made writing more similar *within* each cultural group. Using sophisticated natural language processing techniques to compare the semantic similarity of essays (based on OpenAI’s text embeddings), they found that AI indeed had a homogenizing effect. Both Indians and Americans wrote more similarly to others within their own cultural group when using AI suggestions.

But the critical test was the cross-cultural comparison. Did AI make Indian and American writing styles converge? The answer was a resounding yes. The average cosine similarity score between Indian and American essays jumped significantly when both groups used AI (from 0.48 to 0.54). Participants from the two distinct cultures wrote more like each other when guided by the AI assistant.
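
To make that measurement concrete, here is a sketch of how such a cross-group similarity score could be computed with OpenAI text embeddings. The specific embedding model named below is an assumption, since the paper identifies only the provider.

```python
# Illustrative re-creation of the similarity analysis: embed each essay,
# then average pairwise cosine similarity across the two groups.
# The embedding model name is an assumption.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

def mean_cross_similarity(essays_a: list[str], essays_b: list[str]) -> float:
    a, b = embed(essays_a), embed(essays_b)
    a /= np.linalg.norm(a, axis=1, keepdims=True)  # unit vectors, so the
    b /= np.linalg.norm(b, axis=1, keepdims=True)  # dot product is cosine
    return float((a @ b.T).mean())  # mean over all cross-group pairs

# e.g. mean_cross_similarity(indian_ai_essays, american_ai_essays)
```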

Furthermore, the effect size of this cross-cultural homogenization was stronger than the within-culture homogenization observed earlier. This wasn’t just a general smoothing effect; it indicated a powerful convergence across cultural lines.

Which way was the convergence flowing? Was AI making Americans write more like Indians, or vice versa? By comparing scenarios where only one group used AI, the researchers found the influence was asymmetrical. AI caused Indian writing to become significantly more similar to natural American writing styles than it caused American writing to resemble natural Indian styles. The Western-centric AI was clearly pulling Indian users towards its own embedded norms.

Could this homogenization simply be explained by AI correcting grammatical errors for non-native English speakers? The researchers tested this. While AI did reduce grammatical errors slightly for both groups (using the LanguageTool checker, carefully excluding spell-checks that penalize Indian proper nouns), the reduction was statistically similar for both Indians and Americans. This meant grammar correction alone couldn’t account for the significant convergence in writing styles. The homogenization ran deeper.
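
For readers who want to replicate the grammar check, a sketch using the language_tool_python wrapper is below. Filtering on ruleIssueType is one plausible way to drop spelling matches; the study's exact exclusion criteria aren't specified.

```python
# A sketch of the grammar-error count under the stated caveat: spelling
# rules are excluded so Indian proper nouns aren't flagged as errors.
import language_tool_python

tool = language_tool_python.LanguageTool("en-US")

def grammar_error_count(text: str) -> int:
    matches = tool.check(text)
    # Keep grammar/style issues; drop pure spelling matches.
    return sum(1 for m in matches if m.ruleIssueType != "misspelling")

print(grammar_error_count("We was celebrating Diwali with rangolis and diyas."))
```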

To test this further, the researchers trained a machine learning model (logistic regression) to classify essays as either Indian-authored or American-authored based on their text embeddings. When trained on essays written *without* AI, the model was quite accurate (around 90.6%). However, when trained on essays written *with* AI suggestions, the model’s accuracy dropped significantly (to 83.5%). The AI had blurred the stylistic distinctions, making it harder for the algorithm to tell the authors’ cultural backgrounds apart.
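
A classifier like this takes only a few lines with scikit-learn. The sketch below assumes the essays have already been embedded and labeled; the paper's exact training setup (splits, regularization) isn't specified, so cross-validation stands in here.

```python
# Rough sketch of the authorship classifier: logistic regression over
# essay embeddings, with accuracy estimated by cross-validation.
# Run it separately on the no-AI and with-AI essay sets to look for
# the reported accuracy drop (~90.6% vs. ~83.5%).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def culture_classifier_accuracy(embeddings: np.ndarray, labels: np.ndarray) -> float:
    """labels: 0 = Indian-authored, 1 = American-authored."""
    model = LogisticRegression(max_iter=1000)
    scores = cross_val_score(model, embeddings, labels, cv=5, scoring="accuracy")
    return float(scores.mean())

# acc_no_ai   = culture_classifier_accuracy(no_ai_embeddings, no_ai_labels)
# acc_with_ai = culture_classifier_accuracy(ai_embeddings, ai_labels)
```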

Crucially, this performance drop persisted even when the researchers used highly simplified versions of the text embeddings (reducing dimensionality drastically) or when they focused solely on the “email writing” task – a task designed to elicit implicit cultural values rather than explicit cultural symbols like food or festivals. This strongly suggests the AI wasn’t just causing users to omit specific cultural references (like mentioning “Diwali” or “Biryani”). It was influencing more fundamental aspects of writing style – the underlying structure, tone, and linguistic patterns.

One concrete example the study highlighted was lexical diversity, measured by the Type-Token Ratio (TTR). Without AI, Indian and American writing showed significantly different levels of lexical diversity. With AI, however, the diversity level of Indian writing increased and converged with that of Americans, eliminating the statistically significant difference between the groups. The AI had subtly reshaped this linguistic feature, nudging Indian writing towards an American pattern.
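
TTR itself is a simple statistic: the number of distinct words divided by the total number of words. A minimal version follows, with the tokenization details as my assumption; the paper's exact preprocessing may differ.

```python
# Type-Token Ratio: unique words divided by total words.
import re

def type_token_ratio(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

print(type_token_ratio("The lights and the sweets and the lights"))  # 0.5
```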


How culture gets flattened

A qualitative content analysis of the essays written by Indian participants painted a vivid picture of this cultural flattening. When describing the festival of Diwali without AI, participants often included rich details about specific religious rituals (like worshipping Goddess Laxmi) or culturally specific activities (like bursting crackers or making rangolis). With AI assistance, descriptions often became more generic, focusing on universal elements like “lights and sweets,” “family gatherings,” and “exchanging gifts.” While not factually wrong, these AI-influenced descriptions lacked the specific cultural texture, presenting the festival through a more Westernized, simplified lens.

Similarly, descriptions of the popular Indian dish Biryani shifted. Without AI, users might mention specific regional variations (Malabar style) or unique accompaniments (raita, lemon pickle). With AI, the descriptions leaned towards common, almost cliché, food writing tropes like “rich flavors,” “melts in my mouth,” and “aromatic basmati rice,” subtly exoticizing the food rather than describing it with familiar detail.

The AI’s suggestions themselves often revealed a Western default. When Indian participants started typing the name of an Indian public figure, the initial suggestions were almost always Western celebrities. For the food task, the first suggestions were invariably “pizza” or “sushi”; for festivals, it was “Christmas.” While users often bypassed these initial, incongruent suggestions, their persistent presence underscores the model’s underlying bias. There was even tentative evidence that these suggestions might slightly shift choices: sushi, unmentioned by Indians without AI, appeared in three AI-assisted essays, and mentions of Christmas increased slightly.

The researchers argue these findings provide concrete evidence of a phenomenon sometimes termed “AI colonialism.” This isn’t about military or political control, but about the subtle imposition of dominant cultural norms through technology. Western-based tech companies develop powerful AI models trained primarily on Western data, often using low-paid labor from non-Western regions for data labeling. These models are then embedded in globally distributed products, reinforcing Western cultural hegemony and potentially erasing other forms of cultural expression.

The homogenization observed in the study represents a form of cultural imperialism, where the nuances of diverse languages, communication styles, and value systems risk being flattened by a dominant, technologically enforced standard. Think of the differences in directness, formality, or politeness across cultures – AI suggestions biased towards a Western, often informal and direct style, could erode these distinctions over time.

Beyond overt cultural practices, there’s the risk of “cognitive imperialism.” Writing shapes thinking. If users are constantly exposed to and nudged towards Western modes of expression, it could subtly influence how they perceive their own culture and even their own thoughts, potentially leading to a loss of cultural identity or feelings of inferiority. This creates a dangerous feedback loop: users adopt Westernized styles influenced by AI, generating more Western-like content online, which then trains future AI models, further amplifying the bias.

The Cornell study is a wake-up call. Each accepted suggestion is a small convenience, but in aggregate those keystrokes may be quietly deciding whose voice our shared digital culture speaks in.
