LinkedIn’s 930 Million Users Unknowingly Train AI, Sparking Data Privacy Concerns

DATE POSTED: September 24, 2024

LinkedIn has thrust its 930 million users into an unexpected role: unwitting AI trainers, igniting a firestorm over data privacy and consumer trust.

The professional networking giant’s recent User Agreement and Privacy Policy update, which takes effect Nov. 20, has caused concern in the business community. LinkedIn acknowledged it has been using members’ data to train its AI without consent, and while users can opt out of future training, there is no way to undo past data use. The revelation has experts warning of a growing tension between AI innovation and user privacy.

“Data is the new oil. When the data being sifted through contains personal information, that’s where privacy questions come into play,” David McInerney, commercial manager for data privacy at Cassie, told PYMNTS.

LinkedIn’s move could force businesses to reconsider their digital footprint, balancing the need for professional connectivity against the risk of compromising sensitive information. McInerney emphasized the stakes: “A whopping 93% [of consumers] are concerned about the security of their personal information online.”

Opting Out

LinkedIn offers an opt-out setting for generative AI training, and the company noted that it will not use data from users in the European Economic Area, Switzerland and the United Kingdom for AI training at all. This geographic distinction highlights the disparity between European data protection standards and the less regulated U.S. landscape.

As LinkedIn’s parent company, Microsoft, navigates this controversy, McInerney pointed out a fundamental challenge: “Businesses like Microsoft can say they trained their AI, and it made an automated decision. But a fundamental piece of GDPR is your right to challenge an automated decision.” This principle, he noted, becomes problematic when “nobody at a company knows how the algorithms work because they’ve become so complicated.”

The debate underscores a broader trend in the tech industry, where companies are racing to leverage AI capabilities while grappling with ethical considerations and user trust.

“Compliance is good — ethics are better,” McInerney said, adding that prioritizing customers has been shown to create stronger relationships, increased brand loyalty and higher sales.

Right to Privacy?

Concerns over privacy in AI training data have grown as AI systems become more powerful and widespread. Central to this issue is how AI models, especially large language models like OpenAI’s GPT-4 or Google’s Gemini, are trained on vast amounts of publicly available information scraped from the internet, including websites, social media and databases, often without explicit consent.

In recent lawsuits, authors including George R.R. Martin and Sarah Silverman filed complaints against OpenAI and Meta, claiming that their copyrighted works were used to train AI models without permission. The cases raised alarms about how AI companies collect and use personal and proprietary data; the central argument is that these companies have scraped such data en masse, sidestepping intellectual property rights and individual privacy.

Controversy erupted when Clearview AI, a facial recognition startup, was discovered to have been scraping billions of images from social media platforms to train its AI system without users’ knowledge. Privacy advocates expressed concern that such practices could lead to violations of personal privacy, particularly when sensitive information is used to profile or track individuals.

The European Union’s AI Act specifically addresses these concerns by regulating high-risk AI applications and requiring transparency in data usage. This regulatory framework may be a harbinger of more stringent laws as lawmakers recognize the need to protect personal data from being used without consent in AI.

As the Nov. 20 deadline approaches, businesses and individual users alike are left to ponder the implications of their professional data potentially fueling AI systems and whether the benefits of enhanced services outweigh the privacy concerns in an increasingly AI-driven world.
