
Regulators Employ Old Rules, Creative Thinking to Tackle ChatGPT-Like AI Technology

As the race to develop more powerful artificial intelligence services like ChatGPT accelerates, some regulators are relying on old laws to control a technology that could upend the way societies and businesses operate.

The European Union is at the forefront of drafting new AI rules that could set the global benchmark to address privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI’s ChatGPT.

But it will take several years for the legislation to be enforced.

“In absence of regulations, the only thing governments can do is to apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.

“If it’s about protecting personal data, they apply data protection laws, if it’s a threat to safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable.”

In April, Europe’s national privacy watchdogs set up a task force to address issues with ChatGPT after Italian regulator Garante had the service taken offline, accusing OpenAI of violating the EU’s GDPR, a wide-ranging privacy regime that took effect in 2018.

ChatGPT was reinstated after the US company agreed to install age verification features and let European users block their information from being used to train the AI model.

The agency will begin examining other generative AI tools more broadly, a source close to Garante told Reuters. Data protection authorities in France and Spain also launched probes in April into OpenAI’s compliance with privacy laws.

Bring in the experts

Generative AI models have become well known for making mistakes, or “hallucinations”, spouting misinformation with uncanny certainty.

Such errors could have serious consequences. If a bank or government department used AI to speed up decision-making, individuals could be unfairly rejected for loans or benefit payments. Big tech companies, including Alphabet’s Google and Microsoft, have stopped using AI products deemed ethically dicey, such as financial products.

Regulators aim to apply existing rules covering everything from copyright and data privacy to two key issues: the data fed into models and the content they produce, according to six regulators and experts in the United States and Europe.

Agencies in the two regions are being encouraged to “interpret and reinterpret their mandates,” said Suresh Venkatasubramanian, a former technology advisor to the White House. He cited the US Federal Trade Commission’s (FTC) investigation of algorithms for discriminatory practices under existing regulatory powers.

In the EU, proposals for the bloc’s AI Act will force companies like OpenAI to disclose any copyrighted material – such as books or photographs – used to train their models, leaving them vulnerable to legal challenges.

Proving copyright infringement will not be straightforward though, according to Sergey Lagodinsky, one of several politicians involved in drafting the EU proposals.

“It’s like reading hundreds of novels before you write your own,” he said. “If you actually copy something and publish it, that’s one thing. But if you’re not directly plagiarizing someone else’s material, it doesn’t matter what you trained yourself on.”

‘Thinking creatively’

French data regulator CNIL has started “thinking creatively” about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead.

For example, in France discrimination claims are usually handled by the Defenseur des Droits (Defender of Rights). However, its lack of expertise in AI bias has prompted CNIL to take a lead on the issue, he said.

“We are looking at the full range of effects, although our focus remains on data protection and privacy,” he told Reuters.

The organisation is considering using a provision of GDPR which protects individuals from automated decision-making.

“At this stage, I can’t say if it’s enough, legally,” Pailhes said. “It will take some time to build an opinion, and there is a risk that different regulators will take different views.”

In Britain, the Financial Conduct Authority is one of several state regulators that have been tasked with drawing up new guidelines covering AI. It is consulting with the Alan Turing Institute in London, alongside other legal and academic institutions, to improve its understanding of the technology, a spokesperson told Reuters.

While regulators adapt to the pace of technological advances, some industry insiders have called for greater engagement with corporate leaders.

Harry Borovick, general counsel at Luminance, a startup which uses AI to process legal documents, told Reuters that dialogue between regulators and companies had been “limited” so far.

“This doesn’t bode particularly well in terms of the future,” he said. “Regulators seem either slow or unwilling to implement the approaches which would enable the right balance between consumer protection and business growth.”

© Thomson Reuters 2023

