Sure, that drunk selfie you posted on Instagram might be personally embarrassing. Now imagine that selfie is also training fuel for an artificial intelligence system that helps put an innocent person in jail.
Welcome to the age of artificial intelligence. What you do with your face, your home security videos, your words and the photos from your friend’s art show are not just about you. Almost entirely without your true consent, information that you post online or that is posted about you is being used to coach AI software. These technologies could let a stranger identify you on sight or generate custom art at your command.
Good or bad, these AI systems are being built with pieces of you. What are the rules of the road now that you’re breathing life into AI and can’t imagine the outcomes?
I’m bringing this up because a bunch of people have been trying cool AI technologies that are built on all the information we’ve put out into the world.
My colleague Tatum Hunter spent time evaluating Lensa, an app that transforms a handful of selfies you provide into artistic portraits. And people have been using the new chatbot ChatGPT to generate silly poems or professional emails that seem like they were written by a human. These AI technologies could be profoundly helpful, but they also come with a bunch of thorny ethical issues.
Tatum reported that Lensa’s portrait wizardry comes from the styles of artists whose work was included in a giant database used to coach image-generating computers. The artists didn’t give permission for their work to be used this way, and they aren’t being paid. In other words, your fun portraits are built on work ripped off from artists. Similarly, ChatGPT learned to mimic humans by analysing your recipes, social media posts, product reviews and other text from everyone on the internet.
Beyond those two technologies, your birthday party photos on Facebook helped train Clearview AI’s facial recognition software, which police departments are using in criminal investigations.
Being part of the collective building of all these AI systems might feel unfair to you, or amazing. But it is happening.
I asked a few AI experts to help sketch out guidelines for the new reality that anything you post might be AI data fuel. Technology has outraced our ethics and laws. And it’s not fair to put you in the position of imagining whether your Pinterest board might someday be used to teach murderous AI robots or put your sister out of a job.
“While it’s absolutely a good individual practice to limit digital sharing in any case where you don’t or can’t know the afterlife of your data, doing that is not going to have a major impact on corporate and government misuse of data,” said Emily Tucker, executive director at the Center on Privacy and Technology at Georgetown Law.
Tucker said that people need to organise to demand privacy regulations and other restrictions that would stop our data from being hoarded and used in ways we can’t imagine.
“We have almost no statutory privacy protections in this country, and powerful institutions have been exploiting that for so long that we have begun to act as if it’s normal,” Tucker said. “It’s not normal, and it’s not right.”
Mat Dryhurst and Holly Herndon, artists in Berlin, helped set up a project that lets artists and others search popular AI training databases for their work or personal photos. Dryhurst told me that some AI organisations, including LAION, the group behind the massive image collection used to train the technology that generates Lensa portraits, are eager for people to flag their personal images if they want to yank them from computer training data sets. (The website is Have I Been Trained.)
Dryhurst said that he is excited about the potential of AI for artists like him. But he also has been pushing for a different model of permission for what you put online. Imagine, he said, if you upload your selfie to Instagram and have the option to say yes or no to the photo being used for future AI training.
Maybe that sounds like a utopian fantasy. You have gotten used to the feeling that once you put digital bits of yourself or your loved ones online, you lose control of what happens next. Dryhurst told me that because publicly available AI systems such as DALL-E and ChatGPT are getting a lot of attention but remain imperfect, this is an ideal time to reestablish what real personal consent should mean for the AI age. And he said that some influential AI organisations are open to this, too.
Hany Farid, a computer science professor at the University of California at Berkeley, told me that individuals, government officials, many technology executives, journalists and educators like him are far more attuned than they were a few years ago to the potential positive and negative consequences of emerging technologies like AI.
The hard part, he said, is knowing what to do to effectively limit the harms and maximise the benefits.
“We’ve exposed the problems,” Farid said. “We don’t know how to fix them.”
The Washington Post