An illustration showing some of the labels Meta's Casual Conversations v2 data set captures. Image: Meta
Facebook parent Meta is sharing an updated data set for voice and face recognition AI that it hopes others in the industry will use to test how accurately their systems work across a diverse set of people.
Why it matters: Machine learning-driven artificial intelligence — which powers everything from these recognition algorithms to the popular ChatGPT — is only as fair as the data used to train and test it. The more representative the data, the less likely it is that human bias will turn into automated discrimination.
- Early facial recognition systems, for example, were shown to perform less reliably on people with darker skin.
How it works: Meta's new data set, known as Casual Conversations v2, includes more than 25,000 videos from more than 5,000 people across seven countries, with participants self-identifying their age, gender, race and other characteristics such as disability and physical adornments. Trained vendors added further metadata, including voice and skin tones (a rough sketch of how such labels could be used to audit a model follows the list below).
- The videos, featuring paid participants who gave their consent to be part of the data set, included both scripted and unscripted monologues.
- An earlier data set, released in 2021, had similar goals but involved fewer people, covered a narrower set of categories and included only U.S. participants.
- As with the earlier data set, Meta is making the new one available both externally and to its own teams.
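The point of collecting self-identified and vendor-annotated labels is to let evaluators break a model's accuracy down by subgroup. The sketch below shows, under stated assumptions, how that kind of audit might look; the record fields, file names and the predict() stub are illustrative inventions, not Meta's actual schema or API.

```python
# Hedged sketch: how a team might use per-subgroup labels like those in
# Casual Conversations v2 (self-identified age, gender, etc., plus
# vendor-annotated skin tone) to check a recognition model for uneven
# accuracy. The record fields and predict() stub are assumptions for
# illustration, not Meta's actual schema or API.
from collections import defaultdict

def accuracy_by_group(records, predict, group_key):
    """Accuracy computed separately for each value of one label."""
    correct, total = defaultdict(int), defaultdict(int)
    for rec in records:
        group = rec["labels"][group_key]          # e.g. "skin_tone", "age", "gender"
        total[group] += 1
        if predict(rec["video_path"]) == rec["expected"]:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    # Tiny made-up records standing in for the real annotation files.
    records = [
        {"video_path": "a.mp4", "expected": "match",    "labels": {"skin_tone": "light"}},
        {"video_path": "b.mp4", "expected": "match",    "labels": {"skin_tone": "dark"}},
        {"video_path": "c.mp4", "expected": "no_match", "labels": {"skin_tone": "dark"}},
    ]

    def predict(video_path):
        # Stand-in for whatever face or voice recognition system is being audited.
        return "match"

    for group, acc in sorted(accuracy_by_group(records, predict, "skin_tone").items()):
        print(f"{group}: {acc:.0%}")
    # A large accuracy gap between groups flags a disparate-outcome problem
    # worth investigating before deployment.
```

A gap like the one the toy data produces here (perfect accuracy for one group, 50% for another) is exactly the kind of disparity the data set is meant to surface.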
What they're saying: In an interview, Meta VP of civil rights Roy Austin Jr. told Axios that now is the best time to start ensuring that algorithms are fair and inclusive.
- "It’s much easier and much better to get things right at the beginning than it is to fix things late in the process," he said. "We are at the beginning of a technology likely to impact us for decades if not centuries to come."
The big picture: Ethical AI experts say a range of actions is needed to minimize algorithmic bias, from improving training data to testing for disparate outcomes.
- Meta's new data release is designed to help with the latter issue, at least with regard to speech and visual recognition algorithms.
- Different approaches are needed for other types of AI. For instance, large language models trained on broad swaths of the internet, such as OpenAI's ChatGPT, can easily spew out hateful information or perpetuate stereotypes.
- "If we look at our large language models, they lack diversity," Austin said. "They are filled with a lot of hateful, bullying, harassing speech. The only way to test is to have a diverse model, is to have those voices that may not be in the larger models and to be intentional about including them."
Of note: Another way to limit bias in AI is to set boundaries for how a particular type of system will be used.
- Salesforce, for example, is limiting its generative AI to answering questions of the sort that a sales or support staffer would need to ask.
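The article doesn't describe how Salesforce enforces that boundary, but a minimal version of the idea is a scope check that rejects off-topic requests before they reach the model. The keyword list and call_llm() stub below are assumptions for illustration only.

```python
# Illustrative sketch only: one simple way to bound a generative assistant to a
# narrow domain, in the spirit of limiting it to sales and support questions.
# The keyword check and call_llm() stub are assumptions, not a description of
# Salesforce's actual implementation.
ALLOWED_TOPICS = ("order", "invoice", "refund", "account", "pricing", "shipping")

def in_scope(question: str) -> bool:
    """Crude check that a question looks like a sales or support request."""
    q = question.lower()
    return any(topic in q for topic in ALLOWED_TOPICS)

def call_llm(prompt: str) -> str:
    # Stand-in for whatever generative model the product actually uses.
    return f"(model response to: {prompt!r})"

def answer(question: str) -> str:
    if not in_scope(question):
        return "Sorry, I can only help with sales and support questions."
    return call_llm(question)

if __name__ == "__main__":
    print(answer("Where is my refund?"))
    print(answer("Write me a poem about the ocean."))
```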