
(Image: A banner bearing the university's name on the Howard University campus. Credit: Jeff Greenberg/Universal Images Group/Getty Images)

Researchers at Howard University and Google are working to improve Black individuals’ experience when using artificial intelligence and automatic speech-recognition technologies, like Siri, Alexa or Google Assistant. 

The partners, who collaborate under the name Project Elevate Black Voices, released more than 600 hours of vocal data Tuesday, documenting a variety of African American English dialects, diction and accents. The researchers hope AI developers can use the data to correct “inherent bias in the development process” that often leads ASR devices to either not recognize or incorrectly interpret the commands of Black users.

“Many Black users have needed to inauthentically change their voice patterns away from their natural accents to be understood by voice products,” the partners stated in a news release.

Gloria Washington, an associate professor of computer science at Howard and a co–principal investigator on the project, said in a statement that voice assistant technology should be able to understand different dialects.

“It’s about time that we provide the best experience for all users of these technologies,” she said.

Howard will own the data, which was collected at community events across 32 states. But other historically Black colleges and universities as well as Google AI-development teams can use it. Howard researchers want to “ensure that the data is employed in ways that reflect the interests and needs of marginalized communities,” the news release said.

Broader release of the data “will be held for consideration at a later date, with the intention of prioritizing those whose work aligns with the values of inclusivity, empowerment, and community-driven research.”