Companies establish standards to build trust in artificial intelligence

Suzanne Potter | California News Service
A new report looks at best practices in artificial intelligence, which is used for myriad applications including facial recognition programs. Photo Credit: DedMityay / Adobe Stock

Artificial intelligence is growing quickly, so companies including Google, Amazon, Verizon and Microsoft have come together to establish best practices in the field.

They seek to foster responsible governance, prioritizing privacy, accountability and benefit to society.

Miriam Vogel, president and CEO of the nonprofit Equal AI, just published a report laying out the standards, in a bid to build trust in AI systems.

“This work is not just the right thing to do, it’s actually good business as well,” Vogel contended. “It is a competitive advantage for a company to follow the framework because the end result is building trust in their AI systems. It’s building trust with their employees. It’s building trust with their consumers.”

AI is a machine-based system capable of leveraging huge data sets to make predictions or decisions, and is behind such technology as the Alexa personal assistant, autonomous vehicles and ChatGPT. The new standards seek to ensure AI tech is safe, inclusive and effective for all possible end users.

Vogel noted that while many current laws already govern AI, more regulation is likely going forward, and companies cannot wait for the dust to settle as they forge ahead.

“This framework is intended to help people understand what they need to do now to make sure that they are not creating any unintentional harm,” Vogel outlined. “That they’re not inviting liability, either in litigation, prosecution, or above-the-fold terrible headlines.”

The framework is divided into six main categories: responsible AI values and principles, accountability, documentation, defined processes, multi-stakeholder reviews, and metrics to monitor progress.
