A San Francisco-based startup called Credo AI is emerging from stealth to help companies manage the regulatory and ethical risks of their artificial intelligence products.
Why it matters: The more widespread artificial intelligence becomes, the more important it is to build compliance and ethical standards into AI from the start.
- To do that, companies will need to bridge the gap between operating machine learning products and understanding how those systems will actually behave in the real world.
What they’re saying: Credo — which has taken in $5.5 million in funding so far — was founded in 2020 with the goal of “asking how we ensure the unintended consequences of AI, which has now become the fabric of our lives, can be managed,” says Navrina Singh, a Qualcomm and Microsoft veteran and Credo’s founder and CEO.
- “And we want to promote that accountability among the company, among the stakeholders, among the [larger] ecosystem, and we want to do it at scale.”
How it works: Credo tries to do that in part by providing what it calls an “auditable record” of the data and decisions that go into creation, testing, deployment and monitoring of AI products.
- “Whether it’s your policy team or whether it’s your data science team, it allows you to really align on what good looks like,” says Singh.
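The article doesn't detail what Credo's auditable record actually contains, but as a rough illustration, here is a minimal sketch of how such a record might log the data and decisions behind an AI product across its lifecycle. The names (AuditableRecord, GovernanceEvent) and fields are hypothetical, chosen only to make the idea concrete, not a description of Credo's product.

```python
# Illustrative sketch only: Credo AI's actual data model is not described in the
# article. This shows one plausible shape for an "auditable record" that tracks
# decisions across creation, testing, deployment and monitoring of an AI product.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class GovernanceEvent:
    """A single logged decision: who decided what, at which lifecycle stage, and why."""
    stage: str          # e.g. "creation", "testing", "deployment", "monitoring"
    actor: str          # the team or person accountable for the decision
    decision: str       # what was decided (dataset approved, threshold set, etc.)
    rationale: str      # why, so policy and data science teams can align on it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


@dataclass
class AuditableRecord:
    """An append-only trail of governance events for one AI product."""
    product: str
    events: List[GovernanceEvent] = field(default_factory=list)

    def log(self, stage: str, actor: str, decision: str, rationale: str) -> None:
        """Append a new governance event to the record."""
        self.events.append(GovernanceEvent(stage, actor, decision, rationale))

    def history(self, stage: Optional[str] = None) -> List[GovernanceEvent]:
        """Return logged events, optionally filtered to one lifecycle stage."""
        return [e for e in self.events if stage is None or e.stage == stage]


if __name__ == "__main__":
    record = AuditableRecord(product="loan-approval-model")
    record.log("creation", "data science", "Used 2019-2021 application data",
               "Most recent data reflecting the current applicant population")
    record.log("testing", "policy team", "Approved fairness evaluation",
               "Group disparity stayed below the agreed threshold")
    for event in record.history("testing"):
        print(event.timestamp, event.stage, event.actor, event.decision)
```

The point of a structure like this is that each stage's choices are recorded with an accountable owner and a rationale, which is what would let a policy team and a data science team "align on what good looks like."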
The catch: Part of the challenge lies in defining what good actually looks like when, as Singh says, “it means so many different things to everybody.”
- And if answering that question can be difficult for conventional businesses, it’s even tougher in the fast-changing world of AI, where ever-larger and more complex models seem to arrive by the week.
What to watch: Whether AI governance as a service can match the turbo-charged growth of the overall AI economy.
- The State of AI 2021 report, released last week, found “research into AI safety and the impact of AI still lags behind its rapid commercial, civil, and military deployment.”