White House science advisers call for an “AI Bill of Rights”

The Biden administration is exploring a “bill of rights” to govern facial recognition and other potentially harmful uses of artificial intelligence, but the problems AI poses are much bigger than figuring out how to regulate a new technology.

The big picture: There’s no good way to regulate AI’s role in shaping a fair and equitable society without deciding what that society should look like, including how power should be balanced among individuals, corporations and the government.

Driving the news: The White House’s Office of Science and Technology Policy launched a fact-finding mission yesterday that will ultimately result in a “‘bill of rights’ to guard against the powerful technologies we have created,” OSTP director Eric Lander and his deputy Alondra Nelson wrote in an op-ed published by Wired.

What they’re saying: “It’s important to start the conversations about what’s acceptable — and unacceptable — regarding AI and our personal data now, before it is too late,” says Sanjay Gupta, global head of product and corporate development at Mitek Systems, a leader in digital identity verification.

AI’s biggest boosters can fall victim to a kind of techno-solutionism — expecting technology to efficiently solve structural, societal problems.

  • Yes, but: Focusing too narrowly on the applications of AI risks a reverse techno-solutionism: believing that the fastest way to fix social problems is to tweak the technologies that affect them rather than to address the often intractable issues that underlie them.

Reminder: The original Bill of Rights is nearly 230 years old, and we’re still debating the meaning of nearly every one of its 652 words.