Why abortion is tech's next big reputational risk

Published July 13, 2022

By Casey Newton for Platformer

On July 6, amid growing questions about how the company would deploy its latest innovations responsibly, Google said it had adopted a new approach to ethical design. The company had begun testing a “Moral Imagination” workshop, it said — “a two-day, live-video immersive set of activities for product teams.”

The workshop’s purpose was to think through the ramifications of artificial intelligence; it was announced in the aftermath of a responsible AI researcher there telling the Washington Post that he believed one of Google’s AI systems had become sentient. So far, 248 employees representing 23 product and research teams have participated in these imagination workshops, Google said — “resulting in deeper, ongoing AI ethics consultations on product development.”

One good aspect of the Trump presidency was the way it forced tech giants to do more of this kind of reckoning: asking themselves how current and future products would likely be misused and abused, and modifying them accordingly. And yet recent events, most notably the Supreme Court’s decision overturning Roe v. Wade, have illustrated how, particularly in the way they collect and store data, the giants aren’t stretching their moral imaginations nearly far enough.

Washington Post tech columnist Geoffrey A. Fowler writes:

“It is their responsibility as a company to keep people’s data secure — but as it currently stands, it shifts the work onto the user to figure out how to delete their data,” said Jelani Drew-Davi, campaigns director of Kairos, a left-leaning digital advocacy group.

I understand there’s a sad irony in this exercise. “Take a minute and just feel how intolerable it is for us to essentially be supplicants toward a massively wealthy, massively powerful data company, saying, ‘Please, please, please stop collecting sensitive data,’” said Shoshana Zuboff, author of The Age of Surveillance Capitalism.
