Money, mimicry and mind control: Big Tech slams ethics brakes on AI

SAN FRANCISCO, September 8 (Reuters) – Last September, Google’s (GOOGL.O) cloud unit looked into using artificial intelligence to help a financial firm decide whom to lend money to.

After weeks of internal discussions, it turned down the client’s idea, deeming the project too ethically risky because the AI technology could perpetuate biases around race and gender.

Since early last year, Google has also blocked new AI features that analyze emotions, fearing cultural insensitivity, while Microsoft (MSFT.O) has restricted software that mimics voices and IBM (IBM.N) has rejected a client request for an advanced facial recognition system.

All of these technologies were curbed by panels of executives or other leaders, according to interviews with the AI ethics chiefs at the three U.S. technology giants.

Reported here for the first time, their vetoes and the deliberations that led to them reflect a nascent industry-wide effort to balance the pursuit of lucrative AI systems with greater consideration of social responsibility.

“There are opportunities and harms, and our job is to maximize opportunities and minimize harms,” said Tracy Pizzo Frey, who sits on two ethics committees at Google Cloud as its managing director for Responsible AI.

Such judgment calls can be difficult.

Microsoft, for example, had to weigh the benefit of using its voice mimicry technology to restore the speech of people with impairments against risks such as enabling political deepfakes, said Natasha Crampton, the company’s chief responsible AI officer.

Human rights activists say that decisions with potentially far-reaching consequences for society should not be made internally alone. They argue that ethics committees cannot be truly independent and that their public transparency is constrained by competitive pressures.

Jascha Galaski, advocacy officer at the Civil Liberties Union for Europe, views external oversight as the way forward, and U.S. and European authorities are indeed drafting rules for the fledgling field.

If corporate AI ethics committees become “really transparent and independent – and this is all very utopian – then this could be even better than any other solution, but I don’t think that’s realistic,” said Galaski.

The companies said they would welcome clear regulation of AI use, and that such rules were vital both for customer and public trust, much like car-safety rules. They said acting responsibly was also in their financial interest.

They are keen, however, for any rules to be flexible enough to keep up with innovation and the new dilemmas it creates.

Among the complex considerations to come, IBM told Reuters that its AI Ethics Board has begun discussing how to police an emerging frontier: implants and wearables that wire computers to brains.

Such neurotechnologies could help impaired people control movement, but they raise concerns such as the prospect of hackers manipulating thoughts, said Christina Montgomery, IBM’s chief privacy officer.


Tech companies acknowledge that just five years ago they were launching AI services such as chatbots and photo tagging with few ethical safeguards, tackling misuse or biased results only with later updates.

But as political and public scrutiny of AI failings grew, Microsoft in 2017 and Google and IBM in 2018 established ethics committees to review new services from the start.

Google said its money-lending quandary arose last September when a financial services company figured AI could assess people’s creditworthiness better than other methods.

The project appeared well suited for Google Cloud, whose expertise in developing AI tools that help in areas such as detecting abnormal transactions has attracted clients like Deutsche Bank (DBKGn.DE), HSBC (HSBA.L) and BNY Mellon (BK.N).

Google’s unit anticipated that AI-based credit scoring could become a market worth billions of dollars a year and wanted a foothold.

However, its ethics committee of about 20 managers, social scientists and engineers who review potential deals unanimously voted against the project at an October meeting, Pizzo Frey said.

The AI system would need to learn from past data and patterns, the committee concluded, and thus risked repeating discriminatory practices from around the world against people of color and other marginalized groups.

In addition, the committee, known internally as “Lemonaid,” enacted a policy to skip all financial services deals related to creditworthiness until such concerns could be resolved.

Lemonaid had rejected three similar proposals over the prior year, including from a credit card company and a business lender, and Pizzo Frey and her counterpart in sales had been eager for a broader ruling on the issue.

Google also said that its second Cloud ethics committee, known as Iced Tea, this year placed under review a service released in 2015 for categorizing photos of people by four expressions: joy, sorrow, anger and surprise.

The move followed a ruling last year by Google’s company-wide ethics panel, the Advanced Technology Review Council (ATRC), holding back new services related to reading emotion.

The ATRC – over a dozen top executives and engineers – determined that inferring emotions could be insensitive because facial cues are associated with feelings differently across cultures, said Jen Gennai, founder and head of Google’s Responsible Innovation team.

Iced Tea has blocked 13 planned emotions for the Cloud tool, including embarrassment and contentment, and could soon drop the service altogether in favor of a new system that would describe movements such as frowning and smiling without seeking to interpret them, Gennai and Pizzo Frey said.


Microsoft, meanwhile, developed software that could reproduce someone’s voice from a short sample, but the company’s Sensitive Uses panel then spent more than two years debating the ethics of its use and consulted company President Brad Smith, senior AI officer Crampton told Reuters.

She said the panel – specialists in fields such as human rights, data science and engineering – eventually greenlighted the full release of Custom Neural Voice in February this year. But it placed restrictions on its use, including that subjects’ consent be verified and that purchases be approved by a team of “Responsible AI Champs” trained in company policy.

IBM’s AI board, comprising about 20 department leaders, wrestled with its own dilemma when, early in the COVID-19 pandemic, it examined a client request to customize facial recognition technology to spot fevers and face coverings.

Montgomery said the board, which she co-chairs, declined the request, concluding that manual checks would suffice and intrude less on privacy because photos would not be retained for any AI database.

Six months later, IBM announced it was discontinuing its face recognition service.


Aiming to protect privacy and other freedoms, lawmakers in the European Union and the United States are pursuing far-reaching controls on AI systems.

The EU’s Artificial Intelligence Act, on track to be passed next year, would bar real-time facial recognition in public spaces and require tech companies to vet high-risk applications, such as those used in hiring, credit scoring and law enforcement.

U.S. Congressman Bill Foster, who has held hearings on how algorithms carry forward discrimination in financial services and housing, said new laws governing AI would ensure a level playing field for vendors.

“When you ask a company to take a hit on profits to accomplish societal goals, they say, ‘What about our shareholders and our competitors?’ That’s why you need sophisticated regulation,” said the Illinois Democrat.

“There may be areas that are so sensitive that you will see tech companies staying out deliberately until there are clear rules of the road.”

Indeed, some AI advances may simply be on hold until companies can counter ethical risks without dedicating enormous engineering resources.

After Google Cloud turned down the custom financial AI last October, the Lemonaid committee told the sales team that the unit aims to start developing credit-related applications someday.

First, research into combating unfair biases must catch up with Google Cloud’s ambitions to increase financial inclusion through the “highly sensitive” technology, according to the policy circulated to staff.

“Until that time, we are not in a position to deploy solutions.”

Reporting by Paresh Dave and Jeffrey Dastin; Editing by Kenneth Li and Pravin Char

Our standards: The Thomson Reuters Trust Principles.