
10 AI ethics news stories that made waves in 2020

Here is a list of 10 news stories that made waves within the tech and data science community in 2020, from AI ethics oversight to the impact of COVID-19 on accelerated digital transformation worldwide.

1.      Digital transformation took off once and for all

The disruption caused by the global coronavirus pandemic has accelerated many aspects of our daily lives, from remote working to a prioritized focus on evaluating and de-risking the end-to-end value chain. Climate action and medicine have also benefited from this shift online, creating opportunities for carbon emission reductions and telemedicine at scale. On the other hand, there has been a significant impact on the technology sector, affecting the supply of raw materials, disrupting the electronics value chain, and creating inflationary risk for products.

On a related note, the speed of change is giving compliance experts more than one headache. A new survey from Baker McKenzie, reported by LAW.com, indicates that while companies are spending on technology to help manage the impact of COVID-19, they are often doing so without the input of their compliance personnel. Almost half of the survey respondents, 1,550 compliance leaders working across 18 global markets and six industry sectors, said that their organizations accelerated adoption of digital tools and products due to the pandemic. The massive shift online, from remote work to the touchless payments adopted to curb the spread of the coronavirus, has proved a boon to the digitalization of virtually every industry. Tellingly, almost the same proportion of respondents (47%) indicated that the compliance team is being excluded from strategic decision-making on technology and digital acquisitions. That lack of oversight can come with consequences.

2.      Google fires Timnit Gebru, its leading AI ethics researcher

In late November, Google artificial intelligence researcher Timnit Gebru was asked by a senior manager to either retract or remove her name from a research paper she had co-authored, after an internal review found parts of it objectionable. The paper in question discussed ethical issues raised by recent advances in AI technology that works with language, which Google has said is important to the future of its business. Gebru says she objected because the review process was unscholarly, and she carried on with her day-to-day work. Soon afterwards, Gebru shared the news on Twitter, explaining that she learned she had been immediately terminated, while on holiday, when one of her direct reports tried unsuccessfully to reach her. A Google spokesperson said she was not fired but resigned, and declined further comment.

Since then, Gebru has received an outpouring of support from AI researchers at Google, top universities, and big corporate players such as Microsoft and chipmaker Nvidia. Within days of the news of Gebru’s dismissal breaking, hundreds of Google employees signed an open letter calling on the company to release details of its handling of Gebru’s paper and to commit to “research integrity and academic freedom.”

3.      NeurIPS sets ethical impact criteria for submissions

This year, NeurIPS, the world’s best-regarded AI and advanced analytics conference, turned a new page by setting new criteria for accepting submissions. “Following growing concerns with both harmful research impact and research conduct in computer science, including concerns with research published at NeurIPS”, this year’s conference introduced two new mechanisms for ethical oversight: a requirement that authors include a “broader impact statement” in their paper submissions, and additional evaluation criteria asking paper reviewers to identify any potential ethical issues with the submissions.

Topic-wise, this year’s conference was defined by the tension between corporate interests, human rights, ethics, and power. Many workshops focused on analyzing bias against a given group. The Muslims in AI workshop, for example, served as a discussion arena for participants to explore GPT-3’s reported anti-Muslim bias, as well as the ways AI and IoT devices are used to control and surveil Muslims in China. The Washington Post reported in December that Huawei is thought to be working on AI with a “Uighur alarm” that lets authorities track members of the Muslim minority group. Notably, Huawei is a platinum sponsor of NeurIPS. In response to questions about how NeurIPS handles ethical considerations when it comes to sponsors, the conference’s organizers have formed a new sponsorship committee to evaluate sponsor criteria and “determine policies for vetting and accepting sponsors.”

4.      Industry associations and tech experts identify 10 founding values for AI in business

The Institute of Business Ethics, together with industry associations and technology experts, has identified 10 founding values and principles that should frame the use of artificial intelligence in business. The framework, which goes by the acronym ARTIFICIAL, is intended to guide decision-making and is publicly available.

5.      OpenAI releases the most powerful NLP tool yet, GPT-3, while Google readies its own large language model, PEGASUS

The launch of OpenAI’s GPT-3 autoregressive language model has been one of the biggest milestones in advanced analytics, specifically natural language processing (NLP). Described by OpenAI as a ‘machine learning toolbox’, GPT-3 relies on 175 billion parameters, ten times more than any previous non-sparse language model. To date it has been used for translation and question-answering, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, performing three-digit arithmetic, and generating news articles that human evaluators have difficulty distinguishing from articles written by humans.
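The headline capability here is in-context (few-shot) learning: the model picks up a task from a handful of examples placed directly in the prompt, with no gradient updates. A minimal sketch of the paradigm, using the small open model GPT-2 via the Hugging Face transformers library as a stand-in, since GPT-3 itself is reachable only through OpenAI’s API; the prompt format and model choice are illustrative assumptions:

```python
# Minimal sketch of few-shot prompting with an autoregressive language model.
# GPT-2 stands in for GPT-3, which is only accessible via OpenAI's API.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A few in-context examples define the task (here, word unscrambling)
# without any fine-tuning -- the "on-the-fly" adaptation GPT-3 popularized.
prompt = (
    "Unscramble the word.\n"
    "Scrambled: tac -> cat\n"
    "Scrambled: odg -> dog\n"
    "Scrambled: dirb ->"
)

out = generator(prompt, max_new_tokens=5, do_sample=False)
print(out[0]["generated_text"])
```

At GPT-2’s scale the completion will often be wrong; the GPT-3 paper’s central finding is that this kind of in-context performance improves sharply with model size.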

6.      Sony to review all of its AI-powered products for ethical impact

In late 2020, Sony announced it will review artificially intelligent products from development through post-launch on criteria such as privacy protection. An AI Ethics Committee, whose head is appointed by the CEO, will have the power to halt development of products with issues, so ethically deficient offerings will be modified or dropped. According to the company, even products well into development could still be dropped, and products already sold could be recalled if problems are found. The company plans to gradually extend the AI ethics rules to offerings in finance and entertainment as well.

7.      CB Insights Game Changers Awards features AI transparency as a key criterion

CB Insights’ Game Changers 2020 features AI transparency as one of its 12 categories of game-changing innovations and startups to watch. AI transparency covers explainability (how and why an AI system produces its outputs) and trustworthiness (a framework indicating when, where, and to what degree a system can be trusted). The report lists three startups as leaders in this area: Fiddler Labs (an explainable AI engine), Kydi (automation of regulated business processes), and DarwinAI (AI tools providing explainability, trustworthiness assessment, and model optimization).
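To make “explainability” concrete, here is a minimal sketch of one widely used model-agnostic technique, permutation feature importance, using scikit-learn on a bundled dataset. This is a generic illustration and is not tied to any of the listed startups’ products:

```python
# Minimal sketch of model-agnostic explainability: permutation importance
# scores how much each input feature drives a model's predictions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# large drops identify the features the model actually relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda t: -t[1])[:5]:
    print(f"{name}: {score:.3f}")
```

The appeal of this family of methods is that they treat the model as a black box, so the same audit works across model types.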

8.      Google expands its cloud services with ethical AI review

As facial recognition becomes ever more ingrained in everyday life, growing more pervasive in consumer products and law enforcement, more voices are raising concerns about the potential bias these algorithms can carry. Over the past couple of years, researchers have tested features of Microsoft’s and IBM’s face-analysis services designed to identify the gender of people in photos. From an early stage, both companies’ algorithms were virtually perfect at identifying the gender of men with lighter skin, but frequently erred when analyzing images of women with darker skin. Per various independent studies, the skewed accuracy appears to be due to underrepresentation of darker skin tones in the training data used to create the face-analysis algorithms. Against this backdrop, Google announced plans to offer its cloud customers AI ethics services, such as reviewing projects for algorithmic bias, extending ethical review beyond its own products.
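The audits behind those findings rest on disaggregated evaluation: measuring accuracy per demographic subgroup instead of in aggregate. A minimal sketch, using illustrative placeholder records rather than real audit data:

```python
# Minimal sketch of a disaggregated (per-subgroup) accuracy audit,
# the method behind the face-analysis studies cited above.
# The records below are illustrative placeholders, not real audit data.
from collections import defaultdict

records = [
    # (subgroup, true_label, predicted_label)
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "male"),    # misclassification
    ("darker-skinned women", "female", "female"),
]

totals, correct = defaultdict(int), defaultdict(int)
for group, truth, pred in records:
    totals[group] += 1
    correct[group] += (truth == pred)

# Aggregate accuracy can look fine while one subgroup fares far worse.
for group in totals:
    print(f"{group}: {correct[group] / totals[group]:.0%} accuracy")
```

The design point is simple: a single headline accuracy number can mask large error-rate gaps, which only surface when results are broken out by subgroup.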

9.      SolarWinds hack reminds public and private sectors of the importance of remaining vigilant

Close to the year’s end, IT management software company SolarWinds reported a malware attack that put about 18,000 of its customers, ranging from the U.S. government to Microsoft, at risk of compromise. A Russian intelligence agency is said to have carried out the sophisticated malware campaign, according to the U.S. State Department, the Cybersecurity and Infrastructure Security Agency (CISA), and security firms. The massive breach started earlier in the year, when hackers compromised SolarWinds’ Orion, a piece of software that lets an organization see what is happening on its computer networks. As explained in SolarWinds’ filing to the SEC, hackers inserted malicious code into an otherwise legitimate Orion software update. This is known as a supply-chain attack, since it infects software while it is still under assembly. The impact is far-reaching, as thousands of companies and government agencies around the world reportedly use the Orion software.
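Supply-chain attacks are hard to catch precisely because the tampered update arrives through the legitimate, vendor-signed distribution channel, as the trojanized Orion update did. Still, the baseline defense of verifying a downloaded artifact against a vendor-published checksum is worth illustrating; in this minimal sketch, the file name and expected digest are placeholders:

```python
# Minimal sketch of a baseline software supply-chain integrity check:
# compare a downloaded artifact's SHA-256 digest with the vendor-published one.
# Note: this would NOT have caught the Orion compromise, where the malicious
# code was injected before the vendor signed and published the update.
import hashlib
import sys

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder values for illustration.
ARTIFACT = "orion-update.msi"
PUBLISHED_DIGEST = "0123abcd..."  # obtained from the vendor's site, out of band

if sha256_of(ARTIFACT) != PUBLISHED_DIGEST:
    sys.exit("Digest mismatch: refusing to install.")
print("Digest matches published value.")
```

Checksums verify transport integrity, not vendor integrity, which is why defenses against attacks like this one focus further upstream, on hardening the build and release pipeline itself.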

10. China unveils draft guidelines to limit the scope of mobile apps’ collection of personal data

In December, the Cyberspace Administration of China released a set of draft rules covering 38 types of apps, from online shopping and instant messaging to ride-hailing and bike-sharing. The step solidifies China’s increasing scrutiny of its technology sector, setting the basis for fairer competition (the government unveiled a draft anti-monopoly framework for tech firms in November) and a more secure, transparent, and fair marketplace for consumers. “In recent years, mobile internet applications have been widely used and have played an important role in promoting economic and social development and serving people’s livelihoods,” the cyber administration said in a statement. “At the same time, it is common for apps to collect … personal information beyond their scope, and users cannot install and use them if they refuse to agree.”