
The University of Washington computer science school denounced comments made online by a retired professor amid a debate about AI ethics, Timnit Gebru’s controversial exit from Google, so-called “cancel culture,” and more.
A heated back-and-forth involving longtime AI researcher Pedro Domingos, and the response from the UW, demonstrates the complexity of public discourse on controversial matters. It also highlights unanswered questions about the societal implications of artificial intelligence, and is the latest example of the backlash that can occur when politics collides with academia and the tech industry.
Domingos, who joined the UW faculty in 1999 and is the author of The Master Algorithm, sparked the initial discussion on Twitter after he questioned why the Neural Information Processing Systems (NeurIPS) conference was using ethics reviews for submitted papers.
“It’s alarming that NeurIPS papers are being rejected based on ‘ethics reviews,’” he tweeted last week. “How do we guard against ideological biases in such reviews? Since when are scientific conferences in the business of policing the perceived ethics of technical papers?”
His opinion drew a number of responses from other top data scientists and people involved with NeurIPS.
“The problem here is that folks like him lack the humility to admit that they do not have skills in qualitative work and dismiss it all as a ‘slippery slope,’” tweeted Rumman Chowdhury, founder of Parity and former global lead for Responsible AI at Accenture Applied Intelligence. “Qualitative methods have rigor. Ethical assessment can be generalizable and sustainable.”
Hi Pedro, I helped create the NeurIPS ethical review process. Looks like there is a healthy discussion going on here already, but let me know if I can answer any specific questions. Up front, I should say that the ethical reviewers gave feedback; they didn’t accept/reject papers.
— raia hadsell (@RaiaHadsell) December 8, 2020
The discourse on Twitter then shifted to the earlier decision to rename NeurIPS. There were concerns over the previous name, NIPS, due to racial slurs and sexism.
That set off a lengthy exchange between Domingos and Anima Anandkumar, a professor at Caltech and director of machine learning research at NVIDIA who led a petition to change the name of the conference. Pornography came up in a discussion about web search results for the term “nips,” sparking a response from Katherine Heller, diversity and inclusion chair for NeurIPS 2020, and Ken Anderson, chair of the University of Colorado’s computer science department.
So you get porn sites and I don’t? Must be Google’s personalization algorithm.
— Pedro Domingos (@pmddomingos) December 11, 2020
Hi! This was flagged to me as an inappropriate conversation, which I will ask you to stop. Porn sites were associated with the old name for years and having that denied further hurts members of our community. We have now moved on. Thanks.
— Katherine Heller (@kat_heller) December 12, 2020
As a professor and chair of a department of computer science at a public university, I find this behavior unacceptable, as would many of my colleagues. CS departments must continue our work of broadening participation in computing and be united in opposing this behavior.
— Ken Anderson (@kenbod) December 12, 2020
As of Tuesday, Anandkumar’s Twitter account was not active. She declined to comment for this story.
NeurIPS posted a statement on ethics, equity, inclusivity and its code of conduct on its homepage. We’ve reached out to the conference for comment.
“Having observed recent discussions taking place across social media, we feel the need to reiterate that, as a community, we must be mindful of the impact that statements and actions have on our peers, and future generations of AI / ML students and researchers,” it reads. “It is incumbent upon NeurIPS and the AI / ML community as a whole to foster a collaborative, welcoming environment for all. Therefore, statements and actions contrary to the NeurIPS mission and its Code of Conduct cannot and will not be tolerated.”
The Twitter chatter also delved into the recent departure of Gebru, a top AI ethics researcher at Google, and whether she was fired by the company or resigned following a controversy related to an AI ethics paper. Domingos tweeted that Gebru “was creating a toxic environment within Google AI” and said that she was not fired, despite Gebru stating otherwise.
I had read it, and I’m interested in facts. You, on the other hand, seem to be more interested in insulting people, which is perfect for an ethics researcher.
— Pedro Domingos (@pmddomingos) December 11, 2020
When the person insulting people accuses the people he is currently insulting of insulting people….a lot of people have been learning about the word gaslighting lately and you continue to further educate us on it.
— Timnit Gebru (@timnitGebru) December 11, 2020
Heller then tweeted at Domingos and said he was violating the NeurIPS code of conduct.
Later that night, the UW’s Allen School of Computer Science & Engineering issued a lengthy statement via Twitter. The school’s leadership took issue with Domingos “engaging in a Twitter flame war belittling individuals and downplaying valid concerns over ethics in AI,” and with his use of the word “deranged.” Here’s the statement in full:
#UWAllen leadership is aware of recent “discussions” involving Pedro Domingos, a professor emeritus (retired) in our school. We do not condone a member of our community engaging in a Twitter flame war belittling individuals and downplaying valid concerns over ethics in AI. We object to his dismissal of concerns over the use of technology to further marginalize groups ill-served by tech. While potential for harm does not necessarily negate the value of a given line of research, none of us should be absolved from considering that impact. And while we may disagree about approaches to countering such potential harm, we should be supportive of trying different methods to do so.
We also object in the strongest possible terms to the use of labels like “deranged.” Such language is unacceptable. We urge all members of our community to always express their points of view in the most respectful and collegial manner.
We do encourage our students to engage vigorously on issues of AI ethics, diversity in tech and industry-research relations. All are essential to our field and our world. But we are all too familiar with counterproductive, inflammatory, and escalating social-media arguments.
We have asked Pedro to clarify that he tweets as an individual, not representing the Allen School or the University of Washington. We would further argue that this entire mode of discourse is damaging and unbecoming.
The Allen School is committed to addressing AI ethics and fairness in concrete ways. That work is ongoing, and many of our actions are listed on our website.
One key element is to expand the inclusion of ethics in our curriculum and prepare students to consider the very real impact that technology can have, especially on marginalized communities.
In recent years, we have added several courses on this topic at both the graduate and undergraduate levels, and we plan to continue to work toward expanding that aspect of our curriculum.
As a school, we have stated our commitment to be more inclusive and to consider the impact of our work on people and communities. We will not be deterred, by naysayers inside or outside of our community, from putting in the hard work required to achieve these goals.
Signed,
Members of the Allen School Leadership
Magdalena Balazinska, Prof. and Director
Dan Grossman, Prof. and Vice Director
Tadayoshi Kohno, Prof. and Associate Director for Diversity, Equity & Inclusion
Ed Lazowska, Prof. and Associate Director for Development & Outreach
Domingos described the school’s response as “cowering before the Twitter mob.”
A heartfelt thanks to everyone who has expressed their support by tweet, email and voice. My department’s cowering before the Twitter mob was as craven and blinkered as you’d expect, but it’s heartening to see so many people who can still think. Keep up the fight!
— Pedro Domingos (@pmddomingos) December 13, 2020
We followed up with Magdalena Balazinska, a well-regarded researcher and educator who took over as the Allen School director last year. Here’s what she had to say about the matter:
“As leader of the Allen School, one of my highest priorities is to promote a culture and an environment that is diverse, equitable, and inclusive. I also deeply care about an environment in which people discuss issues, even potentially controversial ones, openly, with empathy, and without bullying. Witnessing what happened on Twitter this past week was disheartening. We need to find ways to come together. The entire tech industry should work toward all these goals, and we have much work to do.”
Ed Lazowska, a longtime leader at the Allen School, said the school is committed to academic freedom and freedom of speech.

“We encourage good-faith dialogue, including on controversial issues,” he said. “But we expect members of our community to engage in that dialogue in a respectful, collegial, and constructive manner that is free from personal attacks and is not dismissive of people’s lived experiences. Pedro failed to live up to those standards and we felt compelled to make clear where we stand.”
Lazowska added: “Pedro is within his rights to tweet. We felt it was important to distance the school from his views.”
In an email exchange with GeekWire, Domingos said the Allen School should have “stood by my right to voice my opinions, and back me up in my efforts to free the machine learning community from the miasma descending on it.”
“Instead, they chose to pay their obeisance to the ultra-left crowd, as they have before,” Domingos said, referencing Stuart Reges, another UW computer science professor who was criticized for his 2018 essay claiming that women are underrepresented in software engineering because of personal preference, not because institutional barriers deter them from pursuing careers in tech.
Reges told GeekWire he was upset that the Allen School “has thrown Pedro under the bus.”
“He has raised significant questions about the activism surrounding Timnit Gebru’s termination from Google and new efforts to inject ethics reviews into all aspects of AI research,” said Reges. “The greatest sin he has committed has been to refer to ‘deranged activists.’ The unified mob reaction to try to cancel him proves that his opponents and the Allen School leadership are not willing to engage in meaningful dialog to explore the issues.”
Domingos said the Twitter spat highlights how the machine learning community is being “progressively strangled by political correctness and extreme left-wing politics.”
“The larger problem is that academia and the tech industry, not just machine learning, are being strangled by a crowd that refuses to allow the free exchange of ideas on which research depends, and is successfully imposing an increasingly far-left orthodoxy,” he told GeekWire. “People live in fear of their attacks.”
If you’ve been targeted by the cancel crowd, don’t hide in shame. Shout it from the rooftops. Bring shame and opprobrium on them. That is how we end this.
— Pedro Domingos (@pmddomingos) December 16, 2020
Daniel Lowd, an associate professor at the University of Oregon who earned his PhD from the UW in 2010, tweeted that he “would like to publicly disavow and distance myself from these comments by my PhD advisor and collaborator.”
I would like to publicly disavow and distance myself from these comments by my PhD advisor and collaborator.
I’ve worked with Pedro on a number of projects, and I respect his insight in some areas, but his rhetoric here is both false and harmful. https://t.co/Lk3f0F0s6S
— Daniel Lowd (@dlowd) December 11, 2020
I’m sad, too, Pedro. I thought I had a colleague who respected people with different experiences and viewpoints, who listened to evidence and considered when he might be wrong, who argued in good faith. And I was wrong.
— Daniel Lowd (@dlowd) December 15, 2020
I sympathize, and now I understand better where you’re coming from. Of course I respect their humanity. But – crucial point – that does not justify the cancel culture.
— Pedro Domingos (@pmddomingos) December 15, 2020
The response to Domingos’ original tweet about ethics reviews of AI papers also reflects the pressing dilemma of AI ethics as the technology increasingly infiltrates everyday life.
Considering the ethical impact of AI research is “absolutely essential,” said Oren Etzioni, a UW computer science professor emeritus (retired) who is now CEO of Seattle’s Allen Institute for Artificial Intelligence.
“That said, it’s hard to argue with Pedro’s observations about online attacks and the refusal to allow the free exchange of ideas,” said Etzioni, who noted that he was speaking to GeekWire as an individual and not a representative of any institution.

Etzioni pointed to a platform his father launched called Civil Dialogues that encourages deliberation on pressing issues. He also noted his “Hippocratic oath,” created in 2018 as a way to encourage AI software developers to remember their ethical burden.
Asked about Domingos’ comments on Twitter, Seattle University senior instructor and AI ethics expert Nathan Colaner said “it seems that his underlying attitude is that ethical concerns in AI are overblown, and that ethicists are making too much of their concerns, specifically when it comes to algorithmic bias.”
“I think that’s the wrong attitude to have,” Colaner said. “First of all, there is no legitimate debate to be had about whether algorithms are ‘neutral.’ It is also now clear that AI is not going to remove human bias, as we sometimes used to hear. But what is still unclear is whether human bias is a worse or less bad problem than algorithmic bias.”
Colaner said there are many unanswered questions that need answers as AI innovation continues at a rapid pace. The AI ethics community is “basically scrambling,” he said, adding that he supports the Allen School’s statement. Colaner is managing director of the Initiative in Ethics and Transformative Technologies, an institute at Seattle U made possible through a donation from Microsoft.
“Healthy debate sharpens everyone’s minds,” Colaner said, “but since we in the AI ethics community have serious, time-sensitive work to do, distraction is not useful, which is why Twitter made the ‘unfollow’ button.”