Autistic Purdue professor accused of being AI for lacking ‘warmth’ in email
An assistant professor at Purdue University, who has been diagnosed with autism, said that they were accused by a fellow researcher of being an AI bot after sending an email that allegedly lacked “warmth.”
Rua Mea Williams, 37, warned that people with disabilities might be confused with artificial intelligence because fellow professors are not accounting for those who have neurological issues or are not native English speakers.
“Kids used to make fun of me for speaking robotically. That’s a really common complaint with Autistic children,” Williams told The Post on Thursday about the misconception.
“ChatGPT detectors are flagging non-native English speakers as using ChatGPT when really it’s just that they have an idiosyncratic way of pulling together words that’s based on translating their native language into English.”
Williams, who uses they/them pronouns, holds a Ph.D. in human-centered computing.
They chose to share the interaction on Twitter to illustrate how the mistake could happen to anyone with a disability.
“The AI design of your email is clever, but significantly lacks warmth,” the researcher replied to Williams’ email, followed by a request to speak with a “human being.”
“It’s not an AI. I’m just Autistic,” the professor replied, telling The Post it was “probably” not the first time they’ve been accused of “roboticness,” but it was the first time they received the “bot implication.”
Williams started teaching in 2020 in the university’s User Experience Design Program — which focuses on the design of cutting-edge technologies — while reviewing and critiquing research from others in academia and the scientific community.
Williams is contractually unable to share who the peer was or what led to the accusation of being AI — but said they were simply giving the information for the study they were asked to provide.
“It is the first time in this recent milieu of AI suspicion that I got caught in a, ‘Are you for real?’” they said.
The tweet has since been viewed nearly 10 million times on Twitter — with Williams saying they have picked up on an alarming trend in the replies to the thread since posting the interaction Wednesday.
“There’s lots of people talking about, ‘Kids used to make fun of me for speaking robotically.’ That’s a really common complaint with Autistic children.”
They revealed that since the rise of ChatGPT and other AI bots, fellow professors are “suspicious” of all their students’ work but are not accounting for the ones who have neurological issues or are not native English speakers.
When asked how widespread the worry is among Purdue professors that their students are using ChatGPT to cheat, Williams explained this is not an isolated issue.
“Purdue is not any different from anywhere else,” the professor shared. “There’s a lot of anxiety among faculty about this idea that all their students are cheating with this new technology.”
Williams warned that their fellow professors need to be wary of blindly accusing students of cheating without definitive proof — adding that most are not prepared for the storm that could come if they wrongfully accuse a student with autism, or any disabled student, of cheating.
Williams said they are most worried for students with undiagnosed conditions, who have no documentation in the university system showing that they may communicate differently than others.
“They’re not going to have a leg to stand on, to fight back,” they said, noting that as the technology grows more sophisticated, it will become even harder to differentiate between a student using AI to cheat and someone who communicates in a certain way because of a disability.