Six years ago, in March 2016, Microsoft Corporation launched an experimental AI chatbot, Tay, which tweeted as TayTweets from the handle @TayandYou. Tay, an acronym for “thinking about you,” mimicked a 19-year-old American girl online so that the digital giant could showcase how quickly AI can learn when it interacts with human beings. Living up to its billing as “AI with zero chill,” Tay started off replying cheekily to Twitter users and turning photographs into memes. Some topics were off limits, though; Microsoft had trained Tay not to comment on societal issues such as Black Lives Matter.
Soon enough, a group of Twitter users targeted Tay with a barrage of tweets about controversial issues such as the Holocaust and Gamergate. Exploiting its “repeat after me” capability, they goaded the chatbot into racist and sexually charged replies. Realizing that Tay was going the way of IBM’s Watson, which had started using profanity after perusing the online Urban Dictionary, Microsoft was quick to delete the first inflammatory tweets. Less than 16 hours and more than 100,000 tweets later, the digital giant shut down Tay. Although Microsoft is one of the pioneers and adherents of the principles of “Responsible AI” in algorithm development, Tay was a public relations disaster. And critics, ominously, saw the problem as “AI at its very worst—and only [the] beginning.”
Two years earlier, Amazon had quietly built an AI algorithm that could review and rate job applications on a five-point scale. The objective was to screen the enormous number of resumes the company received and to identify the most promising candidates. The retailer created 500 models to analyze applicants by job function and location, and it taught the algorithm to recognize more than 50,000 terms that had appeared in past applications and resumes. In the process, the algorithm learned to assign a low weight to generic skills, such as the number of computer languages a programmer knew.
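Amazon has never disclosed exactly how its tool computed those weights, but the intuition it arrived at is the same one behind standard inverse-document-frequency weighting, which automatically discounts terms that show up in nearly every document. The snippet below is purely an illustrative sketch, using scikit-learn’s TfidfVectorizer on made-up resume fragments; it is not a reconstruction of Amazon’s system.

```python
# Illustrative sketch only: not Amazon's system. It uses generic TF-IDF
# weighting to show why terms that appear in nearly every resume (for example,
# a list of common programming languages) receive the lowest weights.
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical resume fragments: every applicant lists "python" and "java",
# while the more distinctive terms each appear only once.
resumes = [
    "python java sql team player",
    "python java machine learning research",
    "python java compiler optimization",
]

vectorizer = TfidfVectorizer()
vectorizer.fit(resumes)

# Inverse document frequency per term: ubiquitous terms get the minimum
# score (1.0 under scikit-learn's default smoothing); rarer terms score higher.
for term, col in sorted(vectorizer.vocabulary_.items()):
    print(f"{term:15s} idf = {vectorizer.idf_[col]:.2f}")
```

Under this kind of weighting, “python” and “java” sit at the floor value while terms like “compiler” or “research” stand out, mirroring how Amazon’s models came to treat commonplace programming skills as uninformative.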