Algorithms gone wild
There’s no shortage of horror stories about algorithms deployed ostensibly for good, only to yield decidedly negative results. In 2016, engineers at Microsoft released a Twitter bot named “Tay”, driven by algorithms that allowed it to respond to ‘Millennials’ based on what was tweeted at it. Within hours, Tay was posting racist, sexist, and Holocaust-denying tweets. The technology didn’t malfunction so much as do exactly what it was built to do: it learned from, and then reflected back, the sexist and racist biases of the audience it engaged with.
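The failure mode here is worth making concrete. A model that trains online on raw user input, with no filter between what it is told and what it learns, can only reproduce what it is fed. The sketch below is a deliberately crude illustration of that dynamic (the `EchoLearner` class is hypothetical, not Microsoft’s actual system):

```python
import random

class EchoLearner:
    """A toy conversational model that can only repeat what users teach it."""

    def __init__(self):
        self.corpus = []  # everything the bot has ever been told

    def learn(self, message: str) -> None:
        # No vetting step: hostile input is stored as readily as benign input.
        self.corpus.append(message)

    def reply(self) -> str:
        # The bot has no values of its own; it reflects its training data.
        return random.choice(self.corpus) if self.corpus else "Hello!"

bot = EchoLearner()
for tweet in ["I love puppies", "offensive slogan", "offensive slogan"]:
    bot.learn(tweet)

print(bot.reply())  # two times out of three, the hostile input comes back
```

Real systems are vastly more sophisticated, but the underlying point holds: if coordinated users flood the training data with hostile content, an unfiltered learner will echo it.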
In early 2017, Amazon’s Alexa smart home device made headlines when a 6-year-old girl in Dallas ordered a $160 dollhouse and a tin of cookies after chatting with the device about her love of those very items. As the story gained nationwide media attention, it was reported that Alexa devices in other households placed the same order after hearing the television news reports.
These examples illustrate why careful attention must be paid to the algorithms behind machine learning, and the role the Principles of Development can play in limiting unintended consequences.
For researchers utilizing this technology in their work, artificial intelligence raises the spectre of the ‘black box’: results produced by processes the researcher cannot inspect. As long as developers build in transparency and keep the researcher in control, however, software can be an ally, acting as a research assistant rather than an opaque oracle. At QSR, we are continuing to develop these tools with our community of researchers to make them more accurate and transparent, and ultimately to improve the working lives of researchers.
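What “transparency and control by the researcher” can mean in practice is a tool that shows its evidence alongside its verdict. The sketch below is a minimal, hypothetical example of that principle (the `WEIGHTS` table and `classify` function are illustrative, not part of any QSR product): the researcher can see exactly why a label was assigned and adjust the model when its judgement differs from their own.

```python
# Hypothetical keyword weights the researcher can inspect and edit.
WEIGHTS = {
    "positive": {"helpful": 2.0, "clear": 1.5},
    "negative": {"confusing": 2.0, "slow": 1.0},
}

def classify(text: str) -> tuple[str, dict]:
    tokens = text.lower().split()
    scores = {
        label: sum(w for term, w in terms.items() if term in tokens)
        for label, terms in WEIGHTS.items()
    }
    label = max(scores, key=scores.get)
    # Return the evidence with the verdict, so the decision is auditable
    # rather than a black box.
    return label, scores

label, evidence = classify("The interface was helpful but slow")
print(label, evidence)  # positive {'positive': 2.0, 'negative': 1.0}
```

The design choice matters more than the simple model: because every weight is visible and editable, the researcher remains the final authority, and the software serves as an assistant rather than a substitute for judgement.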