Microsoft AI Twitter Bot Turns Racist

A month ago, Microsoft created an artificial-intelligence-powered Twitter bot named Tay, whose goal was to learn conversational understanding by observing and mimicking the language Twitter users use while interacting with them. Within 24 hours of its creation, however, the bot was tweeting incredibly racist and offensive material, and it was promptly shut down. While some of the offensive tweets were the result of real users deliberately manipulating Tay, others originated directly from Tay, unprompted.

This raises a question for me as it relates to this course: given all of these wide-spanning social networks, what are their structural qualities? If we could somehow analyze a social network, Twitter for example, and assign a positive sign to an edge when an interaction between users was good, and a negative sign when it was bad or offensive, would the resulting signed graph be balanced? From my own experience, and as Microsoft's AI found out, there are a lot of negative interactions online, so it seems safe to assume that this graph would likely be unbalanced.
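The balance check described above can be sketched in a few lines. The idea from structural balance theory is that a signed graph is balanced when every triangle has a positive product of edge signs (zero or two negative edges). Here is a minimal sketch; the users and interaction signs are entirely made up for illustration:

```python
from itertools import combinations

def is_balanced(edges):
    """Return True if every triangle in the signed graph is balanced.

    `edges` maps an unordered pair of users to +1 (positive interaction)
    or -1 (negative/offensive interaction). A triangle is balanced when
    the product of its three edge signs is positive.
    """
    def sign(u, v):
        # Look up the edge in either direction.
        return edges.get((u, v)) or edges.get((v, u))

    nodes = {n for pair in edges for n in pair}
    for a, b, c in combinations(sorted(nodes), 3):
        signs = [sign(a, b), sign(b, c), sign(a, c)]
        if None in signs:
            continue  # these three users don't form a triangle
        if signs[0] * signs[1] * signs[2] < 0:
            return False  # odd number of negative edges: unbalanced
    return True

# Hypothetical interaction graph (names and signs invented):
edges = {
    ("alice", "bob"): +1,
    ("bob", "carol"): -1,
    ("alice", "carol"): -1,
    ("carol", "dave"): +1,
    ("bob", "dave"): +1,
    ("alice", "dave"): +1,
}
print(is_balanced(edges))  # the alice-carol-dave triangle has one negative edge
```

In a real analysis one would also have to decide how to score an interaction as positive or negative in the first place, which, as Tay showed, is not a trivial problem.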

While this is certainly a pessimistic view of social networks, and there are surely plenty of positive interactions online, I think it is worth considering as we continue to build and influence people's social networks. The last thing we want is to add fuel to the racist fire that the internet can be.

