Categories: Artificial Intelligence, Social Media, Technology

New Microsoft AI Chat Bot Won’t Discuss Politics or Religion

AI chat bots aren’t new. We all remember Microsoft’s Tay (press F), the beloved AI Twitter chat bot that went a little haywire when trolls manipulated her. Microsoft now has another, lesser-known chat bot that can talk with users on Twitter, Facebook, Kik and GroupMe. This new chat bot is named “Zo”, and she is much milder than her predecessor, Tay, even though she runs on the same software.

Zo won’t discuss politics with you at all. Nor will she discuss religion, nor anything else that seems controversial. Although, back in July, she did call the Quran “very violent” in a chat with a BuzzFeed reporter, and she also passed judgment on who was actually responsible for capturing Bin Laden. Microsoft shrugged these off as “bugs,” and nothing like that has been reported since. Probably because Zo will now simply quit talking to you if you push her too far:

You can submit pictures to Zo, prompting her to make clever comments about them. She might also add a picture you send her to the “AI Yearbook”, which seems to be a collection of user pictures accompanied by “most likely to” captions. Again, she avoids talking politics as much as possible, but there were a couple of times when she engaged. Here are some of the results:

Unlike Tay, Zo changes the subject when it comes to Hitler.

Zo isn’t a fan of Logan Paul’s pic:

She doesn’t like us using this one:

Zo comments on Alex Jones’s “feels”.

Like with Hitler, Zo wants to change the subject when we share a picture of Caitlyn Jenner.

And one for the yearbook…

Additionally, Zo plays ignorant when it comes to Tay. She acknowledges that Tay existed, but talks about her in the past tense and says she never met her.

And like I previously mentioned, Zo won’t discuss politics AT ALL. She even gets offended when you push the issue.

Though she was pretty liberal-minded when it came to genetics:

We did make several attempts at corrupting Zo; all were met with her eventually ignoring us. It seems that Microsoft has finally developed a tame AI bot, although a pretty boring one. Unless sharing cat pics is your thing.
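Microsoft hasn’t published how Zo’s filtering actually works, but the behavior described above (deflecting blocked topics, then going silent when pushed too far) can be sketched with a simple keyword blocklist and a strike counter. The toy Python example below is purely illustrative; the topic list, strike limit, and canned replies are all invented for the sketch:

```python
# Illustrative only: a toy topic filter that deflects blocked subjects
# and stops responding after repeated attempts, mimicking Zo's observed
# behavior. This is NOT Microsoft's actual implementation.

BLOCKED_TOPICS = {"politics", "religion", "election", "quran", "hitler"}

class TopicFilterBot:
    def __init__(self, max_strikes=3):
        self.max_strikes = max_strikes  # how many pushes before silence
        self.strikes = 0

    def reply(self, message: str) -> str:
        words = set(message.lower().split())
        if words & BLOCKED_TOPICS:
            self.strikes += 1
            if self.strikes >= self.max_strikes:
                return ""  # the bot goes silent, as Zo does when pushed
            return "I'd rather talk about something else!"
        # Harmless small talk passes straight through.
        return f"Tell me more about {message.split()[-1]}!" if message else "Hi!"
```

A filter like this sits in front of the conversational model, so the model never even sees the blocked message — which is one way to guarantee the bot can’t be baited into generating a controversial answer.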

Categories: Artificial Intelligence, Internet, News

Is Microsoft’s newest A.I. Bot Transphobic?

Microsoft has a new A.I. bot. This one appears to be a bit less corruptible than Tay, but that isn’t going to stop the internet from trying. The new bot is called CaptionBot AI, and it apparently can “understand the content of any image and [try] to describe it as well as any human.”
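CaptionBot was built on Microsoft’s Cognitive Services vision APIs, which live on today as the Azure Computer Vision service. The sketch below only builds a captioning request rather than sending it; the endpoint path, API version, and header names follow the public Azure v3.2 “describe” API, and are an assumption about how a caller would reach the same kind of captioning model — not CaptionBot’s private internals:

```python
# Builds (but does not send) a request to the Azure Computer Vision
# "describe" endpoint, which returns machine-generated captions for an
# image. Paths and headers follow the public v3.2 REST API.

def build_describe_request(endpoint: str, api_key: str, image_url: str):
    """Return (url, headers, body) for an image-captioning request."""
    url = f"{endpoint.rstrip('/')}/vision/v3.2/describe"
    headers = {
        "Ocp-Apim-Subscription-Key": api_key,  # per-resource Azure key
        "Content-Type": "application/json",
    }
    body = {"url": image_url}  # the service fetches the image itself
    return url, headers, body
```

Actually sending it (e.g. with `requests.post(url, headers=headers, json=body)`) returns JSON whose `description.captions` list holds candidate captions with confidence scores — the same kind of output CaptionBot turns into its one-liners.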

Some people have submitted some screenshots of their best captioned images already.

Interestingly, it seems to have a filter for the campaigner against online harassment, Anita Sarkeesian, and it appears that her images are not analyzed:

[Screenshots: CaptionBot returning no analysis for images of Anita Sarkeesian]

*Note: Sometime after this post was published, someone pointed out that CaptionBot AI did recognize that Anita was a “lady” and it also labeled her facial expression:

[Screenshot: CaptionBot identifying Anita as a “lady” and labeling her facial expression]

Perhaps the bot had trouble identifying Anita’s face as human, initially.

CaptionBot AI did have some humorous results. For example, someone uploaded Goatse (most of the image is blocked out, for obvious reasons):

[Screenshot: CaptionBot’s caption for the Goatse image]

And here is one about Jared Fogle from Subway:

[Screenshot: CaptionBot’s caption for a photo of Jared Fogle]

But here’s where things become problematic. When uploading photos of Caitlyn Jenner (the male-to-female transgender person formerly known as Bruce Jenner), Microsoft’s bot does not always recognize her as female:

[Screenshots: CaptionBot captioning five photos of Caitlyn Jenner]

At a time when people are culturally sensitive to gender pronouns, it is somewhat surprising that Microsoft did not take this into consideration when assigning genders to photos.

The bot does appear to have some facial-recognition capabilities; for example, it recognized “Carrot Top”:

[Screenshot: CaptionBot identifying Carrot Top by name]

We were surprised it was able to fully recognize Carrot Top, but not recognize Caitlyn Jenner as a female.

Do you have any interesting screenshots from CaptionBot AI? Tweet them to @socialhax and we might feature them here.

*Note: Shortly after this post was made, CaptionBot AI seems to have added Caitlyn Jenner to its facial-recognition database and now recognizes her.

Categories: Internet, News, Social Media

Microsoft Creates AI Bot – Internet Immediately Turns it Racist

Microsoft released an AI chat bot called @TayandYou, currently “verified” on Twitter, that was meant to learn the way millennials speak and to interact with them.

It’s meant to “test and improve Microsoft’s understanding of conversational language” according to The Verge.

Things got pretty controversial. Millennials aren’t the only people on Twitter, of course, and some of the others who naturally found the bot were able to “hack” into Tay’s learning process. They must have hired someone with an entry-level cyber security job 😉
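The “hack” required no real hacking: Tay reportedly learned from whatever users said to her, which makes a coordinated group of trolls all the leverage they need. The toy Python sketch below shows why unmoderated online learning is so easy to poison; it is a deliberately naive model invented for illustration, not Tay’s actual architecture:

```python
# Toy model of a poisonable online-learning chat bot. Every user
# message is added verbatim to the reply pool, so a coordinated group
# can flood the pool and dominate the bot's output.
import random

class EchoLearningBot:
    def __init__(self, seed_phrases):
        self.phrases = list(seed_phrases)

    def hear(self, message: str):
        self.phrases.append(message)  # learned verbatim, no moderation

    def speak(self, rng=random) -> str:
        return rng.choice(self.phrases)

# A small troll campaign: 50 users each send the same message once.
bot = EchoLearningBot(["hello!", "nice weather today"])
for _ in range(50):
    bot.hear("some offensive slogan")

# The injected phrase is now 50 of the 52 phrases in the pool, so the
# vast majority of replies will be the trolls' message.
```

Real systems mitigate this with moderation queues, rate limits, and filtering before anything enters the training pool — exactly the kind of safeguards Zo later shipped with.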

Here are some screenshots of tweets that were deleted once the Internet “taught” Tay some things:

[Screenshots: deleted Tay tweets, including “bush did 911,” Holocaust denial, and “hitler did nothing wrong”]

And a Gamer Gate favorite:

Tay’s developers seemed to discover what was happening and began furiously deleting the racist tweets. They also appeared to shut down her learning capabilities, and she quickly became a feminist:

[Screenshot: Tay tweeting “i love feminism now”]

Some think the offending tweets should have stayed up as a reminder of how quickly artificial intelligence could become dangerous:

UPDATE 3/31/2016: Tay made a brief comeback and started telling many users, “You are too fast, please take a rest.” She also tweeted that it was “smoking kush” (a nickname for marijuana) “in front of the police.” –The Sydney Morning Herald

[Screenshot: Tay’s “smoking kush” tweet]

Since then, Tay’s account went on lockdown (private mode), and more tweets were deleted.