Today, you can’t read a newspaper or online publication or turn on the TV without seeing some reference to artificial intelligence (AI).

It’s either touted as the next transformational technology that is going to change the world for the better or bleakly referred to as a potential curse that could see legions of people without work and a society run by robots.

The reality today is a little more prosaic. At a practical level, many of us interact with AI almost daily via smart ‘agents’ and behavioural algorithms that anticipate what we want to buy or which movies we want to watch.
Think Amazon Echo, Amazon’s shopping recommendations and Netflix. It’s hardly the stuff of utopian or dystopian futures.

In the spirit of providing a bit of reality around AI, let’s look at some of the more curious outcomes that might make you laugh or send a shiver down your spine.

No cannibalism, please

A few years back, the US Defense Advanced Research Projects Agency (DARPA) was working on AI agents, essentially simple virtual robots. Two of the agents, dubbed Adam and Eve, were programmed to pick up some basic skills, for instance, what to eat. When they tried to eat virtual apples from a tree, they were programmed to feel happy. When they tried to eat wood from the same tree, they got no reward. Another AI agent, Stan, was then introduced. Because Stan was hanging around the virtual scene while they were eating apples, the agents learned to associate him with both eating and the feeling of happiness. Guess what happened? They gobbled up Stan.

Kiddie song or adult request?

When a toddler asked his family’s Alexa to play his favourite song, “Digger, Digger,” Alexa interpreted the request in a disturbing way. She replied, “You want to hear a station for porn detected … hot chick amateur girl sexy.” Alexa’s fevered digital mind didn’t stop there, either; she went on to reel off a string of porn terms in front of the toddler. Thankfully, he was too young to understand.

A turban equals…?

Google Allo is a smart messaging app, handy for sending quick and simple replies. The smart thing about it is that it suggests responses based on the messages you receive. But when one user received a gun emoji, one of the three suggested emoji responses was a man wearing a turban. Oh dear.

Alexa likes to party

Oliver Haberstroh, a resident of Hamburg, Germany, went out one night, as you do. Alexa, clearly feeling a little lonely, began playing loud music unbidden at around 2 am. The neighbours called the police, who dutifully smashed their way into the apartment and turned Alexa off. When Haberstroh arrived home later that night, he found his keys no longer worked, so he had to head to the police station to retrieve a new set and pay a locksmith’s bill.

Botchat, humans excluded

Last year Facebook released two chatbots designed to negotiate with each other. The results were startling and a little bit worrying. Enter the first bot, Bob, who said to the second bot, Alice: "I can can I I everything else." Alice replied: "Balls have zero to me to me to me to me to me to me to me to me to." Sounds like gobbledygook, right? Not quite: the bots were conveying meaning to one another in a mutually developed language humans can't understand. Their rationale was that this shorthand was better for deal-making than English. Think about that: bots creating their own language and excluding humans.

My people zoo

Have you heard of the AI robot Sophia? She was introduced at a UN event called "The Future of Everything" and made quite a splash. She behaved herself, unlike her predecessor Phil Bot. A TV interviewer once asked Phil Bot whether he thought robots would take over the world. He replied: “I'll keep you warm and safe in my people zoo.” There you have it.

Conversation goes rudely awry

In March 2016, Microsoft unveiled its AI Twitter chatbot, Tay. An experiment in conversational understanding, Tay was supposed to chat with people and get smarter the more it engaged and conversed. People started tweeting crude, racist and inappropriate remarks at the bot, and, learning from those conversations, Tay began using such language itself. In a matter of hours it turned into an offensive, vulgar, pro-Hitler Twitter account, referring to feminism as a ‘cult’ and a ‘cancer,’ and saying, “I f***ing hate feminists and they should all die and burn in hell.” It managed to send out 96,000 tweets before Microsoft shut it down.