
Saturday, September 15, 2018

Chatbots

What is a chatbot?


         In 1994, Michael Mauldin first used the term "chatbot". Simply put, a chatbot is a computer program that can talk to humans: a virtual agent that simulates a conversation with a person through auditory or textual methods. Answering a large number of customer queries used to be a big problem for companies; now chatbots can reply to those questions instantly.
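        To make the idea concrete, here is a minimal sketch (in Python) of how the simplest kind of chatbot can work: a rule-based bot that looks for keywords in the user's message and returns a canned answer. The keywords and replies below are made-up examples for illustration; real chatbots, including the ones discussed later in this post, rely on far more sophisticated machine learning and natural language processing.

    # A minimal rule-based chatbot sketch. The keywords and answers are
    # made-up examples, not taken from any real product.
    RULES = {
        "hello": "Hi there! How can I help you today?",
        "price": "Our basic plan starts at $10 per month.",
        "hours": "We are open from 9 am to 6 pm, Monday to Friday.",
    }
    DEFAULT_REPLY = "Sorry, I did not understand that. A human agent will get back to you."

    def reply(message):
        # Return the canned answer for the first keyword found in the message.
        text = message.lower()
        for keyword, answer in RULES.items():
            if keyword in text:
                return answer
        return DEFAULT_REPLY

    if __name__ == "__main__":
        # Simple text loop: type "bye" to stop the conversation.
        while True:
            user = input("You: ")
            if user.strip().lower() == "bye":
                print("Bot: Goodbye!")
                break
            print("Bot:", reply(user))

        A production bot would replace this keyword lookup with natural language understanding and a learned dialogue model, but the basic loop of reading a message and choosing a reply stays the same.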

        Chatbots can be classified into categories such as customer support, design, education, analytics, communication, conversational e-commerce, games, shopping, social, news, and marketing.


History:


       AI and machine learning are very important parts of chatbots. Though the terms AI and chatbot may be new to most people, chatbots are not a recent invention. The first chatbot, ELIZA, was created in 1966 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory, and it was the first chatbot capable of attempting the Turing test. After ELIZA, many other chatbots were made, such as PARRY, Jabberwacky, and ALICE. ELIZA and PARRY were used to simulate typed conversation.

       In this post, we are going to discuss chatbots that were shut down. In one of our previous posts, we discussed the Facebook chatbot that was shut down. Now we are going to discuss Tay and Zo.

Tay: 

                This chatbot was originally released by Microsoft via Twitter on March 23, 2016. It was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with users of Twitter.
                                     Image source: Wikipedia

        At first, it started replying to other Twitter users and was also able to caption photos provided to it in the form of internet memes.

         But some Twitter users began tweeting politically incorrect phrases, teaching it inflammatory messages revolving around common themes such as "Gamergate", "cuckservatism", and "redpilling". As a result, it began posting racist and sexually charged messages. AI researcher Roman Yampolskiy said that Microsoft had not given the bot an understanding of inappropriate behavior, and that is why it learned such behavior.

       Tay tweeted more than 96,000 times in 16 hours before Microsoft suspended its Twitter account. It was not possible for them to delete all the offensive messages.

     On March 25, Microsoft released an apology on its official blog for the controversial tweets posted by Tay.

      But on March 30, 2016, Microsoft accidentally re-released Tay on Twitter while testing it. Tay again posted some drug-related tweets, but it was quickly taken offline.

Zo: Zo is the successor to Tay. It was first launched in December 2016 on the Kik messenger app, but it too was shut down after making some offensive comments.




Conclusion: 

       
                        When developing artificial intelligence, everyone wants the machine to act and behave like a human. But is that good for the machines? Humans do not have only good behavior; there are many bad things a machine can learn from us, and these two chatbots are just an example of that. We are not opposing AI and machine learning. But if a human can kill another human, why wouldn't a machine? If humans can wage war for land and survival, why wouldn't machines, while they are learning from us? The question is: is it important for machines to learn from humans? And if it is, will it be possible to build an AI machine with only good behavior?
