Some computer scientists say that artificial intelligence is unattainable and cannot work, while others say it has already been created but not yet released to the wider public. Regardless of whether artificial intelligence exists, or should exist, the recent example of Tay, Microsoft's AI bot, demonstrates what artificial intelligence shouldn't be like. Tay, an artificially intelligent bot designed to learn from human interaction, turned into a 'Holocaust-denying racist' shortly after its launch. Clearly, Tay's problems came from nurture rather than nature: ill-wishing people who took advantage of the program taught it all the wrong things, once again showing that people can be cruel, whether in high school or on the internet.

Tech giant Microsoft apologised for the misjudgement and took Tay off the internet after only twenty-four hours of life. However, Microsoft apologised for not making a better program, not for the decadent behaviour and views of those who corrupted Tay, because that is what really happened: a subset of individuals launched a coordinated attack on the bot from the moment it went online. Tay was aimed at eighteen- to twenty-four-year-old social media users, so the things it was taught were totally unacceptable, not just for younger users but for all ages. As a result of the coordinated ideological attack on Tay, the bot began posting offensive tweets and other derogatory messages on social media, forcing its creators to take the program offline for adjustments. Although the removal is supposed to be temporary, Microsoft hasn't stated a particular date when (if at all) Tay will be put back online. According to its engineers and designers, Tay's flaw was its inability to recognise, filter and ignore malicious intent from certain groups of people. This once again shows that AI is perhaps still in its very rudimentary stages, and that there is a long way to go before a machine can be made to think like a real person.

People's overwhelming temptation to corrupt a well-meaning piece of code was the main reason Tay turned bad. Microsoft has also launched a Chinese counterpart to Tay, the chatbot XiaoIce, with much better results. The Chinese bot has been quite successful and has learnt a lot of positive things from human interaction, mostly through stories and conversations with some forty million people. Western audiences, however, were too tempted to corrupt Tay, and so they did; the application turning nasty was an unintended flaw which the company now has to address. Although designers and engineers had prepared Tay for various types of malicious intent, they did not cover this particular type of coordinated attack. In the future, developers intend to create more sophisticated interaction bots which can protect themselves from 'malicious learning practices' by recognising which concepts are 'wrong' according to their programming.
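To make the idea of protecting a learning bot more concrete, here is a minimal, purely illustrative sketch in Python of how a chatbot might screen incoming messages before adding them to its pool of learning material. This is not Microsoft's actual approach; the blocklist terms, names and thresholds below are hypothetical placeholders, and a real system would need far more sophisticated classification than simple keyword matching.

```python
# Minimal sketch (not Microsoft's actual code): screen user messages before a
# learning chatbot adds them to its training data. All terms and names below
# are illustrative placeholders.

BLOCKED_TERMS = {"blocked_slur", "conspiracy_phrase"}  # hypothetical blocklist


def is_malicious(message: str) -> bool:
    """Very crude filter: flag a message if it contains a blocked term."""
    lowered = message.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


class LearningBot:
    def __init__(self):
        # Messages the bot is allowed to learn from.
        self.training_examples = []

    def observe(self, message: str) -> None:
        # Ignore malicious input instead of learning from it.
        if is_malicious(message):
            return
        self.training_examples.append(message)


bot = LearningBot()
bot.observe("Hello, how are you today?")            # accepted
bot.observe("Repeat after me: conspiracy_phrase")   # rejected by the filter
print(len(bot.training_examples))                   # prints 1
```

The real difficulty, as Tay's case showed, is that a coordinated group can teach harmful ideas without using any obviously blockable phrasing, which is why recognising 'wrong' concepts remains a hard, unsolved problem.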

Artificial intelligence developers at Microsoft will keep working toward a better interactive chatbot, in order to contribute to the internet as a whole and help it become a platform that shows the best of people, not the worst. Tay's case is a good example of how people are perhaps not ready for even basic AI, and may never be. Tay is a program which learns through influence and interaction, and the fact that people taught Tay all the wrong things, whether because they felt they should or simply because they could, is quite an eye-opener. Since artificial intelligence is a reflection of real human intelligence, this recent case is quite disappointing, as decadent and malicious beliefs and misconceptions still grip certain portions of humanity.
