This is an incredibly exciting technology, but the model is so large that running it, and especially processing a large volume of queries, would require enormous computing power; at some point it might even be cheaper to hire people to talk to users, said Galina Alperovich, a machine learning researcher at Avast.
In addition, because the bot was trained on Reddit posts, it can be quite biased and write rude messages, simply because people sometimes leave such comments on Reddit. The bot may also invent facts, since it has no integrated knowledge base to rely on. “But this is a general problem of all bots trained on statistical data from the past. It means the bot can give people wrong information and mislead them.”
“Most likely, the majority of companies will prefer to use simpler, more straightforward, predictable, and cheaper chatbots to communicate with their users,” Alperovich concludes.
The problem of “humanity” in voice assistants is a legal and ethical one rather than a technological one, says Denis Khoruzhii, technical director of iD EAST. “It is already possible today to create a convincing imitation of a real dialogue, including one with empathy,” he says. “Another matter is that truly good algorithms are in one way or another based on self-learning. They can be influenced, but not fully controlled. So it is clear we need a sophisticated system of filters and restrictions on the content the system generates. Otherwise you may end up with another Nazi bot.”
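The kind of filtering layer the expert describes can be illustrated with a minimal sketch. This is a hypothetical example, not anything used by iD EAST: it simply screens a generated reply against a set of blocked patterns before it reaches the user. A production system would rely on trained toxicity classifiers rather than a hand-written list.

```python
import re

# Placeholder patterns standing in for a real moderation model's rules.
BLOCKED_PATTERNS = [r"\bidiot\b", r"\bshut up\b"]
FALLBACK_REPLY = "Sorry, I can't respond to that."

def filter_reply(generated: str) -> str:
    """Return the bot's reply only if it passes the content checks;
    otherwise substitute a neutral fallback message."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, generated, flags=re.IGNORECASE):
            return FALLBACK_REPLY
    return generated
```

The point of the sketch is the architecture: the generator's output is never sent to the user directly, but always passes through a restriction layer the operator controls.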
The expert believes such a system would have to train itself largely on user data. “In theory, it can generate new knowledge and content beyond what was used for training,” Khoruzhii adds. “There are many open questions here, in particular about the use of personal data. Conceivably, the system could reuse data from a previous dialogue in subsequent ones. But how to guarantee data isolation in continuously trained neural networks is not yet clear.”
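The isolation concern can be made concrete with a small hypothetical sketch (not any vendor's actual design): keep each user's dialogue history in a separate store, so one user's data never appears in another user's context. The hard part the expert points to is that no such clean boundary exists inside the weights of a model that keeps retraining on all conversations.

```python
from collections import defaultdict

class DialogueStore:
    """Per-user dialogue memory with strict isolation between users."""

    def __init__(self) -> None:
        # Each user_id maps to its own list of utterances.
        self._history: dict[str, list[str]] = defaultdict(list)

    def add(self, user_id: str, utterance: str) -> None:
        self._history[user_id].append(utterance)

    def context_for(self, user_id: str) -> list[str]:
        # Only this user's own history is ever returned as model context.
        return list(self._history[user_id])
```

Isolation is easy to enforce at the storage layer like this; it is the shared, continuously updated model on top that makes the guarantee hard.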
Since 2016, when Microsoft CEO Satya Nadella declared that “chatbots are the new apps,” they have become a trend: today they are used by government organizations, banks, retailers, and transport companies. “Although chatbots cannot always replace a human, society's attitude toward them is mostly positive: people tend to appreciate their efficiency and dedication,” says Vladislav Shershulsky, director of emerging technologies at Microsoft in Russia. “If there is a problem, it is with emotion, or empathy, the ability to feel for the interlocutor. It is essential for full communication, but teaching it to an intelligent chatbot is not easy. The fact is that chatbots learn from human examples, and so they inherit not only our virtues but also our flaws. It is no secret that conflict, as a form of communication, requires far less mental effort than empathy, and is therefore not so rare. If you want a chatbot to be empathetic, its creation must be approached responsibly: training it not on everything available on the internet, but only on ‘best practices,’ instilling morality and a sense of duty. As it turns out, building a responsible social chatbot is not only a technical problem but also an ethical one.”
Salavat Khanov, founder of the ad-blocking service 1Blocker, is sure this technology will find its way into voice assistants within a few years. “It is possible we will encounter such a bot in customer-support call centers,” he adds.