Google has released a chatbot called Meena, powered by a neural network, which it claims is better than any other chatbot out there.
Data slurp: Meena was trained on a whopping 341 gigabytes of public social media chatter, 8.5 times more data than OpenAI's GPT-2. Google says Meena can chat about pretty much anything, and even make up (bad) jokes.
Why it matters: Holding an open-ended conversation that covers a wide range of topics is hard, and most chatbots cannot keep up. At some point most chatbots say things that don't make sense or reveal a lack of basic knowledge about the world. A chatbot that avoids such mistakes would go a long way toward making AIs feel more human, and toward making characters in video games more lifelike.
Sense and specificity: Google has developed a new metric for evaluating Meena, which it calls the Sensibleness and Specificity Average (SSA). It captures important attributes of natural conversation, such as whether each utterance makes sense in context (which many chatbots can manage) and is specific to what has just been said (which is more difficult).
What are you talking about? For instance, if you say "I like tennis" and a chatbot replies "That's nice," the reply makes sense but isn't specific. Many chatbots rely on such tricks to conceal the fact that they don't know what you're talking about. By contrast, an answer like "Me too, I can't get enough of Roger Federer" is specific. Google used crowdworkers to produce sample conversations and to score utterances in about 100 conversations. Meena scored 79 percent on SSA, compared with 56 percent for Mitsuku, a state-of-the-art chatbot that has won the Loebner Prize for the last four years. Human conversation partners scored 86 percent.
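To make the metric concrete, here is a minimal sketch of how an SSA-style score could be aggregated from crowdworker labels. It assumes each utterance gets two binary judgments (sensible? specific?) and that an utterance judged not sensible cannot count as specific; the label names and aggregation details are illustrative assumptions, not Google's exact pipeline.

```python
def ssa(labels):
    """Average of sensibleness rate and specificity rate.

    labels: list of (sensible, specific) pairs, each 0 or 1,
    one pair per chatbot utterance (illustrative format).
    """
    if not labels:
        raise ValueError("need at least one labeled utterance")
    n = len(labels)
    sensibleness = sum(s for s, _ in labels) / n
    # A non-sensible utterance is counted as not specific.
    specificity = sum(sp for s, sp in labels if s) / n
    return (sensibleness + specificity) / 2

# Example: 4 utterances, 3 sensible, 2 of those also specific.
score = ssa([(1, 1), (1, 0), (0, 0), (1, 1)])  # 0.625
```

Under this scheme, a bot that always answers "That's nice" could score well on sensibleness but would be dragged down by its specificity rate, which is exactly the gap SSA is meant to expose.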
Can I talk to Meena? Not yet. Google says it won't release a public demo until it has vetted the model for safety and bias, which is probably a good thing. When Microsoft launched its chatbot Tay on Twitter in 2016, it started spewing racist, misogynistic invective within hours.