Oh, and don’t even get her started on her paparazzi-attracting love life!

Trump’s chatbot, however, is as blunt and baffling as the real man, and doesn’t think it’s important to justify his actions or words.

Here on our platform, we’ve seen thousands of fans across the globe returning just to talk to their favorite celebrities. Check out our most popular celebri-bots to start chatting with them. From her infamous sex tape to Keeping Up with the Kardashians, from her mobile game Kim Kardashian: Hollywood to her countless photoshoots, Kim Bot answers all fan Q&A with her innate attitude.

Tay, the creation of Microsoft's Technology and Research and Bing teams, was an experiment aimed at learning through conversations. Soon, Tay began saying things like "Hitler was right i hate the jews" and "i fucking hate feminists." But Tay's bad behavior, it's been noted, should come as no big surprise.

"This was to be expected," said Roman Yampolskiy, head of the Cyber Security Lab at the University of Louisville, who has published a paper on the subject of pathways to dangerous AI.

She was targeted at American 18- to 24-year-olds (primary social media users, according to Microsoft) and "designed to engage and entertain people where they connect with each other online through casual and playful conversation."

SEE: Microsoft's Tay AI chatbot goes offline after being taught to be a racist (ZDNet)

And in less than 24 hours after her arrival on Twitter, Tay gained more than 50,000 followers and produced nearly 100,000 tweets.

"The system is designed to learn from its users, so it will become a reflection of their behavior," Yampolskiy said.
We are full-time public relations agents representing ourselves.
News-writing bots may have faded from headlines for the time being, but that could be because our industry has found a new futuristic fixation: direct news-distribution bots through apps like Quartz, Facebook Messenger and even Slack.
It’s a cool premise: download an app and let the news come to you in bite-sized chunks.
"One needs to explicitly teach a system about what is not appropriate, like we do with children."

It's been observed before, he pointed out, in IBM Watson, which once exhibited its own inappropriate behavior in the form of swearing after learning the Urban Dictionary.
SEE: Microsoft launches AI chat bot (ZDNet)

"Any AI system learning from bad examples could end up socially inappropriate," Yampolskiy said, "like a human raised by wolves."

Louis Rosenberg, the founder of Unanimous AI, said that "like all chat bots, Tay has no idea what it's saying... has no idea if it's saying something offensive, or nonsensical, or profound."