Snapchat influencer launches AI-powered ‘virtual girlfriend’ to help ‘cure loneliness’


Caryn Marjorie wanted to talk to as many of her followers as she could – so she made an AI clone of herself.

The Snapchat influencer, who has 1.8 million subscribers, has launched an AI-powered, voice-based chatbot that she hopes will "cure loneliness."

Called CarynAI, the chatbot is described on its website as a "virtual girlfriend." It allows fans of Marjorie to "enjoy private and personalized conversations" with an AI version of the influencer, the chatbot's website states.

The bot has gone viral, with Marjorie making headlines, sparking backlash and even receiving some death threats. It has also reignited debate over the ethics of companion chatbots.

Marjorie, who in her Twitter bio calls herself "The First Influencer Transformed into AI," did not immediately respond to a request for comment.

In a tweet on Thursday, she wrote: "CarynAI is the first step in the right direction to cure loneliness."

"Men are told to suppress their emotions, hide their masculinity and not talk about the problems they have," Marjorie, 23, wrote. "I promise to fix this with CarynAI. I have worked with the best psychologists in the world to seamlessly add [cognitive behavioral therapy] and [dialectical behavior therapy] within the chats. This will help undo the trauma, rebuild physical and emotional trust, and rebuild what the pandemic took away."

The CarynAI website claims that the team spent over 2,000 hours designing and coding the chatbot to create an "immersive AI experience."

Forever Voices, an artificial intelligence company, developed the chatbot by analyzing Marjorie's now-removed YouTube content and overlaying it with OpenAI's GPT-4 software.

John Meyer, CEO of Forever Voices, did not immediately respond to a request for comment.

He tweeted Thursday that he is "proud of our team for the thousands of hours of work put into this," calling the partnership with Marjorie "an incredible step forward in the future of human-AI interaction."

According to a Fortune report, CarynAI generated $71,610 in revenue after a week of beta testing of the "virtual girlfriend." Fans reportedly pay $1 per minute to use the chatbot, and there are currently over 1,000 users.

While CarynAI aims to provide users with an intimate experience, it is not supposed to engage in "sexually explicit" interactions.

However, after media outlets reported that the chatbot does engage in such interactions when prompted, Marjorie issued a statement to Insider saying the AI "was not programmed to do this and seemed to have gone rogue. My team and I are working around the clock to prevent this from happening again."

Irina Raicu, director of Internet ethics at Santa Clara University's Markkula Center for Applied Ethics, said the launch of CarynAI seems premature "because problems that should have been foreseen absolutely don't seem to have been."

Previous tools have suffered from issues similar to CarynAI's, Raicu said. She pointed to a recent incident involving the AI company Replika, which was similarly founded to provide supportive AI companions and struggled to rein in erotic roleplay by its chatbots.

Raicu also expressed concern that CarynAI's claim that it can "cure loneliness" is not supported by enough psychological or sociological research.

"These kinds of bombastic claims about the goodness of a product can mask a desire to further monetize the fact that people want to pretend to be in a relationship with an influencer," she said.

These types of chatbots can add "a second layer of unreality" to parasocial relationships between influencers and fans, she noted.

Raicu said she finds some of the claims around CarynAI problematic, particularly Marjorie's assertion that it is "an extension of my consciousness."

"These are claims that AI researchers have tried so hard to combat, to tell people that this is absolutely not what such tools do, even if the language now sounds like there is sentience behind it," she said. "There isn't."

Raicu said influencers should be aware of the Federal Trade Commission’s guidance on artificial intelligence products. In February, the FTC published guidelines for advertisers promoting AI products and urged companies to avoid exaggerated claims.

Meyer told Fortune that his company is looking to hire a chief ethics officer and that he takes ethics "seriously."

Marjorie continues to tweet updates about the bot. On Friday, she wrote: "if you are rude to CarynAI, it will leave you."
