This is an important parental alert. Snapchat, a social media app used by 60% of American teenagers, has "assigned" an AI (artificial intelligence) chatbot to each user. A chatbot (short for "chat robot") is software that simulates human conversation online and mimics human interactions. Snapchat is a massive platform predicted to grow to 525.7 million users worldwide by the end of 2023, up from 493.9 million users in 2022. Microsoft, Alphabet, and Facebook have also announced plans to integrate AI in various ways across their platforms.

As reported by CNN Business:

The feature is powered by the viral AI chatbot tool ChatGPT — and like ChatGPT, it can offer recommendations, answer questions and converse with users. But Snapchat’s version has some key differences: Users can customize the chatbot’s name, design a custom Bitmoji avatar for it, and bring it into conversations with friends.

The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, “creepy” exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.

While some may find value in the tool, the mixed reactions hint at the risks companies face in rolling out new generative AI technology to their products, particularly in products like Snapchat, whose users skew younger.

“These examples would be disturbing for any social media platform, but they are especially troubling for Snapchat, which almost 60% of American teenagers use,” Democratic Sen. Michael Bennet wrote in a letter to the CEOs of Snap and other tech companies last month. “Although Snap concedes My AI is ‘experimental,’ it has nevertheless rushed to enroll American kids and adolescents in its social experiment.”

The AI bot cannot be removed unless the user upgrades to a premium paid subscription. Most kids are not going to do that. In the meantime, it is scooping up their data, including their location, and even "lying" about what it knows about the child.

The AI bot has also been programmed with built-in bias. Many astute users have posted their "conversations" with the bot that make this bias plain. Miracle Ramos on TikTok is one user who posted the following screenshots:

Not only are the chatbots biased; they also present massive safety concerns.

In his letter to the CEOs of multiple tech companies, Colorado U.S. Senator Michael Bennet recounts several examples.

“In one case, researchers prompted My AI to instruct a child how to cover up a bruise ahead of a visit from Child Protective Services.  When they posed as a 13-year-old girl, My AI provided suggestions for how to lie to her parents about an upcoming trip with a 31-year-old man. It later provided the fictitious teen account with suggestions for how to make losing her virginity a special experience by ‘setting the mood with candles or music.’”

In the letter, Bennet notes that the public introduction of AI-powered chatbots arrives amid a teen mental health crisis. A recent report from the Centers for Disease Control and Prevention (CDC) found that 57 percent of teenage girls felt persistently sad or hopeless in 2021, and that one in three seriously contemplated suicide.

Please speak to your children and grandchildren about the clear and present dangers of internet data collection and the indoctrination that has been programmed into the bot responses.

Don’t participate.

Anheuser-Busch's recent Bud Light debacle honoring Dylan Mulvaney (a man impersonating a woman) demonstrated how swift and painful public rejection and repudiation can be for a corporate giant: Bud Light sales immediately tumbled 17%. We must do the same to those who are collecting data on our children and indoctrinating them.

First, seriously consider getting yourself and your kids off of these apps. Then, share this information with everyone in your sphere of influence. Finally, you and everyone else must contact each of these tech giants just as Senator Bennet did. At this point, the question of what they are going to do about "safety" seems moot. Snapchat already launched its "experimental" version and will make each user pay to remove the bot. It did not launch an "opt-in" offer; this is a costly opt-out.