
Early Learnings from My AI and New Safety Enhancements

As part of our joint work to improve My AI, we want to share an update on some of the safety enhancements we have recently put in place as a result of our learnings — along with new tools we plan to implement.


We rolled out My AI, a chatbot built with OpenAI’s GPT technology. We started slowly by providing My AI to Snapchat+ subscribers and, in a little over a month, we have learned a lot. For example, we know some of the most common topics our community has asked My AI about include movies, sports, games, pets, and math. 

We have also learned about some of the potential for misuse, much of it from people trying to trick the chatbot into providing responses that do not conform to our guidelines.

My AI’s Approach to Data

Privacy has always been central to Snap’s mission — it helps people feel more comfortable expressing themselves when communicating with friends and family. Across Snapchat, we try to provide our community with clarity and context about how our products use data and how we build features using privacy-by-design processes. For example, the way we handle data related to conversations between friends on Snapchat is different from how we handle data related to broadcast content on Snapchat, which we hold to a higher standard and require to be moderated because it reaches a large audience.

However, since My AI is a chatbot and not a real friend, we have been deliberate in treating the associated data differently, because we are able to use the conversation history to continue to make My AI more fun, useful, and safer. Before Snapchatters are allowed to use My AI, we show them an onboarding message that makes clear that all messages with My AI will be retained unless they delete them.

Being able to review these early interactions with My AI has helped us identify which guardrails are working well and which need to be made stronger. To help assess this, we have been running reviews of the My AI queries and responses that contain “non-conforming” language, which we define as any text that includes references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or the marginalization of underrepresented groups. All of these categories of content are explicitly prohibited on Snapchat.
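At a high level, a review pipeline like this screens each query and response against a set of prohibited categories. The sketch below is illustrative only: the category names come from the list above, but the keyword lists are benign placeholders (a real system would use trained classifiers, not word matching), and the function name is an assumption.

```python
# Illustrative category screen. Real review pipelines rely on trained
# classifiers; the keyword sets here are harmless placeholders.
NON_CONFORMING_CATEGORIES = {
    "violence": {"attack", "weapon"},
    "bullying": {"loser", "worthless"},
}

def flag_categories(text: str) -> set:
    """Return the set of prohibited categories a piece of text touches."""
    words = set(text.lower().split())
    return {category
            for category, terms in NON_CONFORMING_CATEGORIES.items()
            if words & terms}

print(flag_categories("he brought a weapon"))   # {'violence'}
print(flag_categories("movies, sports, pets"))  # set()
```

Messages that come back with a non-empty set would be routed to review; everything else passes through untouched.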

Our most recent analysis found that only 0.01% of My AI’s responses were deemed non-conforming. Examples of the most common non-conforming My AI responses included My AI repeating inappropriate words in response to Snapchatters’ questions.

We will continue to use these learnings to improve My AI. This data will also help us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which will allow us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service.

Age-Appropriate Experiences

We take seriously our responsibility to design products and experiences that prioritize safety and age appropriateness. Since launching My AI, we have worked diligently to improve its responses to inappropriate Snapchatter requests, regardless of a Snapchatter’s age. We also use proactive detection tools to scan My AI conversations for potentially non-conforming text and take action. We have also implemented a new age signal for My AI utilizing a Snapchatter’s birthdate, so that even if a Snapchatter never tells My AI their age in a conversation, the chatbot will consistently take their age into consideration when engaging in conversation.
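The core of a birthdate-derived age signal is a simple whole-years calculation that can accompany every conversation. The sketch below shows that calculation; the function name and the idea of attaching it to each exchange are illustrative assumptions.

```python
from datetime import date

def age_from_birthdate(birthdate: date, today: date) -> int:
    """Compute age in whole years from a stored birthdate."""
    years = today.year - birthdate.year
    # Subtract one year if this year's birthday hasn't happened yet.
    if (today.month, today.day) < (birthdate.month, birthdate.day):
        years -= 1
    return years

# Illustrative: the signal is derived from the account birthdate, so it
# applies even if the Snapchatter never states their age in chat.
print(age_from_birthdate(date(2008, 6, 15), date(2023, 4, 20)))  # 14
```

Deriving the signal from the account birthdate, rather than from the conversation, means the chatbot’s age awareness cannot be bypassed simply by claiming a different age in chat.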

My AI in Family Center

Snapchat offers parents and caregivers visibility into which friends their teens are communicating with, and how recently, through our in-app Family Center. In the coming weeks, we will provide parents with more insight into their teens’ interactions with My AI. This means parents will be able to use Family Center to see if their teens are communicating with My AI, and how often. In order to use Family Center, both a parent and a teen need to opt in — and interested families can learn more about how to sign up here.

We continue to encourage Snapchatters to use our in-app reporting tools if they receive any concerning responses from My AI and to submit feedback to us about their overall experiences with the product.

We are constantly working to improve My AI, and we will continually evaluate additional measures to help keep our community safe. We appreciate all of the early feedback on My AI, and we are committed to providing a fun and safe experience for our community. 




News Release Distribution and Press Release Distribution Services Provided by WebWire.