
Smarter, Better, Faster: Using Machine Learning to Review Emotes



About the Author: I’m Linda, an applied scientist working on safety-related problems like spam and abuse in chat. This is how I built an image classification model to help our internal safety specialists review custom emotes. If you are interested in working on safety-related problems, feel free to reach out to @lindarrrliu

Emotes are an indispensable part of the Twitch experience. They’re the (unofficial) official language of Twitch because they pack a ton of meaning and let you share so much without saying a word. Plus, they’re fun to use. I watch Twitch daily, and I always laugh the hardest when chat reacts with the perfect emote. Without them, chat feels lifeless. But there’s a flip side: emotes that are intentionally made to be mean, offensive, or harmful to certain people or communities quickly ruin the chat experience for everyone. To make sure people enjoy their time on Twitch, our safety specialists review custom emotes to make sure they comply with the Twitch Community Guidelines.

Streamers are creating new emotes for their communities daily, and with Twitch’s rapid growth and advances in emote tooling, the number of new emotes needing review keeps climbing quickly. That is where we come in. To help our specialists out, the Proactive Detection team (that’s us!) designs and builds machine learning models to facilitate emote reviews. Currently, our model reviews custom emote submissions and automatically approves a large chunk of the static emotes you see and love on Twitch. Not only does this mean less work for our safety specialists, but it lets streamers use many emotes instantly.

How did we build the model? 

First, I looked at all the data that was available to us. When partners and affiliates upload their custom emotes, they give each one a specific emote code, as shown in Figure 1 (see below). The emote image and emote code pair is then reviewed by the safety specialists. The specialists decide if the emote violates Twitch’s Community Guidelines, and if it does, they categorize the violation by type. For example, emote image and code pairs that violate our hate speech guidelines are categorized accordingly. Because the Safety Operation team specializes in enforcing community guidelines, their emote review data can be regarded as a source of truth. In other words, it is high-quality training data for our machine learning algorithm.

The training data for our algorithm consists of 112-by-112-pixel emote images, their corresponding alphanumeric emote codes, and their primary violation category, if applicable. We’ve tracked all emote review data since Q1 2020, resulting in hundreds of thousands of guideline-violating emotes and millions of approved, non-violating emotes.

With the available review data, we frame the problem as a multi-class classification problem. In other words, we ask the model to predict the following: “given an image and text pair, what’s the likelihood that the pair falls into each of the community guideline violation categories?”
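To make the framing concrete, here is a minimal sketch of what a training record and the label encoding might look like. The category names and field names are illustrative assumptions, not Twitch’s actual schema.

from dataclasses import dataclass

import numpy as np

# One "approved" class plus one class per violation category.
# These category names are placeholders, not Twitch's real taxonomy.
CLASSES = ["approved", "hate_speech", "harassment", "nudity", "violence"]

@dataclass
class EmoteExample:
    image: np.ndarray   # 112x112x3 RGB pixel array
    code: str           # alphanumeric emote code, e.g. "streamerYAY"
    label: str          # one of CLASSES (the primary violation, or "approved")

def encode_label(label: str) -> np.ndarray:
    """One-hot encode a label for a softmax output trained with categorical cross-entropy."""
    vec = np.zeros(len(CLASSES), dtype=np.float32)
    vec[CLASSES.index(label)] = 1.0
    return vec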

Training a model takes time, and Twitch chat moves fast, so we wanted to speed up iterations where we could. For our model architecture, we chose to leverage transfer learning, because it generally works well and is simple to implement. Put simply, transfer learning is a deep learning technique where you reuse a pre-trained model as a feature extractor that feeds into a new neural network.

The emote image is transformed into a vector via an image embedding, and the emote code is transformed into a vector via a character-based text embedding. The image embedding is passed through a global average pooling layer and then concatenated with the text embedding. The combined vector goes through a series of dense layers, the last of which predicts the violation class.

For the image embedding, we transfer learn from a MobileNetV2 model pre-trained on the ImageNet dataset. We chose MobileNetV2 because it is very fast to run. We use an internally developed GRU-based model as the text embedding.
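As a rough illustration of the architecture described above, here is a Keras-style sketch, assuming TensorFlow. The internally developed GRU-based text embedding is stood in for by a plain character embedding plus GRU layer, and the layer sizes, vocabulary size, and number of classes are assumptions.

import tensorflow as tf

NUM_CLASSES = 5      # "approved" plus the violation categories (illustrative)
VOCAB_SIZE = 64      # size of the character vocabulary (assumption)
MAX_CODE_LEN = 30    # maximum emote-code length (assumption)

# Image branch: MobileNetV2 pre-trained on ImageNet, used as a frozen feature extractor.
# Images are assumed to already be scaled to [-1, 1] as MobileNetV2 expects.
image_in = tf.keras.Input(shape=(112, 112, 3), name="emote_image")
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(112, 112, 3), include_top=False, weights="imagenet")
backbone.trainable = False    # transfer learning: keep the pre-trained weights fixed
image_vec = tf.keras.layers.GlobalAveragePooling2D()(backbone(image_in))

# Text branch: character-level embedding of the emote code followed by a GRU.
code_in = tf.keras.Input(shape=(MAX_CODE_LEN,), dtype="int32", name="emote_code")
char_emb = tf.keras.layers.Embedding(VOCAB_SIZE, 32)(code_in)
text_vec = tf.keras.layers.GRU(64)(char_emb)

# Concatenate both embeddings and classify with a stack of dense layers.
x = tf.keras.layers.Concatenate()([image_vec, text_vec])
x = tf.keras.layers.Dense(256, activation="relu")(x)
x = tf.keras.layers.Dense(128, activation="relu")(x)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model(inputs=[image_in, code_in], outputs=out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

Freezing the backbone means only the new layers are trained, which is a big part of why transfer learning is quick to iterate on.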

What is my model thinking?

To check that the model has learned what we expect, we used LIME to identify which parts of an emote image were contributing most to a community-guideline-violating prediction. For example, under the Twitch Community Guidelines, emotes consisting of individual letters or characters will not be accepted. Below, I drew the following images and used LIME to interpret the model’s results. We see that the model primarily bases its decision on the “Y” and “A” of the image, capturing our intuitive sense of why these emotes are violations.

Figure 2: The pink area signals the region that causes the model to flag the emote, the yellow outline indicates the boundary, and the gray area is unimportant.
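For reference, here is a minimal sketch of how LIME can be asked to highlight the image regions driving a prediction, assuming the open-source lime package and a two-input model like the sketch above. The classifier wrapper, emote_image, and fixed_code are illustrative placeholders.

import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_images(images):
    # LIME passes a batch of perturbed images; pair each one with the emote code
    # so the two-input model can be called. fixed_code is a placeholder int array.
    codes = np.repeat(fixed_code[np.newaxis, :], len(images), axis=0)
    return model.predict([images, codes])

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    emote_image.astype("double"),   # the 112x112x3 emote image to explain
    predict_images,
    top_labels=1, hide_color=0, num_samples=1000)

# Overlay the superpixels that contributed most to the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
highlighted = mark_boundaries(temp / 255.0, mask)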

The model takes too long to train! 

Originally, the model took more than a day to train because it needed to process millions of images. A long training time greatly hinders our ability to iterate and keep the model up to date with new emote trends. We found a few ways to speed it up by almost 2x.

Our biggest bottleneck was data collection. The original method of collecting data was highly inefficient: we downloaded images and codes from CloudFront and Redshift respectively, and uploaded them to S3 one by one. We parallelized this process to download 20 images at once, speeding up data collection by 20 times.
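Here is a sketch of what the parallelized collection step could look like, assuming image URLs served from CloudFront and a destination S3 bucket; the bucket name, URL, and record fields are placeholders.

import concurrent.futures

import boto3
import requests

s3 = boto3.client("s3")
BUCKET = "emote-training-data"   # placeholder bucket name

# Placeholder records; in practice these come from the Redshift query results.
emotes = [{"id": "123", "image_url": "https://example.cloudfront.net/emotes/123.png"}]

def fetch_and_upload(emote):
    """Download one emote image and upload it straight to S3."""
    image_bytes = requests.get(emote["image_url"], timeout=10).content
    s3.put_object(Bucket=BUCKET, Key=f"emotes/{emote['id']}.png", Body=image_bytes)

# Download up to 20 images at a time instead of one by one.
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as pool:
    list(pool.map(fetch_and_upload, emotes))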

We also used in-memory caching during training and chose bigger AWS instance types.
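The post doesn’t spell out the caching mechanism; if the input pipeline were built on tf.data, in-memory caching could look roughly like this, with image_paths, codes, and labels as placeholder tensors.

import tensorflow as tf

def load_and_decode(path, code, label):
    # Read and decode one emote image, scaling it the way MobileNetV2 expects.
    image = tf.io.decode_png(tf.io.read_file(path), channels=3)
    image = tf.image.resize(image, (112, 112)) / 127.5 - 1.0
    return (image, code), label

dataset = (
    tf.data.Dataset.from_tensor_slices((image_paths, codes, labels))  # placeholder tensors
    .map(load_and_decode, num_parallel_calls=tf.data.AUTOTUNE)
    .cache()                       # after the first epoch, examples are served from memory
    .shuffle(10_000)
    .batch(256)
    .prefetch(tf.data.AUTOTUNE)
)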

Conclusion

Looking through many emote violations from the past year, I truly believe that people often violate Twitch’s Community Guidelines simply because they are not aware of them. If you or your friends are thinking of getting custom emotes, please spread the word and read the guidelines.

Want to join our quest to empower live communities on the internet? Find out more about what it is like to work at Twitch on our Career Site, LinkedIn, and Instagram, or check out our Job Openings and apply.

