Call of Duty Turns to AI to Moderate Voice Chats and Combat Toxic Behavior
Game developers are grappling with abusive language in online gaming communities, and Call of Duty is enlisting artificial intelligence (AI) to rein in toxic players. Activision, publisher of the massively popular shooter franchise, has rolled out ToxMod, an AI-powered voice chat moderation tool from startup Modulate. ToxMod identifies toxic speech in real time, including hate speech, discriminatory language, and harassment, and flags it for enforcement. The feature is already live in North America and will expand globally (excluding Asia) on November 10. Activision’s existing filtering tools have already had an impact, with over 1 million accounts restricted for violating the franchise’s code of conduct since last November.
Key Points:
- Call of Duty has rolled out ToxMod, an AI-powered voice chat moderation tool, to combat toxic behavior and abusive language in its online player community.
- ToxMod identifies toxic speech in real time, including hate speech, discriminatory language, and harassment, and flags it for enforcement.
- The feature is currently live in North America and will expand globally (excluding Asia) on November 10.
- Activision’s existing filtering tools have already had an impact, with over 1 million accounts restricted for violating the franchise’s code of conduct since last November.
- Call of Duty players are encouraged to keep reporting disruptive behavior even with the AI feature in place.
Hot Take:
The use of AI to moderate voice chat in online gaming communities is a step in the right direction for combating toxic behavior. By identifying and acting on abusive language in real time, game developers can create a safer, more inclusive environment for players. Activision’s efforts here demonstrate its commitment to improving the gaming experience for all players.