09/07/2023 / By Arsenio Toledo
Video game giant Activision has enabled a new artificial intelligence-powered tool in some of its most popular games that eavesdrops on in-game voice chat conversations to identify and suppress instances of so-called hate speech, discrimination and harassment in real time.
Activision, known for its flagship first-person shooter franchise “Call of Duty,” has officially announced a partnership with AI startup Modulate to integrate Modulate’s AI-powered voice moderation tool, ToxMod, into several of the company’s most popular games, including “Modern Warfare 2,” “Warzone 2” and, eventually, “Modern Warfare 3” when it fully launches in November.
Modulate describes ToxMod as “the only proactive voice chat moderation solution purpose-built for games.” According to the company’s website, ToxMod works in stages: it triages voice chat samples to flag suspected bad behavior, analyzes the nuances of each conversation to determine the level of toxicity, and escalates incidents to human moderators, supplying them with relevant and accurate context so they can respond quickly.
Modulate CEO and co-founder Mike Pappas said in a recent interview that the voice moderation tool goes beyond merely transcribing conversations for review by human moderators. ToxMod analyzes voice chat samples and takes factors like a player’s emotion and volume into account to determine whether statements made during in-game conversations are harmful or merely playful.
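Modulate has not published ToxMod’s internals, but the triage-analyze-escalate flow described above can be sketched in code. The following Python sketch is purely illustrative: the class and function names (VoiceClip, toxicity_score, submit_report), the signal weights and the escalation threshold are all assumptions drawn from the description above, not Modulate’s actual API or model.

```python
from dataclasses import dataclass

# Illustrative sketch of the triage -> analyze -> escalate flow Modulate
# describes. Every name, weight and threshold here is hypothetical;
# Modulate has not published ToxMod's actual implementation.

@dataclass
class VoiceClip:
    player_id: str
    transcript: str       # assumed output of a speech-to-text stage
    emotion_score: float  # assumed scale: 0.0 (calm) to 1.0 (aggressive)
    volume_level: float   # assumed normalized loudness, 0.0 to 1.0

HARM_TERMS = {"placeholder_term_a", "placeholder_term_b"}  # stand-in keyword list

def triage(clip: VoiceClip) -> bool:
    """Stage 1: cheap first pass that flags clips worth a closer look."""
    return any(term in clip.transcript.lower() for term in HARM_TERMS)

def toxicity_score(clip: VoiceClip) -> float:
    """Stage 2: weigh context signals (emotion, volume) to separate
    genuinely harmful speech from playful trash talk."""
    base = 0.5 if triage(clip) else 0.0
    return base + 0.3 * clip.emotion_score + 0.2 * clip.volume_level

def submit_report(clip: VoiceClip, score: float) -> None:
    """Stage 3: file a report with context for human review."""
    print(f"Escalating clip from {clip.player_id} (score={score:.2f}) "
          "to the human moderation queue with transcript context.")

def process(clip: VoiceClip, escalation_threshold: float = 0.7) -> None:
    score = toxicity_score(clip)
    if score >= escalation_threshold:
        # Only a report is submitted; per Activision's FAQ, human
        # moderators make the final enforcement decision.
        submit_report(clip, score)
```

Note that in this sketch, as in Activision’s stated workflow, the automated stages only file reports; the enforcement decision stays with human moderators.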
The presence of human moderators to screen reports submitted by ToxMod is a key factor in the AI tool’s rollout. A FAQ released by Activision itself notes that the new voice chat moderation system “only submits reports about toxic behavior” and Activision’s human moderators still have the final say in determining how reported violations of voice chat behavior are enforced.
The rollout of Modulate’s ToxMod is part of a rapidly expanding trend of corporations turning to new technologies like AI to moderate or censor speech on the internet. (Related: Google unveils new “fact-checking tools” meant to censor and keep independent media out of search results.)
ToxMod was developed with assistance from the Anti-Defamation League (ADL). Modulate reportedly called upon the organization to help design ToxMod in such a way that it can detect whether players are “white supremacists” or “alt-right extremists.”
According to Modulate, ToxMod will be detecting “terms and phrases relating to white supremacist groups, radicalization and extremism in real-time.”
“Using research from groups like ADL, studies like the one conducted by [New York University], current thought leadership and conversations with folks in the gaming industry, we’ve developed the category [of “violent radicalization”] to identify signals that have a high correlation with extremist movements, even if the language itself isn’t violent,” wrote Modulate on its website.
Other “violent radicalization” behaviors ToxMod has been primed to detect in voice chat include the alleged “targeted grooming or convincing vulnerable individuals [like children and teenagers] to join a group or movement,” as well as the overt planning of violent actions or threats of physical violence.
So far, over one million gaming accounts have had their access to in-game voice chat restricted by Activision’s anti-toxicity team in “Call of Duty.”
Many consider this another form of censorship. “Tech corporations are making increasing use of AI technology to censor content, a practice they call ‘content moderation,’” wrote Yudi Sherman in an article published on Frontline News.
Learn more about instances of online censorship at CyberWar.news.
Watch this old clip from Fox Business discussing how Activision once punished Hong Kong e-sports player Ng Wai Chung, better known by his online persona Blitzchung, for his support of the Hong Kong democracy protests. The company suspended him from competitive play and rescinded $10,000 in tournament winnings.
This video is from the News Clips channel on Brighteon.com.
Documents offer glimpse inside Censorship Industrial Complex.
Users beware: You could be held liable for mistakes made by AI chatbots.
Conservative chatbot creator says OpenAI tried to censor content, forcing them to abandon the platform.