Friday, December 13, 2024

Xbox introduces new AI-powered safeguards to protect players from unwanted messages as part of its comprehensive approach to online safety.

As we continue our mission at Xbox to bring joy and community to even more players, we remain steadfast in our commitment to shielding players from harmful online behavior, creating safer and more welcoming experiences, and being transparent about our efforts to safeguard the Xbox community.

We are combining player-centric solutions with the responsible use of AI to augment human expertise in detecting and preventing unwanted behavior on our platform, helping us maintain a safe environment that keeps pace with the evolving needs of our growing gaming community.

For the period of January 2024 through June 2024, our focus has been on blocking unwanted messages from players outside your friends list, and on new AI-powered tools that detect and stop spam and advertising, as part of our comprehensive approach to protecting players.

  • Since introducing our approach to identifying and blocking harmful messages sent by players outside your friends list, we have made significant progress in curbing the spread of offensive content, with a substantial increase in protected conversations. Between January and June, this protection covered content shared with players across text, images, and video. The updated approach balances shielding players from harmful content sent by non-friends with preserving the authentic online gaming experiences our community values. We encourage players to use the customizable settings, which offer greater control and flexibility over who can contact them.
  • Player reporting remains a vital part of our overall safety strategy. During this period, our community helped us identify a spike in spam and advertising on the platform. We continuously refine our methods to minimize the impact of fake accounts on both players and moderators. Players contributed directly to this effort by reporting the interactions they encountered, much of it appearing in Looking for Group (LFG) posts. Compared with the previous transparency report period, the mix of reported content types has shifted noticeably.
  • We have introduced two new AI-powered tools to support our moderation teams. These tools not only reduce the spread of disruptive content among players, they also free our human moderators to focus on more complex and nuanced issues, improving overall community moderation. The first is Xbox AutoMod, a system launched in February that assists with the moderation of reported content and provides real-time support to human moderators. The second, an AI-powered solution launched in July, works proactively to flag and prevent unwanted messages. We have now directed both tools at detecting spam and advertising, allowing us to broaden our efforts and stop harm earlier.

At the foundation of these innovations is a safety framework that relies on the vigilance of individual players and the expertise of human moderators, whose feedback forms a closed loop that keeps our systems operating consistently and accurately.

Microsoft Gaming's pursuit of innovation in safety is matched by broader efforts across our studios to improve the player experience.

At Mojang Studios, we believe every player has a role in keeping Minecraft a safe and inclusive place for everyone. Mojang has introduced a feature in Minecraft: Bedrock Edition that proactively reminds players of the game's Community Standards when potentially harmful or offensive behavior is detected in text chat. The reminder gives players on servers a chance to adjust how they interact before facing consequences such as an account suspension or ban. Over the past year, Mojang, in collaboration with GamerSafer, has also helped many server owners strengthen their community management and safety practices, and this resource helps players, parents, and guardians find Minecraft servers committed to the safety practices they value.

The Call of Duty team is committed to combating toxicity and promoting fair play. To address behavior that violates community guidelines, the team deploys advanced technology, including AI, to strengthen its moderation teams and counter toxic behavior. These tools are designed to foster a community where players are treated with dignity and respect and compete fairly and honestly. More than 45 million text-based messages have been blocked across more than 20 languages, contributing to a marked reduction in online toxicity, and players' exposure to voice toxicity has dropped by 43%. With this latest rollout, the team added voice moderation support for French and German, joining existing support for English, Spanish, and Portuguese. As part of its ongoing research, the team is also studying prosocial behavior in gaming.

As the gaming landscape continues to evolve, we are building a community of dedicated, like-minded, and empathetic players who come to our platform to enjoy great experiences, have fun, and forge meaningful connections with fellow players.

We remain committed to platform safety and to the responsible development of AI by design, informed by Microsoft's principles and strengthened by our collaborations with organizations such as the Tech Coalition. Thank you for your continued support in making our community thrive and for staying with us on this journey.
