This afternoon, Instagram head Adam Mosseri warned users not to take photos at face value online, citing the growing prevalence of AI-generated content that can be easily mistaken for reality. Because of this, he suggests users should consider the source, and that social media platforms should help them do so.
“Our primary responsibility as web platforms is to accurately label AI-generated content to the best of our abilities,” Mosseri says, while acknowledging that some content will still slip through undetected. He adds that platforms should also provide context about the people and accounts sharing content, so users can decide for themselves how much to trust what they see.
It’s a worthwhile reminder to stay vigilant: AI chatbots can confidently present misinformation, so it’s worth checking whether a claim or a photo comes from a credible source before accepting it as true. At the moment, Meta’s platforms offer little of the contextual information Mosseri describes, though the company has hinted at major upcoming changes to its content rules.
What Mosseri describes sounds closer to user-led moderation, along the lines of Community Notes on X or Bluesky’s custom moderation tools. Whether Meta plans to introduce anything similar is unclear, but it’s possible the company could take a page from Bluesky’s playbook nonetheless.