Meta explains the AI behind its social media algorithms
Meta has published a deep dive into the company’s social media algorithms in a bid to demystify how content is recommended for Instagram and Facebook users. In a blog post published on Thursday, Meta’s President of Global Affairs Nick Clegg said that the info dump on the AI systems behind its algorithms is part of the company’s “wider ethos of openness, transparency, and accountability,” and outlined what Facebook and Instagram users can do to better control what content they see on the platforms.
“With rapid advances taking place with powerful technologies like generative AI, it’s understandable that people are both excited by the possibilities and concerned about the risks,” Clegg said in the blog. “We believe that the best way to respond to those concerns is with openness.”
22 “system cards” are now available that outline how content is ranked and recommended for Facebook and Instagram users
Most of the information is contained within 22 “system cards” that cover the Feed, Stories, Reels, and other ways that people discover and consume content on Meta’s social media platforms. Each of these cards provides detailed yet approachable information about how the AI systems behind these features rank and recommend content. For example, the overview of Instagram Explore — a feature that shows users photo and Reels content from accounts they don’t follow — explains the three-step process behind the automated AI recommendation engine.
The card says that Instagram users can influence this process by saving content (indicating that the system should show them similar content) or by marking it as “not interested” to encourage the system to filter out similar content in the future. Users can also see reels and photos that haven’t been specifically selected for them by the algorithm by selecting “Not personalized” in the Explore filter. More information about Meta’s predictive AI models, the input signals used to direct them, and how frequently they’re used to rank content is available via its Transparency Center.
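To make that three-step flow concrete, here is a minimal, hypothetical sketch of a gather-score-rank pipeline of the kind the Explore card describes. This is not Meta’s actual system: the topic-based signals, weights, and function names are all illustrative assumptions, simplified to show how “save” and “not interested” feedback could steer what gets recommended.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    post_id: str
    author_id: str
    topic: str

@dataclass
class UserSignals:
    # Topics of posts the user saved, and topics marked "not interested" (assumed signals).
    saved_topics: set = field(default_factory=set)
    not_interested_topics: set = field(default_factory=set)

def gather_inventory(all_posts, following):
    """Step 1: collect candidate posts from accounts the user doesn't follow."""
    return [p for p in all_posts if p.author_id not in following]

def score(candidate, signals):
    """Step 2: predict how relevant a candidate is, using simple engagement signals."""
    s = 1.0
    if candidate.topic in signals.saved_topics:
        s += 2.0   # saving similar content boosts related recommendations
    if candidate.topic in signals.not_interested_topics:
        s -= 5.0   # "not interested" feedback pushes similar content down or out
    return s

def rank(candidates, signals, limit=10):
    """Step 3: order candidates by predicted relevance and drop filtered-out items."""
    scored = [(score(c, signals), c) for c in candidates]
    scored = [(s, c) for s, c in scored if s > 0]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:limit]]

if __name__ == "__main__":
    posts = [
        Candidate("p1", "acct_a", "cooking"),
        Candidate("p2", "acct_b", "travel"),
        Candidate("p3", "acct_c", "crypto"),
    ]
    me = UserSignals(saved_topics={"cooking"}, not_interested_topics={"crypto"})
    feed = rank(gather_inventory(posts, following={"acct_b"}), me)
    print([p.post_id for p in feed])  # cooking ranks first; crypto is filtered out
```

In this toy version, a post on a saved topic rises to the top while a post on a “not interested” topic drops below the cutoff entirely, which mirrors the user controls the system card describes at a very high level.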
Instagram is testing a feature that will allow users to mark reels as “interested” to see similar content in the future
Alongside the system cards, the blog post mentions a few other Instagram and Facebook features that can inform users why they’re seeing certain content, and how they can tailor their recommendations. Meta is expanding the “Why Am I Seeing This?” feature to Facebook Reels, Instagram Reels, and Instagram’s Explore tab in “the coming weeks.” This will allow users to click on an individual reel to find out how their previous activity may have influenced the system to show it to them. Instagram is also testing a new Reels feature that will allow users to mark recommended reels as “Interested” to see similar content in the future. The ability to mark content as “Not Interested” has been available since 2021.
Meta also announced that it will begin rolling out its Content Library and API, a new suite of research tools containing public data from Instagram and Facebook, in the coming weeks. Data from this library can be searched, explored, and filtered, and researchers will be able to apply for access to these tools through approved partners, starting with the University of Michigan’s Inter-university Consortium for Political and Social Research. Meta claims these tools will provide “the most comprehensive access to publicly-available content across Facebook and Instagram of any research tool we have built to date” while also helping the company meet its data-sharing and transparency compliance obligations.
Those transparency obligations are potentially the largest factor driving Meta’s decision to better explain how it uses AI to shape the content we see and interact with. The explosive development of AI technology and its surge in popularity in recent months have drawn attention from regulators around the world, who have expressed concern about how these systems collect, manage, and use our personal data. Meta’s algorithms aren’t new, but the way it mismanaged user data during the Cambridge Analytica scandal and the reactions to TikTok’s tepid transparency efforts are likely a motivational reminder to overcommunicate.