Context

Recommendation algorithms black-box their internal logic and decision-making. We don’t know what data goes in, and we have no visibility into why certain decisions come out. We end up fetishising the algorithms, calling them “magic” as a way of dismissing the need to make them legible to users.

Problem

Recommendation algorithms on many popular social media platforms run unchecked: as an audience, we have no visibility into which metrics the engineers are optimising for, and the engineers themselves don’t fully understand how the algorithms choose what to recommend. This lack of transparency makes it difficult to evaluate why we’re being shown certain content, and it obscures the fact that some algorithms maximise qualities like emotional outrage, shock value, and political extremism. We lack the agency to evaluate or change the algorithms serving us content. These algorithms also decide which advertisements we see, based on whatever data the platform has access to; as users, we have no way of knowing what data is being used, or why certain companies or products are targeting us as potential customers.

Solution

When a piece of content is recommended by an automated system, it should include an epistemic disclosure message explaining why it was suggested and what factors went into that decision. Advertisements in particular should carry these disclosures, indicating which data points led to the ad being served (e.g. age, gender, race, location, income).
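
As a rough illustration, such a disclosure could be exposed as structured data that a client renders into a “Why am I seeing this?” panel. The TypeScript sketch below is hypothetical: the interface names, fields, and signal categories are assumptions for the sake of the example, not any platform’s actual API.

```typescript
// Hypothetical shape of an epistemic disclosure attached to a recommended item.
// Field names and categories are illustrative, not drawn from any real platform API.
interface RecommendationDisclosure {
  itemId: string;                 // the recommended post or ad
  source: "recommendation" | "advertisement";
  optimisationTarget: string;     // e.g. "predicted watch time", "click-through rate"
  signalsUsed: DisclosedSignal[]; // the data points that fed the decision
  advertiser?: string;            // present only for ads
}

interface DisclosedSignal {
  category: "age" | "gender" | "race" | "location" | "income" | "interest" | "behaviour";
  value: string;                  // the value the platform holds or has inferred
  weight?: number;                // optional: relative influence on the decision, 0..1
}

// Example payload a client could render alongside a served advertisement.
const example: RecommendationDisclosure = {
  itemId: "ad-82731",
  source: "advertisement",
  optimisationTarget: "click-through rate",
  advertiser: "Example Outdoor Gear Co.",
  signalsUsed: [
    { category: "age", value: "25-34", weight: 0.4 },
    { category: "location", value: "Berlin, DE", weight: 0.35 },
    { category: "interest", value: "hiking", weight: 0.25 },
  ],
};
```

The point of a structured format like this is that the disclosure travels with the content itself, so any client or third-party tool could surface it to users rather than relying on the platform’s own interface to do so.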