Veering away from the implications for business, the algorithm behind Facebook was designed innocuously at first, but over time it became a tool that limited information and sorted users into convenient compartments to sell to advertisers. Algorithmic gatekeeping on social media platforms prevents users from being presented with ideas, perspectives, or products that fall outside their filter bubble. Mark Zuckerberg wants Facebook to feel like "the perfect personalized newspaper," but if Facebook's algorithm deems that the launch of a new lipstick by MAC Cosmetics is more relevant to you than mass genocide in the Middle East, is that truly a news source? What if the algorithm determines that a picture of your friend at a Rihanna concert is more important than the livestream of the Republican or Democratic debate?
The solution is a delicate balancing act, because the reality is that algorithms will never completely disappear. We, as patrons of the Internet, will always be subjected to changes that benefit advertisers and allow companies like Facebook to compartmentalize their user base. At a bare minimum, if the algorithm can't change, we should be able to work around it: at the click of a button, I should be able to opt out of algorithmic gatekeeping and arrange my timeline in chronological order, without having to sift through menus and submenus to get the desired result.
As Ingram, a writer at Gigaom, put it: if you control the platform, you control the information flow, and adding an algorithm to the mix constructs more barriers in that flow. At what point will algorithmic gatekeeping change the way we perceive the world, and at what cost? At what benefit?
More recently, the government urged Silicon Valley to build terrorist-identifying algorithms to flag, detect, and measure radicalization. While this concept may alleviate some anxieties felt by intelligence agencies and citizens of the United States, it could be a precursor to other algorithms that target everyday Americans on the very same platforms. For example, the Fresno Police Department used Media Sonar, a service for analyzing location-based conversations and trends, to identify "threats" without the public's consent.
Surveillance of the public through the lens of social media and algorithmic gatekeeping go hand in hand, because both tools are used without transparency or accountability by the agencies that deploy them. That cannot be permissible, and intelligence agencies must be removed from the equation if the Internet is to be used as it was originally designed. No one wants to be targeted or tracked based on their Internet usage, and ending that practice is truly the first step in severing the chains that bind normally free-flowing information.