If I Ruled The Internet

If I ruled the Internet, it would defeat the purpose. No single individual or company should be able to filter or manipulate the free flow of information on the Internet. The Internet and the spread of ideas should be organic, without structure or limitation. I should not have to spend money to communicate an idea. I should not have to mistrust companies like Facebook and Google for manipulating my search results and News Feed based on what I liked yesterday. I should not have to conform to an algorithm that lacks a basic moral compass.

The Internet is for everyone. Some use it for bad and others use it for good, as with most tools readily available to us, but it is up to us to ensure the integrity of the Internet as an independent platform, free of unnecessary rules or regulations that inhibit the average user from seeking information, challenging ideas, or digesting content.


The likelihood that everything we do on the Internet is recorded is all the more reason to let the Internet live on as a self-sustaining platform free to use and free from tyranny.

If I ruled the Internet, there wouldn’t be one in the first place.


Food For Thought

This is the original TED Talk that compelled me to write about the topic of algorithmic gatekeeping.

Edward Snowden, notorious whistle-blower, on the nature of the Internet.

New York Times article on censorship in relation to algorithms.

Bloomberg on algorithmic gatekeeping.

On The Subject of Transparency

As I’ve said before, algorithms aren’t inherently dangerous, and neither is the precedent they set. Algorithms help companies like Facebook build a more comprehensive understanding of their user bases, making it easier to market to them. That is the premise, anyway, but we’re seldom affected by targeted advertisements: at best they’re scary, and at worst they’re ineffective. Cataloguing our content into convenient boxes only shrinks a user’s understanding of the vastness that exists on the Internet.

Algorithms were not designed to limit or constrain an audience, but that’s precisely what they do in the grand scheme of things. Maybe it’s our own fault for wanting content delivered in the most efficient way possible, with little to no sifting to find what we need. Using Google has become easier, and while we can’t complain about that, it is difficult to take comfort in the idea that Google will only show me content based on my location and previous clicks.

If algorithms are to be used for the betterment of the Internet, there should be transparency about them. The average person should be able to understand the ins and outs of an algorithm they’re subjected to, or, alternatively, should be able to opt out of it altogether. If the average user can’t understand the algorithm, how can that user judge how accurate the flow of information is in the first place? Some people are so deeply involved with Facebook, and derive so many of their personal values from it, that it’s almost criminal to deploy an algorithm that does not encourage personal growth and different perspectives. I believe the News Feed algorithm that exists today benefits businesses and advertisers more than it does the organic user base, and I think it’s important that we dissect what is attractive about Facebook and the implications of the content delivered to us with and without our consent.

A site that claims to deliver the very best content to its users needs to be consistent in that claim and reckon with the long-term effects of changing the flow of information. Will we become stagnant in our ideals and pursuits because of the content pushed to us daily? Will we lose sight of important values because we’re more inclined to ‘like’ their complete opposite? Will we lose the ability to communicate on a massive scale because the content we post isn’t paid for?

Fixing The Problem

Veering away from the implications of business: the algorithm on Facebook was designed innocuously at first, and over time it became a tool that limited information and sorted users into convenient compartments to sell to advertisers. It goes without saying, but algorithmic gatekeeping on social media platforms prevents users from encountering ideas, perspectives, or products that exist outside their filter bubble. Mark Zuckerberg wants Facebook to feel like “the perfect personalized newspaper,” but if Facebook’s algorithm deems the launch of a new lipstick by MAC Cosmetics more relevant to you than mass genocide in the Middle East, is that truly a news source? What if the algorithm decides that a picture of your friend at a Rihanna concert is more important than the livestream of a Republican or Democratic debate?

The solution is a delicate balancing act, because the reality is that algorithms will never completely disappear. We, as patrons of the Internet, will always be subject to changes that benefit advertisers and allow companies like Facebook to compartmentalize their user base. At the bare minimum, if the algorithm can’t change, we should be able to change around it: at the click of a button, I should be able to opt out of algorithmic gatekeeping and arrange my timeline in chronological order, without having to sift through menus and submenus to get the desired result.
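Mechanically, the opt-out described above is just a different sort order over the same set of posts. A minimal sketch of the idea (the field names and scores here are hypothetical; Facebook’s actual ranking is far more complex and not public):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int          # seconds since some epoch
    relevance_score: float  # whatever opaque score a ranking model assigns

def arrange_feed(posts, chronological=False):
    """Return the same posts, ordered either by the opaque ranking score
    or, if the user opts out, newest-first by timestamp."""
    if chronological:
        return sorted(posts, key=lambda p: p.timestamp, reverse=True)
    return sorted(posts, key=lambda p: p.relevance_score, reverse=True)

feed = [
    Post("advertiser", timestamp=100, relevance_score=0.9),
    Post("friend",     timestamp=300, relevance_score=0.2),
    Post("news",       timestamp=200, relevance_score=0.5),
]

ranked    = arrange_feed(feed)                      # advertiser surfaces first
opted_out = arrange_feed(feed, chronological=True)  # most recent post first
```

The point of the sketch is how small the difference is: the data is identical either way, so offering the chronological view is a one-flag design choice, not a technical obstacle.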

As Ingram of Gigaom put it, if you control the platform, you control the information flow, and adding an algorithm to the mix constructs more barriers in that flow. At what point will algorithmic gatekeeping change the way we perceive the world, and at what cost? At what benefit?

More recently, the government urged Silicon Valley to build terrorist-identifying algorithms to flag, detect, and measure radicalization. While this may alleviate some anxieties felt by intelligence agencies and citizens of the United States, it could be a precursor to other algorithms that target everyday Americans on the very same platforms. For example, Media Sonar, a service used to understand location-based conversations and trends, was used by the Fresno Police Department to identify “threats” without the public’s consent.

Surveillance of the public through the lens of social media and algorithmic gatekeeping go hand in hand, because these tools are used without transparency or accountability by the agencies that deploy them. That cannot be permissible, and intelligence agencies must be removed from the equation if the Internet is to be used as it was originally designed. No one wants to be targeted or tracked based on their Internet usage, and ending that practice is truly the first step in severing the chains that bind normally free-flowing information.

The Business Perspective

Algorithmic gatekeeping is not mandated by any government agency, but it is probably silently encouraged. It gives intelligence agencies concise, efficient access to information, and, more to the point, it enables Facebook to bill advertisers top dollar for “Suggested Posts.” Facebook has become, now more than ever, a comprehensive and interactive advertisement, and this is increasingly problematic for people who run “Pages” for their companies, magazines or zines, products, or services.

As one user pointed out, Facebook’s algorithm becomes an obstacle for the admins of “Pages” because it filters out their content unless users are actively engaging with it. On a long enough timeline, and with a small enough budget, a successful “Page” can grow stagnant. When this happens, Facebook will prompt the admin to extend the Page’s reach (its Internet influence) by purchasing an advertisement. This is a slippery slope, because money is spent to gain “Likes,” yet the users who “Like” the Page will seldom see the content it posts in the first place. Admins also have the option to “Boost” a post, and depending on the number of subscribers a Page has, prices start at around $7 per post. In the link above, Clifton claims that boosting a single post so it could reach up to 76,000 users would have cost her $150.
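Taking Clifton’s figures at face value, the implied price per reader is easy to work out (her numbers, my arithmetic):

```python
# Back-of-the-envelope check of the boost pricing Clifton describes:
# $150 to reach up to 76,000 users.
boost_cost = 150.00        # USD for one boosted post
potential_reach = 76_000   # maximum users the boost could reach

cost_per_thousand = boost_cost / potential_reach * 1_000
print(f"~${cost_per_thousand:.2f} per 1,000 users reached")  # roughly $1.97
```

Cheap per impression on paper, but only if the boost actually delivers that reach, which is exactly the transparency problem with spending this money in the first place.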

Conversely, other users describe success with boosting posts and the revenue it generated shortly after, but it’s hard to imagine that being sustainable for “Pages” with hundreds of thousands of subscribers. Beyond the business perspective, the average user suffers, because they will not always wake up the following day liking the same things. That is the major flaw in the algorithm present on Facebook today.

Fixing the problem, however, is an entirely different issue. Facebook is a business, and like any other business, it seeks to make a profit year over year. Facebook’s capitalistic nature is not necessarily negative, but successful Pages that existed before the surge in business usage must ultimately pay more to stay relevant. Facebook’s algorithm clearly rewards the purchase of ads, and there is nothing inherently wrong with that practice. The problem arises because Facebook doesn’t clearly explain how the money spent affects reach: the algorithm is not transparent to businesses, or even to the average user, leaving advertisers blindly dumping budget into ineffective ads when that money could achieve better reach elsewhere.



Taken from The Federalist.

Skewing Perspectives

As I mentioned earlier, algorithmic gatekeeping is a practice that ‘personalizes’ your news feed, your Google search results, and most recently Instagram (but not without a fight). On Easter, Instagram announced that it, too, would test a new algorithm similar to Facebook’s. Some users collectively voiced their concerns over the coming change, while others encouraged their followers to preemptively opt in to “post notifications” to avoid missing a specific account’s posts on their feed (as pictured below).



Taken from Google Images.

Ironically enough, Facebook users don’t seem bothered by the strict algorithm that has slowly evolved on Facebook since August 2013, because the changes were gradual. Even Eli Pariser, from the TED Talk I mentioned previously, didn’t notice his news feed changing until it was too late. Fortunately, Instagram users were given notice, albeit short, to effect that change in whatever way we deem necessary. Users of social media platforms seldom read privacy statements or pay attention to updates because they’re seen as “necessary,” and by necessary, I mean the website would supposedly cease to exist without them.

While that couldn’t be further from the truth, the everyday user might feel so consumed by the platform that minor changes to how information is received sit on the back burner of their mind. Change continues until the user suddenly doesn’t recognize the platform for what it was; that’s when there’s a noticeable problem. Truthfully, the problem surfaces the minute users admit they’re powerless against changes, both in privacy and in interface, and it compounds until the data is skewed and information becomes more difficult to access.

Algorithms can be useful, but when it comes to filtering information or determining the relevancy of our content, they can be harmful. Instagram users from all walks of life opposed the testing, most notably Kylie Jenner:


Taken from Google Images.

Algorithmic gatekeeping can become harmful when it impacts users who get paid because of their acclaim on the platform, which is fairly common. Instagram responded diplomatically by saying, “we are aware of your concerns, and we assure you nothing is changing with your feed right now.”

Removing Instagram’s chronological format might be harmless if implemented correctly, and I believe the motive is capitalistic in nature, which isn’t necessarily a bad thing. Popular users have adjusted to time-zone differences and understand the most opportune times of day to distribute new content to their followers. But if an algorithm removes the chronological format that exists today, it removes the very premise that makes Instagram desirable. It will become just an overly visual Facebook, and the content its algorithm deems relevant will probably fall short, just as Facebook’s has.

Why You Should Be Worried About Algorithmic Gatekeeping

The birth of the Internet was perhaps the next evolutionary step for mankind. Information at the click of a mouse: the good, the bad, the ugly. We took it in stride and welcomed the ability to transmit information at the speed of light. Misuse of the Internet resulted in government interference, but we still reach toward it with outstretched arms, hoping it will continue to provide the information we require without interruption. While the net neutrality debate threatens to pass the torch to our government to regulate ISPs, I believe there is a considerably larger threat afoot.

Algorithmic gatekeeping, a concept Eli Pariser explored in a still-popular TED Talk, removes the premise that made the Internet so attractive in the first place. The Internet challenges its patrons to digest information that is unpleasant, informative, different, and reflective of other points of view; instead, we are served only what search engines like Google and social media platforms like Facebook deem relevant. The sparkling appeal of the Internet vanishes under algorithmic gatekeepers, which clutter our news feeds and search results with items they assume we want to see rather than what we actually need to see. Google and Facebook do this to each of us without our knowledge, and we indulge in the information because we assume it is the only thing there for us to consume. Commercial interests distort perceptions and manipulate cyberspace. Facebook filters out the conflicting viewpoints of others once the algorithm determines that we interact with that content infrequently.
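The filtering described above can be caricatured in a few lines. This is a toy model, not Facebook’s actual algorithm; the source names, interaction counts, and threshold are all invented for illustration:

```python
# Toy model of interaction-frequency filtering: content from sources the
# user engages with infrequently is silently dropped from the feed.
interaction_counts = {        # how often the user engaged with each source
    "like_minded_page": 40,
    "opposing_viewpoint": 2,
    "local_news": 15,
}

VISIBILITY_THRESHOLD = 5      # arbitrary cutoff for this sketch

def visible_sources(counts, threshold=VISIBILITY_THRESHOLD):
    """Keep only sources the user engages with 'enough'; the rest
    vanish without the user ever being told they were filtered."""
    return [src for src, n in counts.items() if n >= threshold]

print(visible_sources(interaction_counts))
# the opposing viewpoint is gone, and nothing marks its absence
```

The troubling property is the feedback loop: once a source is hidden, the user can no longer interact with it, so its count never recovers and the filtering becomes permanent.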

What we’re left with is a paradigm shift in how we seek information on the Internet and how it is broadcast to us. The danger arrives when we become stagnant in our perspectives of the world and no longer encounter information that challenges our beliefs.