10 reasons you might hate Facebook in 2021

Facebook has been a topic of debate for at least 10 years now. I first became aware of the controversies surrounding it around 2016, and I participated in an online debate on the subject in 2017, which you can watch on YouTube here. I think the issue has gained a lot of dimensionality over the intervening years, and I’ve got a wider range of thoughts on it now. This is a long one, but I invite you to skip to the topics you find interesting.

Facebook went and changed the company’s name to Meta just as I was finishing this article. I’ll use that term only when I am referring to the corporation rather than the software or its management.

My Own Bias

I use Facebook often and I generally like it. There are aspects I don’t like very much, but by and large, it entertains me and keeps me closer to people I care about than I otherwise would be. I didn’t really use it a lot until around 2015 when I started traveling full time. While I have accounts on many social networks, Facebook is the one I use most often by a wide margin.

I’m also a software developer so I have a soft spot for developers and understand some of their challenges and limitations. Finally, I like arguing and public debate, which is one of the things Facebook facilitates that other people can find objectionable.

So for all these reasons, I’m inclined to defend it, and indeed I often do.

The Nature of Facebook

If you are reading this, you probably know the basics of what Facebook is: a piece of software that lets you share information and communicate with other people you designate as friends. That is the consumer side of it.

On the business side, it is a marketing platform and information gathering tool. The program serves up targeted ads to its users, and it collects information about its users that is of value to other organizations, mostly for marketing purposes.

There is also a grey area where individuals and companies make accounts they use to share information for marketing and customer service purposes. While it is commercial in nature, it isn’t something that makes Facebook money the way ads and information brokering do.

Facebook makes money by virtue of individuals and companies spending time using the site. Mostly they want you reading/viewing the site so you can see the ads they sell. They also want you to be posting on the site because that makes good material for others to read/view. Both your posting and reading generate data for them which they can use to target ads to you or to sell to companies who want to understand their target market better.

As a result of this business model, Facebook has three basic types of customers they need to serve:

  1. Organizations that want to advertise or consume marketing information
  2. Individual and organizational users who read and post content
  3. Governments that can regulate or manipulate Facebook based on political motivations

The paying customers are the Organizations. In a way, they come first, but the Individuals are what those Organizations are really paying for, so they are just as essential to Facebook. Facebook would probably prefer not to have to serve the needs of Governments at all, but they often don’t have a choice and must cater to them or suffer serious consequences. These three sets of customers don’t all have the same desires and goals, so Facebook has to perform a balancing act to keep everyone sufficiently happy that they can earn money.

Topic 1: Hosting Disinformation

Disinformation is nothing new, but Facebook not only allows it to be distributed widely, it also cloaks it in the guise of coming from your “friends.” Furthermore, since it is so easy to simply click-to-share, the disinformation is conveyed without the distortion of retelling. It is a very powerful medium for lies to propagate.

Critics of Facebook would like to see the company censor disinformation. It’s a worthy cause, but there are a number of reasons this is a much easier thing to say than to do.

  1. It is much easier to make a lie than to determine the truth. Bad actors can create new lies constantly, making effective censorship very challenging.
  2. Nearly 3 billion users make it very expensive to try and censor them all using a central authority.
  3. It is difficult to decide what the truth is on the wide range of topics in which people distribute disinformation, and few would identify Facebook personnel as trustworthy arbiters of truth on any topic.
  4. Meta has little to no financial incentive to remove disinformation unless such information actively reduces the number of people reading and posting on the site.
  5. People who get censored often leave the platform, so widespread censorship is likely to lead to fewer users, which means less revenue, which in turn means less money to pay for censorship.
  6. No matter what you censor, some people will be angry to see it censored.
  7. If you censor members of the ruling government, it could result in state action against the company, severely hampering its ability to do business.
  8. Governments could use political pressure to influence Facebook to censor in a way that purely benefits said government and harms its political opponents, making Facebook into a propaganda tool of the ruling power.
  9. The use of computers to determine what needs to be censored helps make censorship more cost-effective, but it leads to many things being censored in ways that seem arbitrary and ridiculous.
  10. Humans, while they don’t make the same kinds of errors AI tends to, are also error-prone and biased, and thus will over- or under-censor.
  11. It is quite possible that disinformation is more effective than factual information at getting people to read and post on Facebook, thus making it a valuable asset to Facebook’s business model.
  12. Should comedy and satire count as disinformation, and how do you accurately tell the difference?
  13. If Meta has de facto control of what is published on Facebook, they are more likely to be vulnerable to legal liability for what is published there.

This last point highlights that the bad actor in this scenario isn’t really Facebook but its users. The problem is people. People post disinformation, people share disinformation, and people believe disinformation. Those of us who think we know better are asking Facebook to police the liars for the benefit of the gullible. I think the truth is that Facebook cannot really change or control these people’s behavior; at best it simply banishes them to other channels of communication.

While I probably sound like I am arguing against Facebook attempting to censor disinformation, I am not. My aim is to try and set realistic expectations. I think that Facebook does have some financial incentives to limit disinformation on their platform and I also think that there are social responsibilities that even a corporation should try and live up to. However, I think it’s unrealistic to expect them to be highly effective in combating human nature when their financial incentives lie squarely in accommodating it.

I think Facebook’s best strategy here is to be selective: identify specific messages or topics that have a significant and relatively direct social impact, and then either censor those messages or attach a notice of some kind to them. I think their approach to COVID-19 disinformation is a good example. The disinformation, or at least the topic, is fairly easy to identify, and it can have a very real life-or-death impact. Even here, you can find many weaknesses in execution, but I would argue a perfect or even near-perfect execution is probably structurally impossible given all the challenges outlined above.

Topic 2: Hate Speech

The issue of Facebook censoring hate speech not only has all the same challenges as fighting disinformation, but it also adds additional difficulties to the list.

  1. What exactly qualifies as hate speech?
  2. What level of expression of hatred is over the line?
  3. Who needs to be protected from such speech?
  4. How do you measure the harms created by hate speech and how do you measure success in censoring it?

There are also two aspects of this topic that work in favor of censorship.

  1. The targets of hate speech almost universally don’t like hate speech, and thus there is a clear financial incentive for Facebook to not have any hate speech on their platform.
  2. The harms of hate speech tend to be pretty clear in society, whereas the harms of disinformation are sometimes harder to pin down.

Again, while I absolutely would like Facebook to take measures to limit hate speech on their platform, I have rather low expectations for how much success they can have in truly stamping it out. It’s very easy for those propagating hate speech to invent new language and codes to spread it. They can also co-opt anti-hate language or symbols, or those of unrelated groups, to trigger false censorship. I’ve personally seen a number of civil rights activists get censored on the basis of rules designed to censor hate speech.

Topic 3: Stifling Free Speech

Since Facebook has of late started censoring disinformation and hate speech, it has come under fire for stifling free speech, especially on political and social topics. This is of course not a violation of the First Amendment, since Meta is not an arm of the US government, but nonetheless, if you believe in the general principle, you may find Facebook’s actions objectionable.

While I do enjoy the freedom of speech both as a legal right and as a principle, I am not an absolutist on it, especially in the principle sense. I think Meta has a right to make Facebook the kind of place they want it to be, just like I make this blog the kind of place I want it to be. They don’t have any special mandate to accommodate anything I want to say. They have the prerogative to censor me, and I have the prerogative to take my eyeballs elsewhere if I don’t like it.

I really don’t expect Facebook to be some bastion of freedom or public discourse even if it often is used that way.

Topic 4: The Algorithms

I think this topic benefits from an explainer. Facebook, the software, pays attention to what you spend time reading, responding to, and posting. It uses this information to tailor suggestions to you and to order the content created by your friends and the pages you follow. Basically, the more you look at something, the more it thinks you want to look at more stuff like that. It also makes guesses about what you might like based on your personal data, extrapolating that you would probably enjoy what other people like you also enjoy. It does all this tailoring so that you spend more time on the site and thus they have more opportunities to share marketing with you and more information to collect and sell about you.
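
To make that mechanism concrete, here is a minimal toy sketch in Python. It is purely illustrative: the topic names, numbers, and scoring formula are my own assumptions, not Facebook’s actual system. It just shows the core idea that whatever you linger on most gets pushed to the top of your feed.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    text: str

# Hypothetical engagement history: seconds spent and interaction counts
# (likes, comments, shares) per topic, gathered from past behavior.
engagement_seconds = {"puppies": 40, "politics": 620, "cooking": 15}
interactions = {"puppies": 1, "politics": 9, "cooking": 0}

def score(post: Post) -> float:
    """Score a post by the user's past engagement with its topic.
    Time spent and interactions both count, whether the engagement
    was affectionate or an angry argument in the comments."""
    return engagement_seconds.get(post.topic, 0) + 30 * interactions.get(post.topic, 0)

feed = [
    Post("Aunt May", "puppies", "Look at this little guy!"),
    Post("Old Classmate", "politics", "You won't believe what they did now..."),
    Post("Friend", "cooking", "My sourdough finally worked."),
]

# Higher-engagement topics come first, regardless of how they made you feel.
for post in sorted(feed, key=score, reverse=True):
    print(f"{post.author}: {post.text}")
```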

The problem with these algorithms is that if you spend time reading disinformation, hate speech, or other malodorous material, Facebook will automatically show you more of the same. Facebook doesn’t know whether you like it or how it makes you feel; it just knows you spend time on it, and it likes it when you spend time on things. Many of us probably spend more time looking at and responding to things we don’t like than things we do like. As a guy who likes to debate and argue, I spend way more time with material I disagree with than material I agree with.

The harms here are twofold. First, Facebook’s algorithm inadvertently favors aggravating material to some degree, and this can impact the mental health of the reader. It can also make readers less happy with the Facebook experience and lead to them leaving the platform. That outmigration is minimized by the fact that many folks are attracted to alarming information; the evening news has been milking that cow for a long time.

The second issue is that Facebook’s social nature, combined with its algorithm, means Facebook is naturally going to encourage people to argue with one another on the platform. If I spend time arguing against some right-wing post, Facebook is going to feed me more right-wing posts. Since I like arguing, that’s not so bad for me, but most people don’t actually enjoy such interactions; they feel compelled to speak up when they see something objectionable and then fall into the trap of teaching Facebook how to aggravate them. This process of making money off rancor leads to greater division and animosity in society.

I don’t think that Facebook has done anything nefarious here. A system that gives people more of what they naturally engage with is not itself a bad thing. If you spend your time looking at cute puppies, philosophical treatises, or whatever wholesome thing you like, Facebook gives you more of that as well. Again, the real problem is us, the users, and our potentially rotten sense of taste or, at best, our propensity to wallow in the muck with good intentions. I absolutely don’t expect Facebook to try and make a better person out of me.

Where I think Facebook could make some improvement without cutting too badly into their bottom line is giving users more explicit control over the kind of content they see; a rough sketch follows below. Allow users to pick some of their own favored topics and keywords and feed those into the algorithm. Not only would users get a better-tailored experience, but advertisers would be better able to target their intended audiences.
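
As a rough illustration of that suggestion (entirely hypothetical; none of these names or numbers correspond to a real Facebook feature), stated preferences could be blended into the engagement score so the ranking shifts toward what people say they want, not just what they linger on.

```python
# Hypothetical: observed engagement per topic vs. the user's stated preferences.
engagement_score = {"politics": 890, "puppies": 70, "cooking": 15}
user_boosted = {"cooking"}   # topics the user asked to see more of
user_muted = {"politics"}    # topics the user asked to see less of

def adjusted_score(topic: str) -> float:
    """Blend explicit user preferences into the raw engagement score."""
    base = engagement_score.get(topic, 0)
    if topic in user_muted:
        return base * 0.1    # heavily demote muted topics
    if topic in user_boosted:
        return base + 500    # promote explicitly requested topics
    return base

# The ordering now favors what the user asked for over what merely hooked them.
for topic in sorted(engagement_score, key=adjusted_score, reverse=True):
    print(topic, round(adjusted_score(topic)))
```

A real system would be far more granular than a boost-or-mute switch, but even a crude knob like this would let users steer the feed while still giving advertisers clearer targeting signals.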

Topic 5: Privacy

Because I’ve grown up using computers and even programming them, I think I have more realistic expectations for privacy on an integrated computer network like the internet. Which is to say, I expect to have very little privacy. I expect that everything I do can be tracked, recorded, and analyzed. I’m also fortunate in that this doesn’t especially bother me. If information about me is used against me, I’m more inclined to be angry with the exploitation of that information than the fact it is available.

However, I’m well aware that not everyone feels the way I do or has the same technological awareness, and I respect that. Most users don’t give much thought to how software works, only to what it does for them. I also think that more people than not want to protect their personal information from anyone but those they explicitly share it with.

I do think that some degree of legislation is useful in this area. It is entirely possible that a company could sell information to some party that could use it to cause serious harm to consumers. I also think it is unreasonable for consumers to think they can get software for free from a for-profit company without being subject to marketing or the company making commercial use of their activity on the site.

I think the best path is to put some hard limits on what kind of information can be sold, and rules requiring companies to transparently disclose what information they sell and to whom. For consumers to make an informed decision about whether they want to risk this information sharing, they need to have a reasonable idea of exactly what is being shared. While individuals may have some challenge sorting that out, reporters probably can so long as this information is publicly available.

Topic 6: Monopoly

The whole notion of a monopoly has become more challenging as the scope of commerce has continued to expand. Facebook certainly has a size and reach that dwarfs most of its direct competitors in the social networking universe. Those that have tried to unseat their worldwide #1 position have failed. The fact that a social network’s most valuable asset is how many people are already using it means there would have to be a very dramatic event to unseat them. It’s more likely to come in some complete shift in technology or consumer interest than from a direct competitor.

That said, the traditional problems with monopolies don’t apply especially well to Facebook. From a consumer standpoint, they can’t exploit their pricing power since the product is free, and there are plenty of other social network products on the market that people can and do use; they just aren’t as popular. So consumer pricing and choice aren’t really affected by Facebook’s market share.

On the other side, they do have incredible strength as a provider of advertising services. They have one of the world’s largest audience pools and can demand significant rates for their targeted ads. That said, they only amount to around a 10% share of the digital marketing world, and digital marketing is a sub-section (if a large one) of the overall marketing industry. Companies absolutely have viable options other than paying Facebook for their marketing campaigns.

I think Facebook’s monopoly is most felt in its influence. You can’t really talk about social networking without talking about Facebook, and it’s the glue that binds many lives together over great distances today. It has become central to our discussions of politics and culture on many occasions. But this is not a traditional category of monopoly exploitation, so it is unlikely any government other than an authoritarian regime would step in on this basis.

Topic 7: Government Capitulation

Facebook has come under fire a number of times for capitulating to government demands in various countries at the expense of the privacy and security of its users. I think this makes for a good argument why people should be careful what they share on Facebook and should perhaps consider not using it if they fear government intrusion into their life. I don’t see it as a strong argument condemning Facebook’s corporate behavior. If you want to operate a business in a given country, you are subject to its laws and authority.

In the US, they don’t just roll over for the government on any and every occasion. They can and do put up a legal fight from time to time and negotiate over what they are willing or unwilling to reveal. I’m not trying to especially praise them for their efforts, but I’d say they behave about on par with other corporations in this regard. When operating in more hostile territory, they put up less of a fight, but there honestly aren’t many avenues of recourse they can pursue in those countries.

Topic 8: Facebook is Used by Criminals and Extremists

That fact alone is not really that significant. The internet as a whole, the phone systems, the cell phone networks, and the postal services are all also used by criminals and extremists. All of these services, Meta included, have rules against their use for criminal activities. They do police that, but like regular police, they don’t catch them all. If they made no effort they would certainly be negligent, but they have shown they don’t want criminals on Facebook if they can help it.

Extremism is trickier and faces many of the same issues as hate speech. It’s sometimes easier in that if a well-known gang or terrorist group has a page with its name on it, Facebook can probably find it and shut it down, at least temporarily. If they are there but hiding, it’s going to be much more difficult to identify them and verify they are indeed extremists. If they are pretty clearly extremists but not well known, then Facebook has to rely on its hate speech detection systems to identify them and shut them down, which is much easier said than done, but still worth an effort.

This is an area where Facebook has clear incentives to try and remove any group or individual falling into this category. It’s a narrow enough band of users that it isn’t likely to hurt their bottom line, they represent a significant liability threat, and governments tend to be very supportive of such efforts. The real challenges are: how much money are they willing to spend, how effective will their efforts be, and will it satisfy the government and its critics?

Topic 9: Psychological Concerns

This is a broad topic, as a wide range of psychological concerns relating to Facebook have been raised and examined. Cyberbullying, suicidal ideation, narcissism, depression, lost work time, addiction, and many more potential negatives have been identified as areas of concern. Each has its own dynamic, and there is variance in how much clinical evidence supports the concern.

I think it is an important area of study, and I think Facebook should seriously consider recommendations, based on clinical evidence, that can mitigate these potential harms. The bigger challenge comes when the recommendations are contrary to the nature of the product they are offering. It’s like asking a beer maker to make their beer without alcohol: while that is possible, it rather defeats the typical consumer interest in the product. Asking Facebook to try and get people to spend less time using Facebook is not going to work out.

I think this is an area where it is more effective to reach out to people suffering from these problems and help them find habits or alternatives to Facebook that are healthier for them. Facebook could help with that by granting some free access to their marketing system to groups looking to help such people. Facebook could also help by offering people built-in tools for self-moderation. A few already exist, but refining them could ultimately help Facebook.

Topic 10: General Company Malfeasance

Like many big companies, Meta comes under fire for issues like tax evasion, working conditions, environmental impact, shady business practices, and so on. I’m not that interested in looking at these in any detail, nor offering any defense or recommendations. Suffice to say, Meta should follow the law, pay their taxes, treat workers well, and seek to minimize their environmental impact. If they don’t, they should be held accountable.

The Conclusion

Overall, I like Facebook. I think many of its issues are the product of the darker side of human nature. When your platform is a public forum, the public are usually the ones causing the problems. That doesn’t absolve Facebook of any responsibility, but it does mean there are limits as to how effectively they can or should try to police our bad behavior.

As consumers, we bear some of the responsibility for what products we choose to use and how we choose to use them. Facebook has a responsibility to be forthright about what its product does and offers, as well as the responsibility to do what is practical to protect its users from one another’s worst impulses. What I don’t think we should expect of Facebook is to make us better human beings or to subvert the basic business model of their product for the sake of making people behave better.

Sigfried

2 Responses to “10 reasons you might hate Facebook in 2021”

  • Shea Aubuchon
    2 years ago

    There is no such thing as “hate speech”, there is only free speech or there isn’t.
    Chew on that one for a while.
    I can call anything you say that I don’t like or agree with hate speech.
    I can just keep moving that goal post forever.
    Is a 25 year old employee at any company qualified to establish what constitutes “disinformation”?
    Are you qualified?
    Who do you trust?
    Do you want your free access to information filtered by someone else or would you rather filter it yourself?
    How much “real” news turns out to be false? How much “disinformation” turns out to be true?
    In history, is it the good guys or the bad guys that don’t want to let their opposition speak out?
    Best wishes for you and yours

  • Thanks for replying with your thoughts Shea.

    I disagree that there is no such thing as Hate Speech. It is speech that expresses someone’s bigotry in an aggressive or pernicious fashion.
    Whether any given statement is hate speech is debatable, but the lines on which it is debated are whether it is bigoted and whether it is malicious/pernicious.

    For example, if someone said…
    “Those damned Xongers deserve to all be strung up and bleed out, the dirty fucks!” and Xongers is some racial/ethnic/religious/etc… group of people, then ya, anyone can see that is Hate Speech.

    On the other hand, if someone says…
    “I don’t trust Xongers myself.” That may be bigoted, but it’s not especially malicious. It might be somewhat pernicious. It’s probably not hate speech, but someone might argue it is.

    Finally something like…
    “I love Xongers, they are really cool folks.” This is obviously not hate speech since it is generally a positive and supportive statement about said group of people.

    “Do you want your free access to information filtered by someone else or would you rather filter it yourself?”

    Personally, I want to be able to learn what I want to learn, but I don’t mind censorship when it is a matter of maintaining a certain environment in a private venue or broad public space for the sake of making it pleasant to be there. I do mind systematic censorship that seeks to completely erase a given idea or prevent anyone from learning about it. Facebook is entertainment and a commercial venture, not a library or state institution. And bigotry, of all topics, is not one I think we are going to forget exists, nor if we could completely banish the idea do I think we’d be missing out on anything.

    “How much “real” news turns out to be false?”
    “Real” news is a bit nebulous, but professional journalists, more often than not, do take some pains to follow professional standards that help improve the quality of their reporting. I think there is a pretty insidious effort to obfuscate what journalism is so that people will simply believe anything they like rather than doing the hard work of critical thinking.

    “How much “disinformation” turns out to be true?”
    Not much generally. By definition, disinformation is information that has the intent of deceiving people. If it is true, it would be by accident.

    “In history, is it the good guys or the bad guys that don’t want to let their opposition speak out?”
    Both, though pretty much everyone thinks they are the good guys and the other guys are the bad guys. I think the people you can probably trust the most to tell you the truth are the ones that don’t claim to be the good guys and can admit their own faults.

    “Best wishes for you and yours”
    The same to you and yours, Shea, thanks for the civil discussion.