How do I deal with the state of technology and culture now that I understand the neuroscience behind it?

I am still really struggling with this information: what to do with it, and how not to just be a crazy crackpot retreating from society.

If you’ve been following my posts lately, you’ve seen me talk a bit about themes of manipulation and the neuroscience of arousal and anxiety. The neuroscience underpinning these ideas has come to me from several sources, a prominent one being the book “Behave” by neuroscientist Robert Sapolsky. Sapolsky walks through the evidence gathered from fMRI studies and other brain imaging techniques, by which the activity of the brain has now been pretty well mapped. We do not have the technology to read the actual content of neural impulses, but we can measure which areas of the brain are active and when, and correlate that activity with behavioral and cognitive outputs. Some of these studies consist of showing someone an image or playing a sound and measuring which part of the brain lights up. Others consist of measuring nerve impulses and response times. Studies along this line of inquiry are what led me, a long time ago, to reject the idea of strong free will.
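
If you want a feel for what “correlate brain activity with a stimulus” even means, here is a deliberately toy sketch in Python. Everything in it is made up for illustration: real fMRI analysis involves hemodynamic response modeling, general linear models, and statistical corrections, but the basic logic of lining up a stimulus timeline against regional activity is the same idea.

```python
import numpy as np

# Toy illustration only: real fMRI pipelines convolve the stimulus with a
# hemodynamic response function and fit a general linear model per voxel.
# Here we just correlate a stimulus on/off timeline with each region's signal.

rng = np.random.default_rng(0)
n_scans = 200                                # hypothetical number of fMRI volumes
stimulus = np.tile([1] * 10 + [0] * 10, 10)  # blocks: image shown (1) vs. rest (0)

# Simulated activity for three hypothetical regions (all numbers are invented).
regions = {
    "visual_cortex": 0.8 * stimulus + rng.normal(0, 0.5, n_scans),  # tracks the image
    "motor_cortex":  rng.normal(0, 0.5, n_scans),                   # unrelated noise
    "amygdala":      0.3 * stimulus + rng.normal(0, 0.5, n_scans),  # weak response
}

# "What lights up": correlation between each region's signal and the stimulus.
for name, signal in regions.items():
    r = np.corrcoef(stimulus, signal)[0, 1]
    print(f"{name:15s} correlation with stimulus: {r:+.2f}")
```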

Consider this experiment: the subject is stimulated with a hot touch (“hand on a stove”) and the activation of nerves and brain centers is measured. The objective measurements show that the physical response, pulling the hand away from the threatening object, happens before the nerve signal has even had time to reach the brain. But when interviewed, the subject insists that they made a conscious choice to move their hand. This is a result you could replicate at home if you wanted to; it’s an objective, irrefutable truth that in at least some clear-cut situations, human beings behave without conscious input and then, after the fact, insist that they decided consciously. This is essentially my baseline experimental reason to reject the idea of strong free will, a phrase I use to refer to the idea that human beings act entirely out of voluntary choice, can therefore control their own behavior, and by extension are personally and morally responsible for their life outcomes. Because we often act before we are consciously aware of what we are doing, that worldview cannot be true.
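
To see why the timing alone settles this, here is a rough back-of-the-envelope calculation. All of the numbers are ballpark, textbook-style values I picked for illustration, not measurements from Sapolsky or any particular study; the only point is the order-of-magnitude gap between the reflex arc and the conscious route.

```python
# Back-of-the-envelope timing for the withdrawal-reflex argument.
# Every number here is a rough, illustrative assumption, not a measurement.

hand_to_spine_m = 0.8        # approximate nerve path length, hand to spinal cord
spine_to_brain_m = 0.6       # approximate path length, spinal cord to cortex
conduction_m_s = 50.0        # typical fast-fiber conduction velocity (m/s)
synaptic_delay_s = 0.005     # a few milliseconds lost crossing synapses
cortical_processing_s = 0.2  # rough time to consciously evaluate and decide

# Spinal reflex arc: hand -> spinal cord -> straight back down to the muscles.
reflex_s = (2 * hand_to_spine_m / conduction_m_s) + synaptic_delay_s

# Conscious route: hand -> spinal cord -> brain, think, then brain -> muscles.
conscious_s = (2 * (hand_to_spine_m + spine_to_brain_m) / conduction_m_s
               + synaptic_delay_s + cortical_processing_s)

print(f"Spinal reflex estimate:   {reflex_s * 1000:.0f} ms")     # tens of ms
print(f"Conscious-route estimate: {conscious_s * 1000:.0f} ms")  # hundreds of ms
```

Under these assumptions the hand is already moving long before the conscious route could even finish, which is exactly what the measurements show.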

So then we go deeper. Sapolsky’s book focuses primarily on the circumstances that stimulate activation of the amygdala and deactivation of the frontal cortex. What you need to know about these two brain structures is that the frontal cortex is essentially the center of both reason and empathy, while the amygdala is the “lizard brain” that acts impulsively and largely just reacts to stimuli unconsciously. You don’t think with your amygdala; it is where you act without thinking. But the amygdala also sends signals to the frontal cortex that influence it in turn. Activation of the amygdala is essentially responsible for all of our worst behaviors. Racism, sexism, and other latent biases all come through the amygdala. This is part of why you can’t just decide not to be racist, incidentally, but that’s another chapter. The amygdala leads us to make impulsive decisions, to behave emotionally rather than rationally. And it turns out that, to most corporations, all of that makes us better customers. That idea is where I basically lose touch with society.

I have come to realize that social media is designed around the premise that an activated amygdala makes a person a better consumer of advertising. This can be phrased another way: a person who is in a state of mental anxiety and distress, aka a bad mood, is a better customer for social media platforms. This isn’t a secret; even a quick Google search will turn up the published news articles revealing that Facebook was caught running, and even openly admitted to running, experiments on users to stimulate a state of anxiety. This is at the core of the algorithms that determine what content you see as you scroll, and who sees the content that you post and when. And I have come to realize that this is now in all social media and essentially every social app or platform that we interact with.

Ultimately, it’s about competing interests. You and the company do not have aligned interests, which is another way of saying that you don’t both want the same thing. When you go onto a social media platform, you want to get updates on the lives of your friends, be entertained, and share the stories and experiences of your life with others. But that is not what the owners of the platform want. What they want is to get other businesses to happily give them lots of money for ads, and having been on that side of things as well, I can lend you some surprising insights. First off, most ads are pay per click, so Facebook is heavily invested in not just showing you the ad but getting you to click on it.

What makes a person more likely to click on an ad? My advertising consultant told me, back around 2019, that the best keywords to use to get clicks on your ads aren’t anything to do with the content of the ads themselves or the attributes of your particular target customer – in my case, people who were out of work and had a disability – but instead terms like “hillary clinton” and “donald trump” that simply correlated with people who tended to interact with ads more. Why is that? Because those topics of discussion provoke anxiety. When you go on Facebook or Reddit or TikTok to discuss politics, or labor issues, or the various ways you’ve been mistreated, you are entering a state of heightened anxiety yourself, and you are also stimulating anxiety in the people interacting with your posts. Why does the algorithm promote this content, when so much of it is vocally against these companies’ interests? Because regardless of what the posts and comments say, often stuff decrying the very problems these companies are amplifying, those posts are effective at stimulating a toxic state of mind that makes people more receptive to advertising. And the companies know it, because they have the data showing that when people interact with a particular user on a group page, they become statistically more likely to click ads within the next few minutes. So the algorithm works to promote content from individuals who provoke anxiety in others, and invites trolls and bullies to be the first to comment on posts.
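
To make that incentive structure concrete, here is a purely hypothetical toy ranking function. Nothing in it comes from any real platform; every field name, weight, and number is something I invented for the sketch. It only illustrates the structural point: the moment predicted near-term ad revenue becomes part of the score, the provocative account beats the calm expert.

```python
from dataclasses import dataclass

# Purely hypothetical sketch of the incentive described above.
# No field name, weight, or signal here comes from any real platform.

@dataclass
class CandidatePost:
    author: str
    p_engagement: float     # predicted chance the viewer reacts or comments
    p_ad_click_lift: float  # predicted lift in ad clicks over the next few minutes
    avg_cpc_usd: float      # average revenue per ad click from this viewer

def feed_score(post: CandidatePost, revenue_weight: float = 5.0) -> float:
    """Score a post by predicted engagement plus expected near-term ad revenue."""
    expected_revenue = post.p_ad_click_lift * post.avg_cpc_usd
    return post.p_engagement + revenue_weight * expected_revenue

candidates = [
    CandidatePost("calm_expert",     p_engagement=0.10, p_ad_click_lift=0.01, avg_cpc_usd=0.50),
    CandidatePost("political_flame", p_engagement=0.25, p_ad_click_lift=0.08, avg_cpc_usd=0.50),
]

for post in sorted(candidates, key=feed_score, reverse=True):
    print(f"{post.author:15s} score = {feed_score(post):.3f}")
```

With these made-up numbers the flame-war account wins the feed slot, even though the calm expert is the one you actually came to hear from.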

This is of course the exact opposite of what you as a user want. When I go on a Facebook group, it’s often with the goal of finding an answer to an obscure question, in hopes that the large size of the group will help me connect with the obscure expert. But what happens instead is that the algorithm shows my post not to the expert with a track record of calmly and politely explaining the relevant ideas, but to a bombastic bully who has no meaningful advice to offer, because the platform’s data has shown that the bully does a good job of pushing other users into a mental state where they are more receptive to ads. So the parts of the experience that we as users, and the mental health community, consider the worst are actually the parts that the companies themselves are most interested in. The algorithm is working directly against your goals, actively trying to thwart you from having a positive experience, and ultimately set up, deliberately, to harm you.

It turns out, though, that it’s not just Facebook. It’s basically any piece of software where this paradigm could possibly be applied. Games, especially mobile games, have gone from being stress relievers to little devices programmed to actually increase your stress, because stress makes you more impulsive and more likely to either click the ad (they are all pay per click) or make some microtransaction. And it really is absolutely everywhere on your phone; damn near every app you can get these days has ads, microtransactions, or both. The whole industry adopted this rapidly, and if you talk to individual workers at these companies, most of them will say they have to because the rest of the industry is already doing it. Where else? Search engines. Retailers like Amazon. And now even in-person retailers are getting in on it, with a recent major newspaper article acknowledging that retailers have intentionally abandoned the old paradigm of making stores welcoming and inviting, which encouraged “lingering” and impulse buys born of boredom and comfort, in favor of the online, data-driven paradigm of triggering anxiety and exploiting the worst parts of behavioral neuroscience to make us better customers, and worse consumers from the individual standpoint. It is absolutely everywhere.

This leaves me in a pretty uncomfortable position. What can be done about this? Many people, most people really, just accept it. They say there’s nothing to be done, that it’s just the way of the world, so you might as well embrace it and try to get your piece of it while you can. And I can’t do that. But what do I do instead? There’s no way for me to get the message out. Ironically, the platforms don’t mind me talking about it, for one simple reason: by talking about it, I’m doing exactly what they want and getting people into a heightened state of anxiety.

Talking about it doesn’t fix it, not directly, because for most people there is no accessible alternative. We are on these platforms because we don’t know of anywhere else to go. How are you going to stop using Google? There aren’t a lot of good alternatives. DuckDuckGo may not do as much cookie tracking, but it’s still driven by a search algorithm that is easily manipulated by marketers. You aren’t going to get meaningfully better results there than on Google; on both you will still get pages and pages of low-quality chaff before you see any real quality content.

And quitting any of these things is just like quitting any other addiction, except that everyone around you is still deeply addicted and will actively use FOMO to lure you back. The toxic establishments are everywhere and easy to find, with prominent locations and giant signs along every road, while the less toxic alternatives are fewer and harder to find, and for the most part, reluctantly or not, they are engaging the same techniques on the same logic of “if we don’t do it our competitors will beat us.” And so it gets harder and harder every day to find mentally safe places to engage in basic commerce and the logistics of daily life, to keep in touch with others, and to stay entertained. It’s basically everywhere, pervading every corner of western civilization, in our pockets and hands, on the very devices I am using to compose this thought.

I’m just not sure what to do with this information besides quixotically trying my best to share it, and I know that ultimately I won’t ever win.