We are, if you believe the headlines, living in the midst of an unprecedented mental health crisis, exacerbated by the stress and isolation of the pandemic. According to the US Centers for Disease Control and Prevention, at the end of May, nearly 40% of American adults had experienced symptoms of depression or anxiety during the past month, with nearly a quarter filling a prescription for an antidepressant or other psychiatric drug. The rise in mental health problems has been particularly dramatic among the young: the New York Times reported in July that between 2017 and 2021, antidepressant use rose by 41% among American teenagers.
Often, such grim descriptions are paired with an upbeat solution: better access to mental health care, including psychiatric drugs. Given the understanding of mental health that has been promoted to the public for the past 40 years, this advice makes sense. We have come to understand that depression and other mental health problems are disorders of the brain. Psychiatric treatments fix this brain disorder, or at least induce changes in the brain that are helpful to the suffering person.
However, if the past is prologue to the future, this understanding will continue to make the problem worse. For the past 35 years, we have sought to spread “awareness” of depression and other mental health issues. This has led to a dramatic increase in the prescribing of antidepressants: in 1990, the CDC reported that fewer than 3% of adults had taken an antidepressant during the previous month — a number that rose to 13.2% in 2018 and 23.1% earlier this year. Yet the burden of depression in the United States and other Western countries has only risen during this time.
There is a fundamental reason why. We have organised our thinking, and care, around a narrative of medical progress — of effective drugs that fix chemical imbalances in the brain — that isn’t to be found in the scientific literature. And the chasm between what is told to the public and what is found in the scientific literature has led our societies astray.
Writers, philosophers, and doctors have long observed that melancholy — a time of sadness or grief — visits nearly everyone now and then. As the 17th-century physician Robert Burton advised in The Anatomy of Melancholy, “it is most absurd and ridiculous for any mortal man to look for a perpetual tenure of happiness in this life”. It was only when melancholy became a “habit” that it could be considered a “disease”.
This was the understanding that prevailed until the Eighties. People often suffered bouts of depression, particularly in response to setbacks in life, but such feelings were seen as normal. Depression was only a “disease” in cases where people stayed depressed for no apparent reason, and these were rare. Community surveys conducted in the Thirties and Forties in the United States found that fewer than one in a thousand adults suffered an episode of “clinical depression” each year. Among this group, most did not need to be hospitalised, and only a small minority became chronically ill.
These findings led experts at the National Institute of Mental Health (NIMH) in the Seventies to advise the public that depression was an episodic disorder, which would generally clear up on its own. Most depressive episodes, wrote Dean Schuyler, head of the depression section at the NIMH, “will run their course and terminate with virtually complete recovery without specific intervention”.
However, this understanding of depression was soon to disappear.
During the Seventies, leaders of the American Psychiatric Association (APA) worried that their field was in crisis. Critics argued that psychiatry functioned more as an agency of social control than as a medical discipline, that its diagnoses lacked validity, and that its brand of talk therapy, psychoanalysis, was no more effective than other therapies offered by psychologists and counsellors.
In response, American psychiatry decided to rebrand itself. The public needed to understand that psychiatrists were medical doctors who cared for patients with real diseases. Not only would this rebranding improve psychiatry’s public image, but it would also give psychiatrists a privileged place in the therapeutic marketplace. They had the power to prescribe drugs, while psychologists and counsellors did not.
The APA, when it published the third edition of its Diagnostic and Statistical Manual (DSM-III) in 1980, reconceptualised psychiatric disorders as diseases of the brain. The age-old distinction between ordinary depressive episodes and “clinical depression” was dropped, and both were lumped together as a single “disease”. Nancy Andreasen, a future editor of the American Journal of Psychiatry, set forth the tenets of this new psychiatry in her 1984 book, The Broken Brain. “The major psychiatric illnesses are diseases,” she wrote. “They should be considered medical illnesses just as diabetes, heart disease and cancer are.” With this understanding in mind, the APA quickly set out to market its new model of depression to the public.
It found an ally in the pharmaceutical companies, which were also eager to change the narrative. Erasing the distinction between ordinary depression and clinical depression promised to create a huge market for antidepressants. Pharmaceutical companies gave money to the APA to develop its PR machinery in the early Eighties, and then, in 1988, they provided funds to support an NIMH campaign, the Depression Awareness, Recognition and Treatment (DART) program, designed to sell the disease model to the public.
In anticipation of this campaign, the NIMH had conducted a survey of public attitudes about depression. Only 12% of Americans said they would take a pill — an antidepressant — to treat a depressive episode, while 78% said they “would live with it until it passed, confident that they could handle it on their own”. The purpose of DART was to relieve the public of this “misconception”. According to the NIMH, Americans needed to understand that depression was a “disorder” that regularly went “underdiagnosed and undertreated”. Absent treatment, it could become a “fatal disease”.
The public was also presented with a new — apparently scientific — theory of depression: that it was caused by a lack of serotonin in the brain. Thankfully, scientists had discovered a class of drugs, selective serotonin reuptake inhibitors (SSRIs), that fixed this chemical imbalance. Prozac and other SSRIs were heralded in the media as “breakthrough medications” that could not only fix depressed patients but make them feel “better than well”.
Untreated depression was now presented as a pressing public health concern. Most important, people were being trained to monitor their own emotions, and to treat sadness or emotional discomfort as symptoms of a disease requiring medical intervention.
The PR blitz worked. In a 2005 press release, the APA shared the “good news”: 75% of consumers now understood that “mental illnesses are usually caused by a chemical imbalance in the brain”.
The low-serotonin theory of depression arose in the Sixties from the discovery of how the first generation of antidepressants, the tricyclics and the monoamine oxidase inhibitors, altered normal brain function. Tricyclics blocked the reuptake of serotonin (a monoamine) from the synaptic cleft between neurons, while the monoamine oxidase inhibitors slowed its enzymatic breakdown; both left more serotonin active in the brain.
Once this “mechanism of action” was discovered, researchers hypothesised that perhaps depression was due to too little serotonin. However, when researchers ran experiments to test whether people diagnosed with depression, prior to being medicated, suffered from low serotonin, the results were disappointing. As early as 1984, NIMH investigators concluded that “elevations or decrements in the functioning of serotonergic systems per se are not likely to be associated with depression”.
Investigations into the low-serotonin theory continued, but none provided convincing evidence to support it, and in 1999, the APA, in the third edition of its Textbook of Psychiatry, declared the theory dead, writing that decades of research “has not confirmed the monoamine depletion hypothesis”.
These conclusions were never promoted to the public, and so, this past June, when British investigators published a review of the history of this research and found there was no evidence to support the low-serotonin theory of depression, their conclusions were reported as shocking. In fact, we have known as much for two decades.
The real story, however, is even worse. Antidepressants block the normal reuptake of serotonin from the synaptic cleft. The brain adapts to maintain its normal functioning: since the drugs raise serotonin levels, it dials down its own serotonergic machinery. In other words, antidepressants induce the very abnormality — a deficit in serotonergic function — hypothesised to cause depression in the first place.
Antidepressants, then, do not fix any known disorder. But, their defenders might counter, could they nonetheless help depressed people?
Here, too, the evidence is thin. In the world of “evidence-based” medicine, randomised controlled trials (RCTs), double-blind and placebo-controlled, are the gold standard for assessing a drug’s effectiveness. A recent meta-analysis of such trials determined that 15% of depressed patients treated with an antidepressant experience a short-term benefit beyond placebo; the remaining 85% are exposed to the adverse effects of the drugs without any such benefit.
Even those short-term results suggest a major problem with widespread use of antidepressants: six of seven patients experience the drugs’ side-effects without any corresponding benefit. The most common side-effect may be sexual dysfunction, which in some cases can last long after patients stop taking the drugs. But some patients can also suffer from a drug-induced worsening of their original symptoms.
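To make the arithmetic behind that “six of seven” figure explicit: a 15% rate of benefit beyond placebo corresponds to a “number needed to treat” of roughly seven (a standard way of expressing such trial results, though not a figure reported in the meta-analysis itself):

\[ \mathrm{NNT} = \frac{1}{0.15} \approx 6.7 \]

That is, for roughly every seven patients given an antidepressant, one gains a benefit beyond placebo, while the other six bear the drugs’ risks without it.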
Such drug-induced worsening has two notable elements. First, antidepressants triple the risk that a depressed patient, within 10 months of initial treatment, will turn manic and be diagnosed as bipolar, a much more severe disorder than depression. Second, over the long term, antidepressants increase the risk that a person will remain symptomatic and functionally impaired.
The latter worry showed up in the Seventies, not long after antidepressants were introduced. At that time, clinicians still had a memory of depressive episodes that regularly cleared up without the use of drugs, and several reported that patients treated with antidepressants were now relapsing more frequently than before. Epidemiological studies agreed. The third edition of the APA’s Textbook of Psychiatry, published in 1999, summed up the disappointing findings: Only about 15% of patients treated with antidepressants recover and are still well at the end of one year.
Studies conducted since then suggest that even that 15% recovery rate may be too high. In the largest antidepressant trial ever conducted, the STAR*D study, only 108 of the 4,041 patients who entered the trial remitted and remained well at the end of one year, a stay-well rate of 3%. The vast majority never remitted, remitted and then relapsed, or dropped out of the study. Meanwhile, a 2006 NIMH study of depressed patients who didn’t take antidepressants found that 85% recovered within one year, just as in the pre-antidepressant era.
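That stay-well rate follows directly from the numbers reported:

\[ \frac{108}{4041} \approx 0.027 \approx 3\% \]

a far cry from the 85% recovery rate observed among the unmedicated patients.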
Naturalistic studies of depressed patients regularly find that, over the long term, medicated patients are more likely to remain symptomatic and to become functionally impaired. These findings led Italian psychiatrist Giovanni Fava to propose, in a series of papers dating back to the Nineties, that antidepressants induce a biological change in the brain that makes patients more vulnerable to depression. As Rif El-Mallakh, an expert in mood disorders at the University of Louisville School of Medicine, put it in a 2011 paper: “A chronic and treatment-resistant depressive state is proposed to occur in individuals who are exposed to potent antagonist of serotonin reuptake pumps (i.e., SSRIs) for prolonged time periods.”
In other words, there is reason to believe that the mass prescription of antidepressants is making us, on the whole, more depressed. Indeed, the “economic burden” of depression — composed of workplace-related costs (absence from work), suicide-related costs, and direct-care costs — has steadily risen since the SSRIs came on the market. In 1990, it was calculated at $116 billion in inflation-adjusted terms. By 2020, it had nearly tripled to $326 billion.
Disability due to mood disorders has also risen. In community surveys conducted in 1991 and again in 2002, 30% of the adult population was found to suffer from an anxiety, mood, or substance disorder, based on DSM diagnostic criteria. However, while the prevalence of these disorders didn’t change, the percentage of people who got treated did, rising from 20% in 1991 to 33% in 2002. Over the same period, the number of American adults receiving a government disability payment due to a mood disorder rose from 292,000 to 940,000.
Following the publication of DSM-III in 1980, the public was told a story of a great advance in medicine. Research had found that depression was due to a chemical imbalance, which antidepressants fixed. We organised our thinking around that narrative: depression was a biological “disease” that required medical treatment. This false narrative is the root cause of our mental health crisis today.
The tragedy is that there is another, more optimistic narrative about depression that exists in the scientific literature. This narrative informs us that human beings are responsive to their environments, and that depressive episodes often arise in response to setbacks in life. Time, and finding ways to change one’s environment, regularly lead to a spontaneous remission of depressive feelings.
A society that wants to promote good “mental health” should strive first to create more nurturing environments — improving access to housing and childcare, and working toward a more equal distribution of financial resources. It should also favour, as a first response, holistic treatments for depression — diet, exercise, walks in nature, social engagements, and so forth — as these complement our natural capacity to recover.
Antidepressants could still serve as a useful tool. Their use would simply need to be informed by research that tells of their limited short-term efficacy and of their potential negative long-term effects. Doctors would also need to inform patients that these drugs do not fix a “chemical imbalance”. True informed consent would dramatically reduce the use of these drugs, and surely diminish prescribing habits that treat them as a go-to response.
Paradigm shifts do happen, and today’s mental health crisis is telling us that one is desperately needed. Forty years of the disease model of depression has left us sicker and unhappier than ever before. There is little reason to believe that more of the same will fix our problems, and plenty of reason to think it will continue to make them worse.
Source: UnHerd (https://unherd.com/)