
Monday 14 December 2015

How to Buy Happiness: A Plan for 2016


Can Money Buy Happiness? Maybe.

We’ve all been told that it can't. This can seem true. You probably know people who have very little money and seem quite happy. You probably also know people who are wealthy and miserable. So having a big paycheck is no guarantee of happiness.

But let’s turn happiness from a simple state into a continuum, with extreme joy and contentment at the top end, and extreme misery at the bottom.

Purists may argue that positive and negative emotions are distinct phenomena. But measures of Quality of Life go from the 1st percentile (very displeased with one’s life) to the 100th (very pleased indeed). And all of us can understand statements like “I don’t feel as bad as I did yesterday, so I guess that’s better” and “I’m not the happiest ever, but I’m still pretty happy.” We instinctively grasp that how we feel can fall along a line from negative to positive.

If happiness is a continuum, then it’s like a two-way street. Doing something may lift us higher or drop us lower. If we are dubious about the idea that having money (or doing something in particular with it) might lift us upward, we can turn the question around:

“Is there any way of making yourself LESS happy by handling money in a certain way?”

At this point, suddenly everyone comes on board. Yes, there are all kinds of ways of making ourselves miserable with money. Spend it all. Go into debt. Buy huge piles of possessions that give nothing back. Waste it in the casino. Purchase intoxicants and spend life in a haze. Plunk it down on a huge mortgage that will enslave you for life. Easy.

So if using money one way can lower our position on the happiness scale, then surely doing something else can raise it as well. At minimum, we can retain our existing position by avoiding those mood-lowering traps. So although we might not be able to guarantee lasting, ecstatic, continuous happiness, we can certainly have an influence on ourselves.

Psychologist Daniel Gilbert (author of Stumbling on Happiness) has pointed out that most of our decisions are based, at least in part, on the impact they will have on our future mood.
  • If I get this car, I’ll enjoy it.
  • If I order the ravioli, it’ll be delicious.
  • If I get married I will feel fulfilled.
  • If I achieve what my father wants for me, I will feel successful.
The problem is that we are astonishingly poor predictors of future happiness. We grasp for things that will make us miserable, and neglect things that experience has shown will actually make us happier in the long run.

The result, when it comes to money, is that if we spend based on instinct and impulse, we are almost certain to be less happy than we could be. If we were to adopt a more considered approach, we could boost ourselves on that life satisfaction scale.

There are two nice aspects to this idea:

  1. Considerable research has been done that examines the actual impact of spending behavior on subsequent happiness.
  2. We have all, unintentionally, been conducting happiness research on ourselves all our lives – by making decisions and then feeling something afterward. We just haven’t analyzed the data.

So if we look at what actually seems to make most people happy, and what has, in fact, made us happy in the past, we can guide our behavior more effectively.

In the online course How to Buy Happiness we consider various strategies to influence life satisfaction through spending. Some of this is based on the existing research. Some of it is based on my years of discussing these issues with clients. And much of it will be based on your own experience – the happiness experiments you have been running on yourself all along.

Throughout the course, one principle dominates:

In order to buy happiness, you have to buy happiness.

Sounds profound, doesn’t it? What I mean by this is that you cannot simply scatter money around and expect to get happiness as a side effect, like a prize at the bottom of a cereal box. You actually have to target your happiness more directly. This takes thought and planning.

Does all this sound a bit, umm, selfish?

It should. It does to me. But if we look at people who have the greatest life satisfaction, they tend not to be particularly selfish. They are active, engaged in the world, and generally pursuing pro-social ends. They use their financial resources in the pursuit – and fulfillment – of their values.

My experience has been that people who go through this process do indeed raise their satisfaction with life. But they also buy fewer objects and feel better about the money they spend. Rather than spending on their temptations, they spend on their aspirations.

So what happens in the course?

You’ll learn about the research and hear lots of examples. You will comb through your expenses, your bills, your debts, and your own past to develop a personalized plan for a better life. Along the way you'll work through a series of exercises in the accompanying 60-page workbook (included with the course).

30 Day Guarantee

The host website offers a 30-day complete money back guarantee, no questions asked.

Free Previews

In addition to the preview above, some of the lectures are also available as free previews, so you can get a sense of the course before you buy. Just use the link below to access the discounted version of the course.

Discounts

The regular cost of this course is $130 USD. You can get it for just $50 USD by using the coupon code “happiness50.” For that discount, click here. The link will take you to the site, where you can see the additional preview videos and decide whether to go any further.

But as part of the launch of this course, we're offering a deeper discount – to just $10 USD – using the coupon code "happiness10." The number of spaces on this coupon is limited, and it will stop working January 1. To get that discount, click here.

Once at the site, you can read more about the course, preview some of the videos, and decide whether you want to take it. The price, should you go on to purchase the course, will be reduced automatically.

You'll get a series of over 25 short videos, 5 to 15 minutes long, plus a 60-page downloadable PDF exercise book to help you apply the ideas in your own life.

The Lectures

Want a bit more indication of what’s in the course? Here are the lecture titles:

Section 1: Happiness swings both ways
Can money really buy happiness?
The nature of happiness: A behavioural control system
The relationship between income and life satisfaction
Can money buy misery? Yes!

Section 2: Purchasing the future: Why do we buy?
The onward arrow: Every purchase is instrumental
You’ve been conducting happiness experiments all your life
Planning the future by mining the past

Section 3: The price of slavery
Your balance of assets and liabilities
Taking a look at your ballast: Debt and unhappiness
Swimming into the light: A path out of debt

Section 4: A dose of reality: Income versus DISPOSABLE income
Andre takes a hit: Income versus disposable income
Your turn: Calculating your own disposable income
Pricing your purchases in work hours

Section 5: How to give yourself a raise
Andre doubles his disposable income
Finding the Easter eggs: Increasing your OWN disposable income

Section 6: How to change your life
Finding north: Defining your Ultimate Goals
SMART Immediate Goals: The steps on the path

Section 7: Buying happiness without buying anything
Your single biggest happiness purchase
Pay yourself first: Budgets do not work!
Freedom can be bought

Section 8: Buying happiness with real money
Spend the most on things you hate
Luxury at last

Section 9: Non-material purchases: How to spend money with no object
Savings: How to build up a happiness-inducing nest egg
Experiences versus objects: Why events induce more happiness than things
Voluntary tax: The charitable sector as a source of your own happiness
Buying personal enhancements: The value of skill-building

Section 10: Check, please!
Your five year plan
Live the life you envision

Monday 7 December 2015

Why I Have (Mostly) Given Up on Diagnosis

The Professional's Big Book of Stories?
Every year about this time I review my template file for new client notes. It has blank sections for name, presenting concern, history, plan, and a number of other categories.

This year I found myself staring at it, considering whether a revision was in order. And the category that leapt out at me was “Diagnosis.” The truth is, I seldom use it any more.

Once, that part of the template was more elaborate. I had space for all 5 diagnostic axes of DSM-IV, and sometimes convinced myself that by filling them in I was doing something useful. That didn’t last long – at least not for the majority of people I see.

I dropped the detail years ago, leaving only that “Diagnosis” heading. When DSM-5 came out it seemed that I had been prescient. The axes were gone. The new Bible of immutable human nature had been revealed, and the axes were not a part of it. The nature of disease had, once more, changed overnight.

Maybe new copies of the DSM should come with an “Orwell pill” that erases memories of previous editions. This might help older clinicians like me forget that things were ever different, and we would more readily leap onboard with a fervent belief in the new revelation.

It’s hard for a psychologist not to diagnose. In many jurisdictions, diagnosis is one of the few controlled acts that distinguish our profession from other counseling professions: We’re allowed to do it, they are not. In reality, of course, they do it all the time. We say “This individual suffers from panic disorder.” They say “This individual reports symptoms that appear to meet criteria for panic disorder.”

Nevertheless, it’s an important distinction. Somehow. If it’s your one Special Power, it’s hard not to use it.

What’s diagnosis for?

Definitions vary, but the core of diagnosis is the identification of the character and etiology of (in this context) a disorder. (Quibble if you wish.) The essential point of diagnosis is to guide action. By knowing what you are faced with, you may know what to do about it.

Diagnosis is enormously useful in most of medicine. A patient might present with a host of symptoms – perhaps including fever. Diagnosis might attempt to determine whether the person suffers from a viral or bacterial infection. If bacterial, an antibiotic might be effective. If viral, the antibiotic will be a waste of time. Diagnosis takes a host of symptoms and creates a knife-edge dichotomy: yes or no, this or that. If done correctly, the side of the line on which one falls can make a huge difference in the treatment used and the outcome expected.

Diagnosis becomes more difficult when problems are continuous rather than dichotomous. Instead of “Does she have mumps or does she not?” the question is “Are the symptoms severe enough to merit a diagnosis of depression?” In the first case, we are making an inference about the underlying reality of the disorder. In the second case, we are drawing an artificial line, much like the lines defining different time zones. We could move the line a few kilometers to the west or east, and it would be meaningless for someone to argue that we had done it incorrectly.

In striving to emulate the rest of medicine, psychiatry and the other mental health professions saw diagnosis as essential. In many ways this was true.

  • In order for a study on the effective treatment of OCD in Baltimore to be relevant to the treatment of patients in Edinburgh, we had to know we were talking about the same thing. 
  • In order to apportion scarce mental health resources, we had to have a systematic way of admitting patients to treatment – it isn’t fair for the primary determinant of whether one gains access to a mental health center to be which side of Oak Street one lives on. 
  • And indeed, there are a few conditions – very few – that seem genuinely and sharply distinct from others. An example from the 20th century was tertiary syphilis – a condition that might, to the untrained eye, resemble other conditions but had an entirely distinct cause.

There may be others. Currently a great deal of work is looking at the genetic markers for disorders like schizophrenia. Although for the most part these seem only to be hints of the level of risk for a disorder, it is conceivable that distinct causes will be discovered.

As well, in some cases we may not have sliced the pie thinly enough. Many people suspect, for example, that one problem with Major Depression is that we are looking at a cluster of disorders rather than a single entity. If we could identify distinct types we might be able to select treatments and predict outcomes better than we can now.

In clinical settings, however, the main point of diagnosis is utility. If we take all the complexity of the human life sitting before us and boil it down to a nice clear label, this will tell us what to do.

The trouble is, it almost never does this.

I frequently get referrals of clients who already have a diagnosis. Setting aside the fact that the diagnosis is often incorrect (not surprisingly, having usually been made in the context of a five-minute consultation), this tells me little. In order to decide what course treatment should take, I need to take all the detail back out of the box to see the complexity again. What’s going on in the person’s life? When did it all start? Which of the symptoms of that disorder does the client actually have? What thoughts are they having as they exhibit those symptoms? What do they think is going on?

We are indoctrinated in the church of diagnosis so firmly that it took an inordinately long time for me to begin questioning what I was doing. I would see the client, pull up my intake template, pull out my DSM, turn to the disorder that seemed to describe the person most closely, and puzzle through whether they had the requisite “5 of 9,” or “6 of 11,” or “A, B, and 3 of the 5 symptoms for C” to meet criteria. I’d put down the answer and that would be the last time I looked at it.

As a tool for guiding treatment, diagnosis was the ultimate null entity. A placebo, if you like.

True believers are infuriated by this. Take even a simple case, they say. Your client is too afraid to get on a plane. What are they afraid of? If it’s that the plane will crash, they have 300.29, Specific Phobia. If it’s that they will have a panic attack, and these occur in other settings as well, then it’s 300.01, Panic Disorder. This makes a difference! In the one case you’ll look at the overprediction of catastrophe; in the other you’ll do panic treatment. A classic knife-edge diagnostic distinction.

Well, yes. But it’s not the diagnosis that tells you this. It’s your assessment. Take away the assigning of the number, and you still have all the information on which your diagnosis was based. In fact you have more, because the whole point of diagnosis is to shave away the detail to leave you with a nice clear label. In order to treat the client you have to reclaim all the shavings and flesh out your understanding.

So you spend all that time packing things into a box, only to open up the box and take everything back out in order to decide what to do. The diagnosis provides you with almost no guidance.

What about that cutoff line? In addition to asking “this disorder or that disorder,” diagnosis attempts to declare “disorder or no disorder.” In my work, the cutoff is virtually irrelevant. If someone has OCD-ish symptoms and meets criteria, I’ll look at the details and work with them on the OCD – maybe with exposure and response prevention (ERP). If they have OCD-ish symptoms but fall short of the full diagnostic criteria I’ll do exactly the same thing. No one has ever shown that the diagnostic line has magical qualities: people who fall on one side respond to ERP; people on the other do not. In both cases I’ll point out that therapy requires a lot of work, and that it is up to them whether they wish to invest the effort in it.

Some people argue that my stance comes from having a psychotherapy practice. Medications, it is sometimes argued, have specific neurochemical effects and so identifying the underlying pathology via diagnosis is critical. (I am setting up a bit of a straw man, here, admittedly – given the last 10 years of psychopharmacology almost no one argues that diagnosis reveals specific neurochemical underpinnings anymore.)

In practice, however, sharp diagnostic lines do not often dictate prescribing practice. Antipsychotics, once given almost exclusively to people with psychotic-spectrum disorders (hence the name), are now as ubiquitous as Jelly Tots. Antidepressants are routinely prescribed to patients who do not meet diagnostic criteria for major depression (or any psychiatric disorder), despite the lack of evidence of efficacy of these medications for subclinical symptoms. (The relatively poor evidence of efficacy for appropriately diagnosed mild-to-moderate depression is another concern.)

So what? Is there a downside to diagnosis?

One problem with diagnosis is that we spend scarce clinical time performing an act that often has little value in structuring treatment. But there are others.

Diagnosis can shape clients’ self-perceptions. Sometimes this can be positive. Certain clients learn that they have an identifiable disorder and sigh with relief. “So I’m not just weird – and there are other people like me.” 

But diagnosis can have the reverse effect as well. It can draw a firm line dividing the person from the rest of humanity. “So I’m different from mentally healthy people. They have intact minds and I do not.” In my experience diagnosis more often has an alienating effect on clients than a soothing one.

If I sense that a client will feel better knowing the name for their problem, I have no problem giving it. “Yes, these are all symptoms of depression.” But I make it clear that the dividing line is neither important nor indelible. “Just as many of us will have symptoms of a cold which then go away, we will work to bring your symptoms down to the point where you are not in a depression.” I attempt to intervene if I see the label becoming welded to the person’s sense of self – for example, if they begin referring to themselves as “a depressive.” The more they incorporate the symptoms into their vision of who they are, the more recovery will involve “killing” a part of the self or becoming a new and strange person.

This me-versus-them distinction might be inevitable if it were based on a clear reality: “Yes, you are infected with Hepatitis C and most people are not.” But again, most psychiatric diagnosis is defined by a consensus agreement of committee members around a boardroom table, not by a blood test. Most diagnosis involves the drawing of artificial and rather arbitrary lines. But it becomes a psychological reality in the mind of the person diagnosed. “The problem,” as a colleague once said to me, “is that people think these disorders are actually REAL.”

Well, the symptoms are real. The distress is real. In many cases the need for treatment is real. But the label itself takes on a reality that is often a barrier to improvement rather than an aid.

Furthermore, diagnoses find their way into the clinical record. Once a label is down on paper it can influence the outcome of court cases, career aspirations, and the quality and nature of medical care. Many have noted the chill that a diagnosis of “Borderline Personality Disorder” can give to subsequent providers, whether or not it is accurate. Similar unintended effects can happen with depression, anxiety disorders, or other problems.  This can be entirely appropriate: if an airline pilot is in the throes of major depression we want to know this. Capriciously given, however, or assigned as a routine part of a clinical consultation, a diagnosis can have long-lasting and damaging effects that outweigh any beneficial elements of the encounter.

Clients themselves are often unaware of the potential long-term difficulties associated with receiving a diagnosis. It is not uncommon for a university student to petition me for a diagnosis of ADD or anxiety disorder to be provided to examination staff. Except in extreme circumstances I have become very resistant to these requests. It is a service provided readily by many practitioners, but is not one I am obliged to offer as part of my work – and I have seen too many unintended negative consequences of casual diagnosis.

Some people have said to me, “Isn’t there an ethical requirement to assign a formal diagnosis prior to conducting therapy?” Actually, there isn’t. There is a requirement to conduct a proper assessment to see what’s going on. But the assigning of a label is optional.

First, do no harm. If I am doing something that runs a significant risk of causing harm to a client, I had better know that the likely benefits of such an act outweigh those risks. In the case of diagnosis the benefits have become steadily less apparent to me, and the downside steadily more visible.

Partly I have become less enthusiastic about diagnosis because I fail to see its usefulness in most circumstances. Partly it’s because I see it as a potential ethical failing.

Maybe we should be trying to understand our clients more, and oversimplify them less. Just a thought.

Sunday 29 November 2015

A repost: Gifts for the Needless

This is a repost of a Black Friday-ish article from a couple of years ago. The ideas remain valid, and the images of Black Friday malls only reinforce the message.

"Everything is better in Gears of War 3... Four player co-op, new multiplayer modes, bajillions of unlockables, retrolancers, stuff that blows up, and perfect controls make this a top pick for the holiday season." - Metro News, Nov 25-27 2011, written with no apparent sense of irony.

Recently I posted some survival strategies for the holidays, and I mentioned that one of the biggest stresses for some people is trying to decide what to get for the person who has everything they need.
How about giving a mountain?

People often tell me they like the parties, the dinners, the decorations, and even the weather at Christmas. They just don’t like the shopping. They don’t know what to get, the people on their list don’t really need anything, and they’re uncomfortably aware of the discrepancy between the supposed point of Christmas and what it has become. We are one of the wealthiest societies the planet has ever known. The last thing most of us need is more stuff.

So stop buying people objects they do not need or, in most cases, want.

“Oh, but I have to give them something,” you say.

Fine. Give them eye surgery.

Not on them. On someone who needs it.

It’s possible to give great gifts to people who already have everything. Because the one thing they don’t have is a world in which others aren’t suffering.

What if you could give someone an HIV clinic, or an urban greenbelt, or a passport for a border-crossing bear, or eye exams for an entire village, or an acre of Canadian habitat, or an elementary school education, or a microfinance bank? Wouldn’t that be better than yet another sweater they won’t wear?

An increasing number of people are taking the opportunity of Christmas to give gifts that fit with their values and that have greater meaning than a new cordless drill.

Here are just a few suggestions. Your values and preferences may vary, so investigate your favourite organizations to see what they offer.

The Nature Conservancy

When you give a gift they send the recipient a calendar and certificate.  For $40 you can give a gift representing an acre of habitat. Larger donations represent habitat areas for more wide-roaming species ranging from owls ($55) to caribou ($400). You can also give any amount of money to support the activities of the Conservancy. www.natureconservancy.ca

Trans-Himalayan Aid Society

TRAS, a local Vancouver charity, funds projects in India, Nepal, and Tibet, including child education sponsorships at a variety of schools in the region. www.tras.ca

Seva Canada

SEVA focuses on the prevention and cure of blindness in the third world. They conduct educational, preventive, and surgical programs to deal with treatable blindness (the most common cause of which is cataract).  For $25 you can provide eye exams for 25 children, for $50 you can pay for cataract surgery, and for $150 you can pay for eye surgery for a child. www.seva.ca

The Stephen Lewis Foundation

SLF supports community-level organizations focused on HIV/AIDS in Africa by providing care and support to women, orphans, grandmothers, and people living with HIV/AIDS. Stephen Lewis is the former Special Envoy for HIV/AIDS in Africa for the United Nations. You can give gifts of any amount, and you can order Christmas cards from the organization. www.stephenlewisfoundation.org

David Suzuki Foundation

An environmental organization, DSF engages in scientific, educational, and advocacy-related activities to highlight a wide variety of issues. This year they have a tongue-in-cheek set of symbolic gifts based on the idea that the North Pole (site of a certain workshop) is melting. You can buy water wings for reindeer ($20), elf-sized hockey sticks ($20), an abominable snowmaker ($50), and a variety of e-Cards (ranging from Critter Passports to Green Belts) that help fund the organization’s activities. www.davidsuzuki.org

Kiva

You’ve heard about microcredit in developing countries. You may not know that you can be a microcredit bank yourself. Kiva allows you to deposit money with them, and then issue loans of $25 or more to projects and individuals of your own choosing. You look through the photographs and stories of people seeking funding, and simply click to loan a specific amount to the projects you want to support. Kiva sends you updates on the repayment of these loans. When your loan is repaid, you can lend the money to another project. This is a perpetual gift that allows you to help any number of people over time. www.kiva.org

Canada Helps

Want to contribute to a group not on the list? The website CanadaHelps.org makes it easy to contribute to any registered charity in Canada. You can browse the available charities and donate right on the site. You can also create your own “Cause Wish List” that others can look at when they want to give you a gift – kind of like a wedding registry. www.canadahelps.org

*     *     *

Don’t like any of these options? No problem. They’re just examples. There are thousands of ways of giving gifts to those who have no real needs – gifts that can benefit people and causes that fit with your values and those of your recipient.

What easier last-minute gift can you think of?

Online Course

The holiday season seems like a great time to take another look at our spending and finances, and to ask ourselves whether we might be using our money unwisely. To assist, PsychologySalon has developed a brand-new online course called How to Buy Happiness. There are more than 25 brief lectures and a guidebook of 60 pages to help you apply the ideas in your own life. The preview is below.

For over 60% off the regular course fee of $130 USD, use coupon code LAUNCH50 when you visit our host site, here. (In fact, this link takes you directly to the discounted fee.)

We also have courses for professionals and for the public entitled UnDoing Depression, What Is Depression, What Causes Depression, Diagnosing Depression, Cognitive Behavioral Group Treatment of Depression, and Breathing Made Easy. For the full list with previews and substantial discounts, visit us at the Courses page of the Changeways Clinic website.

Wednesday 29 July 2015

Should Physicians Read Journals? Given Current Standards, Maybe Not.

Hmm - no journals. Maybe that's best, for now.
The image is so familiar it is a stereotype: The physician’s desk, piled high with copies of medical journals, where she or he reads the latest research updates between patients. Medical science, it is said, progresses so quickly that if practitioners do not keep up their knowledge base will be obsolete within five years.

But is the reading of journals useful? Might it inculcate as much misinformation as knowledge? Is the knowledge gained worthwhile?

Much has been written about the flaws of individual studies. In this blog I recently focused on an example: The infamous Study 329 on the use of paroxetine for adolescent depression. Reading studies such as these, with their swapped outcomes, hidden side effects, and often shockingly biased interpretations of data, may produce only misplaced beliefs in readers, and may actually result in less competent practice. If the physician reads only the summary abstract – as many do, being strapped for time – there may be little relationship between the reported outcomes and what the data actually say.

Imagine that such problems did not exist, however. Imagine that we live in a never-never land of balanced and dispassionate reporting of study results, that published work is competently done, that all outcomes are clear. This is hard to do, because it is so far from current reality. Could there still be a problem?

Well, yes. Several. But let’s just focus on one: Publication bias.

Most drugs and medical procedures are evaluated in multiple trials. Indeed, a single trial is seldom, if ever, sufficient for a drug to be approved for use. Some trials are reported in the literature; others are not.

Imagine that you are a particularly diligent physician who reads all of the journals relevant to your field. Drug Z appears in four studies, and in all four it outperforms the inert placebo administered to the comparison group. Sounds good. Good enough, perhaps, to influence your practice. Here is a reliable treatment that generally seems to work very well.

The problem is that there are 10 trials of Drug Z, not 4. You can’t read the remaining six, because they have never been published. This wouldn’t be a problem if publishing were something of a random process. Put 10 trials in a bag, then pull out four of them and print them: you will probably have something vaguely resembling a representative sample.

It has been amply demonstrated that this is not how publishing works, however. Submit two studies: one showing that Drug Z works, and one showing no difference from placebo. The study finding a difference between groups is much more likely to see print. Journals like publishing promising findings, not failures.

This understates the problem, however, because the entities carrying out the research have a vested interest (a clear, documented, and obvious conflict of interest) in seeing supportive studies published and negative trials suppressed. So the more accurate picture is that we give our bag of 10 studies to a person (or, say, a pharmaceutical company) who wants Drug Z to look good, turn our backs while they read them all over and hide the ones they don’t like under a mat, then pick four from the rest to submit to journals.
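To make the arithmetic of that selection concrete, here is a toy simulation – a sketch only, with every number invented for illustration. Drug Z has no true effect, ten trials are run, and the sponsor submits only the four with the most flattering results:

```python
import numpy as np

rng = np.random.default_rng(42)

# Ten trials of a drug with NO true effect: in each trial, 50 patients
# get the drug and 50 get placebo, and outcomes are pure noise.
n_trials, n_per_arm = 10, 50
effects = []
for _ in range(n_trials):
    drug = rng.normal(0.0, 1.0, n_per_arm)      # standardized outcomes
    placebo = rng.normal(0.0, 1.0, n_per_arm)
    effects.append(drug.mean() - placebo.mean())
effects = np.array(effects)

# "Pull four from the bag" -- but not at random: only the four trials
# showing the largest apparent benefit are submitted for publication.
published = np.sort(effects)[-4:]

print(f"Mean effect, all 10 trials:      {effects.mean():+.2f}")    # near zero
print(f"Mean effect, 4 published trials: {published.mean():+.2f}")  # looks good
```

Re-run it with any seed: the full bag averages out near zero, while the published subset reliably shows an apparent benefit.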

Now switch roles and become a practicing physician trying to do your best for your patients. You read the published literature and attempt to get an impression of the overall trend in results for Drug Z. You never hear the results of the negative trials and don’t even know the studies exist. As far as you are concerned, four trials have been conducted on Drug Z and the results are unanimous. Needless to say, you start including it in your prescribing habits, happy that you have been keeping up with recent developments.

Psychology is, to a great extent, the science of the disconnect between external reality and internal representations of that reality. Even with a perfectly balanced presentation of data, biases and distortions are bound to develop in any human mind. The last three patients to whom you gave Vitamin C recovered from their colds immediately, and as a result you have developed a gut conviction of its efficacy.

Added to the problem of fallible human information processing, however, is a system of medical reporting that introduces extreme distortions in research before practitioners ever set eyes on it. In such a situation, erroneous convictions about treatment efficacy are inevitable. It is difficult to see how a system such as this could lead to reliable enhancements in practice – but easy to see how a reader could be misled by organizations that deliberately strive to slant the impression the reader gets.

So should physicians be reading journal accounts of pharmaceutical trials? It is difficult to see the worth in such an exercise – at least, as the journals operate currently. It is even possible that doing so will lead to decrements rather than improvements in clinical practice.

Is there a solution? Yes, an obvious one. Organizations carrying out clinical trials should have to register their study with journals prior to collecting the data, with a clear commitment to publish the results regardless of outcome. This wouldn’t solve the problems of distortion in the write-up or the downplaying of side effects, but it would help significantly with the publication bias problem.

If this is likely to help, why haven’t the journals already pledged to do this? Well, um, they did. Years ago. And in 2007 the practice was put into law in the United States. And then…very little changed. Journals have gone on publishing trials that were never registered, and null trials have continued to go missing.

A hopeful development

For many years this problem has been noted amongst scientists and practitioners in medicine and other health disciplines (clinical psychology is no less prone to this type of concern). More recently, it has gone beyond the health science nerd community and has begun to seep into the public consciousness. Medicine and other health disciplines have begun to feel the pinch of appropriate skepticism and disrepute.

As is often the case, shame motivates where ethics fail. With public exposure we may see improvements in the pipeline from laboratory to clinical practice.

One of the chief proponents of change has been Ben Goldacre, British physician, science writer (the excellent Bad Pharma, among others), and medical gadfly. He is one of the driving forces behind alltrials.net, a website devoted to changing publication practice in the medical field. Here is a sample of his style, in a TED talk:


And the word is gradually seeping out. This week there are articles in The Economist (click to see this lovely piece of reporting) and elsewhere on the problem. As exposure continues, we can expect that the worm of shame may begin to do its work.

Is this important?

All of this returns us to two familiar questions:

  • Are the lives of people with health concerns important?
  • Does the field of healthcare have any pretensions to being based on evidence?

If the answer to these two questions is “No,” then poor research, poor reporting, and poor practice are no great problem – though it would probably be more honest if we stopped pretending that healthcare believed in science or in the improvement of human welfare, and treated it instead as a simple revenue-generating business. That's not why I got into healthcare (via the allied and every bit as fallible field of psychology), but without a firm adherence to the principles of science it is difficult to persuade skeptics that it is anything else.

If the answer to either (or both) of these questions is “Yes,” however, then the current state of affairs is clearly unsatisfactory and needs considerable change – not just a pledge designed to calm the waters, but an actual commitment to responsible research and the balanced reporting of evidence.

Tuesday 7 April 2015

Publication Bias and Meta-Analyses: Tainting the Gold Standard with Lead

Cute, yes. But effective?
As Ben Goldacre notes in his excellent book Bad Pharma, for decades the gold standard for medical evidence was the review article - an essay looking at most or (hopefully) all of the research on a particular question and trying to divine a general trend in the data toward some conclusion ("therapy X seems to be good for condition Y," for example).

More recently, the format of review articles has shifted - at least where the questions addressed have lent themselves to the new style. The idea has been to look at the original data for all of the studies available, and in effect reanalyze them as though the research participants were all taking part in one gigantic study. By increasing the number of data points and averaging across the vagaries of different studies, a clearer finding might emerge.

The meta-analysis has gone on to be revered as a strategy for advancing healthcare. It has vulnerabilities, of course:

  • It depends on the availability of a number of original studies.
  • It can be distorted by a particularly strong result in one study with a lot of participants.
  • It can only be as strong as the research design of its constituent parts.

Nevertheless, if there are a number of well-designed studies with roughly similar formats addressing a similar question, the meta-analysis can provide a balanced, weighted result that points nicely toward treatment selection decisions.

But how are meta-analyses affected by unpublished studies? 

In my last post I discussed how a publication bias (most commonly, a bias against publishing negative results) leads to a situation in the literature roughly equivalent to reporting only the participants who benefited from a treatment - and slipping under the rug the data from those who did not. And in fact there is a parallel problem for meta-analyses.

Imagine that we want to evaluate the effectiveness of a radical new therapy in which depressed individuals talk about their relationships with their pets to the therapist. I don't practice this form of therapy myself, you'll be happy to know, but I'm sure someone does. Call it "Talking About Cats Therapy," or TACT. Studies examining it compare participants' mood improvements from pre- to post-therapy with the improvement seen in a placebo therapy (PT; let's make it a sugar pill, for simplicity's sake, though you'd generally want something that looks more like the treatment being tested).

We look at the published literature and find that there are six published studies. By an amazing coincidence, all six had the same number of participants (100; 50 in each condition), roughly similar outcomes (TACT participants improved on average 4 points more on the Beck Depression Inventory than PT participants), and the same amount of variability in response (lots: in every case, some people improved a lot and some less; a few even worsened).

Given this wide variability, we'll imagine that only two of the studies meet the effect size necessary to achieve statistical significance. In the other four studies TACT was statistically no better than PT, despite still showing a 2-3 point advantage for TACT.

We conduct our meta-analysis, combining the subjects of the 6 studies into one analysis with 600 participants - 300 in TACT and 300 in PT. We've averaged the greater gains made by the participants in TACT - which comes to 4.0 points overall. But because we now have 300 people per group, our study is more powerful - and that 4-point difference is enough to reach statistical significance - at a higher level (p < .01) than the two original studies that were significant (both p < .05).
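The power calculation behind this is easy to verify. Below is a minimal sketch assuming, for simplicity, that every study shows the same 4-point mean difference and that the "lots of variability" amounts to a standard deviation of 12 BDI points per group - an invented figure, since the example doesn't specify one:

```python
from scipy.stats import ttest_ind_from_stats

# Assumed summary statistics: TACT improves 4 BDI points more than
# placebo therapy (PT), with an SD of 12 points in each group.
diff, sd = 4.0, 12.0

# A single study, 50 participants per condition: not significant.
t1, p1 = ttest_ind_from_stats(diff, sd, 50, 0.0, sd, 50)

# Six studies pooled, 300 per condition: now highly significant.
t6, p6 = ttest_ind_from_stats(diff, sd, 300, 0.0, sd, 300)

print(f"One study (n=50/arm):  t = {t1:.2f}, p = {p1:.3f}")   # p ~ .10
print(f"Pooled    (n=300/arm): t = {t6:.2f}, p = {p6:.5f}")   # p < .001
```

Nothing about the effect changed; only the sample size did.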

But there's a secret.

In our fantasy universe there weren't just 6 studies of TACT versus PT. There were 10. In 4 of the studies the results suggested that TACT actually made people worse, and the people receiving sugar pills improved a little due to expectancy (about the same amount as they did in the published trials).

Those four studies, like most of the many unsupportive studies of antidepressant medication discussed in my last post, were not published.

The developers of TACT, who firmly believe in the therapy (and stand to make big money from a well-supported therapy via training workshops), decided that there must be some flaw with these negative studies. In retrospect, the therapists weren't perhaps so well-trained, and somehow there were a lot of people who didn't actually like their cats in the TACT condition. And anyway, the journals surely wouldn't be interested in publishing articles about therapies that are worse than placebo, so no point in trying.

But this unpublished data is important.

If we conducted a meta-analysis on all 10 studies, we would find that the positive-ish and negative studies average out, leading to a difference between TACT and PT of 0.00: a complete null effect. The unavailability of negative trials causes our state-of-the-art meta-analysis to misperceive a null therapy as effective.
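The cancellation is a simple weighted average. With all ten studies the same size, six at +4 points must be offset by four averaging -6 points - a value implied by the scenario's exact zero rather than stated outright:

```python
# Mean TACT-minus-PT difference per study; the studies are equal-sized,
# so a plain average is also the participant-weighted average.
published   = [4.0] * 6     # the six studies anyone can read
unpublished = [-6.0] * 4    # implied: in these, TACT made people worse

meta_published = sum(published) / len(published)   # +4.0: "effective!"
meta_all = sum(published + unpublished) / 10       #  0.0: a null therapy

print(f"Meta-analysis of published studies only: {meta_published:+.2f}")
print(f"Meta-analysis of all ten studies:        {meta_all:+.2f}")
```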

Why does this matter?

When negative studies go unpublished, and when meta-analyses depend only on the published work, the problems of biased data are not averaged out; they are combined. The result can be a stronger finding for a null or harmful therapy than was found in ANY of the studies upon which the meta-analysis was based (stronger, that is, in terms of significance level). Theoretically, it would be possible to obtain a significant meta-analysis of a hundred studies, none of which had reached significance on their own.

Meta-analysis is often viewed as a way of averaging out results and flaws in constituent studies. The lack of representativeness brought about by the nonpublication of negative data (which is the most common type of publication bias) is not compensated for by combining the published studies - it is made worse.

The researchers working with the Cochrane Collaboration, a group dedicated to creating systematic reviews of medical therapies, attempt to correct this problem by locating research trials that have gone unpublished. The results are frequently at variance with the conclusions that would be reached by a review of the published data alone - largely because researchers (or funders) frequently opt not to publish trials that are unsupportive.

Does this really matter? After all, if you are arguing that it is possible for a human to climb Mount Everest without oxygen, it takes only one positive result to make your point. It is irrelevant how many previous attempts resulted in failure.

In healthcare research, however, it matters a great deal. We are looking to see not whether it is possible for a given therapy or approach to benefit at least one person who gets it. Every therapy - whether it is past-life regression, Vitamin C, or high-colonic enemas - will appear to have helped someone, whether because of expectancy, spontaneous recovery, or pure chance. It is for this reason that patient testimonials are not considered to be valid evidence in favour of health-related procedures.

The question we are always asking is whether a therapy is effective (or damaging) for a group of people, be they male airplane phobics, all diabetes sufferers, or post-transplant patients on immunosuppressive drugs. We look at the variability (versus consistency) of response across individuals in our target group, the magnitude of effect, and the size of the effect once the influences of expectancy are removed (usually by comparing the treatment group with a placebo condition). This is precisely the type of judgement likely to be affected by examining only a subset of the data.

What this means is that although meta-analysis is a tremendously useful tool in healthcare research, it remains subject to one of the largest sources of research bias - the selective publication of results.

What should we do?

The obvious solution, arrived at by anyone who looks at the problem, is to create a registry for trials before they are carried out, with the understanding that only pre-declared trials will be published, and that all pre-declared trials will be published regardless of the results.

This initiative, at least for pharmaceutical trials, has been agreed upon and declared by a consortium of prominent journals, leading many of us to believe that a big part of the problem had been solved. (At least for medications commencing trials now - it does nothing to resolve the situation for medications already on the market.) I have openly stated as much at numerous workshops on depression treatment.

Unfortunately, I may have spoken too soon. According to Goldacre, the solemn pronouncements of the editors of many of medicine's most prestigious journals have meant what few of us were cynical enough to fear: Nothing at all. The journals have gone on publishing unregistered trials much as they did before.

There's just one difference. Having seen and acknowledged a fundamental problem that compromises the validity of the research they promote, their actions constitute an overt and conscious (rather than simply neglectful) abandonment of the principles of science.

Whether it will be decided, perhaps by future editors, that the welfare of patients merits an improvement in practice remains to be seen. We can only hope.

Reference

Goldacre, Ben (2012). Bad Pharma. New York: Faber & Faber.

Tuesday 24 March 2015

Publication Bias: Does Unpublished Data Make Science Pseudo?

What can we do when the data are missing?
Way back in the 1970s when I first started studying psychology I heard about publication bias. It was easier to get a study published if it had significant results than if it didn't.

That made a certain amount of sense. A study producing only nonsignificant results (group against group, variable against variable, pretest versus post-test) might be badly designed, underpowered (too weak to detect a genuine effect), or simply misconceived. No wonder no one wanted to publish it. And who cares about hypotheses that turn out not to be true anyway?

Partly, of course, the problem is obvious: if positive studies are much more likely to be published than negative ones, then erroneous positive results will tend to live on forever rather than being discredited.

More recently the problem of publication bias has been shaking the foundations of much of psychology and medicine. In the field of pharmacology, the problem is worse, because the majority of outcome trials (on which medication approval and physician information is based) are conducted by pharmaceutical firms that stand to benefit enormously from positive results, and run the risk of enormous financial loss from negative ones. Numerous studies have found that positive results tend to be published, while negative ones are quietly tucked under the rug, as documented by Ben Goldacre in his excellent book Bad Pharma.

In a case examining outcome trials of antidepressants (Turner et al, 2008), 48 of 51 published studies were framed as being supportive of the drug being examined (meaning that the medication outperformed placebo). Of these, 11 were regarded by the US Food and Drug Administration as being questionable or negative but were framed as positive outcomes in publication.

So the published data look like this (P = positive, N = negative):

PPPPPPPPPPPPNPPPPPPPPPPPPNPPPPPPPPPPPPNPPPPPPPPPPPP

Given that a great number of readers only look at the study abstract or conclusion, or lack the skills to detect spin, they'll miss the reality that many of the positive trials aren't so positive. The real published data look more like this:

PPPPP?PP??PPNPP?PPPP?PPPPNPP??PPPP?PPPNPP?PPPP?PP?P

In contrast, only 1 of 23 unpublished studies supported the idea that the medication being tested was effective.

NNNNNNNNNNNNPNNNNNNNNNN

So the real picture is more like this:

NNPPPNPP?PNP??PPNPNP?PPNNPP?PNPPNPNNP
P??NPPNNNPPN?PPNPNPNP?PNNPPNP?PPN?PN

Given that physicians, who are urged to prescribe based on the research, only have access to published data, the result is likely to be a systematic exaggeration of drug benefits.
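A few lines of arithmetic, using only the counts reported above, show the size of the distortion the journal reader faces:

```python
# Counts from Turner et al. (2008), as summarized above.
published       = 51
framed_positive = 48   # published trials framed as supportive
spun            = 11   # of those, judged questionable/negative by the FDA
unpublished     = 23
unpublished_pos = 1

# What the journal reader sees:
apparent_rate = framed_positive / published                  # ~94%

# The full record: genuinely positive trials are the 48 - 11 = 37
# published ones plus the single positive unpublished one.
true_positives = (framed_positive - spun) + unpublished_pos  # 38
actual_rate = true_positives / (published + unpublished)     # 38/74, ~51%

print(f"Apparent success rate in the journals: {apparent_rate:.0%}")
print(f"Actual success rate across all trials: {actual_rate:.0%}")
```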

Smug psychologists (and others) have stood by smirking, unaware that their perspective is elevated only because they are being hoisted by their own petards. True, there are no billion-dollar fortunes to be made from a psychological theory or a therapeutic technique, but there remain more than enough influences to result in a publication bias for noncorporate research:
  • A belief (often justified) that journals are more likely to reject articles with nonsignificant results.
  • A tendency to research one's own pet ideas, and a corresponding reluctance to trumpet their downfall.
  • A bias to attribute nonsignificant results to inadequate design rather than to the falsehood of one's hypotheses.
  • Allegiance to a school of thought that promotes specific ideas (such as that cognitive behavior therapy is effective - one of my own pet beliefs) and a fear of opprobrium if one reports contrary data.
Does publication bias fundamentally violate the principles of science?

Although science can lead to discoveries of almost infinite complexity, science itself is based on a few relatively simple ideas.
  • If you dream up an interesting idea, test it out to see if it works.
  • Observation is more important than belief.
  • Once you've tested an idea, tell others so they can argue about what the data mean.
  • And so on.
Even science, in other words, isn't rocket science. One would think that in execution it would be about as simple as in explanation. But no. In practice, it's extremely easy for things to go wrong.

An early statistics instructor of mine showed our class an elementary problem with research design by discussing a study of telekinesis (the supposed ability to move things with the mind). The idea was to determine whether a talented subject could make someone else's coin tosses come up "heads". As the likelihood of a properly balanced coin coming up heads is 50%, anything significantly above this would support the idea that something unusual was going on. And indeed, the results showed that the coin came up heads more often than random chance would suggest. The instructor invited us to guess the problem in the study.

A convoluted discussion ensued in which we all tried to impress with our (extremely limited) understanding of statistics and research design - and with our guesses about the tricks the subject might have employed. Then the instructor revealed what the experimenters had done.

They knew that psychics reported sometimes having a hard time "tuning in" to a task. So if they used all of the trials in the experiment, they might bury a genuine phenomenon in random noise - like trying to estimate the accuracy of a batter in baseball when half the time he is blindfolded. Instead they looked for sequences in which the subject "became hot," scoring more accurately than chance would allow, and marked out these series for analysis. Sure enough, when compared statistically to chance, there were more 'heads' than random chance could account for.

We stared at the instructor, disappointed that his example wasn't a bit, well, less obvious. How could reasonably sane people have deluded themselves so easily? Clearly this little exercise would have nothing useful to teach us in future.

Try it yourself sometime. Flip a coin (or have someone else do so), and try to make it come up heads. One thing it will almost certainly not do is this:

HTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHTHT.

Instead, you'll get something like this (I just tried it and this is what I got):

TTTHTHTHTHHTTHTHTTTTHHHHTHHTTTHTHHHTTTHTTTTTTHHHHTTTHHHTHHTTTHHTHHHHTHHHTTTTHHHHHTHTTHHTHTTTTHHTHTHHHHTTHHTTHTTHHTHHHTTTHHTTTHT

Totals: Heads = 63; Tails = 64

Now imagine that you only analyze sequences of 6 or more where I seem to have been "hot" at producing heads.

Drop the rest of the trials, assuming that I must have been distracted during those ones, and analyze the "hot" sequences:

Heads: 43 Tails: 12

Et voilà: Support for my nonexistent telekinetic skills.
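Here is one way to watch the trick operate. The selection rule below - keep non-overlapping 8-flip windows containing at least 6 heads and call them "hot" - is invented for illustration; the rule described above was fuzzier still, which only magnifies the effect:

```python
import numpy as np

rng = np.random.default_rng(7)
flips = rng.integers(0, 2, 128)   # 1 = heads, 0 = tails: a fair coin

# Post-hoc selection: keep non-overlapping 8-flip windows with >= 6 heads.
hot, i = [], 0
while i <= len(flips) - 8:
    window = flips[i:i + 8]
    if window.sum() >= 6:
        hot.extend(window)   # a "hot" stretch -- keep it for analysis
        i += 8
    else:
        i += 1               # "the psychic wasn't tuned in" -- discard

print(f"All {len(flips)} flips: {flips.mean():.0%} heads")
print(f"The {len(hot)} 'hot' flips kept: {np.mean(hot):.0%} heads")
```

The coin is fair, yet the analyzed subset comes out heavily heads - by construction.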

Okay, so that feels belaboured because it is so completely obvious. Why bother with it?

Well, let's shift the focus from different periods of a single subject's performance to comparisons between subjects.

Imagine a drug trial in which half the subjects receive our new anti-pimple pill ("Antipimpline") and half get a placebo. We'll compare pre-to-post improvement in those getting the drug to those not getting it. And we'll look at a variety of demographic variables that might have something to do with whether a person responds to the drug: gender, age group, frequency of eating junk food, marital status, income, racial group.

Damn. Overall, our drug is no better than placebo. But remember that data are never smooth, like HTHTHTHTHT. They're chunky, like HTTTHTHTTH. Trawl the data enough and we are sure to find something. And look! White males under 25 clearly do better on the drug than on placebo! The title of our research paper practically writes itself: Antipimpline reduces acne amongst young Caucasian males.
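A quick simulation shows how dependable this kind of "discovery" is. The sketch assumes a drug with zero true effect and twelve invented demographic splits, then trawls for the best subgroup p-value:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
n = 200                                  # 100 on Antipimpline, 100 on placebo
improvement = rng.normal(0.0, 1.0, n)    # outcomes are pure noise
on_drug = np.repeat([True, False], n // 2)

# Twelve made-up demographic splits, each unrelated to the outcome.
pvals = []
for _ in range(12):
    in_group = rng.integers(0, 2, n).astype(bool)
    _, p = ttest_ind(improvement[on_drug & in_group],
                     improvement[~on_drug & in_group])
    pvals.append(p)

print(f"Smallest subgroup p-value: {min(pvals):.3f}")
# Rough odds of at least one p < .05 among 12 independent null tests:
print(f"Expected chance of a spurious 'finding': {1 - 0.95**12:.0%}")
```

Nearly half the time, the trawl nets at least one "significant" subgroup from pure noise.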

Okay, well even that causes some eye-rolling. Surely no one would be foolish enough to allow for a fishing expedition like this one. Or if they did, they would demand that you replicate the finding on a new sample to verify that it didn't just come about as a result of the lumpiness of your data.

Well, wrong. Fishing expeditions like this appear throughout the literature.

The point, however, is that if we are looking for an effect, we will almost always find it in at least some of our subjects.

So what?

Let's shift again - from comparing subject by subject data to study by study. We'll do 20 studies of antipimpline, each on a hundred subjects. We'll use the .05 level of statistical significance (meaning that we will get a random false positive about once in every 20 comparisons). Then we'll define three primary outcomes (number of pimples, presence/absence of 5 or more severe lesions, and subject reports of skin pain) and two secondary outcomes (nurse ratings of improvement, reported self-consciousness about skin).

If these outcomes are not correlated with one another, we've just inflated the probability of getting at least one positive outcome from 1 in 20 to nearly 5 in 20, or about 25%. Nowhere will you see a study stating that the actual error rate is 25%, however. (In fact, the defined outcomes probably are correlated, so perhaps we've really only inflated our odds of success from 5% to 15% or so.)
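That inflation figure is just the complement of surviving five separate chances to go wrong. A two-line check, assuming the five outcomes are uncorrelated:

```python
alpha, outcomes = 0.05, 5
print(f"P(at least one false positive) = {1 - (1 - alpha) ** outcomes:.1%}")  # 22.6%
```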

And what happens? Imagine we count as positive (we'll denote that as 'P') any study that is superior to placebo on at least one outcome measure, and negative ('N') if no measure is significantly better than placebo. Here's what we get from our 20 studies:

PNPNNNNNNNNNPNNPNNNN

From our 20 studies we get 4 showing antipimpline to be superior to placebo on at least one outcome measure. We publish those studies, plus one more (at the insistence of a particularly vociferous researcher). The others we dismiss as badly done, or uninteresting, or counter to an already established trend. Something must have gone wrong.

Publication is how studies become visible to science. So what's visible? Five studies of antipimpline, of which 4 are positive:

PPPNP

Fully 80% of the published literature is supportive, so it seems likely we have a real acne therapy here. Antipimpline goes on to be a bestseller. What's missing? This:

NNNNNNNNNNNNNNN

Lest we nonpharmacologists reactivate our smugness, swap out antipimpline for "mygreatnewtherapy" and we can get the same outcome.

Way back in introductory stats class we could not believe that our instructor was giving us such a blatantly bad example of research. Obviously the deletion of trials not showing the "effect" meant that the work could no longer be considered science. It was firmly in the camp of pseudoscience.

Switch to reporting only some subjects' data, and we have exactly the same thing: Pseudoscience.

And conduct multiple studies on the same question and publish only some of them? Once again: exactly the same problem. By deleting whole studies (and their statistical comparisons) we inflate the error rates in the published literature. And by how much? By an amount that cannot be calculated without access to the original studies - which you do not know about and cannot find.

As a result – without the publication of all studies on a given question, or at least publication free of systematic bias – it becomes impossible to know the error rate of the statistics. Without that error rate, the statistics lose their meaning.

References

Goldacre, B (2012). Bad Pharma. New York: Faber & Faber.

Turner, EH, Matthews, AM, Linardatos, E, Tell, RA, & Rosenthal, R (2008). Selective publication of antidepressant trials and its influence on apparent efficacy. New England Journal of Medicine, 358, 252-260.

Thursday 19 February 2015

Over the Falls Without a Barrel: The Patent Cliff and Prescriber Impartiality

Imagine calling up your old college friend Science and asking her advice. She speaks softly, hesitantly, and with frequent caveats. You strain to hear what she says, but there’s a problem.

It’s a party line.

While you try to hear Science speak, two boisterous rowdies named Marketing are shouting at you, all but drowning her out. They butt in, answering your questions before she gets a chance, claiming to speak on her behalf. It’s tempting to give up and listen to them instead.

This is the situation for any prescriber. “Science” is out there, murmuring in the distance, while the boys from Marketing giggle, whoop, and exclaim in an attempt to attract attention. They sound like 15-year-old boys, and their understanding of Science is about as deep. There is a light at the end of the tunnel, however. At 16, they get bounced off the line.

Umm, what? A Primer on Drug Development.

When a pharmaceutical company discovers a potential new drug, they undertake a mammoth project designed to assess a) whether it really works, and b) whether the benefits outweigh the risks and side effects. The aim is to amass sufficient evidence that the Food and Drug Administration (in the USA) or other national organizations (such as Health Canada in, you know, Canada) will approve the sale of the drug and specify the disorders for which it can be openly prescribed – the so-called “on-label” uses.

The process of gaining approval generally takes 5 to 8 years and can cost from several hundred million to more than a billion dollars. The figures vary widely, in part depending on whether one includes the cost of researching drugs that don’t ever make it to market. For every drug eventually approved, there are four to ten that start human trials and don’t wind up on the pharmacy shelf. So the true cost of development is not just the enormous cost of the trials for successful Drug A, but also the costs of failures B, C, D, E, and F. During the development period, the company is bleeding money and can only get it back once the drug is available in the marketplace.
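
The arithmetic here is worth making explicit. A back-of-envelope sketch – both numbers below are assumptions chosen for illustration, not industry data:

```python
# Amortizing the cost of failed candidates over each approved drug.
cost_per_program = 800       # $ millions to take one candidate through trials (assumed)
candidates_per_approval = 6  # Drug A plus failures B-F, within the 4-to-10 range above

true_cost = cost_per_program * candidates_per_approval
print(f"Effective development cost per approved drug: ${true_cost:,} million")
# -> $4,800 million: Drug A must repay its own trials plus those of
#    failed candidates B, C, D, E, and F.
```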

In order to encourage companies to undertake this risk, governments place a pot of gold at the end of the rainbow. This is patent protection. The company that develops the drug has the exclusive right to produce and market it – in effect, achieving a monopoly on that medication. They can set the price to reflect not only the cost of producing the drug, but also to repay the cost of drug development and testing. If they have a “blockbuster drug” – one that becomes a major seller – they can also make huge profits.

The problem with this system is that patients and governments wind up paying enormous prices for the drug. The solution is to impose limits on patent protection – after which rival companies can produce generic versions of the same drug. Generics companies don’t have the same development costs, and so they price their versions much lower, which the original developers are forced to come close to matching. So when a drug goes off patent, the price drops sharply (often by over 80%) and the advantage to the original developer vanishes.

The period of patent protection is typically 20 years from the date of initial filing. Given the delays in the approval process, the actual period during which a company can market the drug exclusively varies, but by the figures above it usually works out to somewhere between 12 and 15 years.

The result is that FDA approval essentially fires the starting gun on a race to recoup development costs and turn a profit. The finish line is the end of patent – often called the patent cliff.

The Expensive Elephant in the Room

There’s one bit of accounting that’s often left out of these sketched reviews of the process, and that’s the cost of marketing the drug once it has been approved. Most accounts indicate that pharmaceutical companies spend at least as much on marketing as they do on drug development research. One account (below) suggests the figure for marketing is almost twice as high.

Well, fine. What’s the point of developing a new treatment if no one finds out about it and patients go untreated?

The marketing costs do not include the means by which prescribers are alleged to receive their knowledge about practice: reading journal articles. Here’s an estimate of total costs from Gagnon & Lexchin (2008) – all figures in billions of US dollars:

  • Samples (the provision of free packages of medication to prescribers, who can offer them to patients in the office to get them started, after which the patients continue on paid prescriptions): $15.9 billion
  • Detailing (essentially, visits by sales representatives to physician offices, plus the provision of pamphlets, swag, and related products): $20.4 billion
  • Direct-to-consumer advertising (advertisements in newspapers, magazines, on television, and on the internet, directed to the general public rather than to healthcare providers, often with the advice to “ask your doctor”): $4 billion
  • Sponsorships, displays, and the provision of speakers to professional meetings: $2 billion
  • E-promotion and mailings (generally to prescribers), plus promotion-related post-approval clinical trials: $0.3 billion
  • Journal advertising (advertisements in publications aimed at professionals): $0.5 billion
  • Unmonitored promotion (estimated; includes amounts that do not appear in the other categories, such as promotion to unmonitored physicians or in unmonitored journals, and miscellaneous other marketing strategies): $14.4 billion

Of particular note here for the nonprescriber is the contrast between the amount for direct-to-consumer advertising (the seemingly ubiquitous and presumably expensive “ask your doctor” ads) at $4 billion, and detailing to physicians – a much smaller group of people – at five times that cost: $20.4 billion. The tally below adds up the whole list.
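
For the curious, here is a quick tally of Gagnon & Lexchin's figures exactly as listed above (only the category labels are abbreviated):

```python
# Gagnon & Lexchin (2008) promotion estimates, billions of US dollars,
# transcribed from the list above.
promotion = {
    "samples": 15.9,
    "detailing": 20.4,
    "direct-to-consumer ads": 4.0,
    "meetings & sponsorships": 2.0,
    "e-promotion & mailings": 0.3,
    "journal ads": 0.5,
    "unmonitored promotion": 14.4,
}

total = sum(promotion.values())
ratio = promotion["detailing"] / promotion["direct-to-consumer ads"]
print(f"Total promotion: ${total:.1f} billion")              # -> $57.5 billion
print(f"Detailing outspends consumer ads {ratio:.0f}-fold")  # -> 5-fold
```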

One might be tempted to wonder how the firms can possibly spend so much money marketing to prescribers. Imagine that instead of producing television ads they sent the salesperson directly to your home to act out the advert in person – while making you lunch and covering your golf fees.

But all this ends?

Once a medication “goes off the patent cliff,” profits decline precipitously, and the motivation to promote the med to physicians and the public declines as a consequence. The drop can have a major impact on a pharmaceutical firm – just search the terms “antidepressant patent cliff” to turn up a variety of business analyses like this one:

http://www.insidermonkey.com/blog/eli-lilly-co-lly-what-does-the-patent-cliff-mean-for-this-drugmaker-206308/

Let’s take a look at the names and patent expiration dates of some of the most well-known antidepressants.

  • Prozac (fluoxetine), 2001
  • Paxil (paroxetine), 2003
  • Zoloft (sertraline), 2006
  • Remeron (mirtazapine), 2010
  • Effexor XR (venlafaxine), 2011
  • Lexapro, Cipralex (escitalopram), 2012
  • Cymbalta (duloxetine), 2013
  • Wellbutrin (bupropion), 2013
  • Pristiq (desvenlafaxine), 2022 – of note, desvenlafaxine is marginally different from Effexor and is often regarded as little more than a patent extender on Effexor.
  • Viibryd (vilazodone), 2022

Overall dollar sales figures for antidepressants have tended to fall as the medications have gone over the patent cliff. Revenue for antidepressants peaked at about $15 billion per year in 2003, and is projected to decline to $6 billion in 2016. Depression, once a major money-maker for pharmaceutical corporations, is fading as a revenue opportunity.

Why is this important?

Remember our party line? The marketing rowdies drowning out our friend Science get laid off when their drugs go off patent. The drugs are still around, but the distorting influence of their promotional activities (disguised as science) largely ends. The air clears, and our prescriber is left undisturbed to examine the literature once more.

Of course, this doesn’t solve all the problems. Prescribers are likely still to have received most of their psychopharmacology education from the marketing team. And having been subjected to endless lectures about the wonders of various drugs, they may feel that they already understand the scientific backing and neglect to look again. As well, most of the articles in the scientific journals are funded and written by (guess who!) the pharmaceutical companies themselves, and there has been a marked publication bias favoring positive studies over negative ones.

Nevertheless, the end of patent means fewer free samples, fewer visits from pharmaceutical reps, fewer paid lunches, fewer “opinion leaders” making the rounds showing the companies’ own slides, fewer conference symposia lavishly funded by patent owners, and less general noise distorting the environment. Medical decision-making is likely to be based at least somewhat more on science than on enthusiasm and hoopla.

What about newer antidepressants?

There are still a few preparations on patent, and in the past there have always been new medications coming online. Once Prozac went off patent, everyone finally shut up about it and more profitable medications picked up the baton and continued the charade. Won’t that continue?

Well, that’s the thing.

The pipeline is just about empty. Some firms have given up on depression, and in the past few years a variety of preparations have bitten the dust before reaching the approval stage. There is a noticeable lack of enthusiasm about upcoming products, though the few that remain on patent will doubtless be promoted to death. So there will still be some rowdies on the line trying to drown out science, but there will be fewer of them and they seem to have become somewhat muted. Not even the pharmaceutical reps seem able to muster much enthusiasm for the newer products, and the usual claims of “revolutionary advances” are not being heard.

So, Science. It’s been a while. At last we’re (almost) alone. Whisper your ambivalence. We can finally hear you.

Resources

Gagnon M-A, Lexchin J (2008) The cost of pushing pills: A new estimate of pharmaceutical promotion expenditures in the United States. PLoS Med 5(1): e1. doi:10.1371/journal.pmed.0050001

York University. "Big Pharma Spends More On Advertising Than Research And Development, Study Finds." ScienceDaily. ScienceDaily, 7 January 2008. <www.sciencedaily.com/releases/2008/01/080105140107.htm>.

Thursday 5 February 2015

In Praise of the Nervous Breakdown

Even the most level-headed individual can be rendered insufferable by taking an introductory psychology class. Suddenly the neophyte student will become an arrogant expert, deriding the ignorance of friends, family, and dinner companions.

The use of the term “nervous breakdown” is a case in point. Uttering the words is a bit like blowing a dog whistle: Intro Psychology graduates will converge from miles around to clarify that there is no such thing.

In this case, however, the phenomenon is not restricted to sophomores. Mental health professionals of every stripe will nod in agreement. Nerves don’t spontaneously break or, if they do, they don’t cause the most common forms of mental distress. The term does not appear in the DSM-5 nor, for that matter, in DSMs I through IV. Doesn’t exist; never did.

Pay attention to the people so corrected, however. They respond with bemused tolerance or finger-tapping impatience, but seldom with gratitude at being better informed. In discussing their (or their family member’s) nervous breakdown, they were not asserting an etiology of distress, nor providing a psychiatric diagnosis.

Instead, they were describing a period of time during which the sufferer became less capable of managing the vicissitudes of life and instead withdrew inward while experiencing some form of psychological pain. The emotional tone may have been characterized more by anxiety, or depression, or embitterment. They may or may not have exhibited transient psychotic symptoms. Perhaps they were hospitalized; perhaps not.

From Nervous Breakdown to the 21st Century

“Nervous breakdown” became an informal way of describing transient psychological difficulty early in the 20th century (Barke, Fribush, & Stearns, 2000), and persisted (with peaks and valleys) to its end. It was the preferred term in familial gossip about others, and it was often the way that people would describe their own mental blips – when they weren’t calling them “crack-ups” (as F. Scott Fitzgerald did in his 1936 essay about his own experience).

Today there have been enough students of psychology (and enough public education by experts) that the term has faded somewhat, though we continue to hear of celebrities being hospitalized for mysterious “nervous exhaustion” (or the abbreviated “exhaustion”) – likewise a term never to be found in any diagnostic manual. Try going up to the triage nurse in a busy emergency ward and saying “I’m exhausted.” Watch the reaction.

Instead, we hear about depression – often, about how it is a chronic and relapsing illness caused by mysterious and never-named (or, umm, found) biochemical imbalances. We understand that mental distress is “an illness like any other illness,” though the people who say this would be hard pressed to define the essential characteristics of an “illness,” so the statement becomes a bit meaningless. We subdivide anxiety into half a dozen primary “anxiety disorders” to be distinguished from depression, anger, disillusionment, grief, and other difficult emotions by ... by ... well, by a belief system that it would be impolite to describe as more theological than scientific.

In the process we have created a bogus corpus of common knowledge far more confident than anything claimed by those who have actually read the literature or examined the data. We have also learned to characterize someone who has had episodes of difficulty as forever defined by their least functional period. Thus, a person who has once experienced a major depressive episode qualifies, from that moment until death, as having Major Depressive Disorder. People are defined by their pathologies more than by their recoveries.

Recent controversies over the development of the DSM-5, as well as the failure of some etiologic theories of disorder (like the monoamine hypothesis for depression), have tarnished the image of formal diagnosis somewhat. Many of us in the field wonder if our diagnostic attempts have been more pathologizing than enlightening or helpful.

In this atmosphere, maybe it’s time we dusted some older ideas off the shelf for a second look. We could do worse than to start with “nervous breakdown.”

What’s so good about nervous breakdowns?

Consider the similarities between nervous breakdown and skin breakdown.

When I was younger I spent some time working in a rehabilitation hospital for people who had suffered spinal injuries. The nerve damage would often prevent clients from perceiving the normal discomfort that might formerly have prompted them to shift positions. As a result, unless they were mindful they would sit in the same position for hours and develop pressure sores – essentially the breakdown of skin and subcutaneous tissue. These would have to be carefully managed but would eventually heal.

So the characteristics of skin breakdown are:

  1. A reduction of functioning in a certain system (in this case, skin),
  2. Caused by external stimuli (like a poorly padded wheelchair), plus
  3. Inattention to self-care (such as timed posture adjustments, made whether one feels uncomfortable or not), which is
  4. Expectable in normal individuals (pressure sores do not reveal that one was born with defective skin), and
  5. Manageable or treatable, and
  6. Once resolved, no longer called a skin breakdown (because the skin has healed), and
  7. A reminder that one may be at risk of this problem (having had it before) and may need to engage in closer self-care in future.

All of these are useful ideas in the case of most psychological conditions. If we transfer the concepts to, let’s say, depressive episodes, we have:

  1. A reduction in behavioral or emotional coping or functioning,
  2. Typically brought on in part by external events (losses, work stresses, role conflicts, relationship issues), plus
  3. A disruption or deficit in self-care (exercise, diet, sleep factors, the role of social contact, making personally meaningful activity a priority). These are
  4. Expectable in normal individuals (meaning that they do not require evidence of a biological defect in advance of the development of the disorder and do not provide evidence that one is a defective human being), and are
  5. Manageable and treatable (most depressive episodes are self-resolving and most can be resolved more quickly with coaching to enhance self-care and life balance using behavioral activation – and sometimes medication), and
  6. Once resolved should no longer be called depression (any more than a person recovered from a bout with flu should be defined as a “flu sufferer”), and
  7. Can serve as a reminder that one may be at risk of a recurrence (having once had the problem may indicate or produce a higher susceptibility of the problem in future, therefore mandating closer attention to self-care in the future).

Although one could easily quibble with a few points, this perspective is considerably more useful – and more in accord with the data (at least for most people in depression) – than the disease model that the mental health system attempts to impose instead.

The Key Distinctions

For me, the most important differences between the current model of disorder and the “nervous breakdown” idea are (at the risk of some repetition of the above):

Episodic nature. One says “I had a nervous breakdown” rather than “I am a nervous breakdown” or “I have nervous breakdown disorder, even though right now I feel fine.” It was an event, not a characteristic of the person. It does not define them.

Symptomatic Vagueness. The term allows the user to disclose a period of psychological distress without necessarily revealing all of the intimate details. Critics may complain that this lacks the specificity of formal diagnosis, but formal diagnosis is often useful only insofar as it guides treatment selection – which current diagnostic methods do remarkably poorly. In any case, no one is suggesting that the entire diagnostic system be replaced by a pamphlet with the words “nervous breakdown” on it.

Trigger-Based. When people talk about nervous breakdowns they tend to focus on the factors that brought them about. “I was under enormous pressures at work, I had a health scare, and my spouse left me.” This is in accord with the data on most psychiatric conditions – they tend not to appear out of the blue. By contrast, the dominant model most clients are presented with is defect-based and noncontextual. “It’s just a biochemical imbalance, it could have happened at any time and had little to do with your life.”

Nonsensical. The strained smiles of those who are informed that “nerves don’t break” tell the tale. Most people understand that the words in the term “nervous breakdown” are the product of heritage, not science, and are not intended to be taken literally, any more than the terms “radical conservative” or “viral meme.” At best, all of these are metaphorical or allusive in nature. The disorder-based terms we currently use, on the other hand, bring with them unhelpful and often inaccurate baggage.

Recoverable. Once diagnosed with depression, people are defined as suffering from Major Depressive Disorder and are frequently informed that they must be in some form of treatment henceforward, even if they appear to be symptom-free. The evidence for the benefit of ongoing post-episode treatment is poor. Nervous breakdowns are typically viewed, by their episodic nature, as events that are resolvable – perhaps with rest, a reduction in stress, or a systematic reworking of one’s life circumstances.

Recurrence-Aware. The nervous breakdown idea acknowledges that most episodes of mental distress can be expected to resolve quite well with good inter-episode recovery. (The evidence appears to be accumulating that depression became a more-frequently chronic disorder with the onset of chronic treatment.) But it also allows that a person may be more vulnerable to such episodes than others, therefore meriting greater vigilance for stress, lifestyle imbalance, and early warning signs of destabilization.

The Capacity/Stress Model

Perhaps best of all, the nervous breakdown idea hints at the notion of a “breaking point” and at the interaction between the person and the environment in a way that seems to fit with evolving conceptions of distress episodes.

Put simply, people appear to have a capacity for processing demands, stresses, and losses imposed on them by their environment. If they challenge themselves gently and allocate sufficient resources to self-management (getting adequate sleep, leisure, exercise, a proper diet, and so on – the need for which may vary based on individual factors), then they are generally able to cope with these demands. Further, their capacity may gradually increase over time – just as exercising with increasing weights may result in greater muscle capacity, within limits.

When circumstances overload a person or crowd out self-care, the processing capacity appears to shrink. What once was a manageable level of demand now exceeds the person’s ability to cope, driving coping abilities lower still. Thoughts and behavior may become disorganized and chaotic as the person thrashes away at their circumstances or retreats inward from a sense of defeat.

The “breaking of nerve” is not a literal event, but a decline in the person’s ability to manage things at their former level. Recovery typically involves rest, a rethinking of the circumstances that led to the collapse, and the gradual reintroduction of elements of the person’s life (perhaps with pharmacological assistance along the way).

Exceptions

I am not arguing for the wholesale abandonment of diagnostic specificity, nor the blending of all distress episodes into a single term. Clearly there is some usefulness (at least to the care team) in knowing whether a person’s contact with reality has been lost, or whether anxiety or despair predominate, or whether suicidality is present. But these important aspects are typically found in the case description rather than in the diagnostic label in any case.

Certainly there are people in whom a biological predisposition to episodes of distress or decompensation is a major factor. Certainly there are people for whom a purely medical approach is necessary or more helpful than a nonmedical one. And certainly there are individuals whose illnesses will prove chronic rather than episodic.

But to suggest that the idea of nervous (or “mental”) breakdown is necessarily a more primitive concept than the inaccurate or faux-precise diagnostic categories with which we currently diagnose people seems false. If we look at the utility to the sufferer, I suspect that the older and less formalized perspective may be superior.

If only we could get psychology students to agree.

References

Barke, M, Fribush, R, & Stearns, PN (2000). Nervous breakdown in 20th-century American culture. Journal of Social History, 33, 565-584.

Carey, B (2010). On the verge of “vital exhaustion”? New York Times, June 1. http://www.nytimes.com/2010/06/01/health/01mind.html?ref=health?8dpc&_r=0

Online Course

Is there an alternative to a medication-based approach? If someone takes medication, is there more they can do in order to maximize the effect? Consider seeking the help of a qualified psychotherapist trained in cognitive behaviour therapy.

In addition, our clinic has developed a cognitive behavioral guide to self-care for depression. Though not a substitute for professional face-to-face care, UnDoing Depression may be a useful adjunct to your efforts. For 50% off the regular fee of $140 USD, use coupon code “changeways70” when you visit our host site, here.

We also have courses for professionals and for the public entitled What Is Depression, What Causes Depression, Diagnosing Depression, Cognitive Behavioral Group Treatment of Depression, How to Buy Happiness, and Breathing Made Easy. For the full list with previews and substantial discounts, visit us at the Courses page of the Changeways Clinic website.

Tuesday 20 January 2015

Why is Depression Becoming More Common: Some Likely Cultural Factors

In a recent post I pointed out that having an effective treatment for a disorder should have a positive impact on at least some epidemiological variables: prevalence, chronicity, disability – something. And yet relative to the 1950s, major depression is not on the run. It is becoming more common, there are more people on depression-related disability, it appears earlier in people’s lives, it seems more likely to recur than ever, and recovery between episodes is less complete than historical accounts would suggest.

Something’s going wrong.

Many, including Robert Whitaker (author of Anatomy of an Epidemic) suggest that our treatments themselves are producing much of the increase – and he outlines some fairly good research to support this point of view.

It’s likely, however, that multiple factors come into play – and some of these are societal factors. Today let’s look at some of these.

Physical fitness

We know that exercise is an effective treatment element for depression, and that poor physical fitness is a risk factor for future depression. Standardized population studies examining fitness over the decades are hard to come by, but indirect measures (the proportion of jobs that are sedentary, the decline in physical education in schools, the increase in childhood and adult obesity) support the limited direct data indicating a general decline in physical fitness from the 1950s to the present. If true, this should account for some of the increased incidence of depressive disorders and subclinical depression.

Obesity

Overweight individuals are more likely to be depressed, but the direction of causality is open to question and likely bidirectional: obese individuals appear to be at higher risk of developing depression, and mild to moderate depression appears to increase the likelihood of a person becoming overweight. As well, obesity may be a marker of low physical activity, with individuals who are less active developing both increased body weight and reduced mood. Whether there is an independent relationship between obesity and the subsequent development of depression is, at this point, somewhat unclear.

Social Isolation

In his book Bowling Alone (2000), Robert Putnam describes the decline of participation by Americans (and citizens of other western nations) in organized social groups such as fraternal organizations, sports leagues, and so on. Humans are social animals, and isolation is a known risk factor for depression. Social contact has declined on almost every conceivable measure over the past 60 years, and it is likely that this has contributed to rates of depression. For example, the average number of people per household in the USA declined from 3.3 in 1960 to 2.5 in 2013, and the number living alone has correspondingly increased. Loneliness tends to predict future depression levels more than the reverse (Cacioppo et al., 2010).

Screen Time

Increasing numbers of people point with pride to their lack of ownership of a television, or to their recent cancellation of cable services. Nevertheless, traditional television viewing remains at a US average of 34 hours per week (Nielsen, 2013), to which we can add Internet use (26 hours per week on average) and computer gaming (averaging 13 hours per week for Americans 12 to 24; NPD Group, 2010). These hours, which take up the majority of leisure time for most people, are “empty hours” for fulfillment in much the way that soda pop provides “empty calories” without nutritional content.

News Media Saturation

In the 1960s, an era of almost unprecedented social change, assassinations, war, and upheaval, most people made do with a half-hour news broadcast that they might watch on an intermittent basis. In 2015 we have multiple competing 24-hour news channels, plus the constant Internet news feed providing information from virtually every news outlet on Earth. Many people report that they check the online news five or six times a day. The constant exposure to viscerally-reported tragedy and mayhem is likely to affect one’s mood (Dobelli, 2013), though controlled research is, as yet, lacking.

* * *

Those are just a few of the societal indicators that may be partly responsible for an increase in societal rates of depression. I have made no attempt to be comprehensive here; instead, I have just highlighted a few possible factors. Feel free to nominate your own in the comments feed.

No matter how many we collectively come up with, however, we are unlikely to explain the sheer magnitude of the leap in depression rates over the past 60 years. Other factors are coming into play as well – and in coming weeks we’ll consider some of the other influences.

References

Cacioppo, JT, Hawkley, LC, & Thisted, RA (2010) Perceived social isolation makes me sad: Five year cross-lagged analyses of loneliness and depressive symptomatology in the Chicago Health, Aging, and Social Relations Study. Psychology and Aging, 25, 453-463.

Dobelli, R (2013). News is bad for you – and giving up reading it will make you happier. The Guardian, April 12. http://www.theguardian.com/media/2013/apr/12/news-is-bad-rolf-dobelli

Nielsen Company (2013). Free to Move Between Screens: The Cross-Platform Report, March 2013. New York: The Nielsen Company.

NPD Group (2010). Gamer Segmentation 2010 Report. May 2010.

Putnam, R (2000). Bowling Alone: The Collapse and Revival of American Community. New York: Simon & Schuster.

Whitaker, R (2010). Anatomy of an Epidemic. New York: Crown.

Online Course

Want to explore this subject further? Consider taking our online course, What Causes Depression? For 60% off the regular fee of $50 USD, use coupon code “cause20” when you visit our host site, here.

We also have courses entitled UnDoing Depression, What Is Depression, Diagnosing Depression, Cognitive Behavioral Group Treatment of Depression, How to Buy Happiness, and Breathing Made Easy. For the full list with previews and substantial discounts, visit us at the Courses page of the Changeways Clinic website.