
Is Adherence Your Prisoner’s Dilemma? Patients, Pills, and Game Theory

By Anna Lau, PhD, Medical Writer

Chronic diseases (eg, asthma, diabetes, hypertension) are persistent conditions that can be controlled but not cured. Without treatment, chronic diseases increase the risk of death, but with treatment, patients can enjoy fairly normal lives. Yet getting patients to take their medications as directed (adhere to a treatment plan) is a widespread problem. For example, direct costs of nonadherence to treatments for diabetes, hypertension, and hypercholesterolemia together exceed $105 billion annually in the US. It’s estimated that only 50% of patients with chronic conditions are adherent to treatment. What gives?

Can game theory shed light on poor adherence to chronic medication?

Game theory is the study of strategic decision-making. The prisoner’s dilemma is a well-known game of strategy that explores how two parties balance cooperation and competition in decision-making. Game theory has also been used to understand how decisions are made in conflict situations. By drawing an analogy to fighting a battle, let’s see what we might learn by applying conflict strategy to coping with a chronic disease. In his book The Strategy of Conflict (1960), Thomas Schelling suggests potential advantages to limiting one’s own options in a conflict situation. For example, in a hypothetical battle between grasshoppers and ants, a savvy ant general who positions its army at the edge of a cliff is more likely to win (Figure 1). Given the choice between certain death and fighting, the army ants are more likely to fight (and fight hard!).

The opposing grasshopper army is unlikely to charge forward and attack, because doing so would assure mutual destruction.

Figure 1. Army ants versus grasshoppers on a cliff.

But what if the choice were between fighting and something less than certain death (Figure 2)? Would the army ants fight as hard? Probably not.

Figure 2. Army ants versus grasshoppers near a mud pit.
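
Schelling’s insight can be made concrete with a toy payoff model. Below is a minimal sketch in Python; the payoff values are purely hypothetical (they are not from Schelling’s book) and are chosen only to reproduce the story in Figures 1 and 2. It shows how removing the ants’ retreat option flips the grasshoppers’ best response.

```python
# Toy model of Schelling's commitment strategy. All payoffs are hypothetical.
# Each outcome maps to a (ants, grasshoppers) payoff pair.
FIGHT = (-8, -10)  # all-out battle: ruinous for both sides, worst for attackers
RETREAT = (-5, 5)  # ants escape through the mud pit; grasshoppers take the field
HOLD = (0, 0)      # grasshoppers decline to attack

def ants_response(can_retreat):
    """If attacked, the ants pick whichever option pays them more."""
    options = [FIGHT] + ([RETREAT] if can_retreat else [])
    return max(options, key=lambda payoff: payoff[0])

def grasshoppers_decide(can_retreat):
    """The grasshoppers attack only if the ants' anticipated response
    leaves the grasshoppers better off than holding their position."""
    response = ants_response(can_retreat)
    return "attack" if response[1] > HOLD[1] else "hold"

print(grasshoppers_decide(can_retreat=True))   # mud pit: "attack" (ants would retreat)
print(grasshoppers_decide(can_retreat=False))  # cliff: "hold" (ants would fight hard)
```

By backing its army against the cliff, the ant general credibly commits to fighting, and the attack never comes.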

Acute illnesses of sudden onset and short course, such as heart attacks, can produce severe symptoms. And on average, about 15% of heart attacks are fatal. It’s hard to imagine that a patient in the midst of a heart attack would willfully refuse treatment. Given the choice between possible death and accepting treatment, patients are highly motivated to literally fight for their lives.

But patients with chronic conditions may not feel sick or experience overt symptoms most of the time. The motivation to faithfully take medications may be diminished, because the choice for them is between bearing the burden of chronic medication and living with an asymptomatic (or mildly symptomatic) condition. And if the medication causes side effects, is painful to administer (like injections), inconvenient to take, or expensive, the choice to forgo it may be even easier. If effective rescue therapies are available, the motivation to adhere to chronic medication may be lower still: patients might choose to take rescue medications occasionally rather than chronic medications daily.

Okay, now what about decision theory?

Decision theory explores the factors that go into decision-making. It’s related to game theory, except that it focuses on how individuals, not multiple parties, make decisions. In their paper Choices, Values, and Frames (1984), Kahneman and Tversky describe loss aversion, the observation that people dislike losing something more than they like gaining something. For instance, Kahneman recounts asking students in his class to gamble on a coin toss. Given the condition that they would lose $10 if the coin toss turned up tails, how much money would they insist on winning in order to agree to gamble? The answer was usually more than $20. Loss aversion gives rise to the endowment effect, the tendency of people to value something more once they own it. For instance, if you ask home sellers and buyers to estimate the value that the other party would put on a home for sale, sellers tend to overestimate and buyers tend to underestimate. Loss aversion also often leads to status quo bias, the preference for doing nothing instead of making a change. In one set of experiments, people were more willing to keep accepting an electric shock than take the chance of reducing that shock by pressing a button.
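
To see the arithmetic behind the coin-toss answer, here is a minimal sketch assuming the simplest loss-averse value function: v(x) = x for gains and v(x) = λ·x for losses, with a loss-aversion coefficient λ of about 2 (the value implied by the students’ answers).

```python
# Subjective value of a 50/50 gamble for a loss-averse decision-maker.
# lam is the loss-aversion coefficient; Kahneman and Tversky's work
# implies a value of roughly 2.
def gamble_value(gain, loss=10.0, lam=2.0):
    return 0.5 * gain - 0.5 * lam * loss

# Losses loom about twice as large, so a $10 loss needs a $20 gain to break even:
print(gamble_value(gain=20))  # 0.0  -> barely acceptable
print(gamble_value(gain=15))  # -2.5 -> most students would decline
print(gamble_value(gain=25))  # 2.5  -> attractive
```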

Asymptomatic patients may not be willing to change their current lifestyles to take chronic medications, even if the potential payoff is better health or delayed disease progression. This phenomenon has been termed “patient inertia.” Researchers believe that nonadherence to treatment may have less to do with lack of understanding of drug benefits and more to do with this tendency to do nothing (maintain the status quo).

Certainly there are other barriers to adherence…?

Of course, and those barriers include provider-related factors (eg, prescribing an overly complex regimen, failure to communicate effectively with the patient), patient-related factors (eg, poor understanding of disease or benefits/risks of treatment), and healthcare system–related factors (eg, access to insurance coverage or to medication). These factors affecting treatment adherence are better studied than factors related to game theory or decision theory. Given how applicable game theory and decision theory are in explaining economic behaviors, it may be worthwhile to further investigate their potential role in treatment adherence.

As the term “inertia” implies, patients can be coaxed to adopt new behaviors such as taking daily medication by making those behaviors routine, easier, and more accessible. For example, tie the act of pill-taking to a daily event, such as eating breakfast or brushing teeth. Prescribe preloaded syringes to eliminate a step in treatment administration. And (if safe) leave medicine bottles in easy reach instead of inside a medicine cabinet. These small changes in behavior can improve adherence and possibly save hundreds of millions of dollars or more each year in healthcare costs.



Breakthrough of the Year – Cancer Immunotherapy?

By Heather Lasseter, PhD, Medical Writer

Are advances in cancer immunotherapy a top scientific achievement of the year? 

Editors of Science certainly think so, heralding “cancer immunotherapy” as 2013’s Breakthrough of the Year: “This year marks a turning point in cancer, as long-sought efforts to unleash the immune system against tumors are paying off – even if the future remains a question mark.”


Awareness ribbons representing lung, breast, cervical, brain, kidney, prostate, and colon cancer, and melanoma (adapted from MesserWoland, wikimediacommons.org).

With June as Cancer Immunotherapy Month (designated by the Cancer Research Institute), cancer immunotherapy is a hot topic in oncology – especially as research here may lead to major treatment breakthroughs for many cancers. But currently approved treatment comes with a hefty price tag: approximately $120,000 for a round of therapy. Given the potentially modest impact on survival (for instance, on the order of several months), does cancer immunotherapy truly represent a key breakthrough of 2013?


Immune destruction – a hallmark of cancer


Hanahan and Weinberg’s hallmarks of cancer (Cell. 2011;144:646-674).

Avoiding immune destruction was recognized as one of the emerging hallmarks of cancer by Hanahan and Weinberg in 2011. Cancer cells survive and thrive through their well-known characteristics of ceaseless proliferation and invasion, as well as their ability to evade the body’s natural defenses. While the immune system is tasked with cleaning up cells gone rogue, cancer cells not only escape these defense mechanisms but also hijack the immune system for their own advantage (e.g., by promoting tumor angiogenesis).

Cancer immunotherapy – the idea of harnessing the body’s own immune system to attack cancerous cells – has been around for about 20 years. But with the mechanisms of immune activation poorly understood and pharmaceutical companies hesitant to invest, it has only recently gained traction.

Early work by cancer immunologist James Allison (published in Science in 1996) sparked the renaissance in cancer immunotherapy. He demonstrated that administering antibodies against cytotoxic T-lymphocyte antigen 4 (CTLA-4) in vivo caused the rejection of tumor cells. However, it was not until 15 years later that ipilimumab – a monoclonal antibody that targets CTLA-4 on T cells – became the first immunotherapy approved for treating metastatic melanoma. While cancer cells release antigens that activate the immune system, they also produce proteins that bind to CTLA-4, thereby preventing an all-out immune attack. Ipilimumab blocks this inhibitory signal, letting the immune system do its job. The result: cancer cell death.

Clinical trials are currently underway to assess the efficacy of ipilimumab and other agents designed to block immune checkpoint pathways, which cancer cells exploit to inhibit the immune system. These agents are being assessed in the treatment of melanoma, lymphoma, and lung, breast, gastric, and prostate cancers. They include antibodies against programmed death-1 (PD-1), a molecule on T cells that puts the brakes on T-cell activation. Moreover, combination therapy with an anti-CTLA-4 plus an anti-PD-1 antibody has produced “rapid and deep tumor regression” in patients with advanced melanoma, according to a study published in the New England Journal of Medicine. More recent advances involve genetically engineering T cells to express chimeric antigen receptors (CARs), which permit T cells to specifically target antigens on tumor cells.

One exciting possibility may be personalized immunotherapy

In a recent study in Science, CD4+ T helper cells were identified that responded to a mutated antigen found in the cancer cells of a patient with metastatic epithelial cancer. These mutation-reactive immune cells were extracted, expanded in cell culture, and infused back into the patient using a technique called adoptive cell transfer. The results were remarkable: the tumor regressed and then remained stable until 13 months post-infusion. Following disease progression and a second round of immunotherapy, the tumor again regressed (last follow-up: 6 months).

Adoptive T-cell therapy (Simoncaulton, wikimediacommons.org).

Broadening the cancer immunotherapy pool

In the last several years, many groups have launched themselves into this line of research. As part of its Moon Shots Program, the University of Texas MD Anderson Cancer Center began collaborating in 2014 with four large companies – Johnson & Johnson, Pfizer, GlaxoSmithKline, and AstraZeneca’s MedImmune – to accelerate the development of immunotherapies and reduce cancer deaths. And cancer immunotherapy took center stage at the ASCO Annual Meeting, with large pharmaceutical companies sharing promising clinical data:

  • Bristol-Myers Squibb: Combination of the PD-1 immune checkpoint inhibitor nivolumab plus ipilimumab in a phase 1b trial shrank tumors in 42% of patients with advanced melanoma and produced 1- and 2-year overall survival of 85% and 79%, respectively.
  • Merck: In a phase 1b trial, the anti-PD-1 antibody pembrolizumab (MK-3475) reduced tumors in 51% of patients with PD-ligand 1 (PD-L1)-positive advanced head and neck cancer and produced a best overall response rate (ORR) of 20%.
  • Roche: In a phase 1 trial, the anti-PD-L1 therapy MPDL3280A shrank tumors in 43% of patients with PD-L1-positive metastatic bladder cancer and produced an ORR of 52%.
  • AstraZeneca: Although the clinical data were limited, AstraZeneca described the phase 1 dose-escalation trial assessing the PD-L1 inhibitor MEDI-4736 plus CTLA-4 inhibitor tremelimumab in patients with advanced solid tumors.

Moving cancer immunotherapy forward

While these results are promising, cancer immunotherapy only benefits a certain population of patients – and the reasons why are poorly understood. For instance, treatment efficacy may be impaired by the presence of mutations in a patient’s tumor that confer protection against antitumor immunity. As stated by Tjin et al in Cancer Immunology Research, “Immunotherapy is a promising strategy… but the modest clinical responses so far call for improvement of therapeutic efficacy.”

So what’s next for cancer immunotherapy?

It will be necessary to develop biomarkers that identify which patients will show a clinical benefit, increase the proportion of patients who respond to treatment, and develop more potent treatment strategies. Those strategies will likely involve combining therapies, adding an immune system booster, and streamlining the development of targeted treatments. Progress in broadening and optimizing the benefits of cancer immunotherapy is needed to make it truly worthy of the title “Breakthrough of the Year.”



Autism: putting the puzzle pieces together

by Alison Wagner, PhD, Medical Writer

Autism awareness ribbon of the Autism Society

Recently, the Centers for Disease Control and Prevention (CDC) released a report updating the prevalence of autism spectrum disorder (referred to as “autism” for the purposes of this article)* in the United States. Shockingly, the rate of autism diagnoses in children aged 8 years more than doubled in less than a decade: from 6.6 per 1000 (~1 in 150) in 2002 to 14.7 per 1000 (1 in 68) in 2010.**
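
For readers who want the arithmetic behind the “1 in N” figures, a trivial sketch of the conversion (a rate of r per 1000 is 1 in 1000/r):

```python
# Converting the CDC rates per 1000 into the "1 in N" form quoted above.
for year, rate_per_1000 in [(2002, 6.6), (2010, 14.7)]:
    print(f"{year}: 1 in {round(1000 / rate_per_1000)}")
# 2002: 1 in 152 (reported as ~1 in 150); 2010: 1 in 68
```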


Such a dramatic leap in prevalence is astonishing. So this is the point where we are reassured that kids are being over-diagnosed – right?

Not exactly.

The reporting of autism in children is based on reports of services received for the condition. With increased awareness of autism, diagnosis and treatment also increase, leading to more reported cases. But prevalence rates vary widely by region: according to the CDC report, for example, the rate in Alabama is 5.7 per 1000 while in New Jersey it is 21.9 per 1000. Research on autism does not support environmental causes for these differences; the geographical variation is therefore likely more reflective of community awareness and available resources than of actual prevalence. Unfortunately, that means the higher of those rates is probably closer to the truth – the differences we see more likely stem from under-diagnosis in one state relative to another. In other words, the last decade may have brought not a dramatic increase in actual autism prevalence, but an increase in awareness and screening.

This means the actual prevalence of autism might be higher than even these new numbers reported by the CDC. That’s the bad news.

The good news?
We’re making progress on both detecting and understanding autism.

Although many autism symptoms are undetected until children fail to achieve developmental milestones around the age of 2, research has indicated that subtle signs of autism are present earlier and that the disorder originates during prenatal development. Numerous studies have indicated that autistic children have enlarged brains, a phenomenon primarily observed in the neocortex that disappears by adulthood. Additionally, pruning of neuronal connections, a process that occurs prior to adulthood and is necessary for normal neural development, is believed to be reduced in autism. Due to the limited period in which these abnormalities are seen, few studies have been able to examine brain pathology on a cellular level. Also, postmortem brain tissue from children is generally much more difficult to obtain for research – families rarely agree to donate the bodies of deceased children to science. Therefore, most studies of autism at the cellular and systems level are in adult brains and may not detect what is happening early in development.

However, a recent study published in the New England Journal of Medicine did examine postmortem brains of children with and without an autism diagnosis. Using markers for different types of brain cells, the researchers found that although overall numbers of neurons did not differ between the groups, children diagnosed with autism had significantly fewer excitatory neurons in patches of disorganization in the neocortex (more specifically, the frontal and temporal lobes) that spanned multiple layers of cortical tissue. Interestingly, the researchers only took small samples of tissue, yet found these pathological cortical patches in 91% of the autism samples versus 9% of the control samples, which suggests the patches are likely prevalent throughout the neocortex of the autism samples. Despite the pervasiveness of this pathology, the location of the patches varied across the samples, which could help explain the broad range of symptoms seen in autism. In fact, the symptoms of autism are often so disparate from patient to patient that it’s striking that any sort of neuropathology would be consistent across patients. These findings could shed light on a potential unifying mechanism of autism and support the leading theory that autism originates prenatally.


Patches of disorganization were found in the frontal and temporal lobes of the brains of children diagnosed with autism.

Although understanding when autism begins is important, a prenatal etiology sounds pretty dismal.
After all, if autism begins before birth, how can we hope to treat it, especially when it’s often not diagnosed until children are 2-4 years old?

Early intervention (the earlier the better), such as applied behavior analysis, has long been known to provide substantial benefits to children with autism. This is believed to be enabled by the brain’s plasticity, which decreases with age. Extrapolating from the New England Journal of Medicine study, for example, early intervention could theoretically utilize the brain’s plasticity to compensate for the patches of disorganization in the neocortex – the brain could find alternate routes to performing the same function. The challenge is to detect autism before the brain has lost that ability.

The results of another recent study may help in detecting autism much earlier than the current norm. Researchers validated a simple screening method for infants as young as 9 months. The method, which consists of measuring head circumference and evaluating the head tilt reflex, is noninvasive and easily applied. It could allow intervention much earlier than autism is normally identified, when it would be most beneficial.

Head tilt reflex: This test is normally done on developing infants. It involves picking up and then tilting the infant slowly to one side. When the head tilt reflex is normal, the infant keeps his/her head vertical. When the reflex is abnormal, the infant keeps his/her head in line with the body.

Assembling the pieces

These studies touch on different aspects of the latest in autism research. It is vital to accurately measure the prevalence of autism to provide resources for awareness, diagnosis, and potential treatment. Likewise, understanding the etiology of autism will help determine prevention (if possible) and better treatment of diagnosed children. The early detection of autism enables intervention that maximizes effectiveness. All of these factors are encouraging pieces in the puzzling search for successful treatment of children with autism.



*The varying degrees and types of autism and where they fall on the ASD spectrum are significant considerations but beyond the scope of this article.

**This study was based on a specific subset of centers in a collective network that is not necessarily representative of the entire United States in terms of socioeconomic, racial, or other demographic parameters.


Should the “Gold Standard” Be the Old Standard? A New Approach to Randomized Clinical Studies

By Anna Lau, PhD, Medical Writer

From 1975 to 2005, the average cost to develop a single new drug rose from $100 million to $1.3 billion (adjusted to year 2000 US dollars). At least 90% of this staggering cost is consumed by executing phase 3 clinical studies. But as phase 3 studies have gotten larger, longer, and more complex, fewer patients meet the eligibility criteria and more participants quit mid-study. In addition, about 70%-75% of drugs fail this phase of development because of safety concerns, lack of efficacy, or other reasons. No wonder there’s frustration!

Is it time to change the way phase 3 studies are conducted?

Conventional phase 3 studies are usually randomized and controlled. The salient study design feature is the equal probability that participants will be randomized to intervention or control. This randomization scheme minimizes selection bias more effectively than any other study design element, making it the “gold standard” of study designs.

__________

Although phase 3 randomized studies assume a priori that the null hypothesis is true (that the intervention and the control are equal), patients generally assume that the intervention is superior to control. What do you think is the basis for this bias that “new is better”?

__________

But this randomization element is precisely why up to 31% of patients decline to participate in clinical studies. Clinical investigators could opt for a 2:1 randomization ratio, but this would only increase the odds of—not guarantee—assignment to the intervention arm. Also, if a two-arm randomized controlled study shows no advantage of intervention over control, then it’s back to the drawing board to plan a brand new study protocol, a process that could take years. What’s the alternative?

Enter the outcome-adaptive randomization study design.

With outcome-adaptive randomization, randomization probabilities can be adjusted based on experimental results already observed—in other words, patients are more likely to be assigned to the more effective study arm (if there is one) over the course of the study (Figure 1).

Figure 1. Diagrammatic representation of fixed and outcome-adaptive randomization schemes. *Probability of assignment to intervention arm = [(probability that intervention is superior)^a] ÷ [(probability that intervention is superior)^a + (probability that control is superior)^a], where a is a positive tuning parameter that controls the degree of imbalance (a is usually a value between 0 and 1; when a = 0, randomization is fixed at 1:1; when a = 1, the probability of randomization to intervention equals the probability that intervention is superior). Whew! That’s enough math.
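
As a concrete illustration, here is a minimal Python sketch of the assignment formula from Figure 1 (the function names and example probabilities are mine, not from any published protocol):

```python
import random

def p_assign_intervention(p_superior, a=0.5):
    """Probability of randomizing the next patient to the intervention arm.
    p_superior is the current estimate that intervention beats control;
    a is the tuning parameter from Figure 1 (a=0 -> fixed 1:1;
    a=1 -> assignment probability equals p_superior)."""
    num = p_superior ** a
    return num / (num + (1.0 - p_superior) ** a)

def randomize_next_patient(p_superior, a=0.5):
    p = p_assign_intervention(p_superior, a)
    return "intervention" if random.random() < p else "control"

# If interim data put an 80% probability on the intervention being superior:
print(round(p_assign_intervention(0.8, a=0.0), 2))  # 0.50 (fixed 1:1)
print(round(p_assign_intervention(0.8, a=0.5), 2))  # 0.67 (tempered imbalance)
print(round(p_assign_intervention(0.8, a=1.0), 2))  # 0.80 (full imbalance)
```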

Outcome-adaptive randomization study design is based on Bayesian analysis, an inductive approach that uses observed outcomes to formulate conclusions about the intervention. For example: Given the observed experiment results, what is the probability that intervention is superior to control? In contrast, a classical statistical analysis uses deductive inference to determine the probability of an observed outcome assuming that the null hypothesis is true. For example: What is the probability of observing these experimental results, if the intervention and the control are equally effective?
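
To make the contrast concrete, here is a small sketch of the Bayesian question, under assumptions of my own choosing: binary outcomes, a uniform Beta(1, 1) prior on each arm’s response rate, and Monte Carlo sampling from the posteriors.

```python
import random

def prob_superior(resp_int, n_int, resp_ctl, n_ctl, draws=100_000):
    """Estimate P(intervention response rate > control response rate | data),
    using the Beta posteriors that follow from uniform priors and binary outcomes."""
    wins = 0
    for _ in range(draws):
        p_int = random.betavariate(1 + resp_int, 1 + n_int - resp_int)
        p_ctl = random.betavariate(1 + resp_ctl, 1 + n_ctl - resp_ctl)
        wins += p_int > p_ctl
    return wins / draws

# Hypothetical interim data: 12/30 responders on intervention, 6/30 on control.
print(prob_superior(12, 30, 6, 30))  # roughly 0.95 with these made-up data
```

A posterior probability like this is exactly the quantity the assignment formula above would consume, tilting subsequent randomization toward whichever arm looks better so far.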

So, outcome-adaptive randomization must be better for patients, right?
Short answer: not always.

In a recent report, Korn and Freidlin used simulations to compare fixed balanced randomization (1:1) with outcome-adaptive randomization. The authors assumed that: (1) there are two study arms, one of which is a control arm; (2) outcomes are short-term; (3) outcomes are binary (response vs. no response); and (4) patient accrual rates are not affected by study design. They determined the required sample sizes, the average proportion of patients with a response to treatment (responders), and the average number of patients without a response (nonresponders), assuming a 40% response rate with intervention vs. a 20% response rate with control. They found that, compared with the 1:1 design, the adaptive design did not markedly change the proportion of responders but increased the number of nonresponders. The authors concluded that outcome-adaptive randomization was “inferior” to 1:1 randomization and offered “modest-to-no benefits to the patients.”
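
To get a feel for the kind of simulation involved, here is a rough sketch under Korn and Freidlin’s stated assumptions (two arms, binary short-term outcomes, true response rates of 40% vs. 20%). All parameter choices (sample size, burn-in, tuning) are mine and purely illustrative. Note that it compares the designs at a fixed sample size, whereas a key part of Korn and Freidlin’s argument is that the adaptive design needs a larger sample size to preserve statistical power, which is what drives up the absolute number of nonresponders.

```python
import random

def prob_superior(r1, n1, r0, n0, draws=200):
    # P(intervention rate > control rate) under uniform Beta(1, 1) priors,
    # as in the previous sketch (fewer draws here, for speed).
    return sum(random.betavariate(1 + r1, 1 + n1 - r1) >
               random.betavariate(1 + r0, 1 + n0 - r0)
               for _ in range(draws)) / draws

def simulate_trial(n=200, p_true=(0.2, 0.4), adaptive=True, a=0.5, burn_in=20):
    """One simulated trial; arm 1 = intervention, arm 0 = control.
    Returns (responders, nonresponders)."""
    patients, responders = [0, 0], [0, 0]
    for i in range(n):
        if adaptive and i >= burn_in:
            ps = prob_superior(responders[1], patients[1],
                               responders[0], patients[0])
            p_assign = ps**a / (ps**a + (1 - ps)**a)  # Figure 1 formula
        else:
            p_assign = 0.5  # burn-in phase, or fixed 1:1 throughout
        arm = 1 if random.random() < p_assign else 0
        patients[arm] += 1
        responders[arm] += random.random() < p_true[arm]
    total = sum(responders)
    return total, n - total

for label, adaptive in [("fixed 1:1", False), ("adaptive ", True)]:
    runs = [simulate_trial(adaptive=adaptive) for _ in range(50)]
    mean_resp = sum(r for r, _ in runs) / len(runs)
    # Expect about 60 responders per 200 patients with fixed 1:1 and only a
    # handful more with adaptive randomization, a modest per-patient gain.
    print(label, round(mean_resp, 1), "responders of 200 on average")
```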

What?!

These results may seem surprising, but there are general downsides to outcome-adaptive randomization:

  • Adaptive randomization studies are logistically complicated. For instance, they require that treatment assignment be linked to outcome data.
  • Early results from the study could bias enrollment into the study. For instance, would patients enroll in a study in which control looks better than intervention?
  • If control does outperform intervention, the study sponsor might end up funding a study in which the majority of patients are assigned to the control arm.

But there must be upsides! And indeed there are.

First, the assumptions made by Korn and Freidlin (only two study arms, patient accrual rates not affected by study design, etc) limit the applicability of their conclusions to situations outside the described scenarios.

Further, the advantages of this design are highlighted in studies with more than two arms (eg, multiple regimens, doses, or dosing schedules). In these, a superior intervention could be identified without enrolling equal numbers of patients in each arm. This could allow shorter clinical studies that evaluate more interventions. And if one arm performs poorly, randomization to that arm could be limited or the arm could be dropped altogether, depending on the inferiority cutoffs built into the study design. Furthermore, outcome-adaptive randomization combined with biomarker studies at baseline can potentially identify relationships between response and biomarker status, which cannot be achieved with fixed randomization.

Take the phase 2 BATTLE study, the first completed prospective, outcome-adaptive, randomized study in late-stage non-small cell lung cancer. Patients underwent tumor biomarker profiling before randomization to one of four treatment arms: erlotinib, sorafenib, vandetanib, or erlotinib + bexarotene. Subsequent randomization considered the responses of previously randomized patients with the same biomarker profile. Historically, the disease control rate (DCR) at 8 weeks for this patient population is about 30%. But in BATTLE, the overall DCR was 46%. Even better, the study showed that biomarker profile correlates with response to specific treatments. This was important because, at the time, biomarkers for those drugs had not been validated. Overall, BATTLE demonstrated the possibility and feasibility of personalizing treatment to the patient.

Right now, the phase 2 I-SPY 2 study is trying to do for breast cancer what the BATTLE study did for lung cancer. In I-SPY 2, newly diagnosed patients with locally advanced breast cancer will undergo biomarker profiling before randomization to one of up to 12 neoadjuvant therapy regimens. The study design will allow clinical investigators to graduate (move forward in development when outcome fulfills a Bayesian prediction), drop, or add drugs seamlessly throughout the course of the study without terminating the study and starting over. This could drastically reduce the time it takes the study to move from one drug to another.

Should outcome-adaptive randomization become the new “gold standard”?

Short answer: Not quite yet. But the potential advantages and disadvantages of these study designs must be recognized. Based on the attention that the BATTLE and I-SPY 2 studies have gotten, though, expect to see more outcome-adaptive randomized studies in the future. The potential for shorter clinical studies evaluating more than one intervention at a time could mean considerable cost and time savings in drug development.