Patients often ask me: "Are we winning or losing the War on Cancer?" It is a worthy question, but notoriously difficult to answer. To start with, the very definition of "winning" comes into question. How does one measure victory, or defeat?
Take, for instance, a seemingly obvious proposal for measuring progress. What if we created a catalog of all major forms of cancer — lung, breast, prostate, colon, and so forth — and measured the fraction of patients who survive five years, or one year, after being diagnosed with each form? Has that fraction changed between 1970, say, and 2010? If only 5% of patients were surviving at five years in 1970, and that number is now 20%, can we legitimately use that change as a measure of victory?
No — because using survival rate as a guide for progress is inherently sensitive to biases.
To understand these biases, imagine identical twins living in neighboring houses — call them Hope and Prudence. Now imagine that a new diagnostic test is introduced that detects early breast cancer. Hope chooses to be screened by the test. Prudence, suspicious of medicine, chooses to forego screening.
Unbeknownst to Hope and Prudence, identical forms of cancer develop in both twins at exactly the same moment, in 1990. Hope's tumor is detected by the screening test in 1990, and she undergoes surgical treatment and chemotherapy for five years. She dies in 2000. Her survival period after diagnosis is ten years.
Prudence, in contrast, is not screened with the test. She detects her tumor only when she feels a growing lump in her breast, in 1999. She, too, has treatment, with some marginal benefit, and then relapses and dies at the same moment as Hope, in 2000. Her survival time is one year.
At the joint funeral, as the mourners stream by the identical caskets, an argument breaks out between Hope's and Prudence's doctors. Hope's physicians insist that she had a ten-year survival: her tumor was detected in 1990 and she died in 2000. Prudence's doctors insist that her survival was one year: her tumor was detected in 1999 and she died in 2000. Yet both cannot be right: the twins died from the same tumor at exactly the same moment. The solution to this seeming paradox, called "lead-time bias," is immediately obvious. Using survival as an endpoint for a screening test is flawed because early detection pushes the clock of diagnosis backwards. Hope's tumor and Prudence's tumor possess identical biological behavior. But since doctors detected Hope's tumor earlier, it seems, falsely, that she lived longer and that the screening test was beneficial.
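To make the arithmetic of lead-time bias concrete, here is a minimal sketch in Python. The dates are the hypothetical ones from the story above, not real registry data; the point is simply that measured survival stretches when diagnosis moves earlier, even though the year of death, and hence mortality, does not change at all.

```python
# A minimal sketch of lead-time bias, using the hypothetical dates from the
# Hope-and-Prudence story. The tumor's biology is identical in both twins:
# onset in 1990, death in 2000. Only the year of diagnosis differs.

def survival_after_diagnosis(diagnosis_year, death_year):
    """'Survival' as a registry would record it: years from diagnosis to death."""
    return death_year - diagnosis_year

twins = {
    "Hope":     {"diagnosis": 1990, "death": 2000},  # detected by screening
    "Prudence": {"diagnosis": 1999, "death": 2000},  # detected by a palpable lump
}

for name, t in twins.items():
    years = survival_after_diagnosis(t["diagnosis"], t["death"])
    print(f"{name}: measured survival = {years} years, year of death = {t['death']}")

# Hope: measured survival = 10 years, year of death = 2000
# Prudence: measured survival = 1 years, year of death = 2000
```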
There is a simple way to avoid this bias. Rather than measuring survival rates, one can measure the overall mortality. In other words, one can calculate exactly how many people died of cancer in 1970, versus 1990, versus 2010, and so forth, and plot this as a graph.
But here too, there are methodological glitches. "Cancer-related death" is a raw number in a cancer registry, a statistic that arises from the diagnosis entered by a physician when pronouncing a patient dead. The problem with comparing that raw number over long stretches of time is that the American population (like any population) is itself gradually aging, and the rate of cancer-related mortality naturally increases as well. Old age inevitably drags cancer with it, like flotsam floating on a tide. A nation with a larger fraction of older citizens will seem more cancer-ridden than a nation with younger citizens, even if actual cancer mortality has not changed.
To compare samples over time, some means is needed to normalize two populations to the same standard — in effect, by statistically "shrinking" one into another.
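One standard way to perform this normalization is direct age standardization: compute each population's age-specific death rates, then re-weight those rates by a single fixed "standard" age distribution, so that two populations with different age structures are compared on equal terms. The sketch below uses invented age bands, weights, and rates purely for illustration; none of the numbers are real.

```python
# A sketch of direct age standardization with invented numbers.
# Each population's age-specific death rates are re-weighted by one fixed
# "standard" age distribution, removing the effect of an aging population.

# Standard population: fraction of people in each age band (illustrative).
STANDARD_WEIGHTS = {"0-39": 0.55, "40-64": 0.30, "65+": 0.15}

def age_adjusted_rate(age_specific_rates):
    """Weighted sum of age-specific rates using the standard weights."""
    return sum(STANDARD_WEIGHTS[band] * rate
               for band, rate in age_specific_rates.items())

# Hypothetical cancer deaths per 100,000 in each age band, for two years.
rates_1970 = {"0-39": 10.0, "40-64": 150.0, "65+": 900.0}
rates_2010 = {"0-39": 8.0,  "40-64": 120.0, "65+": 850.0}

print("Age-adjusted rate, 1970:", age_adjusted_rate(rates_1970))
print("Age-adjusted rate, 2010:", age_adjusted_rate(rates_2010))
```

Because both years are weighted by the same standard distribution, any difference between the two adjusted rates reflects a change in the underlying death rates, not a shift in the age structure of the population.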
The statistician John Bailar has performed exactly this form of analysis. He has normalized all populations from 1980 onwards and measured cancer-specific deaths over time. His analysis is accurate, but also sobering. Cancer-specific deaths are, in fact, trending downwards, and have been doing so for nearly a decade. But the downward trend is not a steep slope towards zero.
The figure below provides an even more sophisticated answer to this question: it divides cancer mortality by age group and tracks each group over time. Once again, the answer is obvious. In men and women in the age groups 45-49 and 55-59, there has been a distinct downward trend (from 2 per 1,000 to 1 per 1,000 in men and women aged 50-59). In contrast, for older men and women (75-79), the rates have risen, plateaued, and finally begun to drop. Much of this rise is the result of the rise in lung cancer due to increased smoking in the 1960s.
The epidemiologist Lester Breslow proposes yet another metric to measure progress. If chemotherapy cures a five-year-old child of leukemia, then it saves a full 65 years of potential life (given an overall life expectancy of about 70). In contrast, a chemotherapeutic cure in a 65-year-old man contributes only five additional years, given the same life expectancy of 70. The metric described above, age-adjusted mortality, cannot detect any difference between the two cases. A young woman cured of lymphoma, with 50 additional years of life, is judged by the same measure as an elderly woman cured of breast cancer, who might succumb to some other cause of death within the year. If, instead, "years of life saved" are used to judge progress, then the picture changes again: Breslow notes, "In 1980, cancer was responsible for 1.824 million lost years of potential life in the United States to age 65. If, however, the cancer mortality rates of 1950 had prevailed, 2.093 million years of potential life would have been lost."
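Breslow's measure reduces to a simple sum: each cancer death before a cutoff age (65, in the quotation above) contributes the years remaining to that cutoff, and deaths after the cutoff contribute nothing. The sketch below uses hypothetical ages at death for illustration only.

```python
# A sketch of "years of potential life lost" (YPLL), counted to age 65 as in
# the Breslow quotation above. The ages at death are invented for illustration.

CUTOFF = 65

def years_of_potential_life_lost(ages_at_death, cutoff=CUTOFF):
    """Each death before the cutoff contributes (cutoff - age) years."""
    return sum(max(0, cutoff - age) for age in ages_at_death)

ages = [5, 34, 58, 65, 72]  # hypothetical ages at death from cancer
print(years_of_potential_life_lost(ages))
# 60 + 31 + 7 + 0 + 0 = 98 years of potential life lost
```

On this yardstick, curing the five-year-old saves sixty years of potential life while curing the 72-year-old registers as zero, which is precisely the distinction that age-adjusted mortality cannot see.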
Whether you call this "progress" or "victory" is a personal decision. But measuring it turns out to be far more complex than we might have imagined.