
Hormonal replacement therapy in post-menopausal women and breast cancer

The following is a bit off the topic of cancer tests, but it goes along with the general theme of individualizing patient treatment.

In yet another long-term follow-up of the massive Women's Health Initiative (WHI) study, which included a randomization between placebo and hormone replacement therapy ("HRT," estrogen plus progestin), an increased incidence of breast cancer and an increase in breast cancer deaths are being reported.

What is being (erroneously) disseminated in the media is the following message: “Not only does prolonged use of hormone replacement therapy raise the risk of breast cancer, new research finds, but it also ups the risk for more severe forms of the disease and increases a woman’s chances of dying.”

This most recent WHI paper only looks at (1) breast cancer mortality and (2) all-cause mortality after a diagnosis of breast cancer. It does not report all-cause mortality in the study population as a whole! Therefore, it cannot be stated that HRT "increases a woman's chances of dying."

An earlier study published by the same authors showed a slight (non-significant) reduction in all-cause mortality with HRT and a significantly decreased incidence of colon cancer and hip fractures.

The risk of death following hip fracture has been stated to exceed the risk of death following breast cancer in post-menopausal women.

I watched and listened to the lead author (Dr. Rowan Chlebowski, a medical oncologist with the Los Angeles Biomedical Research Institute at Harbor-UCLA Medical Center) on the PBS NewsHour on October 19, 2010, and I read the JAMA editorial which accompanied the Chlebowski paper. I felt that both exhibited much greater certitude in discouraging HRT use than the data support.

Risks and benefits must be individualized. Thin post-menopausal women with physically active lifestyles, low mammographic breast density, low to negligible alcohol consumption, no family history of breast cancer, negative mammograms, low-fat diets, no tobacco use, low risk of coronary artery disease, and severe osteopenia are at high risk for high-lethality hip fractures, but at low risk for breast cancer, lung cancer, and thromboembolic disorders. Might not the risk-benefit ratio of lower-dose, transdermal HRT be favorable for such women?

I use the above only as an example. The absolute magnitude of breast cancer risk attributable to HRT remains very low, and there are other threats to health and happiness in post-menopausal women beyond breast cancer.

I understand that a continuing series of articles from this study is planned. Were I a reviewer, I'd demand that all-cause mortality data be published with each and every one of these forthcoming reports.

In an earlier study from the same authors, an increased risk of non-small cell lung cancer was cited as a possible complication of HRT. But the absolute magnitude of the risk was very small, and it was not close to significant in never smokers. As noted above, yet another prior study had shown a significant decrease in the incidence of colon cancer in women receiving HRT compared to placebo, along with a substantial reduction in hip fractures.

Yet study authors, editorialists, and especially the media are now making it appear that prescribing HRT to post-menopausal women is virtually tantamount to malpractice. I strongly disagree with this position. HRT has a risk-benefit ratio, like everything else in medicine, and this risk-benefit ratio is hugely different in different subsets of patients.

This is why therapy should be individualized, and why broad, one-size-fits-all pronouncements and messages should be discouraged.

– Larry Weisenthal

2009 ASCO Breast Cancer Symposium

I’m currently attending the 2009 American Society of Clinical Oncology (ASCO) breast cancer symposium, in San Francisco.

The Keynote Address (by Martine Piccart-Gebhart of the Jules Bordet Institute) was very relevant to Individualized Tumor Response Testing (ITRT). The speaker made the point that only 8% of new drugs entering Phase I trials ever make it to marketing, and that this percentage is even lower for cancer drugs, because "current drug testing is inefficient, with many drugs failing late in development," with "these expensive failures" owing, in large measure, to "ineffective drugs" and "poor patient selection (i.e., lack of prognostic and predictive markers for response to therapy)."

The speaker went on to note that “little progress has been made in identifying which therapeutic strategies are likely to be effective for individual patients.” The speaker concluded that “identifying markers that can predict response to a particular drug remains a great challenge.”

To my knowledge, there is absolutely nothing being presented at this meeting which reports any progress at all in drug selection through the use of molecular profiling.  Perhaps I’ve missed something.  There is another half day remaining.

When microarrays and high-throughput RT-PCR emerged some years back, I'd have predicted quite a bit of progress by now. Instead, there were a couple of early clinical studies and virtually no follow-up since. Certainly, the sum total of the cell culture-based literature for drug selection is vastly superior to the sum total of the molecular profiling-based literature. One would think that there would be at least equal interest in cell culture-based approaches. Sadly, no.

I presented what I think was an important study in a poster session.  It attracted virtually no interest. I’d like to briefly go through the most important points here.  I’d be very interested in feedback, comments, and/or criticism, preferably on this blog, but, if preferred, as a private email: mail@weisenthal.org

The problem I addressed is that of so-called "triple negative" breast cancer (TNBC): tumors which are Estrogen Receptor negative (ER-), Progesterone Receptor negative (PR-), and Her2 negative (H2-).

When breast cancer presents as locally advanced disease, it is customarily treated with neoadjuvant (preoperative) chemotherapy, followed by definitive surgery. At the time of surgery, the specimen is assessed to determine if all visible tumor has been destroyed by chemotherapy.  When this happens, it is said to be a “pathological complete response” (pCR). The Kaplan-Meier curves below show survival in Non-TNBC and TNBC, as a function of whether or not preoperative chemotherapy produced a pCR.

In the figure below, the yellow line is the survival for non-TNBC patients who achieved a pCR; the blue dashed line is the survival of TNBC patients who achieved a pCR; the black dashed line is for non-TNBC who did not achieve a pCR; and the red dashed line is for TNBC patients who did not achieve a pCR.

[Figure: Kaplan-Meier survival curves for non-TNBC and TNBC patients, by pCR status]

These data show that obtaining a pCR is everything. Get a pCR, and survival is excellent. Don't get a pCR, and survival, especially for TNBC patients, is very poor.

So the issue is this: what can we do to improve the pCR rate in TNBC?

I began by going through our database of breast cancer cell culture assays (using cell death endpoints) and trying to see if there were any drugs which appeared to be uniquely active in TNBC vs non-TNBC.  Here’s a representation of what I found:

[Figure: percent control cell survival for representative drugs in TNBC vs non-TNBC]

The horizontal bars show percent control cell survival for drug-exposed cultures. The shorter the bar, the more active the drug. Error bars represent 95% confidence limits. These are representative data for a few of the more important drugs.

Essentially, there are no major differences between the activity of most drugs in TNBC vs non-TNBC, with one glaring exception: cisplatin, which was dramatically more active in TNBC than in non-TNBC.

I went on to present a lot of data further dissecting which specific markers were most associated with TNBC sensitivity to cisplatin.  These data will shortly appear on the ASCO website, when they post slide presentations from the 2009 Breast Cancer Symposium. But the following were the most important “take home” messages:

Major markers for platinum sensitivity in breast cancer were:

1. Estrogen Receptor negativity

2. Bloom-Richardson (Nottingham) score of 9/9 (i.e., very poorly differentiated tumors)

Minor markers for platinum sensitivity in breast cancer were:

1. Her2 negativity

2. Progesterone receptor negativity

The following puts the findings into a useful perspective:

[Figure: cell culture cisplatin sensitivity of TNBC compared with renal cell and ovarian cancers]

These data compare TNBC to other types of tumors.  It’s known that renal cell carcinomas are very resistant to cisplatin (less than 10% response rate), and that is reflected by the cell culture (cell death endpoint) data. It’s known that previously-untreated, poorly differentiated ovarian cancers tend to be very sensitive to cisplatin (70% response rate), and that’s also reflected by the cell culture data. When ovarian cancer patients relapse soon (0 to 6 months) after discontinuation of chemotherapy,  they have only a 25% response rate to re-treatment with platinum.  When ovarian cancer patients relapse greater than 6 months following discontinuation of chemotherapy, they have a 50% response rate to re-treatment with platinum.  These clinical findings are also nicely recapitulated by the cell culture assay data.

Now, breast cancers which are estrogen receptor positive and/or less than maximally poorly differentiated (Bloom-Richardson score of 4 to 8) tend to be even more resistant to cisplatin than are previously-treated ovarian cancers which relapse soon (0 to 6 months) after discontinuation of chemotherapy. In contrast, triple negative breast cancers (especially when also Bloom-Richardson 9/9) tend to be as sensitive to cisplatin as, or more sensitive than, previously-untreated, poorly-differentiated ovarian cancers.

I think that these data clearly show the utility of cell culture assays in “targeting” chemotherapy to patient sub-groups who are most likely to benefit from treatment with given individual drugs.  It’s hard to see how “molecular” profiling tests could have produced similar insights.

The following were my summary conclusions, comparing cell culture profiling versus “molecular” profiling:

[Figure: summary slide comparing cell culture profiling versus "molecular" profiling]

So the question is this: Why is there absolutely no interest in cell culture tests?

– Larry Weisenthal/Huntington Beach, CA

Chemosensitivity versus Chemoresistance

A just-published study in ovarian cancer raises the question of the distinction, if any, between “chemosensitivity” and “chemoresistance” (or “drug resistance”).  It has been my position, for a long time, that these distinctions are largely semantic and not of useful clinical relevance.  Resistance is the relative absence of sensitivity and sensitivity is the relative absence of resistance.

A number of authors have tried to claim superiority for one type of assay over another, when, in reality, both assays are attempting to make the same distinction — which drugs are the more promising and which are the less promising for a given clinical application?  One good example of this attempt to claim the semantic high ground is the very complicated argument made by John Fruehauf and Dave Alberts in a letter published in the Journal of Clinical Oncology. (In Vitro Drug Resistance Versus Chemosensitivity: Two Sides of Different Coins, DOI: 10.1200/JCO.2005.05.281).

The letter by Fruehauf and Alberts is ironic in the context of the just-published paper by Matsuo, K. et al.  (Low drug resistance to both platinum and taxane chemotherapy on an in-vitro drug resistance assay predicts improved survival in patients with advanced epithelial ovarian, fallopian, and peritoneal cancer.  Int. J Cancer, 2009  DOI: 10.1002/ijc.24654).  Fruehauf and Alberts extol the virtues of the drug resistance side of the coin, specifically referring to the soft agarose, tritiated thymidine assay of Kern, with statistical definitions of drug resistance originated by me (Kern and Weisenthal, JNCI, 1990).  Yet both the Matsuo study (above) and another recent study (just presented at the June, 2009 ASCO meeting) show no useful correlation between the “extreme drug resistance” endpoint and any clinical outcome in ovarian cancer, while the Matsuo study shows highly (and independently) significant correlations between the “low drug resistance” (which could just as easily be called “chemosensitive”) endpoint with both progression-free and overall survival.

Correlations between “Sensitive” vs “Resistant” cell death assay results and clinical response to chemotherapy in published studies

[Table: response rates for patients treated with assay-"sensitive" versus assay-"resistant" drugs, by study]

Several things are evident:

1. In all individual studies, patients treated with drugs classified as “sensitive” had a higher response rate than that for all the patients in each individual study.

2. In all individual studies, patients treated with drugs classified as “resistant” had a lower response rate than that for all the patients in each individual study.

3. In all individual studies, patients treated with drugs classified as "sensitive" had a MUCH higher response rate than patients treated with drugs classified as "resistant."

4. Averaging across all of the studies, patients treated with drugs classified as "sensitive" had an 8-fold higher response rate than patients treated with drugs classified as "resistant" (see the sketch below).
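
To make the arithmetic behind point 4 concrete, here is a minimal sketch of the pooled calculation. The study counts below are invented placeholders (chosen so that the fold difference comes out near 8), not the actual published data; only the method, pooling responders and treated patients across studies and then taking the ratio of the pooled response rates, reflects the comparison described above.

```python
# A minimal sketch of the pooled "sensitive" vs "resistant" comparison.
# NOTE: these study counts are invented placeholders (chosen so the fold
# difference comes out near 8); they are NOT the actual published data.

studies = [
    # (responders, treated) with assay-"sensitive" drugs, followed by
    # (responders, treated) with assay-"resistant" drugs
    (30, 50, 4, 50),
    (18, 35, 2, 35),
    (24, 40, 3, 40),
]

# Pool counts across studies, then compute the pooled response rates.
resp_sens = sum(s[0] for s in studies)
n_sens = sum(s[1] for s in studies)
resp_res = sum(s[2] for s in studies)
n_res = sum(s[3] for s in studies)

rate_sens = resp_sens / n_sens  # pooled response rate, "sensitive" drugs
rate_res = resp_res / n_res     # pooled response rate, "resistant" drugs

print(f'Assay-"sensitive" drugs: {rate_sens:.0%} response rate')
print(f'Assay-"resistant" drugs: {rate_res:.0%} response rate')
print(f"Fold difference: {rate_sens / rate_res:.1f}x")
```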

I think that all of the above data point to the fact that differences between "chemosensitivity" and "chemoresistance" are entirely semantic, and that the proper role for these assays is to direct attention towards agents in the "sensitive" (or "low resistance") group and away from agents in the "resistant" (or "low sensitivity") group.

– Larry Weisenthal/Huntington Beach, CA USA

Medscape.com debate on clinical validity of individualized tumor response testing with cell culture assays

Last week, on the Medscape Hematology/Oncology website, I engaged in a debate with two different American medical oncologists over the current status of individualized tumor response testing utilizing cell culture assays.

http://boards.medscape.com/forums/.29f0fd39?@139.Y3YYauEtc0z@ (May require free registration to view).

The debate was stimulated by the results of recent clinical trials showing inferior outcomes for patients treated with combinations of bevacizumab and anti-EGFR drugs (cetuximab and panitumumab). In the debate (reproduced below), many far-ranging issues related to clinical validation of cancer tests are discussed, including a comparison of ITRT cell culture assays with tests such as the estrogen receptor.

#1 of 18, Added By: paulroda, (Hematologist/Oncologist),  9:54PM Mar 16, 2009

This isn’t new — in thirty years in oncology I’ve seen many studies in which a concept that sounds good “tanks.”

Remember m-BACOD, M-BACOD, and ProMACE-CytaBOM? All of these were supposed to harness cell kinetics and be superior to CHOP for the treatment of DLCL. When a large randomized study was done, it was shown that nothing was superior.

So any new concept, no matter how promising, needs to be confirmed in a large trial

#2 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  1:08PM Mar 25, 2009

>>So any new concept, no matter how promising, needs to be confirmed in a large trial<<

In the late 1980s, the NCI, aided and abetted by herd mentality study sections, effectively closed down research into fresh human tumor cell culture methods for testing and optimizing chemotherapy. The proof of this is the complete lack of NIH-funded studies relating to this topic appearing in PubMed for the last 15 years. Instead, we have put all of our clinical trials resources into trying to identify the best treatment for the average patient — in a disease notorious for heterogeneity. Drug screening (including therapy screening) belongs in the laboratory, not in the clinic. All of a sudden, there is a belated recognition that “personalized” therapy is a worthy goal — yet 100% of the effort is going into static profiling of molecular markers, as opposed to dynamic, functional profiling of tumor response ex vivo. It’s crazy/nuts, and, down the road, tomorrow’s translational researchers will shake their heads and say “what on earth were they thinking?” You want to know how to make progress in combined “targeted” therapy? Here’s an example: http://tinyurl.com/weisenthal-breast-lapatinib But there’s absolutely no support for work such as this. And now there are efforts under way to make it impossible to do this work in grass roots, private sector laboratories, for example:

http://www.cancertest.org/?cat=11

As I said, it’s crazy/nuts.

– Larry Weisenthal/Huntington Beach, CA

#3 of 18, Added By: 37dkaufman, Oncology, Medical, 2:18PM Mar 25, 2009

Reply to: #2 by LarryMWeisenthal

>>So any new concept, no matter how promising, needs to be confirmed in a large trial<< In the late 1980s, the…

It may be crazy/nuts, but there has never been an iota of proof that ex vivo drug sensitivity testing is an effective method for selecting an individual’s therapy in any cancer. The only sources of enthusiasm for this concept are the companies continuing to test for profit. Continuing NCI funding for this effort after a huge expenditure, without proof of concept or new and better methods of testing, would be a waste of taxpayers’ money.

#4 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  3:20PM Mar 25, 2009

Now, 37dkaufman is misinformed.

I scarcely know where to begin. The traditional (and only) criterion used to evaluate laboratory (or similar predictive/prognostic) tests has been the predictive accuracy (sensitivity/specificity) of the test in question. Yet the 2004 ASCO review specifically EXCLUDED from consideration all studies reporting the predictive accuracy of the tests! In the words of the ASCO review authors: “We excluded reports that only reported correlations between assay results and clinical outcomes” (where “outcomes” are both response to treatment and patient survival). Instead, the ASCO authors included for consideration only old, previously-reviewed studies comparing outcomes of patients who had treatment based on assay results versus patients with empirically chosen therapy. On superficial consideration, the criterion of laboratory assay “efficacy” (as opposed to laboratory assay “accuracy”) sounds reasonable, but it is both unprecedented and unfair.

To begin with, none of the available laboratory tests used in the selection of treatments for cancer patients have ever been tested for “efficacy,” and this includes estrogen receptor, progesterone receptor, Her2/neu, immunohistochemical staining for tumor classification, bacterial culture and sensitivity testing, CT, MRI, and/or PET scans to measure tumor “response” to treatment — as opposed to basing assessment of clinical response on simple/cheap history, physical, routine labs, routine radiographs, etc. All of these tests are used to guide treatment and drug selection no less than are CSRAs (chemotherapy sensitivity and resistance assays), yet the only data supporting any of them relate to test accuracy, and there is a total lack of information regarding test efficacy. Likewise, no one is seriously proposing that any of the “molecular” tests now available (e.g. OncotypeDX, KRAS mutation) should have to be proven “efficacious” (as opposed to “merely” accurate) before they are used in clinical decisions regarding treatment selection.

Additionally, the ASCO review may imply that there have been good studies done to examine the issue of “efficacy,” when the true situation is that the CSRA technologies are all public domain and non-proprietary; no private sector companies or individuals should reasonably be expected to pay for such unprecedented studies, and none of the granting agencies or cooperative groups have been willing to support such studies either. So it is hereby stipulated that there is no literature establishing clinical “efficacy” of CSRAs, because the costs of such clinical trials are prohibitive, support is non-existent, and no other analogous tests have been or will likely ever be subjected to such an unreasonably high bar for clinical use.

It should be noted that, while the FDA doesn’t regulate clinical laboratories performing these tests, it does regulate test kits. In the 1990s, the FDA formally approved a Baxter test kit for CSRA testing, based entirely upon demonstration of acceptable test accuracy in a single, small published study, and did not require proof of “efficacy,” as, again, this remains an unprecedented criterion for evaluating any laboratory test. 

In point of fact, CSRAs have been well proven to have predictive accuracy which compares very favorably with that of comparable tests, such as estrogen receptor, progesterone receptor, Her2/neu and the newer “molecular” tests. CSRAs predict for response and patient survival in a wide spectrum of neoplasms and with a wide spectrum of drugs. Close to 100 peer-reviewed studies, collectively including about 4,000 patients, have been published; every one of these studies shows above-average probabilities of clinical benefit for treatment with assay-“positive” drugs and below-average probabilities of clinical benefit for treatment with assay-“negative” drugs, where clinical benefit includes both response and, in more than 25 studies, patient survival. There is much more to say, but I’m at the 4,000-character limit.


#5 of 18, Added By: paulroda, Oncology, Hematology/Oncology, 8:57PM Mar 25, 2009

All the more reason we, as physicians, have to critically evaluate new concepts in therapy as well as diagnosis.

I personally never bought into cell sensitivity assays. The concept sounded good, but it never panned out in clinical practice

#6 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  6:50AM Mar 26, 2009

It is depressing how uninformed people are on this issue.

Let’s take this statement, by Dr. Kaufman:

>>Continuing NCI funding for this effort after a huge expenditure, without proof of concept or new and better methods of testing, would be a waste of taxpayers’ money.<<

Firstly, what is “proof of concept?” I would say that nearly 100 peer-reviewed studies of correlations between test results and clinical outcomes, showing that, on average, patients treated with test-“sensitive” drugs are 8-fold more likely to enjoy a response than patients treated with test-“resistant” drugs, constitute a pretty good proof of concept. Note that all these studies were EXCLUDED from the 2004 ASCO review, and these types of data (correlations between test results and clinical outcomes) constitute the ONLY type of data existing to support all of the other tests we use as an aid in drug selection!

The reasons for the abandonment of research into cell culture testing are fairly simple to understand, in retrospect. There had been enthusiasm for developing cell culture testing methods in the late 1970s and early 1980s. These methodologies were all based on measuring inhibition of cancer cell growth as the test endpoint. Cancer was considered to be a disease of disordered cell growth and cancer drugs were thought to work by inhibiting cell growth. Assays based on measuring cell growth were plagued with problems and artifacts, which led to great disillusionment. The so-called “huge expenditure” went entirely to funding studies of assays with proliferation endpoints.

My own contribution to this field was the development of assays based on the alternative concept of cancer cell death (as opposed to inhibition of cancer cell growth). However, timing is everything, and it was not appreciated, at the time, that cancer is very much a disease of disordered cell death (as opposed to disordered cell growth) and that most of the anticancer drugs work by promoting cell death, as opposed to inhibiting cell growth. Faced with the unavailability of funding, American investigators were forced to leave the field of cell culture testing or were forced into the private sector.

Now you may be able to understand my charge that abandonment of support of cell culture tests was (and is) crazy/nuts. Study sections decided that one particular methodology fell short of expectations, and they made the command decision that all other methodologies, based on newly emergent concepts (e.g. apoptosis) had no possibility of meeting expectations. The certitude was breathtaking.

These technologies are largely non-proprietary and in the public domain. They are labor intensive, refractory to automation, and there is no serious money to be made by anyone in providing these tests as a service to patients. There is a compelling body of published data to indicate that these tests are usefully accurate in distinguishing between “good” and “bad” drugs, on an individual patient basis.

It is now possible to test virtually all of the major classes of anticancer drugs (with rare exceptions, such as pemetrexed/Alimta). We can test traditional cytotoxic drugs, biologic response modifiers, such as IL-2, the newer “targeted” kinase inhibitors (e.g. erlotinib, sorafenib), and we can test antivascular drugs, such as bevacizumab (Avastin) and small molecule antivascular drugs. We can use cell culture methodologies to custom tailor complex treatment regimens for individual patients, based on combinations for the above classes of drugs, where the tests indicate that such combination therapy would be most advantageous.

We can successfully report out results on 98% of the cytologically positive specimens which arrive through the door. If one makes a full-time effort at something for 30 years, it is indeed possible to make progress. But there has been no support for this work for two decades, because uninformed reviewers had the certitude to predict in advance that it couldn’t work.

#7 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  7:14AM Mar 26, 2009

Let’s consider a single disease (metastatic breast cancer) and a single test (the estrogen receptor):

The current treatment paradigm in cancer, which dates back to the 1950s, is to perform gargantuan prospective, randomized trials to determine the “best” treatment for the average patient, in a disease notorious for its heterogeneity, where what is “best” for one patient is often “worst” for a different patient, with ostensibly the same disease.

If prospective, randomized trials to identify the “best” treatment for the average patient were an effective strategy for making progress, then we should have seen marked improvement in the treatment of metastatic breast cancer, where probably in excess of 100,000 patients have been entered into these trials. Yet 35 years ago the median survival was 2 years, and it is still 2 years today. And a single “best” drug regimen to give to the average patient has not been identified, as evidenced by the National Cancer Institute’s description of “state-of-the-art” therapy, which lists 26 different drugs and drug combinations as being equally efficacious, with no conclusive data to indicate whether single agent or drug combination therapy is superior.

http://www.cancer.gov/cancertopics/pdq/treatment/breast/HealthProfessional/page8

So how are drugs selected, in the real world?

Two different studies have shown that drug selection has been correlated with the amount of profit made by the prescribing oncologist.

http://www.communityoncology.net/journal/articles/0307411.pdf

In summary, both in terms of cancer care affordability and cancer care effectiveness, what is urgently needed is some method of matching individual drugs and drug combinations to individual patients.

There is currently a vast effort to develop laboratory tests to match cancer treatment to cancer patient. However, all of the technologies under study in major universities, the pharmaceutical industry, and cancer centers are based on so-called “molecular” methodologies, which basically means studying the building blocks of the cancer cell (DNA, RNA, proteins), as opposed to taking a portion of the living (“viable”) tumor, putting living/viable cancer cells into a laboratory cell culture plate, and directly adding anticancer drugs to these cells to determine which drug(s) work best against an individual patient’s cancer. There are advantages and disadvantages to both approaches (“molecular” versus “cell culture”), but what is important is that all of the effort (and all of the funding) has, for the past 20 years, been going to the development and testing of “molecular” approaches, with nothing going to support the development and testing of the “cell culture” approaches. It’s crazy/nuts.

I’d like to consider the estrogen receptor test in the comment to follow.

– Larry Weisenthal/Huntington Beach, CA


#8 of 18, Added By: LarryMWeisenthal, Oncology, Medical, 7:20AM Mar 26, 2009

Let’s examine one very relevant example in detail.

The estrogen receptor (ER) test is broadly accepted to be the number one prognostic test in all of clinical oncology, from the standpoint of drug selection. The test is used to make gravely important treatment decisions, generally between cytotoxic chemotherapy on one hand or hormonal therapy on the other hand or the combination of chemotherapy and hormonal therapy. In some situations, this test is used to determine if patients are to receive any drug treatment at all. In contrast, our tests are simply used to select between treatment regimens with otherwise equal efficacy in patient populations — situations in which the choice could be made by a coin toss or, more commonly, on the basis of remuneration to the treating physician, with equivalent results on a population basis, though certainly not at the level of the individual patient. So, if anything, the “bar” should be higher for the ER test than for our tests. So what data exist to “validate” the most important predictive laboratory test in clinical oncology?

The history of the ER test is that it was originally developed as a complicated biochemical test, generically called the “radioligand binding assay” (RLB assay). The RLB assay was “validated” in the 1970s and very early 1980s by means of retrospective correlations with clinical outcomes for patients treated with hormonal therapy. Overall, in retrospective correlations with hundreds (not thousands) of patients, the RLB assay was found to be about 60% accurate in predicting for treatment activity and 90% accurate in predicting for treatment non-activity. In other words, an RLB assay “positive” tumor had a 60% chance of responding to hormonal treatment. An RLB “negative” tumor had a 10% chance of responding to hormonal treatment. There were never any Phase 3 trials to show that either performing or not performing the test made a difference in treatment outcomes.
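
An aside on terminology: the two figures just quoted are response rates conditional on the test result, i.e., what would today be called the positive predictive value for response (60%) and the response rate among test-negatives (10%). Here is a minimal sketch, using an invented 2x2 table chosen to reproduce those two figures (the counts are hypothetical, not the original RLB data), showing how the same table can be re-expressed as sensitivity and specificity:

```python
# Hypothetical 2x2 table chosen to reproduce the RLB figures quoted above
# (60% response rate in assay-positive, 10% in assay-negative patients).
# The counts are invented for illustration; they are not the published data.

responders_pos, nonresponders_pos = 60, 40   # RLB-positive tumors
responders_neg, nonresponders_neg = 10, 90   # RLB-negative tumors

# Response rates conditional on the test result (the "accuracy" figures):
ppv = responders_pos / (responders_pos + nonresponders_pos)            # 0.60
resp_rate_neg = responders_neg / (responders_neg + nonresponders_neg)  # 0.10

# The same table re-expressed as sensitivity/specificity for response:
sensitivity = responders_pos / (responders_pos + responders_neg)            # 60/70
specificity = nonresponders_neg / (nonresponders_neg + nonresponders_pos)   # 90/130

print(f"Response rate, test-positive: {ppv:.0%}")
print(f"Response rate, test-negative: {resp_rate_neg:.0%}")
print(f"Sensitivity for response: {sensitivity:.0%}, specificity: {specificity:.0%}")
```

The point of the exercise is that every “accuracy” claim for a predictive test reduces to a 2x2 table of this kind, which is exactly the type of data the ASCO review excluded.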

The RLB test was complicated and could only be performed by highly specialized laboratories. In the 1980s, the immunohistochemical (IHC) test was developed as an alternative and quickly replaced the RLB test. The IHC test was not independently validated as a predictor of response to hormonal therapy, but was merely compared to the RLB “gold standard” test in the highly specialized laboratories. Subsequently, the IHC test was “validated” in studies in which archival specimens were batch processed in the same time frame by a single team of laboratory workers. These are not real world conditions, in which specimens are accessioned, processed, stained, and read by different people, at different times, using different reagents. But the IHC test quickly moved out into hundreds (possibly thousands) of community hospital pathology laboratories. Various studies have shown that there is often a broad variation of results between different laboratories, in formal proficiency testing studies. And yet hundreds of thousands of cancer patients have had life and death treatment decisions based on these tests (the IHC test for Her2/neu is an even more egregious example, and the IHC test for EGFR is more egregious still, but I’ll confine the present discussion to the “best” predictive treatment selection test in oncology, namely the IHC ER assay).

Now, we finally have a published study on the ability of the IHC ER assay to predict for clinical response to hormonal therapy (Yamashita, et al. Breast Cancer 13:74-83, 2006). A total of 75 patients were studied. 20% of patients with a negative IHC ER test responded to treatment. 56% of patients with a positive IHC ER test responded to treatment. And these were data from a laboratory which certainly had above-average expertise in performing the test.

Now, can you begin to see the abject bankruptcy of the position of the ASCO tech review?

– Larry Weisenthal/Huntington Beach, CA

#9 of 18, Added By: paulroda, Oncology, Hematology/Oncology,  9:03PM Mar 26, 2009

Reply to: #7 by LarryMWeisenthal

>>Let’s consider a single disease (metastatic breast cancer) and a single test (the estrogen receptor): The current treatment paradigm in…<<

It’s always nice to see a nice name on the boards. But I have to disagree regarding the utility, or lack of same, of the estrogen receptor.

The vast majority of my breast cancer patients are older, and thus ER/PR positive. Even when presenting with mets, most live longer than two years. I would have to sit down and calculate it, but I suspect the median survival of ER-positive patients in our group practice is approaching five years.

#10 of 18, Added By: paulroda, Oncology, Hematology/Oncology,  9:04PM Mar 26, 2009

Agree, however, that the median survival for bad actors isn’t nearly that good. So if the median is 2 years, it’s really a bimodal distribution, with ER positivity serving as the sorting point.

#11 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  12:18PM Mar 27, 2009

Dr. Roda:

Reference for survival with chemotherapy treatment of metastatic breast cancer:

J Clin Oncol 26:1987-1992 (2008)

http://jco.ascopubs.org/cgi/content/abstract/26/12/1987 (see table 2)

Also, you misunderstand. I think that the ER test is usefully accurate for certain things. However, in terms of predicting response to hormonal therapy of metastatic breast cancer, the best available evidence (Yamashita, et al. Breast Cancer 13:74-83, 2006) indicates that it has a 56% response rate among test-positive patients and a 20% response rate among test-negative patients. So it clearly identifies a population of patients who should have a relatively good prognosis when treated with modern hormonal therapy. But what about patients with a “negative” ER test? Should they be denied hormonal therapy? Well, if hormonal therapy has a 20% chance of working, and if, when given by excellent oncology practices such as yours, it can produce 5-year survivals, then what is the justification for ever “denying” hormonal therapy to any woman with metastatic breast cancer, given that chemotherapy appears to be a zero-sum game? (We’ve increased response rates over the last 30 years, but we haven’t improved overall survival. The inescapable conclusion is that, for every patient whose life is prolonged by chemotherapy, there’s another patient whose life is shortened.)

My larger point was simply to compare and contrast the double standard when it comes to the appropriateness for utilizing the ER test on one hand and “functional profiling” (with cell culture methods) tests on the other hand.

The concept that the only relevant information regarding the “validity” of cell culture tests should be the results of prospective, randomized trials to prove “efficacy” (as opposed to accuracy, which is the standard applied to all other medical tests, including ER) has been disastrous with regard to the goal of personalized medicine in cancer treatment.

I’ll make some comments about prospective randomized trials which are relevant to Dr. Markman’s topic of disappointing results of combined “targeted” therapy in the next comment (character limit).

– Larry Weisenthal/Huntington Beach, CA

#12 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  12:30PM Mar 27, 2009

About prospective, randomized trials:

We have been unsuccessful in attempting to convince cooperative oncology groups to partner in performing clinical trials to prove that these methodologies can improve the overall success of cancer treatment and to reduce the horrendous personal and system costs associated with ineffective treatment. As there is no serious money to be made in non-proprietary laboratory testing, there is negligible interest on the part of investors to sponsor such trials.

What is urgently needed, in order to motivate others, with superior resources and superior talent, to jump start the development and application of fresh tumor cell culture technologies, is an open, transparent clinical trial.

The trial design I have in mind would be the following:

Select 6 different types of advanced cancer. I would recommend: (1) platinum-resistant ovarian cancer, (2) unresectable pancreatic cancer, (3) stage 4 adenocarcinoma/alveolar cell carcinoma of the lung, (4) relapsed acute non-lymphocytic leukemia, (5) stage 4 colorectal cancer, and (6) stage 4 breast cancer.

In each disease, randomize patients between receiving “physician’s choice” therapy and assay-directed therapy, where the therapy to be administered may consist of any FDA-approved drug or drugs. This could include traditional cytotoxic drugs, biologic response modifiers, “targeted” kinase inhibitors, antivascular drugs, and resistance-modifying drugs, such as high dose tamoxifen, DMSO (for the antivascular drugs), and celecoxib.

The endpoint would not be progression-free survival but overall survival, as patients randomized to each arm could receive 2nd and 3rd line “physician’s choice” or “assay-directed” therapy, respectively. Other important endpoints would be toxicities, quality of life, and overall costs of treatment. Assuming, for example, that assay-directed therapy proved to be superior, one could then calculate cost effectiveness, based on cost per year of useful life saved.

All results of the study would be transparent (protecting patient privacy) and in the public domain. Note, also, that patients in the “physician’s choice” arm of the study would be eligible to have their tumors sent for any ancillary (e.g. “molecular”) tests desired by the patients’ physicians; they would only be enjoined from receiving cell culture-based tests.

I have been doing this, full-time, for 30 years. I know that it works. My referring oncologists know that this works. Our patients know that this works. These methodologies have the potential to immediately improve the results of cancer treatment, immediately improve the process of drug development and clinical trials, and contribute to containing the problem of exploding costs associated with ineffective drug treatment. But no one else will believe it, without a rigorous clinical trial, which no one has yet been willing to support, despite the enormous human and financial upside, in the event of a successful outcome.

I estimate the overall cost of such clinical trials to be in the neighborhood of $10,000,000 per disease studied. The greatest share of this would go for the payment for the cancer drugs to be used in the treatment of the patients, as in many cases, private insurance or Medicare will be unwilling to pay for the drugs determined to be of greatest value to the patient, on the basis of the testing procedures, even though these agencies stand to realize substantial overall savings from the avoidance of costly, ineffective treatments.

The issue is this: where will the money come from? It’s easy for Genentech to demand that clinical laboratories sponsor the same sorts of clinical trials as are performed for new drug approval. Their proprietary cancer drugs cost a few dollars per dose to produce and sell for $5,000 to $10,000 per patient per month and produce billions of dollars per year in profits.

But public domain lab tests?

#13 of 18, Added By: paulroda, Oncology, Hematology/Oncology,  8:56PM Mar 27, 2009

Dr. Weisenthal has placed great weight on a study showing a 56% positive predictive value and a 20% negative predictive value for an IHC test for the ER

First question: what percent of patients with a negative ER test will respond to hormonal therapy? To the best of my knowledge, that is now less than 1 or 2% (i.e., false negatives)

Similarly, my personal experience suggests a much higher response rate to hormonal therapy in women who are ER positive

I’m wondering if my perception is wrong, or…..

#14 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  4:48PM Mar 28, 2009

Dr. Roda, I believe that your perception is wrong.

I want to point something out. You’ve expressed two perceptions here. The first being that cell culture assays aren’t any good. The second being that the percentage of patients with negative estrogen receptor assay who respond to hormonal therapy will be only 1% or 2%.

I dare say that most oncologists probably feel that the efficacy (as opposed to merely “accuracy”) of the ER assay has been “proven” in prospective, randomized clinical trials.

Well, show me that data. Or, to put it another way, many a beautiful theory has been ruined by an ugly fact.

Here’s another study: Elledge RM et al. Estrogen receptor and progesterone receptor by ligand binding assay compared with ER, PR and pS2 by immunohistochemistry in predicting response to tamoxifen in metastatic breast cancer: a Southwest Oncology Group study. Int J Cancer 89:111-117, 2000.

Like all studies correlating IHC ER and therapy response, this was a totally non-real world, retrospective study in which archival specimens were all batch processed in the same time frame by the same (“crack”) team of researchers (as opposed to the real world studies of cell culture assays, wherein specimens are accessioned and tested in real time, over a period of years, in real world conditions).

Despite the above technical advantages which would maximize reproducibility (and therefore accuracy), here were the SWOG study correlations:

Response rates to tamoxifen as a function of ER by IHC (as tested in the expert Baylor College of Medicine IHC lab).

ER negative: 5 of 20 patients responded (25% response rate to tamoxifen in ER negative patients)

ER intermediate: 25/54 responded (46% response rate)

ER high: 86/131 responded (66% response rate)

Why would any oncologist deny hormonal therapy to a patient with metastatic breast cancer, based solely on ER/PR status by IHC assay?

These data and the more recent Yamashita data represent, to my knowledge, the most definitive studies in the literature regarding the ability of the IHC ER assay to predict response to hormonal therapy of metastatic breast cancer. I just now did a PubMed search, entering “estrogen [titl] AND receptor* [titl] AND metast* [titl]” in the search box. I also looked for studies which cited these above two studies and which PubMed felt were “related” to these studies, and nothing relevant came up.
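
For readers who want to rerun that search programmatically, here is a minimal sketch using NCBI’s public E-utilities. “[Title]” is the canonical PubMed field tag corresponding to the “[titl]” shorthand above, and the hit count will of course reflect the literature as of whenever the query is run:

```python
# Minimal sketch: reproduce the PubMed title search described above using
# NCBI's public E-utilities (esearch). "[Title]" is the canonical PubMed
# field tag corresponding to the "[titl]" shorthand in the text.
import json
import urllib.parse
import urllib.request

term = "estrogen[Title] AND receptor*[Title] AND metast*[Title]"
url = (
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?"
    + urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
)

with urllib.request.urlopen(url) as resp:
    result = json.load(resp)["esearchresult"]

print("Hits:", result["count"])
print("First PMIDs:", result["idlist"][:10])
```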

There are HUGELY more data which exist to “validate” the cell culture tests as predictors for response of metastatic cancer to chemotherapy than exist to “validate” the IHC ER test as a predictor of response in metastatic breast cancer, yet all these (cell culture) papers were, once again, EXCLUDED from evaluation for purposes of the 2004 ASCO tech review!

To make this again relevant to the subject of this “two targeted therapies” discussion (by Dr. Maurie Markman), I think that Dr. Markman deserves much of the historical blame for popularizing the notion that the only relevant criterion for evaluating cell culture tests should be proof of “efficacy” (as opposed to accuracy) in prospective, randomized clinical trials. This wrongheaded and intellectually challenged way of thinking has had a disastrous effect on the goal of matching the best treatment to each patient in a disease notorious for its heterogeneity.

– Larry Weisenthal/Huntington Beach, CA

#15 of 18, Added By: 37dkaufman, Oncology, Medical,  6:00PM Mar 28, 2009

We are indeed moving rapidly to an era of cancer treatment defined by individualized or personalized therapy, as already evidenced by k-ras and EGFR mutation assays and Oncotype DX. In the near future we expect genomic and proteomic assays to guide therapy. But adoption of these assays for the prospective determination of “best” therapy requires proof of concept. Dr. Weisenthal does not accept “efficacy” as a valid goal for an assay (as opposed to “accuracy”?). The only valid proof of validity is that the test improves the outcome for the patient. Where are the data demonstrating improved survival among patients whose therapy was guided by sensitivity in the in vitro culture assays, in well designed prospective trials? Or PFS, or even RR? Lacking that, we are still discussing anecdotes.

#16 of 18, Added By: paulroda, Oncology, Hematology/Oncology,  10:34PM Mar 28, 2009

Well, I just read the article quoted —

Only five ER negative tumors responded to tamoxifen. Four were PR positive, and it is recognized that ER-/PR+ patients will sometimes respond

The authors note that, because of technical considerations, the last case was probably a false negative.

This reminds me of when we first started using tamoxifen, and a 10% response rate was quoted in ER – patients. This has changed with better assays.

In short, I’m not convinced that 26% of ER negative patients by IHC will respond to tamoxifen, though I will concede that a small number of ER negative, PR negative patients might be offered tamoxifen if failing chemo

#17 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  1:42PM Mar 29, 2009

>>Dr. Weisenthal does not accept “efficacy” as a valid goal for an assay (as opposed to “accuracy”?) The only valid proof of validity is that the test improves the outcome for the patient. Where are the data demonstrating improved survival among patients whose therapy was guided by sensitivity in the in vitro culture assays, in well designed prospective trials? Or PFS, or even RR? Lacking that, we are still discussing anecdotes.<<

If the only valid proof of validity is that the test improves the outcome of the patient, then where is there “proof of validity” for estrogen receptor, progesterone receptor, Her2/neu, immunohistochemical staining for tumor classification, KRAS, Oncotype DX, bacterial culture and sensitivity testing, CT, MRI, and/or PET scans to measure tumor “response” to treatment — as opposed to basing assessment of clinical response on simple/cheap history, physical, routine labs, routine radiographs, etc? All of these tests are used to guide treatment and drug selection no less than are CSRA, yet the only data supporting any of them relate to test accuracy, and there is a total lack of information regarding test efficacy.

The comment about “only discussing anecdotes” is a straw man. I wasn’t discussing anecdotes; I was discussing the relevant criteria for evaluating laboratory and other diagnostic tests. The ONLY criteria EVER used have been those relating to test accuracy. This goes for every single test we use today in the management of cancer patients. Yet ASCO (loudly cheered on by Dr. Markman, who gets the “credit” for introducing the concept that the only relevant data for evaluating cell culture tests should be prospective, randomized trials that neither his own cooperative oncology group (the GOG) nor anyone else has been willing to support) intentionally IGNORED all the publications relating to test accuracy in their technology assessment.

We are talking about public domain laboratory tests. We aren’t talking about proprietary pharmaceuticals.

I don’t want anyone thinking that I’m trying to “lawyer” my way out of doing Phase 3 trials (which would constitute an utterly unprecedented bar to the acceptance of a laboratory test, as explained above). Here are the problems, however. Firstly, I have tried to do such trials. I had two national trials approved and funded. The first was a 31-institution Veterans Administration trial (VA CST-280) in multiple myeloma. This trial consumed three years of my life in planning, grant writing, meetings, funding procurement, and two national investigators’ meetings, where all 31 institutional representatives were flown to a central location (St. Louis and Baltimore) for instruction and coordination. The upshot was that the study was closed after 6 months, because of poor accrual and protocol violations in the standard therapy arm of the study, which had absolutely nothing at all to do with the assays. The second was an Eastern Cooperative Oncology Group trial in non-small cell lung cancer (EST-PB 585), which included more than 50 ECOG hospitals and which was closed after 6 months, because the participating institutions weren’t entering patients onto the trial. The most egregious offender, however, is the Gynecologic Oncology Group, which has been utterly unwilling to even consider my proposals, as documented by correspondence as far back as 1992 and as recently as 2007.

The last paragraph was included not as an argument for anything, but simply to explain to colleagues who may review this that they need to be realistic in their demands and to use some common sense and consistency in evaluating their laboratory tests and to recognize their own conflicts of interest.

– Larry Weisenthal/Huntington Beach, CA

#18 of 18, Added By: LarryMWeisenthal, Oncology, Medical,  1:54PM Mar 29, 2009

Getting back to the ER issue:

In the Yamashita study, 3 of 17 patients (18%) who were ER negative AND PR negative responded to hormonal therapy (2 of 15 with tamoxifen and 1 of 2 with aromatase inhibitor). And I’ll remind you that the Baylor/SWOG study (which I think supports the concept that metastatic breast cancer patients should not be denied hormonal therapy on the basis of today’s hormone receptor studies) was a “non-real world” study, in which archival specimens were batch processed, as opposed to being accessioned in real time over years. This was one of the best laboratories in the world, testing specimens under artificially ideal conditions, and they still came nowhere near achieving “98%” accuracy in identifying patients who wouldn’t benefit from hormonal therapy.

I have my own personal theory as to one of the reasons why the median survival of patients with metastatic breast cancer was 24 months in 1970 and was still 24 months 30 years later. It’s because, in the pre-ER era, everyone got hormone therapy — and hormone therapy was really maxed out. Oophorectomy. DES. Androgens. Adrenalectomy. Today there are lots of patients who are never treated with hormonal therapy, because of the wrong-headed notion that we can trust real-world ER/PR lab results enough to exclude patients as candidates for hormonal therapy — and because chemotherapy makes a lot more money for us oncologists than does hormonal therapy.

– Larry Weisenthal/Huntington Beach, CA