Tumor Cell Profiling...the tests show whether your cancer cells are killed by exposure to one or more of the 20 or so different anti-cancer drugs that might otherwise have been considered as possible treatments for your type of cancer. In other words, a test that can help determine which cancer drugs would make the best treatment plan. Dr. Weisenthal, discussed below, has an office and lab and is currently providing tests in this area.
To have the test done, you first need a biopsy to deliver the cells for the study. Fees for a complete 20-to-25-drug Functional Tumor Cell Profiling analysis will be in the neighborhood of $5,000. The procedure is covered by Medicare and by some insurers as well. BD
Today's online edition of the Journal of Internal Medicine reports discovery of the first practical laboratory test to guide the use of new-generation drugs that kill cancer cells by cutting off their blood supply. The new test, called the Microvascular Viability Assay (MVVA), was developed by Larry Weisenthal, MD, PhD, a medical oncologist who operates a cancer testing laboratory in Huntington Beach, California. The test works by measuring drug effects upon the endothelial cells that make up blood vessels. Its use could prolong lives, save money, and spare patients exposure to the harmful side effects of ineffective chemotherapy treatments.
According to Dr. Weisenthal, therapeutic levels of ethanol in the bloodstream could theoretically be achieved simply by drinking wine or other alcoholic beverages in prescribed doses concurrent with receiving angiogenesis-inhibiting drugs. The concept might please some patients and alarm others, but Dr. Weisenthal finds support in actual case studies reported in the medical literature. He warns, however, that further clinical studies are required.
Dr. Weisenthal says that he would like to see the test become available to patients worldwide through service agreements with larger laboratory companies or with a biotechnology company which might develop a testing kit for sale to hospitals and laboratories. He also would like to license the test to pharmaceutical companies for use in new drug development.
Cancer Physician Invents Test For New Drugs That Cut Off Tumor's Blood Supply
Thanks for the information on cancer. A medicine that could cut off the blood supply to a tumor would be amazing!
We recently wrote an article on cancer and viruses at Brain Blogger. Recently, Hepatitis C, Hepatitis B, and certain papilloma viruses have been linked to cancer. These were the first viruses ever to be linked with cancer.
We would like to read your comments on our article. Thank you.
Sincerely,
Kelly
Jennifer--here is what Wikipedia had to say about the Rous Sarcoma Virus:
RSV was discovered in 1911 by Peyton Rous, working at Rockefeller University in New York City, by injecting cell free extract of chicken tumour into healthy chickens.
The extract was found to induce oncogenesis in Plymouth Rock chickens. The tumour was found to be composed of connective tissue (a sarcoma).
Rous was awarded the Nobel Prize in 1966 for the significance of his discovery.
You can see that your virus cancer list does not include the "godfather" virus, Rous Sarcoma Virus.
As for Dr. Weisenthal, I read his website and abstract of the journal article.
While the technology is promising, there are a number of statements made which imply that the test has been studied in the clinical setting, which it has not.
There is a difference between theory and practice. Nowhere in Dr. Weisenthal's abstract or website is any mention made of research correlating his endothelial cell findings to clinical patient care. Yet, the website information makes many statements implying that patients need this test to predict which angiogenic therapy to use. I just don't see how that follows.
It sure is tempting for me to get out there and put up a website advocating a test or treatment based on any theory I can imagine. What's not so easy to do is to conduct well-designed clinical trials to answer clinical questions scientifically.
The problem with testing theoretical proposals like endothelial cell profiling is that there is a chance they will be discovered to be clinically useless. Commercial entities trying to turn a profit on these technologies have little interest in objective testing, so they rely on marketing techniques and narratives (testimonials and such) to make their point. The greatest example of this was the "Airborne" product--it was "created by a teacher." The narrative was that the teacher was tired of getting sick all the time, so she invented this product. It turned out, on testing, that it didn't work. So narrative alone is not enough to determine whether a product works; it takes scientific testing, such as clinical trials.
Sorry, Dr. W, but I'll wait until you have validated your test clinically before jumping in. If you can show some predictive capability of the test, I will be the first in line to use it.
I have already started using CYP 2D6 testing for tamoxifen sensitivity, since I think it predicts which patients to use tamoxifen on.
InteractMD.com
In 2006, Medicare officially recognized cancer chemosensitivity tests as a special test category in Federal Regulations (42 CFR 414.510 (b)(3), 71 FR 69705, 12/01/2006) as Oncologic In Vitro Chemoresponse Assays.
The assay endpoints used for this clinical trial (DISC/MTT/ATP) have been approved for Medicare reimbursement following a year-long tech assessment by National Heritage Insurance Company (NHIC), a contractor that administers Medicare programs in California and elsewhere.
NHIC established a positive coverage policy for these assay tests for a tumor specimen from a Medicare patient obtained anywhere within the United States, but submitted for testing by one of the approved laboratories located within Southern California. The decision was that these assays are a perfectly appropriate medical service, worthy of coverage on a "non-investigational" basis.
They abandoned the artificial distinction between "resistance" testing and "sensitivity" testing and are providing coverage for the whole FDA-approved kit. Drug "sensitivity" testing is merely a point a little farther along on the very same continuum where "resistance" testing resides.
A relatively recent (1999) CMS MCAC panel of experts had determined that clinical response was an appropriate endpoint for clinical correlation studies. Therefore, the CMS contractor considered studies with (acute or initial) clinical response as the clinical correlation. NHIC closely followed program manual guidelines, the MCAC findings, and even CMS guidelines relative to clinical practice patterns.
Laboratory tests are judged by accuracy and reproducibility and never by their effect upon treatment outcomes. Most tests used today in oncology have comparable "sensitivities" and "specificities."
I was made aware of a controversy regarding our recently-described test for detecting drug-mediated death of tumor infiltrating endothelial cells. We have made no claims as to the degree to which this test correlates with clinical outcomes. We are not "marketing" the test beyond the confines of a clinical trial, which will be the most transparent clinical trial in the history of oncology, as all results are going to be reported, in real time, on a week by week, patient by patient basis, on our website. - Larry Weisenthal http://weisenthal.org
Thanks, Dr. Weisenthal, for the comment about the test. I look forward to updates on your clinical research, and if you manage to give us the information you indicate, you will have an innovative new clinical trial platform to build upon.
Mr. Pawelski seems to bring up the point that CMS payment for chemoresponse assays implies that these tests are clinically appropriate. You have made this point on the internet before, using some of the same language that appears here.
Though I don't dispute this point, I would point out the recent (2004) ASCO Technology Assessment Statement as evidence that not every organization agrees on the scientific validity of these tests.
My personal experience with chemosensitivity assays is this: they are interesting, and I use them sometimes. They allow us to be rational when there is otherwise no rational way of planning treatment. They may fall under the category of "treating the doctor," since we really have no way of knowing if the tests are clinically worthwhile. Kudos to the Oncotype DX people for at least trying to clinically correlate their sensitivity testing.
Far more useful, I think, are randomized clinical trials looking at which medications performed the best in patients. In my mind, this remains far stronger evidence of efficacy than an in-vitro chemosensitivity assay, no matter how many insurance carriers pay for it.
Finally, getting back to the topic at hand, this new assay by Dr. Weisenthal is not strictly a sensitivity assay, since it actually looks at dead endothelial (blood vessel) cells in tumor specimens and circulation. I'm not sure I really understand how this finding will be (or can be) correlated with clinical outcomes, but I look forward to more information in any case.
InteractMD.com
InteractMD makes 2 points:
First, he discusses the 2004 ASCO statement on chemosensitivity and resistance assays (CSRAs). Second, he praises the "Oncotype DX people for at least trying to clinically correlate their sensitivity testing."
I scarcely know where to begin.
The traditional (and only) criterion used to evaluate laboratory (or similar predictive/prognostic) tests has been the predictive accuracy (sensitivity/specificity) of the test in question. Yet the ASCO review specifically EXCLUDED from consideration all studies reporting the predictive accuracy of the tests! In the words of the ASCO review authors: "We excluded reports that only reported correlations between assay results and clinical outcomes" (where "outcomes" are both response to treatment and patient survival). Instead, the ASCO authors included for consideration only old, previously-reviewed studies comparing outcomes of patients who had treatment based on assay results versus patients with empirically chosen therapy. On superficial consideration, the criterion of laboratory assay "efficacy" (as opposed to laboratory assay "accuracy") sounds reasonable, but it is both unprecedented and unfair.
To begin with, none of the available laboratory tests used in the selection of treatments for cancer patients have ever been tested for "efficacy," and this includes estrogen receptor, progesterone receptor, Her2/neu, immunohistochemical staining for tumor classification, bacterial culture and sensitivity testing, CT, MRI, and/or PET scans to measure tumor "response" to treatment -- as opposed to basing assessment of clinical response on simple/cheap history, physical, routine labs, routine radiographs, etc. All of these tests are used to guide treatment and drug selection no less than are CSRA, yet the only data supporting any of them relate to test accuracy and there is a total lack of information regarding test efficacy. Likewise, no one is seriously proposing that any of the "molecular" tests now available (e.g. OncotypeDX, KRAS mutation) should have to be proven "efficacious" (as opposed to "merely" accurate) before they are used in clinical decisions regarding treatment selection.
Additionally, the ASCO review may imply that there have been good studies done to examine the issue of "efficacy," when the true situation is that the CSRA technologies are all public domain and non-proprietary: no private-sector company or individual can reasonably be expected to pay for such unprecedented studies, and none of the granting agencies or cooperative groups have been willing to support them either. So it is hereby stipulated that there is no literature establishing clinical "efficacy" of CSRA, because the costs of such clinical trials are prohibitive, support is non-existent, and no other analogous test has been, or is ever likely to be, subjected to such an unreasonably high bar for clinical use.
It should be noted that, while the FDA doesn't regulate clinical laboratories performing these tests, it does regulate test kits. In the 1990s, the FDA formally approved a Baxter test kit for CSRA testing, based entirely upon demonstration of acceptable test accuracy in a single, small published study, and did not require proof of "efficacy," as, again, this remains an unprecedented criterion for evaluating any laboratory test.
In point of fact, CSRA has been well proven to have predictive accuracy that compares very favorably with that of comparable tests, such as estrogen receptor, progesterone receptor, Her2/neu, and the newer "molecular" tests. CSRA predicts for response and patient survival in a wide spectrum of neoplasms and with a wide spectrum of drugs. There are about 50 peer-reviewed studies, collectively including about 3,000 patients; every one of these studies shows above-average probabilities of clinical benefit for treatment with assay-"positive" drugs and below-average probabilities of clinical benefit for treatment with assay-"negative" drugs, where clinical benefit included both response and, in more than 20 cases, patient survival. All of these studies were specifically EXCLUDED from consideration under the ASCO working group methodology. Again, test accuracy is the only criterion ever utilized to evaluate laboratory tests (including the OncotypeDX test), and studies of test accuracy were EXCLUDED from consideration!
There are a number of reviews of these peer-reviewed publications; one of the most comprehensive appears on my website (it was written as an invited review for ONCOLOGY, but the editors refused to publish it because they decided that they didn't want their journal used as a platform to point out inconvenient truths regarding what precisely goes into drug selection decisions in what was then the real world of oncologists making a lucrative living not by being doctors, but by being retail pharmacists).
http://weisenthal.org/oncol_t.htm
There have been additional studies published since then (2002), all of them supportive of the consistent findings that there are superior response and survival rates of patients treated with drugs testing "positive" (active) in assays with cell death endpoints.
Consider the true situation, in which there is precious little in the way of guidance from clinical trials with respect to "best" empiric therapy. A great example is recurrent/metastatic breast cancer, where the NCI's web site lists 24 different but equivalent drug regimens as acceptable, "state of the art" therapy, and where the only thing that has been proven to correlate with chemotherapy drug selection is reimbursement to the prescribing oncologist (as determined in peer-reviewed research published by the Harvard and University of Michigan Schools of Public Health). If you believe that the way to make progress is with prospective randomized trials to identify the best treatment for the average patient, then it is interesting to consider that hundreds of thousands of women have been entered onto these trials since 1970, yet the median survival is still the same (about 24 months) and none of the myriad empiric regimens studied has proven better than 1970-style chemotherapy with CMF. If a tiny fraction of that effort had been devoted to the application of cell culture testing in this disease, there would have been substantial progress, both in technology improvement and in treatment improvement.
Let's look at what is claimed for our tests:
It is claimed that the particular type of test we perform accurately sorts drugs into categories of (1) above-average probability of providing clinical benefit and (2) below-average probability of providing clinical benefit. We are talking about short-term (3-6 day) three-dimensional cell culture assays, using cell death endpoints. These assays have been shown (in each and every one of close to 50 published studies, in an aggregate total of nearly 3,000 patients) to distinguish drugs with above-average probabilities of providing clinical benefit from drugs with below-average probabilities, based both on tumor response and patient survival, without any exception and without any controversy whatsoever. No one has ever challenged these findings or failed to confirm them. These papers were specifically excluded from analysis a priori by the now-famous 2004 ASCO review, because the review excluded papers dealing with test accuracy (as opposed to "efficacy"), while test accuracy is the only criterion ever used to evaluate laboratory tests which impact cancer treatment decisions!
The ASCO reviewers effectively demanded Phase 3 trials as the criterion for documenting the utility of the assays. As noted above, this is an absolutely unprecedented criterion for evaluating laboratory tests.
Let's examine one very relevant example in detail. The estrogen receptor (ER) test is broadly accepted to be the number one prognostic test in all of clinical oncology, from the standpoint of drug selection. The test is used to make gravely important treatment decisions, generally between cytotoxic chemotherapy on one hand or hormonal therapy on the other hand or the combination of chemotherapy and hormonal therapy. In some situations, this test is used to determine if patients are to receive any drug treatment at all. In contrast, our tests are simply used to select between treatment regimens with otherwise equal efficacy in patient populations -- situations in which the choice could be made by a coin toss or, more commonly, on the basis of remuneration to the treating physician, with equivalent results on a population basis, though certainly not at the level of the individual patient. So, if anything, the "bar" should be higher for the ER test than for our tests. So what data exist to "validate" the most important predictive laboratory test in clinical oncology?
The history of the ER test is that it was originally developed as a complicated biochemical test, generically called the "radioligand binding assay" (RLB assay). The RLB assay was "validated" in the 1970s and very early 1980s by means of retrospective correlations with clinical outcomes for patients treated with hormonal therapy. Overall, in retrospective correlations involving hundreds (not thousands) of patients, the RLB assay was found to be about 60% accurate in predicting treatment activity and 90% accurate in predicting treatment non-activity. In other words, an RLB assay "positive" tumor had a 60% chance of responding to hormonal treatment, while an RLB "negative" tumor had a 10% chance of responding. There were never any Phase 3 trials to show that either performing or not performing the test made a difference in treatment outcomes.
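As a minimal arithmetic sketch (with invented counts chosen only to be consistent with the 60%/10% response rates quoted above, not data from any actual study), those two "accuracy" figures correspond to the positive and negative predictive values of the assay:

```python
# Hypothetical cohort of 100 patients: 50 RLB-positive, 50 RLB-negative.
# Counts are invented to match the quoted response rates (60% vs. 10%).
pos_resp, pos_nonresp = 30, 20   # assay-positive patients: 60% respond
neg_resp, neg_nonresp = 5, 45    # assay-negative patients: 10% respond

ppv = pos_resp / (pos_resp + pos_nonresp)                 # 0.60: "predicting activity"
npv = neg_nonresp / (neg_resp + neg_nonresp)              # 0.90: "predicting non-activity"
sensitivity = pos_resp / (pos_resp + neg_resp)            # ~0.86 of responders were assay-positive
specificity = neg_nonresp / (pos_nonresp + neg_nonresp)   # ~0.69 of non-responders were assay-negative

print(f"PPV={ppv:.2f}, NPV={npv:.2f}, sens={sensitivity:.2f}, spec={specificity:.2f}")
```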
The RLB test was complicated and could only be performed by highly specialized laboratories. In the 1980s, the immunohistochemical (IHC) test was developed as an alternative and quickly replaced the RLB test. The IHC test was not independently validated as a predictor of response to hormonal therapy, but was merely compared to the RLB "gold standard" in the highly specialized laboratories. Subsequently, the IHC test was "validated" in studies in which archival specimens were batch processed in the same time frame by a single team of laboratory workers. These are not real-world conditions, in which specimens are accessioned, processed, stained, and read by different people, at different times, using different reagents. But the IHC test quickly moved out into hundreds (possibly thousands) of community hospital pathology laboratories, and formal proficiency-testing studies have shown broad variation in results between different laboratories. And yet hundreds of thousands of cancer patients have had life-and-death treatment decisions based on these tests (the IHC test for Her2/neu is an even more egregious example, and the IHC test for EGFR more egregious still, but I'll confine the present discussion to the "best" predictive treatment selection test in oncology, namely the IHC ER assay).
Now, we finally have a published (albeit retrospective) study on the ability of the IHC ER assay to predict clinical response to hormonal therapy (Yamashita, et al. Breast Cancer 13:74-83, 2006). A total of 75 patients were studied: 20% of patients with a negative IHC ER test responded to treatment, versus 56% of patients with a positive IHC ER test. And these were data from a laboratory which certainly had above-average expertise in performing the test.
Now, can you begin to see the abject bankruptcy of the position of the ASCO review? Here we have the most universally admired and utilized predictive test for treatment selection in all of clinical oncology, and it is validated only by the most retrospective and limited of data; even then, the predictive accuracy of the test is vastly inferior to that of the tests we perform. What in the world is the justification for claiming that the "bar" should be higher for using our test to choose between docetaxel and 5FU (or capecitabine) in breast cancer than for using the ER IHC test to select between tamoxifen and paclitaxel/cyclophosphamide in breast cancer?
Now, I'll help InteractMD by providing an argument for him/her.
The argument that he might use is the following:
"Well, Weisenthal, show me your data documenting the predictive accuracy for your assays -- specifically for docetaxel and 5FU and all the other drugs you test, specifically in recurrent breast cancer."
The short answer is that I can't provide this level of detailed correlations. Look again at the breast cancer ER IHC situation. This is a comparatively simple challenge. One disease; one form of treatment (hormonal therapy). I do tests in literally hundreds of forms of cancer (considering various histological subtypes) and literally scores of drugs and drug combinations.
What is a reasonable way to approach a problem like this? An acceptable method in other situations is to take population samples and study them to get an overall idea of the general prevalence of some parameter (in this case, the frequency with which predictions are accurate).
The general methodology of the assays we use is as follows. A drug is selected. It is tested at a variety of drug concentrations to produce a scatter of results in a population sample. Statistical cutoffs are determined (for simplicity, let us say above the mean for "sensitive" and below the mean for "resistant"). Now, the perfectly ordinary hypothesis is that an above-average drug effect in cell culture will be associated with an above-average probability of clinical benefit when the drug is used in the patient, and vice versa.
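A minimal sketch of that cutoff methodology, with made-up numbers (the variable names and data here are illustrative assumptions, not Dr. Weisenthal's actual pipeline), might look like this:

```python
import statistics

# Each value stands for one patient's in-vitro result for a single drug
# (e.g., percent cell death at a fixed concentration). Values are invented.
population_results = [12.0, 35.5, 48.0, 22.3, 61.7, 55.2, 18.9, 41.0]

# Statistical cutoff: above the population mean = "sensitive", below = "resistant".
cutoff = statistics.mean(population_results)

def classify(result: float) -> str:
    """Sort one result into the two probability categories described above."""
    if result > cutoff:
        return "sensitive: above-average probability of clinical benefit"
    return "resistant: below-average probability of clinical benefit"

for r in population_results:
    print(f"{r:5.1f} -> {classify(r)}")
```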
The above hypothesis has been tested by broad "sampling" in a great many types of human tumors, including ovarian cancer, breast cancer, colon cancer, gastric cancer, lung cancer, adult acute leukemia, adult chronic leukemia, and childhood acute leukemia, with hundreds of published correlations in each of these situations. Additionally, there are scores of published correlations in many additional forms of human cancer. In terms of different classes of drugs, the following have been heavily "sampled" using the above methodology: traditional alkylating agents (cyclophosphamide, melphalan, ifosfamide, chlorambucil), anthracyclines (doxorubicin, epirubicin), platinums (cisplatin, carboplatin), fluoropyrimidines, glucocorticoids, vinca alkaloids, and many more in smaller numbers. In each and every case where there is a sufficiently large dataset to allow for statistical testing, the above (very ordinary) hypothesis has been confirmed: assay-"positive" drugs produce an above-average probability of clinical benefit (based both on tumor response and patient survival), while assay-"negative" drugs produce a much below-average probability of clinical benefit. On meta-analysis, treatment with assay-"positive" drugs had a greater than 7-fold higher probability of providing clinical benefit than treatment with assay-"negative" drugs, when tested over a broad sampling of both tumors and drugs.
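To make the "7-fold" figure concrete, here is a hedged sketch with invented counts (not the meta-analysis data; whether the published figure is a risk ratio or an odds ratio is not stated above, so both are computed):

```python
# Invented counts for illustration only.
pos_benefit, pos_total = 150, 300   # patients treated with assay-"positive" drugs
neg_benefit, neg_total = 20, 300    # patients treated with assay-"negative" drugs

risk_ratio = (pos_benefit / pos_total) / (neg_benefit / neg_total)
odds_ratio = (pos_benefit / (pos_total - pos_benefit)) / (neg_benefit / (neg_total - neg_benefit))

# 50% vs. ~6.7% benefit -> risk ratio 7.5, odds ratio 14.0 with these counts.
print(f"risk ratio = {risk_ratio:.1f}, odds ratio = {odds_ratio:.1f}")
```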
Given the utter impossibility of precisely documenting predictive accuracy for all drugs and all tumors (again, consider the much simpler example of the estrogen receptor assay), the methodology by which our cell culture assays have been validated is entirely reasonable, given that the information provided by these tests is, again, used to choose between otherwise reasonable treatment alternatives.
Our methodology is particularly stringent, in that we apply three different cell death endpoints in evaluating the activity of the various drugs against the tumor. We do this because, while the individual endpoints correlate well with each other in optimum situations (a dead cell is, after all, a dead cell, and there are many methods for distinguishing living from dead cells, but the functional parameter being measured -- cell death -- is the same), the different endpoints have certain technical advantages and disadvantages when used in different specimens. When the different methods agree, I have great confidence in the results, and my treatment recommendations reflect this confidence (and vice versa in cases where there is some degree of disparity; but this is no different from any other medical test, where confidence is always greater in the results of optimum tests than of suboptimum tests). The analogy, again, is with the estrogen receptor: were a laboratory to perform both IHC and RLB endpoints on each specimen, the overall reliability of the result would be improved. Given the documented poor performance of this test (despite its acceptance as "the best" predictive test for drug selection in oncology), and given the importance of its results to treatment decisions, I strongly think that laboratories should use both endpoints wherever possible; but my own standards are particularly high.
The point is, yes, after 29 years of full time effort, it is possible to learn how to do something correctly.
I'm about finished, but I want to address one more important issue. I don't want anyone thinking that I'm trying to "lawyer" my way out of doing Phase 3 trials (which, as explained above, would constitute an utterly unprecedented bar to the acceptance of a laboratory test). Here are the problems, however. First, I have tried to do such trials. I had two national trials approved and funded. The first was a 31-institution Veterans Administration trial (VA CST-280) in multiple myeloma. This trial consumed three years of my life in planning, grant writing, meetings, funding procurement, and two national investigators' meetings, where all 31 institutional representatives were flown to a central location (St. Louis and Baltimore) for instruction and coordination. The upshot was that the study was closed after 6 months because of poor accrual and protocol violations in the standard-therapy arm of the study, which had absolutely nothing at all to do with the assays. The second was an Eastern Cooperative Oncology Group trial in non-small cell lung cancer (EST-PB 585), which included more than 50 ECOG hospitals and which was closed after 6 months because the participating institutions weren't entering patients onto the trial. The most egregious offender, however, is the Gynecologic Oncology Group, which has been utterly unwilling even to consider my proposals, as documented by correspondence as far back as 1992 and as recently as 2007.
Let me tell you why:
What we do is entirely non-proprietary, public domain diagnostic testing. There is no serious money to be made by anyone in this. It would be like trying to get venture capital to finance a medical practice. Each of the tests takes 6 hours of my own personal time (8 hours for our tests for antimicrovascular agents, such as Avastin). It can't be mass produced or packaged. It is in every way a medical service, no different than a 6-hour debulking procedure by ovarian cancer surgeons. It's worth it to the patient, the same way that a 6-hour debulking procedure is worth it to the patient. But it's not a proprietary, potentially billion-dollar-a-year drug. In short, no one is going to pay for a 3 million dollar Phase III trial -- particularly now, when there are 800 pharmaceuticals in the clinical trials pipeline, there are not enough clinical trials patients to go around, the pharmaceutical clinical trials are enormously remunerative to the participating clinical trials groups, and those trials are much simpler to carry out. I'm willing to offer "free" assays (which aren't at all "free" to me, as each one requires 6 hours of my time and hundreds of dollars of my own, out-of-pocket, money). I can't pay the GOG anything. And no investor with an ounce of sense would give me the money to do so, either.
The last paragraph was included not as an argument supporting the use of the tests, but simply to explain to colleagues who may review this that they need to be realistic in their demands and to use some common sense and consistency in evaluating their laboratory tests and to recognize their own conflicts of interest.
P.S. Relating to the OncotypeDx test (much admired by InteractMD):
All of these studies with all of these genomic tests (not just OncotypeDx, but others as well) suffer from a truly fatal flaw that absolutely no one understands. The validation studies mostly have the following design: a large number of archival specimens are batch processed, within a very narrow time frame (with the OncotypeDx test, this was about two weeks), by the same "crack" team of technicians, scientists, and pathologists. So all the technical variables are minimized. All specimens are treated in the same way. Now, half of the specimens are used to establish the "positive" and "negative" cutoffs -- in other words, to determine which gene expression patterns segregate with a "positive" or a "negative" outcome. This has a certain correlative accuracy. Then they take these cutoffs, determined "retrospectively," and apply them "prospectively" to the remaining half of the specimens. Typically, they get a somewhat lower, but still significant, correlation between test result and clinical outcome.
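For readers unfamiliar with the design being described, here is a minimal sketch (entirely synthetic data; the scores, relapse rates, and crude midpoint cutoff are illustrative assumptions, not the OncotypeDx method): half the specimens fix the cutoff retrospectively, and the frozen cutoff is then applied "prospectively" to the other half.

```python
import random
random.seed(0)

def make_specimen():
    # Synthetic specimen: a gene-expression score loosely correlated with relapse.
    relapsed = random.random() < 0.3
    score = random.gauss(60 if relapsed else 45, 10)
    return score, relapsed

specimens = [make_specimen() for _ in range(200)]
training, validation = specimens[:100], specimens[100:]

# Retrospective step: derive the cutoff from the training half (crudely, the
# midpoint between the mean scores of the relapse and non-relapse groups).
relapse_scores = [s for s, r in training if r]
no_relapse_scores = [s for s, r in training if not r]
cutoff = (sum(relapse_scores) / len(relapse_scores)
          + sum(no_relapse_scores) / len(no_relapse_scores)) / 2

# "Prospective" step: apply the frozen cutoff to the held-out half.
correct = sum((score > cutoff) == relapsed for score, relapsed in validation)
print(f"cutoff {cutoff:.1f}; held-out accuracy {correct / len(validation):.0%}")
```

The real-world caveat described next (specimens processed at different times, by different hands, under non-uniform conditions) is exactly what a batch-processed design like this cannot capture.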
But this GROSSLY overstates the true clinical accuracy of the test. In the real world, specimens are not all batch processed in the same two week period of time, by the same team of "crack" technicians, scientists, and pathologists. In the real world, Fred Jones calls in sick, and Rita Smith is therefore overworked, and Bertha Wilson hasn't yet prepared the reagents, and the machine hasn't been warmed up sufficiently and the timing of one incubation is just a little off, and the RNA isn't kept cold enough at all stages, and the microcentrifuge is unbalanced, and it's lunch break before a given step is completed, and the micropipette is lost and another has to be borrowed and it hasn't been calibrated, and a hand slips pipetting 1 microliter and the droplet doesn't go where it is supposed to and on and on and on and on. So what happens is that, in the real world, a specimen tested on June 13 won't necessarily give the same result as one tested on August 24, which won't give the same result as one tested on November 6. So the predictive accuracy is 100% guaranteed to be worse than in the initial study, where everything is optimum and, more importantly, where everything is uniform.
No one is publishing "real world" studies. No one, that is, but the laboratories (including my own) performing cell culture-based tests, which can only do real world studies, because our studies require fresh, viable tissue, which must be accessioned and tested in real time, under "real world" conditions.
- Larry Weisenthal
http://weisenthal.org
InteractMD points only to an antiquated ASCO tech assessment as his evidence about the scientific validity of these tests. NHIC's transparent tech assessment looked at ASCO's summary and numerous other materials:
Sources of Information and Basis for Medicare Decision
Gallion H, et al. (2006). Progression-free interval in ovarian cancer and predictive value of an ex vivo chemoresponse assay. Int J Gyn Cancer 16:194.
Jordan C, et al. (2006). Cancer stem cells. NEJM 355:1253.
Loizzi V, et al. (2003). Survival outcomes in patients with recurrent ovarian cancer who were treated with chemoresistance assay-guided chemotherapy. Am J Obstet Gynecol 189:1301.
National Comprehensive Cancer Network (NCCN) (2006). Updated guidelines for ovarian cancer, 3/6/2006.
Parker, et al. (2004). A prospective blinded study of the predictive value of an extreme drug resistance assay in patients receiving CPT-11 for recurrent glioma. J Neuro-oncol 66:365.
Samson, et al. (2004). Chemotherapy sensitivity and resistance assays: a systematic review. J Clin Oncol 22:3618.
Tewari KS, et al. (2005). Conservation of in vitro drug resistance patterns in epithelial ovarian carcinoma. Gyn Oncol 98:360.
Ugurel S, et al. (2006). In vitro drug sensitivity predicts response and survival after individualized sensitivity-directed chemotherapy in metastatic melanoma. Clin Cancer Res 12:5454.
Administrative Law Judges have very consistently found that certain tumor assay tests meet Medicare's criteria for medical necessity (for example, guidance in a broadly used American textbook such as DeVita, Principles and Practice of Oncology, 2001).
DeVita VT, et al. (2001). Cancer: Principles and Practice of Oncology. Lippincott.
Fortunately, receptive people are not so threatened by ideas that dare to challenge and question one-size-fits-all, widgets-on-an-assembly-line medicine. It is certainly not comforting to see patient after patient succumb, not to the cancer, but to an early demise thanks to wrong-therapy/wrong-dose cookie-cutter treatment.
There are limitations involved with randomized clinical trials. Perhaps the greatest is that they predict population trends and are not definitive for the individual; clinical trials provide few black-and-white answers. The problem with the empirical approach is that it yields information about how large populations are likely to respond to a treatment. Doctors don't treat populations; they treat individual patients.
Because of this, doctors give treatments knowing full well that only a certain percentage of patients will receive a benefit from any given medicine. They subject patients to one combination chemotherapy after another, just going from one journal paper to another journal paper. They need information about the characteristics that predict which patients are more likely to respond well. The empirical approach doesn't tell doctors how to personalize their care to individual patients.
Here's the link to our clinical trial of individualized combination antivascular therapy, with drug combinations selected with the Microvascular Viability Assay (MVVA):
http://www.weisenthalcancer.com/Study%20Pages/TrialHome.htm