Looking at Data of Clinical Trials

Ahmed Bendary*

Cardiology Department, Benha University Hospital, Benha Faculty of Medicine, Egypt

*Corresponding Author:
Bendary A
Cardiology Department
Benha University Hospital
Benha Faculty of Medicine, Egypt
Tel: 002013319014
E-mail: dr_a_bendary@hotmail.com

Received date: August 04, 2017; Accepted date: August 09, 2017; Published date: August 17, 2017

Citation: Bendary A (2017) Looking at Data of Clinical Trials. J Heart Health Cir 1:1.

Copyright: © 2017 Bendary A. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Commentary

My father once told me when I was a child: “If you believe everything you read, do not read!” Later on, I realized how true this is, especially in the era of evidence-based medicine. Recently, data fabrication has emerged as a nail in the coffin for clinical trials [1]. There is a natural tendency to judge the results of randomized clinical trials as either positive or negative according to whether the P value for the primary outcome measure is <0.05 or >0.05. However, such an interpretation is overly simplistic. The primary endpoint result is just a starting point in a comprehensive evaluation of the totality of the clinical evidence.

If the trial endpoint comes out positive, some questions should be asked. First, what exactly is the P value? A P value of <0.001 indicates strong evidence beyond any reasonable doubt, in contrast to a P value of just 0.04. In the PARADIGM-HF trial [2], for example, the P value for the primary endpoint was 0.001, which prompted the regulatory authorities to approve a neprilysin inhibitor for use in heart failure with reduced ejection fraction.

Second, what are the absolute risk reduction (ARR) and the number needed to treat (NNT)? The NNT is simply the reciprocal of the ARR. In the FOURIER trial [3] of a new PCSK9 inhibitor for dyslipidemia, for example, the ARR was just 1.5%, corresponding to an NNT of about 70, leading many editorialists to question the cost-effectiveness of a drug that costs about 14,000 USD per year.

Third, is the endpoint a clinical one or a surrogate one? The classic example here is the ACCORD trial [4]: although the surrogate endpoint of HbA1c reduction reached statistical significance, the clinical endpoints showed no significant change, and in fact mortality tended to increase in the intensive blood glucose control arm!

Fourth, what about subgroup data? Clinical research has always taught us to be extremely skeptical when interpreting subgroup data, but we can still learn much from them. Look at the PLATO trial [5], in which the benefit of ticagrelor was not so robust in the US subgroup, a finding that could be explained by the high aspirin doses used in the US population. This led to an FDA recommendation to limit the aspirin dose to 100 mg when taken with ticagrelor.

Fifth, what is the sample size? A small trial lacks power, and positive treatment effects are susceptible to exaggeration. I remember here a small trial published in the New England Journal of Medicine testing the effect of N-acetylcysteine (NAC) on reducing contrast-induced nephropathy (CIN) [6]; it showed a 90% relative risk reduction in CIN with NAC. Almost nothing in the world has such a massive treatment effect. A subsequent meta-analysis, including a much larger number of patients, showed a neutral effect of NAC on the reduction of CIN [7].

Now we come to the other side of the coin: the trial endpoint is negative. What went wrong, what can be done, and how can the trial be salvaged? Some important questions should also be asked. First, was the tested population appropriate? I refer here to the story of ivabradine. In the BEAUTIFUL [8] and SIGNIFY [9] trials, conducted in populations with ischemic heart disease, the drug produced somewhat disappointing results. It was only when investigators shifted to patients with heart failure in the SHIFT trial [10] that the results came out positive, paving the drug's way into the guidelines.

Second, could we claim noninferiority? A good example here is the VALIANT trial [11], in which the confidence interval excluded the prespecified noninferiority margin of 1.13, allowing the authors to claim noninferiority of valsartan compared with captopril.
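
As a side note, the arithmetic behind the ARR/NNT and noninferiority checks above is simple enough to sketch in a few lines of Python. The event rates and the confidence limit below are illustrative values, chosen only to reproduce the figures quoted above (a 1.5% ARR and the 1.13 margin); they are not taken from the trial publications.

    # Minimal sketch (Python). All numbers are illustrative, chosen to match
    # the figures quoted in the text; they are not from the trial reports.

    def arr_and_nnt(control_rate: float, treatment_rate: float):
        """Return the absolute risk reduction and the number needed to treat.
        The NNT is the reciprocal of the ARR."""
        arr = control_rate - treatment_rate
        return arr, 1.0 / arr

    def noninferior(hr_ci_upper: float, margin: float) -> bool:
        """Noninferiority is claimed when the upper confidence limit of the
        hazard ratio falls below the prespecified margin."""
        return hr_ci_upper < margin

    # FOURIER-like arithmetic: a 1.5% ARR gives an NNT of about 67 (~70).
    arr, nnt = arr_and_nnt(control_rate=0.113, treatment_rate=0.098)
    print(f"ARR = {arr:.1%}, NNT = {nnt:.0f}")          # ARR = 1.5%, NNT = 67

    # VALIANT-like check against the 1.13 margin (1.11 is a made-up limit).
    print(noninferior(hr_ci_upper=1.11, margin=1.13))   # True -> noninferior
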
Third, can an alternative analysis method help? A good illustration here is the difference between intention-to-treat (ITT) and as-treated analyses (a small sketch of this distinction appears after the next point). Please look with me at the STICH trial [12], which showed no difference between bypass surgery and medical therapy in patients with ischemic LV dysfunction when analyzed by ITT; when the data were analyzed by the as-treated method, however, the results significantly favored bypass surgery in those patients.

Fourth, are the endpoints accurately defined? A typical example here is the CHAMPION PLATFORM trial [13], which showed an odds ratio for cangrelor versus clopidogrel in the right direction but with a P value of 0.17. The authors realized that they had a major problem with the definition of myocardial infarction, especially periprocedural myocardial infarctions. Therefore, the CHAMPION PHOENIX trial [14] asked essentially the same question but with clearer adjudication of myocardial infarctions, and the results came out significant, with a P value of 0.005.

To see the ITT versus as-treated distinction concretely, here is a toy sketch with entirely synthetic records; the arm labels, crossovers, and events are invented for illustration and have nothing to do with the actual STICH data.
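    # Toy example (synthetic data only): the same six patients analyzed by
    # intention-to-treat (grouped as randomized) and as treated (grouped by
    # the therapy actually received). Crossovers drive the difference.
    patients = [
        # (assigned, received, had_event)
        ("surgery", "surgery", False),
        ("surgery", "medical", True),   # crossover: never operated on
        ("surgery", "surgery", False),
        ("medical", "medical", True),
        ("medical", "surgery", False),  # crossover: operated on after all
        ("medical", "medical", True),
    ]

    def event_rate(records, arm, key):
        group = [r for r in records if key(r) == arm]
        return sum(r[2] for r in group) / len(group)

    for arm in ("surgery", "medical"):
        itt = event_rate(patients, arm, key=lambda r: r[0])  # as randomized
        at = event_rate(patients, arm, key=lambda r: r[1])   # as received
        print(f"{arm}: ITT event rate {itt:.0%}, as-treated {at:.0%}")

Neither analysis is automatically right: ITT preserves the protection of randomization, which is why it remains the primary analysis in most trials, while the as-treated view reflects the therapy patients actually received.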

These were some highlights on how to look at the data of clinical trials. The space in this short commentary is not wide enough to cover all the points surrounding this issue. Have a nice time; this is Ahmed Bendary signing off, and goodbye.

References
