Sato Iwamoto Part 1: Clinical trials

In late 2012, two of us (MB, AA) had an email conversation about meta-analyses, little knowing it would lead to an investigation of a research group’s work that remains incomplete more than 14 years later. We were discussing why some meta-analyses reached different conclusions even though they included the same trials. AA mentioned three trials in the field of osteoporosis with identical outcomes for falls, and asked what we knew of the research group of Dr Sato from Japan that conducted the trials, since osteoporosis is our research interest. About five years earlier, she had alerted two groups undertaking Cochrane reviews and JAMA Internal Medicine about the issue. No investigations in Japan occurred, but a letter published at the time in JAMA Internal Medicine [1] also highlighted many integrity concerns. Still no investigation resulted.

Even a superficial reading of several of the trials in 2012 raised many concerns and questions, so we decided to investigate further by systematically reviewing all the published trials of this group.

Systematic review

We identified a large number of concerns about the trials by Sato, Iwamoto and colleagues, including fairly incredible productivity (33 randomised clinical trials in humans over 15 years and close to 300 publications overall) and recruitment rates; implausibly positive outcome data; concerns about ethical oversight; plagiarism; and many logical and other errors in their papers. One striking feature of several trials was the similarity of baseline characteristics between the randomised groups. Previously, Carlisle had identified fraudulent randomised trials by Fujii and colleagues by comparing the distribution of baseline characteristics in the trials with the distributions expected to arise by chance [2]. We applied similar statistical approaches to the body of trials published by Sato, Iwamoto and colleagues. Three different approaches showed that the baseline data presented in the papers were not consistent with the treatment groups having been formed by chance (that is, with randomisation having taken place). We considered that there was overwhelming evidence of compromised publication integrity, but were not sure what to do next. There were many journals involved, the strength of the evidence was most obvious when considered in its entirety, and our concerns needed to be independently assessed.
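
For readers interested in the mechanics, the sketch below illustrates the general idea behind such baseline checks. It is a deliberately simplified, Carlisle-style illustration in Python, not the specific methods we applied: the summary data are hypothetical, and real analyses must also handle categorical variables, rounding of reported statistics, and the combination of evidence within and across trials, which Carlisle's published approach [2] does far more carefully.

```python
import numpy as np
from scipy import stats

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Welch t-test p-value for one baseline variable, computed from
    published summary statistics (mean, SD, n) of the two groups."""
    se = np.sqrt(sd1**2 / n1 + sd2**2 / n2)
    t = (mean1 - mean2) / se
    # Welch-Satterthwaite approximation to the degrees of freedom
    df = se**4 / ((sd1**2 / n1)**2 / (n1 - 1) + (sd2**2 / n2)**2 / (n2 - 1))
    return 2 * stats.t.sf(abs(t), df)

# Hypothetical example data: (mean, SD, n) for each baseline variable,
# treatment group first, control group second.
baseline_rows = [
    ((78.1, 6.2, 120), (78.2, 6.1, 120)),    # e.g. age (years)
    ((21.3, 3.0, 120), (21.3, 3.1, 120)),    # e.g. body mass index
    ((0.68, 0.09, 120), (0.68, 0.08, 120)),  # e.g. bone mineral density
]

p_values = [baseline_p_value(*g1, *g2) for g1, g2 in baseline_rows]

# Under genuine randomisation these p-values should be roughly uniform on
# [0, 1]; a marked excess near 1 (groups that are "too similar") across
# many variables and trials is the kind of pattern that raises suspicion.
ks_stat, ks_p = stats.kstest(p_values, "uniform")
print("baseline p-values:", np.round(p_values, 3))
print(f"Kolmogorov-Smirnov test against uniformity: p = {ks_p:.3f}")
```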

Contacting the journals: JAMA

We opted to draft a manuscript of the systematic review that documented the detailed concerns for the 33 trials and emailed it to the JAMA editor in March 2013, asking him to review the concerns and consider the manuscript for publication. We chose JAMA because 3 of the papers were published in the JAMA family of journals, including one in JAMA itself, and those papers had had the highest impact. We also thought that JAMA might be the best placed of the 18 affected journals to coordinate and lead an investigation. Naïvely, we expected that an internal review of the affected papers in the JAMA family journals would take place and that, if our concerns were validated, JAMA would contact the editors of the other affected journals so that a co-ordinated investigation would follow. There was a precedent for doing this: anaesthetic journal editors had collectively assessed and then addressed problems in the work of Fujii.

Because journals are very rarely (and regrettably) forthcoming in their explanations of the processes they undertake when assessing integrity concerns, we have very limited details about what actually happened at JAMA. However, we do know that the journal internally reviewed the manuscript and within 6 weeks agreed that there were substantive concerns. It asked Sato for a response but found it unsatisfactory, so in March 2014, fully a year after our initial contact, it asked his institution to investigate. JAMA eventually issued an expression of concern (EOC), but only for the JAMA paper and not until another year had passed (May 2015).

Initially, we had regular, polite email contact with JAMA, including a statement from the editors that “Although my primary concern must be papers published in the JAMA Network, I have [sic] nevertheless am trying to address the totality of your concerns.” However, after JAMA contacted Sato’s institution, our enquiries (about every 6 months) went unanswered. Then in April 2015, in response to yet another request for an update, the editor said JAMA would be publishing a notice of concern the following month. When we asked about our paper, the JAMA editor rejected the manuscript without any explanation (“We will not be publishing your manuscript”) and suggested we contact other journals with our concerns. We were very surprised to be asked to contact other journals, given JAMA’s earlier undertaking to address the larger body of concerns. By then, JAMA had had the concerns for more than 2 years. Retraction Watch covered the JAMA EOC, but we thought the comments by the JAMA editor were disingenuous: he said JAMA had received our comments 15-18 months earlier, that an individual had raised the issue, and that he had repeatedly asked that individual to contact other journals. In fact, JAMA had sat on the concerns for more than 2 years; a group of academics, not an individual, had raised the concerns, which were detailed and wide-ranging; we had repeatedly informed him that we had only notified JAMA of the issue (we were still hoping to get the manuscript published so readers could see the fully refereed, detailed statistical and other concerns); and he never asked us to contact any other journals.

In late 2015, 30 months after we contacted JAMA, we asked whether the journal would consider retraction of the JAMA paper and when correction notices would be posted for the two JAMA Internal Medicine papers. In response, the JAMA editor wrote:

we would like to be assured that you have made similar requests to the other journals in which Dr Sato and colleagues have published articles for which you and your colleagues have raised allegations about scientific misconduct. We would be most appreciative if you could forward those letters to us.

We replied that we felt the

most efficient and constructive way to address this difficult issue would be for the editors of affected journals to undertake a coordinated process that includes sharing of resources and information, with publication of our paper to allow readers to understand the issues and to promote academic discourse on analysis of concerns about data integrity. Such a collaborative strategy from journals would align with COPE guidelines…

In response, JAMA ventured that they

often hear from people how they think we should perform our responsibilities as editors

and offered the view that they are

sure those making the allegations spend time thinking about what they should do; but making allegations is simpler than responding to them; attempting to negotiation (sic) with authors and institutions; and being fair to all those involved. Ensuring that the scientific record is accurate is critically important, but so too is conducting a thorough and fair investigation.

The comments about making allegations and misconduct were quite frustrating. We had been exceedingly cautious not to suggest that any misconduct had taken place, instead limiting our comments to the reported data being inconsistent with randomisation. We later found out that it was common for editors, publishers, and institutional officials to equate raising issues and concerns about publications with an allegation that the authors had committed misconduct, no matter what lengths we went to in order to avoid making such “allegations”. In an unrelated case, we specifically requested that the institution we notified not treat our concerns as an allegation of misconduct, but even then the institution did so, and stated that using words such as “implausible” (in the context of productivity, or statistical analyses) was akin to an accusation of misconduct.

The JAMA editor’s assertion that it is simpler to raise concerns than to respond to them stuck in our craw even then. Subsequent events only underscored how misguided that view was.

Particularly relevant to the editor’s final comment is that it was made 2 years and 9 months after the initial concerns were submitted to JAMA. And from the beginning, there was never any mention of promptly alerting JAMA’s readers to the integrity concerns.

Journals: Next steps

After JAMA indicated that it would not publish our manuscript and would not notify the other affected journals, we contacted JAMA Internal Medicine in mid-2015; its staff appeared to be unaware of these concerns. The journal immediately rejected the manuscript and told us it would review the issues raised, but then did not respond to any of our emails.

Next, we contacted the Journal of Bone and Mineral Research (JBMR), which undertook another investigation and promptly reviewed the manuscript but declined to publish it on the basis that doing so would contravene COPE (Committee on Publication Ethics) guidelines. We found this a surprising statement: we are not aware that the journal conferred with COPE in this regard, nor which guidelines our paper would have breached. JBMR issued an EOC for one of its 2 papers within 2 months, retracted it 4 months later, and retracted the second paper 15 months later. JBMR also informed Parkinsonism and Related Disorders (PRD) about extensive text duplication with their paper, and PRD eventually retracted its paper about 9 months later. The Editor of PRD said that the final decision about retraction was made by an Elsevier Retraction and Removals committee. In this case, it took many months for that committee to approve the retraction, and the editor of PRD expressed frustration at the delay. As we subsequently learned, 9 months is a relatively short interval to retraction, and publishers sometimes override recommendations to retract made by journal editors.

Journals: Last chance?

By this stage we were extremely frustrated, and we almost gave up. Raising concerns about the integrity of scientific research was not something we took lightly, nor was it simple (contrary to the view expressed by JAMA), but we had not expected it to proceed this way, and we feared that our statistical assessment and other findings would never be aired in public, despite being very detailed, statistically novel and validated by the outcomes of journal assessments. We had previously consulted senior academic colleagues for advice, but now consulted more broadly, including past and present senior members of COPE; while sympathetic, they had little practical advice to offer that we hadn’t already considered. We contacted the Editors of another journal, Trials, which did not have any affected publications but ‘encompasses all aspects of the performance and findings of randomized controlled trials in health’. The Editors were quite positive and encouraged us to submit our manuscript, but ultimately felt that the issues should be investigated by the journals that published the affected articles and that publication of the evidence underpinning the concerns was not appropriate in an unaffected journal. While the view that unaffected journals should not enter the fray might seem reasonable at one level, commentaries frequently appear in journals other than those in which the source material was published, and this was a novel piece of academic work.

Thoroughly disillusioned by now, we turned later in 2015 to the journal Neurology, which had published three affected trial reports. Neurology had the manuscript reviewed and ultimately decided to publish it. The journal conducted yet another investigation and informed us in confidence that Dr Sato had admitted that the papers on which he was the lead author were fraudulent, but that his co-authors were ‘honorary’ and innocent of wrongdoing. Presumably, this strong validation of our analysis contributed to the journal’s decision to publish the manuscript. In 2016, this led to 4 retractions in Neurology, although it was striking (but something we later found is standard practice) that the journal made no apparent effort to assess the integrity of the other 4 papers by the Sato/Iwamoto group that it had published. Contemporaneously, but seemingly independently, the 3 papers in the JAMA family of journals were retracted, with notices that failed to detail the serious, wide-ranging data integrity concerns identified more than three years previously, stating only that there were ‘concerns regarding data integrity and inappropriate assignment of authorship’.

We were pleased to hear that Neurology informed all the remaining affected journals of the results of its investigation and, in November 2016, published our manuscript [3], 3.5 years after we first raised the concerns with JAMA. At that stage, with 10/33 trials retracted, a published manuscript detailing overwhelming evidence of compromised research integrity, and a related editorial detailing the confession of fraud [4], we thought the house of cards constructed by Sato, Iwamoto and colleagues would quickly fall: other journals would also retract papers; publishers and institutions would investigate all the other publications from these authors, quickly and publicly confirm that scientific misconduct had occurred, and recommend further retractions; and ultimately many of the >300 papers by the group would be removed from the literature.

But instead, nothing happened.

References
