The Great Betrayal: A Book Review

David Crowe
January 24th 2005

“The Great Betrayal” was written by HF Judson and published by Harcourt in 2004.

Horace Freeland Judson sketches science as a fortress rising out of the mists, focused solely on defending and enriching itself. It might pretend to worship the Goddess Truth but inside, out of sight, it is the God Mammon who is lusted after. We, the peasants, must give regular tribute or we will suffer war, famine and pestilence (especially pestilence)! Occasionally, the excesses of insiders result in token sacrifices, but rarely is more than one body cast over the walls to be savaged by the press. When this does happen, as often as not it is the person who pointed out the crime, not the criminal, who is sacrificed.

Many insiders claim that scientific fraud is not a big problem, but Judson shows it is large, perhaps immense. The actual size is unknown because of the lack of ability, and the lack of will, to detect it. Cases of fraud that come to light tend to be blatant and, even then, they are not resolved in anything approaching a satisfactory fashion.

Defining Scientific Fraud

In order to deal with fraud you must know what it is. Defining fraud is very difficult and controversial. Lawyers tend to focus on the intent to deceive (just as intent is the factor that differentiates first degree murder from second degree murder and manslaughter). This, Judson points out, is a very big problem, opening up the ‘sloppiness’ defence. It is very difficult to prove that the reason why a notebook was modified years after the fact was due to an intent to deceive and not just because the person discovered important notes on scraps of paper stuffed into a bottom drawer. Slicing off the sequence number of machine measurements can be brushed off as thoughtless behavior on the part of the researcher.

Judson does not accept this view. He would rather define fraud as actions that destroy the integrity of the scientific record. He believes that every scientist has the responsibility to maintain accurate records. Not maintaining records adequate to substantiate published results should be called fraud, no matter what the intent of the scientist. Another way to think of it is that the punishment for sloppiness should be the same as the punishment for intentional fraud – loss of research funding, retraction of papers, demotion and possible expulsion from their current position. Sloppiness is a form of scientific misconduct, akin to a surgeon who forgets to remove gauze and clamps before sewing a patient back up. Nobody cares whether the surgeon intended to leave something behind; it is almost certainly assumed that they did not, but it is still seen as serious misconduct.

It is hard to argue against holding scientists to higher standards of conduct, because people’s lives are often at stake. This is obviously true in medicine where, unfortunately, most of the fraud seems to be happening. Judson quotes Patricia Woolf, a Princeton sociologist, who investigated 26 cases of fraud that became public between 1980 and 1987. All but four (85%) involved medical research.
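Woolf’s 85% figure is just the arithmetic on her case counts; a quick sketch of the calculation (the case counts are from the source, the variable names are mine):

```python
# Woolf's survey: 26 fraud cases became public between 1980 and 1987,
# all but four of them involving medical research.
total_cases = 26
non_medical = 4
medical = total_cases - non_medical
share = medical / total_cases
print(medical, round(share * 100))  # 22 medical cases, roughly 85 percent
```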

The types of frauds are limited only by the imagination of the perpetrators. The most common types are falsification of data (sometimes total fabrication) and theft of data, including plagiarism. Omission of data might not seem too important, but its importance can be seen when you consider the omission of side effects from a new drug as an example. A rarer type is sabotage of the research of others. Sometimes the fraud is so blatant that it is almost farcical. An example is the Polish chemical engineer Andrzej Jendryczko who published 140 articles in 13 years, of which at least 29 were nearly identical copies of articles from other countries. His plagiarism went undetected for so long because the articles were usually published in Polish.

Plagiarism is not the most serious type of fraud. It hurts scientists but not necessarily science. In fact, if Jendryczko translated the articles accurately, he was performing a service to scientists who only spoke Polish.

Falsification is far more serious because it always corrupts the scientific record. It is a crime against science, indeed a crime against all humanity, when it legitimizes science that is false. Not everyone agrees. Philip Handler, president of the National Academy of Sciences in 1981, testifying before a congressional subcommittee, stated that it is “a relatively small matter which is…normally effectively managed by…the scientific community”. Judson’s book shows that not only is fraud quite common, but that the scientific community is not good at managing it.


Even when fraud is clearly defined, detecting it is difficult. It is not clear that modern science has good tools for this, and what tools do exist are not being used. Judson clearly does not believe Cornell chemistry professor Roald Hoffmann, whom he quotes as saying in 1996 that fraud in science “is not a real problem…there are extraordinarily efficient self-corrective features in the system of science”.

Every Scientist is an Island

All authors of a paper are supposedly responsible for the final work. But, when fraud comes to light, Judson demonstrates that they generally take no responsibility for the work of their co-authors. In many cases they did not participate in the development of the paper at all, but are mere ‘guest authors’. The guests, often well-known senior scientists, benefit by padding their resumes; the junior scientists gain status by association and a higher chance of publication.

Similarly, supervisors, to whom fraud is most commonly reported when it is first discovered, seem to have remarkably little knowledge of the work of their underlings; when fraud is uncovered they almost always deny any knowledge of it. This implies that honest mistakes would also go undetected, which is perhaps more worrying.

Peer review is often claimed to be another check in the obstacle course of science. Judson fusses over the difference between true peer review (of grant applications) and refereeing (peer review of papers before publication), although they are essentially the same, sharing the same weaknesses. Judson musters a lot of evidence to show that this time-honored ritual, whatever its name, is (to be polite) useless. Issuing dice to granting agencies and editors would work just as well.

Peer review might actually be worse than useless because it gives a veneer of respectability to data. The anonymous reviewer can gain important clues about the work in competing labs or even steal the work. In one humorous instance Vijay Soman expropriated the work of Helena Wachslicht-Rodbard by recommending against acceptance of her paper and then a few weeks later submitting her work as his own for publication in a different journal. This was only discovered by coincidence when Wachslicht-Rodbard was asked to review Soman’s paper and recognized it as her own work! Sometimes the system works, in this case the perpetrator, Soman, was disgraced. But the victim also left scientific research after only four years as a publishing scientist, perhaps in disgust at the company she found herself keeping (Judson does not explain why).

Measure Once, Cut Twice

Replication of experiments is a way to verify published data, and could easily detect fraudulent results. Unfortunately it is no longer given priority compared with the generation of new data. Judson quotes a committee report published in NEJM in 1987 which found that it is “impossible to obtain funding for studies that are largely duplicative”. Replication also earns little academic credit and is difficult to publish.

Not as good as replication, but still very useful, is the reanalysis of raw data. This is difficult now because the data is usually not made public, often out of a desire to keep secret information that might be patentable. With papers becoming more and more compact, much is left to hand-waving. Readers have to take it on faith that the work was actually done at all, let alone that it produced the published results. Judson does not discuss the Bluestone affair, which established the rule that senior scientists have total control over the release and publication of data, probably because this was a case of censorship, not fraud. [Bell, 1992]

Scientific Paradise Lost

Judson quotes several members of an older generation of scientists who had a much more idealistic and ascetic view of their profession. Sadly, the views of men like Max Weber, Robert Merton and Max Delbrück are rare nowadays, at least among those hoping that their discoveries will be patentable and enrich them, probably not directly, but through the honors bestowed upon them by their company.


Reactions to fraud, at least judging by the examples in Judson’s book, are almost always inadequate.

The first response is usually denial. The second is “Shoot the messenger”. Even where the evidence of corrupt science would be obvious to an outsider, a tacit mutual defence treaty between insiders is the norm. The Baltimore case is one of the best examples of this and Judson, having been involved, gives it a thorough airing.

This case would not have been named after David Baltimore had he not so stridently defended the accused senior author, Theresa Imanishi-Kari, who attached his name to the paper as part of her grand scheme for a new job. As a guest author he could easily have walked away, claiming ignorance of the questionable experiments (probably quite honestly), but he simply would not.

Judson quotes the much less flamboyant Howard Temin, who shared a Nobel in 1975 with Baltimore, for the discovery of the reverse transcriptase enzyme. “When an experiment is challenged, no matter who it is challenged by, it’s your responsibility to check. That is an ironclad rule of science, that when you publish something you are responsible for it.”

Judging by the miserable track record documented by Judson it is much more likely to be the whistleblower who takes the fall. Less often it is the prime perpetrator. And, almost never are the co-authors forced to take any responsibility.

The reaction of the scientific institutions that investigate these cases merely adds a layer of legalistic obfuscation to the process. Damning evidence can be withheld by a process that seems to demand absolute certainty from the accuser while accepting the most pathetic excuses from the accused, even when it is widely accepted that the science was corrupt – experiments were never performed, the results were doctored, and so on. Very often fraud is defined so narrowly that only an abject confession could be construed as positive evidence. It is as if only first-degree murder were a crime, while someone who drove through a crowded street at 100 mph, killing pedestrians left and right, would be let off with a light scolding solely because of the lack of intent.

The Tip of the Iceberg?

Judson paints a dark picture of Fortress Science increasingly controlled by moneyed interests, yet insisting that all monitoring be internal. By defining those without PhDs or MDs as incapable of understanding scientific issues, the sweeping-under-the-carpet can be left to the experts.

I am left wondering how much bigger the problem is than Judson could possibly document, simply because most of it is either never discovered, or never becomes public. How much of the documentation for drugs that are later withdrawn from the market was based on fraudulent science?

But I am also left thinking that scientific fraud is, in the grand scheme of things, a minor problem for science by comparison with the distortions that are unlikely to be classified as fraud, even by the broader brush that Judson wants to use.

Pharmaceutical drug salesmen put a positive ‘spin’ on their drugs. Clever ones don’t even have to lie, merely be selective with the truth [Ziegler, 1995].

In clinical trials, a placebo arm can be ruled unethical, making it difficult to dethrone a ‘proven’ treatment, even if the proof was corrupt.

Surrogate markers can be used instead of meaningful clinical endpoints. [Fleming, 1996]

Pharmaceutical-company sponsored trials reach more positive conclusions than those sponsored by governments. [Friedberg, 1999]

Excuses are found not to publish certain trials (by mere coincidence, the half that show an undesirable outcome).

Clinical guidelines are written by doctors and researchers who have their hands deep in the pockets of the drug companies, yet rarely reveal this financial conflict. [Willman, 2004]

Which of these practices can be ruled to be ‘fraud’ as opposed to ‘spin doctoring’? Medicine, in particular, relies heavily on the ‘standard of care’, that is: “If everyone else is doing it, I can’t be sued.” If other doctors are taking money from drug companies and from the government and from patients, why should I be any different?

Many of the most serious corruptions of the scientific record are due to accepted, institutionalized practice, and therefore are much more widespread and quite possibly more damaging.

Even if scientific fraud, usually a solitary pursuit, were outlawed, we might not be much further ahead. Much bigger changes are needed, perhaps even a ‘Napoleonic Code’ where it would be the responsibility of every scientist to prove that their work was reported accurately, not the responsibility of others to prove them wrong, in the absence of complete and verifiable records from the scientist. “Guilty, until proven innocent” may have to be the new rule.

A Recent Example

The recent Nevirapine scandal is a good example of the type of problem that straddles the line between scientific fraud and bad science.

The HIVNET 012 trial of nevirapine in Uganda was intended to show that a single dose of this drug could reduce the number of HIV-positive mothers having HIV-positive babies, leading to the obvious conclusion that it would save the lives of millions of babies, especially in Africa.

However, the trials were contaminated by a number of record-keeping problems. Some people consider them mere “irregularities with record keeping”, as Clifford Lane, deputy director of NIAID, was recently quoted in Science. But then, wasn’t that the problem at Enron? What is a clinical trial without records anyway? Obviously, it wouldn’t be a clinical trial. As Judson points out time and again, it is the sanctity of the scientific record that is of ultimate importance. If the scientific record is manipulated, science is useless, perhaps worse than useless because it will divert resources into activities that cause harm.

According to reports from Associated Press and other sources, thousands of adverse events went unreported as well as a number of deaths. A report by Dr. Betsy Smith, completed in January 2003, stated that there was incomplete reporting of adverse events and other safety concerns and patient records were “below the expected standards of clinical research”.

Jonathan Fishbein, an MD, was hired to improve clinical trials at the US National Institutes of Health, the sponsor of this trial. But, when he went public with his concerns over this trial, he was fired. He is documenting his ordeal at

One of the problems was that Edmund Tramont, head of NIAID’s Division of AIDS, was apparently more interested in covering up the problems than solving them (which might have involved scrapping the trial and starting over). He altered a critical memo, saving President Bush embarrassment, as his African AIDS initiative relied heavily on single-dose nevirapine for pregnant HIV-positive women.

Once the scandal broke other scientists and AIDS organizations leapt into the fray, all apparently with the aim of glossing over the problems and continuing to use the ‘life saving’ drug nevirapine. “There is no scandal” was their implicit message.

The head of HIVNET 012, Brooks Jackson, called Tramont “a man of integrity and common sense”.

Gregg Gonsalves of Gay Men’s Health Crisis claimed that “The data discrepancies don’t alter the fundamental findings”. Arthur Amman of Global Strategies for HIV Prevention said that the controversy would undermine efforts to prevent HIV transmission in Africa. Dr. Donald Gray of Johns Hopkins University asked rhetorically “Should we ban Nevirapine because of an administratively imperfect trial and double the number of HIV infected infants?” Rachel Cohen of Doctors Without Borders, apparently without irony, stated that “The truth about Nevirapine is getting widely misrepresented”. AIDS Treatment News moaned that “misleading nevirapine stories published around the world will cause patients, doctors, or even governments to reject single-dose nevirapine to prevent mother-to-child HIV transmission, in cases when no other treatment is possible”. The Elizabeth Glaser foundation chimed in that “It is important to understand that there are no data demonstrating that significant nevirapine-induced toxicities occur in women or infants receiving short-course nevirapine for PMTCT [prevention of mother to child transmission of HIV]”. Of course, if the reports are lost, there would be no data! That doesn’t mean that there isn’t a problem.

Even if the HIVNET 012 trial had been managed perfectly, it still might not be meaningful. It compared nevirapine against AZT instead of against a placebo. AZT is a highly toxic drug (see ) so the claim that there were no adverse events really means that nevirapine did not have more adverse events than AZT. Some people believe that the approval of AZT was a massive error; that is certainly the message of John Lauritsen’s book “AZT: Poison by Prescription” (now out of print) and of the more recent “Debating AZT” by Anthony Brink (available freely online at ).

Assume for a moment that Lauritsen and Brink are right, and that AZT is highly toxic and should never have been approved for use. Then, HIVNET012 simply proves that Nevirapine is also highly toxic and should also never have been approved for use.

Another problem with HIVNET 012 is that it measures the presence of HIV indicators in babies, not long-term health (nor does it directly verify the presence of HIV). A recent trial in a ‘real-life situation’ in Kenya showed no statistically significant difference in the rate of HIV transmission with the HIVNET 012 nevirapine regimen [Quaghebeur, 2004].

The HIV indicator used in Uganda was qualitative RNA PCR (‘viral load’). The test used, Roche’s Amplicor, “is not intended to be used as a screening test for HIV or as a diagnostic test to confirm the presence of HIV infection” [Roche, 1999]. Normally antibody tests would be used, but they are widely accepted to be unreliable in infants before 18 months of age (and there is evidence they are unreliable beyond that age as well). This leaves PCR as the only game in town, even if it is not a very good game.

Does it matter if the trial is corrupt due to outright fraud or mere sloppiness? What matters is that it is conceivable that the trial was used to foist a highly toxic drug on unsuspecting mothers-to-be in the third world (not in America, because the manufacturer withdrew their application for approval).

To be fair to Judson, the higher standards he is calling for would also catch situations like this. The questions would not be legalistic; they would merely be whether the data from the trial can be trusted. Since it cannot be trusted completely, it should be discarded entirely.

Like Science? Like Scientists?

One of the problems with science is the overuse of a reductionist approach. This has had spectacular success in some areas of science, but living organisms are where this approach meets significant limits. While it is easy to dissect a dead body into parts, it is not possible to direct therapy only at a specific area in a living organism. Drugs may be directed at HIV, for example, but may also damage the blood supply, pancreas, liver, nervous system and bones (as many AIDS drugs do).

This reductionist approach is also reflected in how scientists organize themselves. They apparently believe that thousands of people (the quantum of the scientific universe) working alone can each produce microscopic ‘facts’, each of which can be placed in a pigeonhole (usually a journal article) waiting for the next scientist to come along and use it.

Perhaps this notion is behind the astonishing belief in the honesty of a scientist’s partners, even when there are clearly major problems with their work. As David Baltimore said, “One has to trust one’s collaborators”. To distrust one co-worker means that, logically, they should distrust all, and take no published fact at face value. Taken to an extreme this would cause science to grind to a halt as scientists spent all their time verifying previously published work.

However, few perhaps recognize that science, medical science anyway, is reaching the opposite extreme, where scientists are so individualistic that they take no responsibility for the work of other scientists they work with as co-authors or co-investigators, or that they ‘supervise’.

Surely there is a middle ground where scientists do spend some time examining or replicating the work of others, and where trust has its limits.

Judson obviously believes that science must be refocused on the integrity of the putative ‘facts’ being pigeon-holed. In his world work would be replicated, and there would be nearly as much status in that as in the original research. Results do not need to be replicated by every scientist who uses them, but relying on unreplicated science should be frowned upon. The secondary goal of science (beyond the primary purpose of the investigation) should be to document the work so thoroughly that it could be replicated, and that logical flaws in the work could easily be identified. In a world like that, scientific fraud would be much harder to perpetrate, although it would never be completely eliminated.

Afterword: The Book’s Limitations

Judson’s book is an important contribution to science, shining light on some rather putrid dark corners. It is not without its flaws, however.

He devotes an entire chapter to open publication on the internet. Clearly Judson is an enthusiast of this, but the connection with fraud is weak, and Judson doesn’t even bother to make a case for it. In fact, he describes how electronic publishing will solve the two worst problems with print publication – long delays between submission and publication and lack of access to the raw data. These have only a tenuous connection with fraud (in fact facilitating more rapid publication might make the problem worse). This chapter does not belong in the book.

Judson starts the book with a discussion of modern financial scandals, like Enron. It isn’t clear that they have much in common with scientific fraud. Money is definitely one motivation for scientific fraud, but it is not the only one. Status, responses to pressure and laziness are also important factors. Perhaps this was an attempt to draw the reader into the book. I think that most readers have heard far too much about Enron, Global Crossing and Adelphia. I just wanted to skip this part.

He attempts to pinpoint the transition from private funding, before WW II, to public funding as one of the causes of scientific fraud, but he isn’t terribly convincing. Public funding did result in a massive increase in the size of the enterprise; it is more likely that this growth, rather than the source of the funding, is at the root of the increase in fraud. A little later he even contradicts himself by discussing some of the large private foundations, such as the Howard Hughes Medical Institute and the Wellcome Trust, which are still hugely influential in research.

Judson also comes across as somewhat overly clergiable (that’s an old-fashioned word meaning learned and scholarly that I’m sure impresses you) using words like anodyne, desideratum, consilience, hermetic and eleemosynary when there are obvious, simpler and more common words available.

The book could also use a tabular summary of cases of fraud. After reading the book, finding a case for reference is difficult unless you remember the name of one of the major actors. There are so many cases in the book, most involving several scientists, that it becomes very difficult to keep them all straight.

Judson also tries a new method of including notes. Probably to avoid the superscripted numbers interrupting the flow of the text, Judson puts notes at the back of the book, organized by chapter, but without any identifying number. This is fine for people who generally don’t consult the notes, but makes it harder for others to find them.

These are not fatal flaws; the book is worth reading despite them. In fact, for people concerned about the integrity of the scientific enterprise, it is a must-read.


[Bell, 1992] Bell RI. Impure Science: Fraud, Compromise, and Political Influence in Scientific Research. John Wiley. 1992.
[Fleming, 1996] Fleming TR et al. Surrogate End Points in Clinical Trials: Are We Being Misled? Ann Intern Med. 1996 Oct 1; 125(7): 605-13.
[Friedberg, 1999] Friedberg M et al. Evaluation of Conflict of Interest in Economic Analyses of New Drugs Used in Oncology. JAMA. 1999 Oct 20; 282(15): 1453-7.
[Roche, 1999] Amplicor HIV-1 Monitor Test. Roche. 1999.
[Quaghebeur, 2004] Quaghebeur A et al. Low efficacy of nevirapine (HIVNET012) in preventing perinatal HIV-1 transmission in a real-life situation. AIDS. 2004 Sep 3; 18(13): 1854-6.
[Willman, 2004] Willman D. The National Institutes of Health: Public Servant or Private Marketer? Los Angeles Times. 2004 Dec 22.
[Ziegler, 1995] Ziegler MG et al. The accuracy of drug information from pharmaceutical sales representatives. JAMA. 1995 Apr 26; 273(16): 1296-8.

Copyright © David Crowe, Friday, May 6, 2005.