Hot Best Seller

Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science

Availability: Ready to download

A major exposé that reveals the absurd and shocking problems that pervade and undermine contemporary science. So much relies on science. But what if science itself can’t be relied on? Medicine, education, psychology, health, parenting – wherever it really matters, we look to science for advice. Science Fictions reveals the disturbing flaws that undermine our understanding of all of these fields and more. While the scientific method will always be our best and only way of knowing about the world, in reality the current system of funding and publishing science not only fails to safeguard against scientists’ inescapable biases and foibles, it actively encourages them. From widely accepted theories about ‘priming’ and ‘growth mindset’ to claims about genetics, sleep, microbiotics, as well as a host of drugs, allergies and therapies, we can trace the effects of unreliable, overhyped and even fraudulent papers in austerity economics, the anti-vaccination movement and dozens of bestselling books – and occasionally count the cost in human lives. Stuart Ritchie was among the first people to help expose these problems. In this vital investigation, he gathers together the evidence of their full and shocking extent – and how a new reform movement within science is fighting back. Often witty yet deadly serious, Science Fictions is at the vanguard of the insurgency, proposing a host of remedies to save and protect this most valuable of human endeavours from itself.



30 reviews for Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science

  1. 5 out of 5

    Gavin

    Wonderful introduction to meta-science. I've been obsessively tracking bad science since I was a teen, and I still learned loads of new examples. (Remember that time NASA falsely declared the discovery of an unprecedented lifeform? Remember that time the best university in Sweden completely cleared their murderously fraudulent surgeon?) Science has gotten a bit fucked up. But at least we know about it, and at least it's the one institution that has a means and a track record of unfucking itself. Ritchie is a master at handling controversy, at producing satisfying syntheses - he has the unusual ability to take the valid points from opposing factions. So he'll happily concede that "science is a social construct" - in the solid, trivial sense that we all should concede it is. He'll hear out someone's proposal to intentionally bring political bias into science, and simply note that, while it's well-intentioned, we have less counterproductive options. Don't get the audiobook: Ritchie is describing a complex system of interlocking failures. I need diagrams for that sort of thing. Ritchie is fair, funny, and actually understands the technical details. Supersedes my previous fave pop-meta-scientist, Ben Goldacre.

  2. 5 out of 5

    Julia

    This is one of the most important books I’ve read in the past few years. Ritchie skillfully examines the problems plaguing modern science, looks at the motivations that cause them, and posits solutions. Science Fictions drives home the importance of skepticism in all things, even science.

  3. 5 out of 5

    Andy

    This is an important topic, and the author does an excellent job explaining problems like p-hacking. But these issues are nothing new to scientists, so the main value of this book lies in whether it engages and clearly explains things for the general public. And there, I’m afraid the author may end up just increasing confusion by trying to turn everyone into a scientist. In terms of solutions to bad science, I wonder if we don’t need to start by addressing the underlying culture of corruption and incompetence, of which bad science is just one symptom (see Detroit: An American Autopsy). Nerd addendum: With nutritional research, for example, he makes a good point that the news media do a bad job of hyping all these small or shoddy or irrelevant studies. His immediate solution is to try to teach everyone how to read a scientific paper, and then whenever you hear about an interesting study in the news, you should go and somehow (even illegally) get a copy of the study and analyze it for validity. That seems nuts and unfair. According to the book, doctors and scientists and editors of scientific journals are widely incapable of this, so how is every citizen going to master this skill? And why should you? I think if people (scientists, doctors or otherwise) are really interested in nutritional epidemiology, they should go deep and read, for example, Gary Taubes. That gives you an understanding of the research literature going back decades, explaining what is wrong with the original studies that are often cited, and giving the implications in plain language. Then if you want, you can look up a few of the studies that he has detailed and you’ll be able to know what to look for and to verify whether they say what he says they say. You have to know stuff to learn stuff. What matters is not the latest news item, but the overall weight of the best available evidence. Another problem with his commentary on nutritional epidemiology is that he goes on from there to warn in general about all observational epidemiology, without pointing to when observational epidemiology does supply robust actionable evidence (trans fats, lung cancer, SIDS, etc., etc.).

  4. 4 out of 5

    Alvaro de Menard

    In 1945, Robert Merton wrote: "There is only this to be said: the sociology of knowledge is fast outgrowing a prior tendency to confuse provisional hypothesis with unimpeachable dogma; the plenitude of speculative insights which marked its early stages are now being subjected to increasingly rigorous test." Then, 16 years later: "After enjoying more than two generations of scholarly interest, the sociology of knowledge remains largely a subject for meditation rather than a field of sustained and methodical investigation. [...] these authors tell us that they have been forced to resort to loose generalities rather than being in a position to report firmly grounded generalizations." In 2020, the sociology of science is stuck more or less in the same place. I am being unfair to Ritchie (who is a Merton fanboy), because he has not set out to write a systematic account of scientific production—he has set out to present a series of captivating anecdotes, and in those terms he has succeeded admirably. And yet, in the age of progress studies surely one is allowed to hope for more. If you've never heard of Daryl Bem, Brian Wansink, Andrew Wakefield, John Ioannidis, or Elisabeth Bik, then this book is an excellent introduction to the scientific misconduct that is plaguing our universities. The stories will blow your mind. For example you'll learn about Paolo Macchiarini, who left a trail of dead patients, published fake research saying he healed them, and was then protected by his university and the journal Nature for years. However, if you have been following the replication crisis, you will find nothing new here. The incidents are well-known, and the analysis Ritchie adds on top of them is limited in ambition. The book begins with a quick summary of how science funding and research work, and a short chapter on the replication crisis. After that we get to the juicy bits as Ritchie describes exactly how all this bad research is produced. He starts with outright fraud, and then moves onto the gray areas of bias, negligence, and hype: it's an engaging and often funny catalogue of misdeeds and misaligned incentives. The final two chapters address the causes behind these problems, and how to fix them. The biggest weakness is that the vast majority of the incidents presented (with the notable exception of the Stanford prison experiment) occurred in the last 20 years or so. And Ritchie's analysis of the causes behind these failures also depends on recent developments: his main argument is that intense competition and pressure to publish large quantities of papers is harming their quality. Not only has there been a huge increase in the rate of publication, there’s evidence that the selection for productivity among scientists is getting stronger. A French study found that young evolutionary biologists hired in 2013 had nearly twice as many publications as those hired in 2005, implying that the hiring criteria had crept upwards year-on-year. [...] 
as the number of PhDs awarded has increased (another consequence, we should note, of universities looking to their bottom line, since PhD and other students also bring in vast amounts of money), the increase in university jobs for those newly minted PhD scientists to fill hasn’t kept pace. By only focusing on recent examples, Ritchie gives the impression that the problem is new. But that's not really the case. One can go back to the 60s and 70s and find people railing against low standards, underpowered studies, lack of theory, publication bias, and so on. Imre Lakatos, in an amusing series of lectures at the London School of Economics in 1973, said that "the social sciences are on a par with astrology, it is no use beating about the bush." Let's play a little game. Go to the Journal of Personality and Social Psychology (one of the top social psych journals) and look up a few random papers from the 60s. Are you going to find rigorous, replicable science from a mythical era when valiant scientists followed Mertonian norms and were not incentivized to spew out dozens of mediocre papers every year? No, you're going to find exactly the same p<.05, tiny N, interaction effect, atheoretical bullshit. The only difference being the questionable virtue of low productivity. If the problem isn't new, then we can't look for the causes in recent developments. If Ritchie had moved beyond "loose generalities" to a more systematic analysis of scientific production I think he would have presented a very different picture. The proposals at the end mostly consist of solutions that are supposed to originate from within the academy. But they've had more than half a century to do that—it feels a bit naive to think that this time it's different. Finally, is there light at the end of the tunnel? ...after the Bem and Stapel affairs (among many others), psychologists have begun to engage in some intense soul-searching. More than perhaps any other field, we’ve begun to recognise our deep-seated flaws and to develop systematic ways to address them – ways that are beginning to be adopted across many different disciplines of science. Again, the book is missing hard data and analysis. I used to share his view (surely after all the publicity of the replication crisis, all the open science initiatives, all the "intense soul searching", surely things must change!) but I have now seen some data which makes me lean in the opposite direction. Ritchie's view of science is almost romantic: he goes on about the "nobility" of research and the virtues of Mertonian norms. But the question of how conditions, incentives, competition, and even the Mertonian norms themselves actually affect scientific production is an empirical matter that can and should be investigated systematically. It is time to move beyond "speculative insights" and onto "rigorous testing", exactly in the way that Merton failed to do.

  5. 5 out of 5

    Steve

    Really enjoyed this description of some of the big problems in science today. This is not in any way an anti-science book; Ritchie makes clear that he wants to improve science, not to dispense with it. Along with describing problems he also describes much of the process of science, which I enjoyed. He spends a lot of time on the reproducibility crisis, p-hacking and other statistical cheating, and many other issues that one hears about when science problems get in the news. This book has been well reviewed in general publications, but I was curious how science journals would review it. The only review I found in a professional science periodical (in the ultra-prestigious Nature) was basically positive with a few criticisms.

  6. 5 out of 5

    Sophia

    I highly recommend this book for anyone planning (or considering) to do science, whether a bachelor's, a master's or more. It's a great overview of how science is actually practiced, and how it can so easily go wrong. I also recommend this to current scientists, because it's a humbling reminder of what we're doing wrong, and also a quick update on things we might have been taught as facts that have actually been disproven in the meantime. The book is exceptionally well structured: very clear writing, very engaging, switching between as much information as needed to understand a given concept, then compelling examples, and discussion as to why it matters, what people might object to, etc. Really really good.

    However, the author fails to give proper due to the main strength of science: its ability to self-correct. This book is described as an "exposé", but in reality all of what he mentions has been known for decades, and in fact every single example he gives of fraud, negligence, bias, or unwarranted hype was uncovered not by external journalists but by other scientists. It was the peers who read papers that looked suspicious and did more digging, or whole careers built around developing software and tools for automatically detecting plagiarism, statistical errors, etc. It was psychology itself that "wrote the book" on bias, which was fundamental to exposing the biases of scientists themselves. And more often than not, it was just a future study that tried something better that should have worked but didn't that disproved a flimsy hypothesis. Sure, fraud, hype, bias, and negligence are dragging science down, but science isn't "broken", it's just inefficient. Wasting a lot of money on bad experiments and scientists needs to be avoided, but in the end, a better truth tends to bubble up regardless. Anyone who has had to defend science against religious diehards will be particularly aware of this.

    Also missing is proper consideration as to why these seemingly blindingly obvious problems have been going on for so long. As an insider, here are some of my answers:

    - All this p-hacking (trying different analyses until something is significant). Scientists are not neatly divided into those that immediately find their results because of how fantastically well they planned their study, and those that desperately try to make their rotten data significant. Every. Single. Study. has to fine-tune its analysis once the data are in, not before. Unless you are in fact replicating something, you have absolutely no idea what the data will look like, and what's the most meaningful way to look at it. This means you can’t just tell scientists "stop p-hacking!", you need an approach that acknowledges this critical step. Fortunately, an honest one exists that can be borrowed from machine learning: splitting your data into a "training" and "testing" dataset, where you fine-tune your analysis pipeline on a small subset, then you actually rely on the results applied to a larger one, using only and exactly the pipeline you previously developed, without further tweaking (see the sketch just after this list).
    - The file drawer problem (null results not getting published). I think especially in the field of psychology, statistics courses are to blame for this; we don't reeeally understand how the stats work, so we rely on Important Things To Remember that we're taught by statisticians, and one of these is that "you can't prove a null hypothesis". This ends up getting interpreted in practice as "null results are not real results, because nothing was proven". We are actively discouraged from interpreting "absence of evidence as evidence of absence", but sometimes that is in fact exactly what we should be doing; for sure not with the same confidence and in the same way with which we interpret statistically significant positive results, but at some point, a study that should have found something but didn't is a meaningful indication that that thing might not in fact be there. A useful tool to help break through this narrow-minded focus on only positive results is equivalence testing, where you test not whether two groups are different but whether they are statistically significantly "the same" (see the sketch at the end of this review). This is a huge shift in mindset for many psychologists, who suddenly learn that you can in fact have a legitimate result that there was no difference to be found. I suspect knowing this will make people less wary of null results in general.

    - Proper randomization (and generally the practicalities of data collection). The author at some point calls it a mistake that a trial on the Mediterranean Diet had assigned the same diet to the same family unit, thus breaking the randomization. For the love of God, does he not know how families work? You cannot honestly ask members of the same family to eat differently! Sure, the authors should have implemented proper statistical corrections for this, but sometimes you have to design experiments for reality, not a spherical world.

    - Reviewers nudging authors to cite them. This may seem like a form of blatant self-promotion, but it's worth mentioning that in reality, the peer reviewers were specifically selected as members of the EXACT SAME FIELD, so odds are good that they have in fact published relevant work, and odds are even better that they are familiar with it enough to recommend it. That is not to say that none of it is about racking up citations, but it shouldn't be assumed without evidence, because legitimate alternative explanations exist.
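    [Editor's note] A minimal sketch of that exploratory/confirmatory split, assuming Python with numpy and scipy; the data, split sizes and the frozen "pipeline" here are all hypothetical:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical study: one outcome per participant, two groups.
    scores = rng.normal(loc=0.0, scale=1.0, size=200)
    group = rng.integers(0, 2, size=200)   # 0 = control, 1 = treatment

    # Split ONCE, up front: a small exploratory set for tuning the
    # pipeline, and a confirmatory set that is only touched afterwards.
    idx = rng.permutation(200)
    explore, confirm = idx[:60], idx[60:]

    def pipeline(s, g):
        """The analysis to be frozen: here, a two-sample t-test."""
        return stats.ttest_ind(s[g == 1], s[g == 0])

    # Tune freely here (outlier rules, transforms, covariates...):
    print("exploratory:", pipeline(scores[explore], group[explore]))

    # Then report ONLY this, with the pipeline frozen, no further tweaks:
    print("confirmatory:", pipeline(scores[confirm], group[confirm]))
    ```

    The design point is that the confirmatory subset is analysed exactly once, with the pipeline already frozen, so its p-value keeps its nominal meaning.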
    One other detail not mentioned by the author is that good science is f*cking hard. For my current experiment, I need a passing understanding of electrical engineering to run the recording equipment, a basic understanding of signal processing and matrix mathematics to clean and analyze the data, a good understanding of psychology for experimental design, a deep understanding of neuroscience for the actual field I'm experimenting in, a solid grasp of statistics, sufficient English writing skills, separate coding skills for both experimental tasks and data analysis in two different languages, and suddenly a passing understanding of hospital-grade hygiene practices to deal with COVID! There's just SO MUCH that can go wrong, and a failure at any point is going to ruin everything else. It's exhausting to juggle all that, and honestly, it's amazing that we have any valid results coming out at all. The only real solution to this is to have larger teams and focus less on individual achievements. The more eyes you have on scripts, the fewer bugs there will be; the more assistants available to collect data, the fewer mishaps; the more people reading the paper beforehand, the fewer mistakes slip through. We need publications from labs, not author lists; the exact contribution of each person can be specified somewhere, but science needs to move away from this model of venerating the individual, because this is not the 19th century anymore: the best science comes from groups. On CVs, we shouldn’t write lists of publications, we should write project descriptions (and cite the paper as “further reading”, not as an end in and of itself). ~~~ Scientists need the wakeup call from this book. Journalists and interested laymen will also greatly benefit from understanding why a healthy dose of scepticism is needed towards any single scientific result, and how scientists are humans too. But the take-home message that can come through from this book, and which is not actually true, is that scientists are either incompetent or dishonest or both. The author repeatedly bashes the poor science and science communication that have eroded public trust in science, but ironically this book essentially highlights the same failings in neon letters and makes sure trust in science is eroded further. To some extent that is warranted, but the author could have done more to defend the institution where it is deserved, and as an insider, could have done more to talk about the realities an individual scientist faces when they make these poor decisions. It's worth mentioning that science has not gotten worse: we're still making discoveries, still disproving our colleagues, and still improving quality of life. We could just be doing it more efficiently.
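    [Editor's note] The equivalence testing mentioned in the list above is commonly done via the two one-sided tests (TOST) procedure. A toy sketch, with made-up data and made-up equivalence bounds, assuming a recent scipy (the `alternative` keyword needs scipy 1.6+):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    a = rng.normal(0.00, 1.0, 80)   # hypothetical group A
    b = rng.normal(0.05, 1.0, 80)   # hypothetical group B

    # Equivalence bounds: the largest difference we'd call negligible.
    low, high = -0.3, 0.3

    # Two one-sided tests: is the difference above `low` AND below `high`?
    p_lower = stats.ttest_ind(a - low, b, alternative='greater').pvalue
    p_upper = stats.ttest_ind(a - high, b, alternative='less').pvalue
    p_tost = max(p_lower, p_upper)

    print(f"TOST p = {p_tost:.3f} ->",
          "statistically equivalent" if p_tost < 0.05 else "not shown equivalent")
    ```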

  7. 4 out of 5

    Chris Boutté

    Incredible book that I binged in a day. As an influencer who often references psychological studies but also knows how much bad science is out there, I’m always trying to learn more about this subject. The author did a great job of not just giving examples of bad science; he explains WHY it’s happening and offers solutions. Absolutely loved this book, and I hope some journalists read it as well before they keep reporting on hyped-up science.

  8. 5 out of 5

    Cam

    Essential reading for graduate science researchers, although much of the material will hopefully be familiar to them. Ritchie writes clearly. He's likeable and scientifically and statistically literate, but doesn't take himself too seriously. He's a great science populariser even when he is denigrating science! Ritchie helped kick off the well-publicised replication crisis in social science in 2012 when he attempted and failed to replicate a parapsychology paper. The original paper by Bem purported to show that we can study for a test after we have taken the test to improve our test results. Obvious nonsense, right? No surprises it failed to replicate. The major problem, as the original authors noted, is that their methods weren't all that different to many of the papers being published in social science. Essentially, social science can't be trusted. Whether a study replicates doesn't correlate with how many citations it has. Truly remarkable. Ritchie does a nice job explaining to a lay audience concepts like the p-value, statistical significance, and common dodgy statistical methods such as p-hacking and HARKing. He also outlines how the issues are exacerbated by perverse incentives in academia such as publish-or-perish, and the need for results to be statistically significant and sexy. Ritchie also recounts some good narrative non-fiction around some of the most high-profile cases of fraud such as Diederik Stapel (the Bernie Madoff of science) and Paolo Macchiarini (who claimed he was healing people with risky procedures - as opposed to killing them!). Twitter user Alvaro De Menard is less optimistic than Ritchie. De Menard points out, with a systematic deep dive into social science papers' replicability, that this isn't a recent phenomenon, and that any proposed way forward to fix the crisis within academia is unlikely to succeed.
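    [Editor's note] To see why the p-hacking this review mentions matters, here is a toy simulation (mine, not from the book): two groups with no true difference anywhere, where the "finding" is reported if any of five outcomes clears p < .05.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_sims, n_outcomes, n = 2_000, 5, 30

    hits = 0
    for _ in range(n_sims):
        # Two groups with NO true difference, measured on five outcomes.
        a = rng.normal(size=(n_outcomes, n))
        b = rng.normal(size=(n_outcomes, n))
        pvals = [stats.ttest_ind(a[i], b[i]).pvalue for i in range(n_outcomes)]
        # The p-hack: call the study a success if ANY outcome reaches p < .05.
        hits += min(pvals) < 0.05

    print(f"false-positive rate: {hits / n_sims:.1%}")
    # Roughly 1 - 0.95**5, about 23%, instead of the nominal 5%.
    ```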

  9. 4 out of 5

    Tristan Eagling

    Goodhart's law states "When a measure becomes a target, it ceases to be a good measure", which sums up the premise of the book perfectly. For centuries, science has tried to give value to subjective knowledge, and academia relies on these often arbitrary metrics. But all we have done is create a system which can be gamed, and populate that system with clever (mostly) people who are heavily incentivized to game it. As someone who has published scientific research, peer reviewed others and worked for various funders, many of the author's criticisms of 'science' and how we incentivise it hit home. Nothing in this book was particularly surprising or new to me, but I had never considered the extent of the combined effect all these individual imperfections in our system are having on the quality of the science being produced (and how much time and money is being spent on nothing more than maybe furthering someone's career). The most fascinating part of the book was the reference to the field of meta-science (the science of science), which has started to quantify just how bad malpractice in science is and to analyze aspects of the funder-scientist-journal relationships. This will be an uncomfortable read for many in the world of science, or anyone who has advocated the findings of a popular science book to their friends. However, it is essential reading and hopefully will help us all get closer to that elusive concept of 'truth'.

  10. 5 out of 5

    MIKE Watkins Jr.

    This book reminds me of another book I read, The Color of Law. Like The Color of Law, this is a book that is very informative and will transform the way you think about the topic discussed, but it lacks a few key ingredients that prevent it from being a 5/5 read. This book starts out by introducing you to how science works and how that science is publicized: 1. A scientist comes up with a scientific theory/question to look into and comes up with a hypothesis. 2. The scientist attempts to get a grant for research into said theory/question. 3. The scientist conducts experiments and collects data. 4. The scientist tries to get his findings published via peer review. The book proceeds to break down why the current process of peer review is flawed, and this largely stems from the fact that scientists don't replicate studies to ensure they are accurate. As a result, various "scholarly resources" are utilized by doctors, psychologists, and scientists without them realizing this, and people have died because of this dilemma. Scientists often engage in outright fraud, negligence, or hype in order to get a particular scientific discovery published. This is because they have perverse incentives to do so... sadly, world-renowned scientific publishing companies like Nature and Science display a preference towards scientific articles that are flashy, new, and eye-popping. It's very hard to produce an article/paper that's new/positive/eye-popping naturally, so scientists skew their results. This book also provides potential solutions for resolving these issues, such as requiring submitted papers/articles to be assessed by algorithms that are programmed to spot fake data in scientific papers. But yeah, in terms of how informative this book was... as you can see, it's very informative and changes your perspective on scholarly databases. However, as previously mentioned, this book was missing that thing that would put it over the top for me. A. This book wasn't an entertaining read at all. The information in this book was great... but the book itself was kind of boring. B. The book was repetitive; it would use endless examples to repeat or overly explain a concept.
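    [Editor's note] One real, published example of such a fake-data-spotting algorithm (my illustration; the review does not name it) is the GRIM test of Brown and Heathers (2016), which checks whether a reported mean is even arithmetically possible given the sample size. A minimal sketch:

    ```python
    def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
        """With n integer responses (e.g. Likert items), the true mean must
        be k/n for some integer k; check whether the reported rounded mean
        is reachable that way."""
        k = round(mean * n)  # nearest achievable sum of n integers
        return round(k / n, decimals) == round(mean, decimals)

    print(grim_consistent(5.19, 28))  # False: no 28 integers average to 5.19
    print(grim_consistent(5.18, 28))  # True: 145/28 = 5.1786 -> rounds to 5.18
    ```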

  11. 4 out of 5

    Mbogo J

    Science is the new god, a deity that has gotten its power too fast and in too great a measure, and it has behaved the same way a human would; it became power drunk. The hostile takeover it did on religion as the preeminent field of knowledge that guides human affairs left it feeling smug, thinking itself immune to the same pitfalls that had befallen religion. The end result? The current state of an epidemic of fraud, negligence, bias and hype. Stuart Ritchie took us across the current landscape of science as practised, not as it should be done. We heard stories of numerous researchers who manufactured data out of thin air, others who tortured the numbers until they told them what they wanted to hear, and some who know the art of mixing a perfect cocktail of words to take mild effects stratospheric. It was a fascinating tale. This book is one of those current books which are actually worth it, rather than the common practice of taking one good Atlantic article and padding it with enough hot air to pass off as a book. Hats off to you Stuart, you did a splendid job in writing this. Still, the current state of science is not an indictment of science itself but rather of the human incentives guiding science. The publish-or-perish rat race, limited funds, corporate interests and humans with suspect morals taint an otherwise noble field. Ritchie had a couple of worthy recommendations which, if implemented, might save science some face, but I won't be too optimistic; these ideas have been floating around for a while but very little gets done. I think the public should read this book and develop a healthy skepticism whenever they hear sweeping declarations made by so-called researchers on the "cutting edge."

  12. 4 out of 5

    Kyle

    This was a good read, in the sense that it clarified the problems that science faces (at least, for me). Ritchie does a good job of bringing in a lot of material to tell the story of the troubling findings of recent years. Ritchie focuses mostly on psychology, his area of expertise, and so while I found his thoughts very interesting for psychological and medical research, I did wonder about how some of it might translate to mathematics or physics. Reproducibility does not appear to be as much of a problem for the natural sciences, and so I appreciate Ritchie's thoughts on this issue as a reader of general science. The part that really fits well for all of science is the incentive structure of science, that is, the publish-or-perish paradigm that seems to be dominant right now. The author does an excellent job of explaining the perverse incentives and offers some ideas to change the incentive structure to value replication, openness, and transparency. I think this is the most useful part of the book, even if the other parts are necessary to explain the scope of the problem. If you are interested in science news, this is an excellent resource for understanding what to look for when deciding whether a new result is likely to hold in the long run. It also does a good job of explaining what openness means for science. It's also relatively short. From a stylistic perspective, I thought the writing style was fine (I did not think it was excellent, but neither did I think it was below average), and I enjoyed Ritchie's use of epigraphs. Overall, I think it was a good thing to read. If you are interested in improving the science incentive structure, I think it would be highly advisable to read this book.

  13. 5 out of 5

    William Schram

    Science is an ancient and vaunted establishment. It has done so much good for the world that it is sometimes easy to forget that science is a human construct. Scientists are human beings and are prone to making mistakes. This issue had already surfaced for me in nutritional science: it seemed that every week a new food was discovered to cause cancer or to extend your lifespan. It sickens me to no end. Science Fictions is by Stuart Ritchie. It discusses the various issues that scientists have to deal with when doing science. Ritchie goes into several ways that scientists manipulate their data or lie. The system itself is flawed, since scientists only publish positive results. They forget that the simple act of reporting the research would save someone time down the road. The book highlights these issues and provides some solutions.

  14. 5 out of 5

    Mizuki

    This book is not the usual 'science fiction', so do not expect that. It is about scientific fraud, full of well-known cases. I'm quite familiar with these cases as a scientist, and even as a reader of much scientific literature. Most of the points the author makes are painfully true - I know they are, especially about publication bias. Most scientists just want to publish their "best" data even if it does not reproduce well enough. On the other hand, I somewhat want to believe that most scientists seek something truly important/useful. In my experience, we usually just publish our best results knowing these are not really practical, and in the background, we're continuously seeking something really practical even though it would not be published until fully validated.

  15. 5 out of 5

    Ulrich Schroeders

    This book is an important contribution to the so-called replication crisis in science. Ritchie neatly and clearly sums up the current state of affairs and unmasks the crisis as a social phenomenon that is strongly driven by communicative processes. The bigger picture across different scientific disciplines was very instructive. I also found a lot of new and useful references. I definitely recommend this book to every (psychology) student and researcher.

  16. 5 out of 5

    Ryan

    Science Fictions is an important book, which is aimed primarily at a non-scientific audience. It builds upon a number of other books about scientific practices, and covers some of the same ground as the work of Ben Goldacre [1, 2], whom the author cites. The author is a lecturer in Psychology, and most of the examples are drawn from this area, medicine, and the social sciences in general - you can expect to read some discussion of, e.g., the Andrew Wakefield scandal, Cochrane reviews, pre-study registration, and how perverse incentives hinder science. The book starts by discussing the 'what' of science - in particular, how it works practically. In theory, science is as simple as coming up with a hypothesis, designing an experiment, testing that hypothesis, and then disseminating the results. In practice, Ritchie argues convincingly that much of science is social. Scientists (postdocs or above) must first apply for a grant to fund their research. This is often, though not always, from the government, industry, or charities. In order to actually get the funding, scientists have to put forward grant proposals, which are judged by other scientists. Assuming that a scientist gets a grant - and these vary hugely in size - they can then go on and do the experiment (often not themselves - grants usually contain funding for PhD students or postdoctoral researchers) and test their hypotheses. At this point, they write up their findings into an academic paper and try to publish it in a journal. Papers undergo peer review, usually by one or two reviewers, and the work becomes part of the scientific record. Ritchie takes aim at research which is not replicable. Replicability is the idea of being able to take a piece of research, conduct the experiment in the same way as the original authors, and get broadly similar results. In particular, Ritchie speaks from his own field, which is famously undergoing a 'replicability crisis' - many key results have been found not to be replicable when performed with larger sample sizes, more rigorous methods, or just generally subjected to higher scrutiny. As Ritchie points out, this can sometimes be due to simple mistakes in the analysis, particularly around randomisation of trial participants and statistics. In other cases, results were fraudulent to start with, and in others people attempting replication can't even try to do the experiment, because the authors haven't included enough information to attempt it. Ritchie discusses the fact, well known at least among scientists, that studies that make it into the popular discourse are often overhyped, often via press releases written by the scientists themselves which are copied almost verbatim by news outlets, and that well-received popular science books are often based on flimsy evidence, or hugely overstate the significance of the results. He also makes the argument, and I completely agree, that this replicability crisis appears to be occurring in Psychology only because other fields are not looking very hard for it. 
    Personally, I think that many scientists in other fields are complacent about this. The reasons for this replicability crisis are complex, and I think Ritchie does a good job of discussing the mix of perverse incentives that drive people to publish work that doesn't stand up to scrutiny: the difficulty of publishing negative or null results, leading to distortion of the published record towards positive results; the drive for promotion or even getting a permanent position at a university; the need to publish in 'high impact' journals; the requirement for a large number of papers to be successful in grant applications; and even more direct financial incentives such as grants paid to researchers by universities. This leads to scientists doing things like splitting a single study into many papers - a practice known as 'salami slicing', sometimes done in a particularly egregious way - because the pure number of publications leads to better chances of promotion or of getting grant funding. Finally, the book moves on to discussions about improving the current situation. Some of these are obvious, such as requiring pre-study registration, with publication agreed in advance of the study. Others were particularly interesting and unfamiliar to me; he discusses performing many analyses on the same dataset, known as 'multiverse analysis', to work out whether the results of a study are positive only because of a fluke of the statistical approach chosen. 
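    [Editor's note] A toy illustration of the multiverse idea (my sketch, not from the book, assuming numpy and scipy): cross every defensible analysis choice and look at how often the result stays significant.

    ```python
    import itertools
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    a = rng.normal(0.2, 1.0, 50)   # hypothetical treatment group
    b = rng.normal(0.0, 1.0, 50)   # hypothetical control group

    # Defensible-but-arbitrary choices, crossed into a small multiverse.
    outlier_cuts = [None, 2.5, 3.0]           # drop |z| above cut, or keep all
    transforms = [lambda x: x, np.arcsinh]    # raw vs. tail-shrinking transform
    tests = [stats.ttest_ind, stats.mannwhitneyu]

    pvals = []
    for cut, tf, test in itertools.product(outlier_cuts, transforms, tests):
        xa, xb = tf(a), tf(b)
        if cut is not None:
            xa = xa[np.abs(stats.zscore(xa)) < cut]
            xb = xb[np.abs(stats.zscore(xb)) < cut]
        pvals.append(test(xa, xb).pvalue)

    # A result significant in only a few corners of the multiverse is
    # likely a fluke of analytic choice rather than a robust effect.
    print(f"{sum(p < 0.05 for p in pvals)} of {len(pvals)} specifications give p < .05")
    ```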
    I do think that in this section Ritchie misses a chance to discuss in more detail the 'Open Science' movement. This spans a broad range of ideas, including anyone being able to access publications that are publicly funded, and the idea that anyone can download a dataset from the original paper and re-run the analysis on it. He does not mention the fact that this is now a reality in the UK in most fields - as any scientist will know from the many e-mails they receive on the topic, all research funded by government research bodies requires research data to be uploaded in some form or another [3]. However, there are many problems with this practice. Ritchie argues in the book that there are cases, such as in medical research, where you can't release much of the data for anonymity reasons, but there are also commercial confidentiality clauses with non-governmental funders where a researcher is funded by multiple sources. In addition, it's worth asking what actually constitutes research data. My own field is Physics, and in the papers I've worked on, we've taken the approach of trying to publish all of the simulation scripts (so someone can rerun the simulations and get the same results), the data itself (so someone can re-run the analysis from scratch on the same data), and all of the scripts that we used to run the analysis ourselves. Other authors in the same field publish the bare minimum - usually just an Excel spreadsheet with data from their analysis from which the figures can be generated. There is no checking whether people are following the spirit or the letter of these requirements, and it's very time-consuming to do it - time which most researchers do not have if they want to be promoted, especially given they get little credit from funders and their institutions for doing it properly. Even making data public when you want to do so is not straightforward - there just is not a clear way of doing it. 
    In my last research study in my PhD, my research data consisted of around 200 GB of data generated on a supercomputer. University services are generally not set up to host this amount of data, relevant or not. In the end I had to cut this data down by removing data which could have been interesting to others, heavily compress it, and upload it to the CERN-funded provider Zenodo, which allows uploads of up to 50 GB for free and provides a Digital Object Identifier (DOI) which allows the data to be cited easily if reused. The debate around making research software public as part of 'open science' is also becoming increasingly heated. Making software public is not in and of itself a panacea - the same software can give different results on different machines due to the underlying architecture, operating system or operating conditions. Even getting scientific software from other research groups to run is not straightforward and requires a high level of expertise in itself. Many scientists in computational areas argue that the changing practices and expectations around research software differ from those in lab-based subjects: if someone comes and uses your laboratory to run an experiment using equipment you have built, you would normally be named on the paper, which is certainly not the case for research software - most people working in an area will happily use tools like R and Python, and packages within them, but will not cite the paper even of the person who developed a particular statistical test they are using, let alone the person who implemented it in the software they are using. People who implement scientific software and make it public have had a tough job sustaining a career within science, and the reality is that most people leave for greener pastures in industry, which is a great loss. There have been moves over the last 10 years to support people who work on research software - something which large numbers of people use - with a career track of their own, known as Research Software Engineering [4] - and positively, this is something that funders across the world have begun to support explicitly. One of the big problems as I see it here, however, is that in explicitly training people from scientific fields to become software engineers, they open themselves up to much more lucrative job openings in the private sector, and the expertise required to help researchers is lost - effectively delaying the departure from academia rather than stopping it completely. In addition to this, scientific software can be subject to high levels of scrutiny, even by non-experts. The open science movement is starting to suffer from a bit of a culture where anything that is not absolutely perfect is heavily criticised on social media channels such as Twitter - Dr Olivia Guest and Prof Kirstie Whitaker have discussed this in depth under the moniker of 'bropen science' [5]. If this is in research into a topic which is contentious among, e.g., a particular political leaning, non-scientists jump onto the bandwagon of criticising software in topics they do not understand - regardless of whether the studies are replicable by others with better software or not. A recent example is the criticism of the COVID-19 modelling software produced by the Imperial College group, which was used to inform lockdown policy throughout the world [6]. 
    While this software clearly left a lot to be desired, other groups were able to replicate the study results using both the software as-is and their own software. As one can imagine, the resulting criticism can be overwhelming and unexpected for people who are just trying to do their best, and it leads many people to want to do less open science rather than more. This sort of (usually unfounded) criticism by non-scientists often comes from similar sources as criticism of climate science, which Ritchie does touch upon. All in all, I think that this is an excellent and well-timed book. I nodded my head through much of it, relating parts of it to experiences in my own scientific career, and I think that most scientists who pick up a copy will do the same. I think also that making these criticisms available to the general public is important - as Ritchie says, scientists should have to work at least a bit harder for trust in them, which is something I strongly agree with. Scientists are human, and suffer the same flaws no matter what career they undertake, be it science or otherwise. [1] https://www.amazon.co.uk/Bad-Pharma-H... [2] https://www.amazon.co.uk/Bad-Science-... [3] https://www.ukri.org/apply-for-fundin... [4] https://society-rse.org/ [5] https://thepsychologist.bps.org.uk/vo... [6] https://github.com/ImperialCollegeLon...

  17. 4 out of 5

    Liam Crismani

    Brilliant. Comprehensive but easy to understand exploration of the problems in the ways we currently do science.

  18. 4 out of 5

    Scott Lupo

    AVOID. Here are my reasons:

    - From the beginning, this author lost my trust. In the preface, the author mentions how he and some colleagues wrote a null paper on the psychic experiments by Daryl Bem and was "unceremoniously rejected from the journal that published the original." This leaves the reader thinking that he never got that study published, and moves on to the next subject. WRONG. Read the notes and you will find it did get published, just not to his liking.

    - Read the notes. It's another whole book back there, with some of them paragraphs long. Many of them either refute what he originally said or alter the original meaning just enough to realize he's trying to pull something. It would be interesting to know how many people actually read citations or notes at the end of books (I couldn't find anything on Google). I would venture to say not many, which I think he purposefully relied on for his narrative.

    - Do you enjoy abusive relationships? Me neither. However, that is what this book is like. "I come to praise science, not to bury it; this book is anything but an attack on science itself, or its methods." The next paragraph then explains that the only "fragile scrap of hope and reassurance that emerges from the Pandora's box of fraud, bias, negligence and hype" is that scientists have uncovered these things themselves. Throughout the book it's 'science is great but science also sucks really, really bad because...'.

    - He has a hard-on for Daniel Kahneman. He really doesn't like him.

    - He conflates social sciences with ALL SCIENCE. Yes, social sciences are muddy and gray because they deal with human beings, who are muddy and gray in just about everything they do. Creating experiments is very difficult and interpreting results even more difficult. But to lump all of science into this category is foolish and leans towards trickery. Throughout the book he switches, sometimes within the same paragraph, from a social science to other sciences.

    - Fraudsters, charlatans, flimflammers, and hustlers all use certain phrases in their toolkit of shams. Some of those are things like "you know what I mean", "let's face it", "it should be noted", "that being said". These are the phrases of all those psychics we used to see on TV (John Edward, Miss Cleo, Sylvia Browne). These phrases purposefully leave the door open for interpretation and let the listener/reader fill in the blanks themselves. Yeah, that is fraudster 101 class right there.

    - He constantly brings up the oldest cases of science fraud and then tries to compare them to today's frauds. Every case he brings to light ends in one way: they were caught! Because that is what science does. Incredible claims require incredible evidence. He acts like it is the worst thing in the world that science actually caught these things. Apparently, it is never fast enough for the author, or he thinks science should be absolutely free of any errors, full stop. I am unsure whether he truly understands the scientific process.

    - His conclusions on how to fix these things are paltry at best. In fact, many of his suggestions are already in use today! Others he admits would be impossible to do.

    My only conclusion to this book is that it is a thinly veiled hit piece on science. Every fraudster knows that if you include nuggets of truth in your parable, then it will seem like everything is truthful. That is exactly this book. He even talks about this in the book: that scientists have gotten so good at faking their results that it doesn't look 'perfect' and people will buy it. The irony!! The author grandiosely overstates his hypothesis that there is an epidemic of fraud, bias, negligence, and hype in science. Many times I thought I was reading an Onion article made into a book, because he uses all those things in this book. Okay, I have laid out my reasons, but I also want to give credit when it is due. Science is not perfect, the process is not always efficient, and it does not always incentivise the proper way. Welcome to the problem with scaling up and money. Sure, it would be great to have science run without any thought to money or resources. Science for the sake of science. Cool with me. Let's shoot for that and do what we can to get as close as we can to that ideal. But this is not the message in this book. I truly believe this author has problems with two things: social sciences and the philosophy of science (epistemology). He should consider writing on those subjects instead of attacking the whole of science, especially in a dishonest way like this book. It gets a star for actually writing a book and a star for at least shedding light on some of the issues with scientific research. That's two stars. The same I gave to Michelle Malkin. Enough said.

  19. 5 out of 5

    Dylan O'Connell

    Almost certain to be the book I annoyingly recommend to everyone I meet for the next few months. “Science Fictions” is a vitally needed introductory text to the current crisis (crises?) in science, and I wish it were getting more “buzz” (podcast interview circuit, take note). Most readers will be familiar with different individual parts – the reproducibility crisis in psychology, the dangers of hype in scientific journalism, or examples of egregious image-manipulation fraud in prominent papers. But this is the first work I’ve found that properly synthesizes all these disjoint strands (it’s rather tedious to try and point folks to a bunch of different blogs to get up to date).

    To be crystal clear, this is not a book about the fundamental shortcomings of science, or our need for alternatives. I’ve seen books like those, and none of them have really worked for me, but maybe I just need a more open mind. This is instead a step-by-step breakdown of the current failures of science, and the need for reform. The tagline of this book might as well be “Science is really, really hard, and it’s time we take that seriously”. Of course, that means nothing in here is revolutionary, but it does the important work of introducing these ideas in an accessible manner, and in particular showing the ways that these issues are connected.

    My background is statistics, so my version of this book would basically be the analysis of statistical significance, drawn out into a full-length book. I never gave ideas like fraud and “hype” much thought, because those just seemed fundamental to any field. But Ritchie makes a compelling case for how these are all part of the same whole. One of my favorite moments is his early line about the fundamental nature of “trust” as the foundation of science. Research might be built on natural principles, but the way that research is integrated into scientific progress is ultimately a social exercise.

    If the book has a weakness, it’s the use of certain meta-analyses as ground truth, when they are obviously subject to those very same flaws the book so carefully outlines. But I don’t want to harp on that point too hard, because 1. Ritchie is obviously aware of this (and does mention it), and 2. there’s not really an alternative… a more comprehensive book would have included a more thorough discussion of the meta-analysis-of-meta-analyses, but this is meant as an introduction, not a deeply technical tome.

    Best of all, this is a genuinely entertaining page turner. OK, at least large sections are. Learning about the difficulty of science may not sound as exciting as tales of its success… but so often the failures and frauds are so spectacularly grand, the anecdotes are hard to put down. That is, most of the troubles that ail science are banal, but with millions of scientists in the world, the best anecdotes are extremely spicy indeed. Highly recommend this book, but it might make you trust your doctor quite a bit less, so be warned.

  20. 4 out of 5

    Ben Chugg

    A sober analysis of the ways in which the current scientific institution incentivizes poor research practices. Useful for graduate students and the general public alike, the book catalogues the systemic flaws which result in unreliable findings. Most importantly, you will come away knowing what you always suspected: nutritional science rests on a firm foundation of bull****.

    While the book will most likely be received as an indictment of science, it should actually inspire awe and optimism. For in leveling criticism against current malpractice, Ritchie has reminded us of the capacity of our scientific tradition to error-correct; to be criticized using its own tools and admit its own faults. Indeed, much of Ritchie’s analysis itself relies on scientific studies which shine light on all the (admittedly, often exceptionally fucked up… looking at you, Paolo) ways in which the current system is sub-optimal. Contrast this with many other traditions, whether religious or political, which lack the capacity for, or actively discourage, such self-awareness. Typical religious traditions explicitly denounce such self-criticism as heresy, for example. The fact that Ritchie can publish this book without fear of being ostracized (or stoned, or hanged, or burned, or imprisoned, or eaten by a whale…) and probably contribute to bettering scientific practice should be cause for celebration. We’ve done it: we’ve created a perpetual progress machine.

    Ritchie ends by giving some suggestions for how to improve the reliability of scientific findings. These include:
    - Multiverse analysis: looking at the average effect over many (possibly all valid) methodological choices (see the sketch after this review);
    - Pre-registration: committing to look at a single effect, ideally using a single type of analysis;
    - A pre-print grading system: all papers are first distributed as pre-prints and given a grade by independent investigatory groups; journals then select the papers they’d like to publish;
    - Data-integrity tests at journals: requiring that submitted papers pass automated checks which pinpoint errors in the analysis;
    - Enforced transparency: requiring that all data, analyses, and initial drafts of a paper be made available for scrutiny.

    All in all, an excellent read.
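    To make the first of those suggestions concrete, here is a minimal sketch of a multiverse analysis on simulated data. This is an illustration, not an example from the book: the dataset and the two analytic choices (an outlier cutoff, and dropping versus clipping outliers) are entirely hypothetical.

```python
# Minimal multiverse-analysis sketch on simulated data.
import itertools
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, 200)                 # treatment indicator (0/1)
outcome = 0.2 * group + rng.normal(0, 1, 200)   # small true effect

# Two arbitrary-but-defensible analytic choices: an outlier cutoff
# (in standard deviations) and whether to drop or clip outliers.
cutoffs = [2.0, 2.5, 3.0, np.inf]
rules = ["drop", "clip"]

estimates = []
for cutoff, rule in itertools.product(cutoffs, rules):
    lo = outcome.mean() - cutoff * outcome.std()
    hi = outcome.mean() + cutoff * outcome.std()
    if rule == "drop":
        keep = (outcome > lo) & (outcome < hi)
        y, g = outcome[keep], group[keep]
    else:
        y, g = np.clip(outcome, lo, hi), group
    estimates.append(y[g == 1].mean() - y[g == 0].mean())

# The multiverse summary is the spread over all analyses,
# not one hand-picked result.
print(f"{len(estimates)} analyses; effect range "
      f"[{min(estimates):.3f}, {max(estimates):.3f}]; "
      f"mean {np.mean(estimates):.3f}")
```

    If the size (or sign) of the effect swings widely across these cells, that is a signal that a single published estimate depends heavily on arbitrary choices.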

  21. 5 out of 5

    Hariharan Gopalakrishnan

    Stuart Ritchie's writing style is of the 'no frills, get the job done' variety - which works for a short book that is supposed to inform a reader efficiently about an issue. I liked his earlier book on intelligence research for the same reason. The topic this time around is broadly 'the reliability of scientific research'. Ritchie lays out the major problems with research practices in the various sciences very clearly (fraud, bias, unintentional misuse of statistical concepts, etc.). The final chapter covering the proposed 'solutions' is to my mind the weakest part of the book. The traditional 'n chapters talking about a problem and 1 chapter mentioning solutions' structure has some intrinsic weaknesses that show here: not enough time is spent convincing the reader of the validity of the solutions. Considering the amount of time spent describing the nuances of the various problems, anything less than an equal amount of time discussing the solutions is going to come across as unconvincing. Also, if you are an extremely close follower of this research, I can imagine the book covering what is well-trodden ground for you. That said, I am only a sporadic follower of this topic and hence learned a lot from this book. Overall this is a solid introduction to the topic of metascience, and it discusses issues that every aspiring scientist and consumer of scientific knowledge should be aware of.

  22. 5 out of 5

    Justin Pickett

    “So why do people who became scientists for the love of science and its principles end up behaving so badly?” (p. 176). Science Fictions is one of the most important science books ever written. We now know that most scientific findings are wrong. Why? Because scientists lie, they cheat, they exaggerate, they cut corners, they make mistakes, they selectively report findings to benefit themselves or to support their beliefs—in short, they’re human. Humans do best in environments that nudge them away from their biases and that punish bad behavior. Academia and the scientific publishing process, however, do just the opposite—they encourage and even reward dishonesty and selfishness. “We’re not just talking about a few bad apples ruining science for everyone” (p. 194). Ritchie reviews the evidence about the degree of outright scientific fraud, of questionable research practices (e.g., selectively reporting findings), of statistical errors, and of hype/spin in science. He paints a scary picture, demonstrating that “the problems science faces are systemic, indicating an entire culture gone awry” (p. 190). Science Fictions is written brilliantly, it’s funny, it’s depressing, and it’s encouraging—Ritchie lays out a path forward for improving science. The first step to recovery, after all, is admitting you have a problem. Everyone should read this book, scientists and laypeople alike.

  23. 5 out of 5

    Todd

    Science is a process done by humans for the benefit of humankind. As we all bask in the bright lights that are the many benefits of science, we must constantly be on guard for the ways in which science can go sideways. Many, actually most, of these sideways turns are unintentional and can be made by the best of scientists with the best of intentions. In fact, good intentions themselves can be a source of the bias and hype mentioned in the title of Ritchie’s book. “Science Fictions” is a hardscrabble look at the state of modern scientific inquiry which should be read widely by practicing scientists, especially those early in their scientific education or career. Nonscientists interested in getting an insider’s look at how the proverbial sausage is made may struggle with some of the minutiae (p-hacking, h-indexes, etc.) but should be sustained by the various examples, many of which will ring familiar to attentive readers, that Ritchie employs to anchor the narrative. An important piece of popular science writing that deserves a wider audience than it is likely to attract.

  24. 4 out of 5

    Satheesh Kumar

    A must-read for every researcher. Scientists ought to hold themselves to a high standard if they want to maintain the trust of the public, and this book advocates exactly that. Often in arguments we bring up journal articles as irrefutable evidence and proof, forgetting that despite (or because of) the rigorous nature of science, there are bound to be inaccuracies, sometimes even deliberate ones. All researchers have felt the pressure to publish, since your work is considered completely unproductive if it is not published. It was appalling to see the extent to which some scientists will go to alter their data to obtain "significant" results, but at the same time, not so surprising to read about p-hacking: p-hacking is to be expected when such undue importance is placed on an arbitrary number. Negative and null results are essential in science, even if they are not flashy, and more importance should be given to gradual improvements as opposed to groundbreaking results. All of us scientists can do better.
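    As a rough illustration of the mechanism described above (a toy simulation with assumed numbers, not anything taken from the book): if a researcher measures twenty outcomes on pure noise and reports whichever one clears p < 0.05, far more than the nominal 5% of studies will 'find' an effect.

```python
# Toy p-hacking simulation: there are no real effects anywhere,
# yet most "studies" can report something significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_outcomes, n_subjects = 1000, 20, 30

hits = 0
for _ in range(n_studies):
    control = rng.normal(size=(n_outcomes, n_subjects))
    treated = rng.normal(size=(n_outcomes, n_subjects))  # no true effect
    pvals = stats.ttest_ind(control, treated, axis=1).pvalue
    hits += (pvals < 0.05).any()  # report only the "significant" outcome

# Expect roughly 1 - 0.95**20, i.e. about 64% of null studies "succeed".
print(f"Studies with at least one p < 0.05: {hits / n_studies:.0%}")
```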

  25. 4 out of 5

    Dominic Sutcliffe

    A sort-of-good survey of the toolkit with which one should read science articles, and of models which could get one closer to the actual effects of a given intervention or theory. It would be useful for business strategy, as many of the critiques of p-hacking, outcome switching and hype apply to business interventions as much as to medicinal or scientific ones. Even more so, since many business interventions don't have a basis in theory or literature and so lack a plausible mechanism. The question of bias is not particularly well explored beyond public-choice-type explanations, and the radical solutions seem quite ineffectual in theory and have little precedent in practice. I would be very interested in the impact of bias on reports directly commissioned by companies, and the strong bias towards non-null results in that setting. The funnel effect would be even less present in those studies (e.g. small-sample null results don't get published, whereas small-sample non-null results do).
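    The selection mechanism in that closing parenthetical is easy to demonstrate with a toy simulation (assumed numbers throughout, not data from the book): if significant positive results are always published while everything else is published only occasionally, the published literature shows a positive average effect even when the true effect is exactly zero.

```python
# Toy publication-bias simulation: the true effect is zero, but
# null results are mostly filed away in the drawer.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
published = []

for _ in range(2000):
    n = int(rng.integers(10, 200))   # study sample size varies widely
    a = rng.normal(0, 1, n)          # "treatment" group, true effect = 0
    b = rng.normal(0, 1, n)          # control group
    t, p = stats.ttest_ind(a, b)
    effect = a.mean() - b.mean()
    # Selection rule: positive significant results are always published;
    # everything else makes it out only 5% of the time.
    if (p < 0.05 and effect > 0) or rng.random() < 0.05:
        published.append(effect)

print(f"True effect: 0.000; mean published effect: {np.mean(published):.3f}")
```

    Plotting each published effect against its sample size would give the lopsided funnel the review mentions, with the small-sample null results conspicuously missing.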

  26. 4 out of 5

    Vasil Kolev

    This gets 5 stars not because it's groundbreaking - most of the material is known and discussed elsewhere, and you don't need to be writing papers yourself to have heard about the issues. The stars are because the book is a great summary of those issues, has interesting ideas on fixing them, and includes a very important chapter on how to read papers and form some basic idea of how correct they are. The tools described there that can analyze reported statistics and catch problems look useful even for people who only do statistics from time to time, since they can be used to check one's own work (I will definitely be looking into them). Everyone needs to read this, just to understand the limits of the trust that can be placed in scientific papers (and everything else). A shorter summary would be that this is a book on "use your head when reading", with some good pointers on "how".
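    One concrete example of the kind of tool the review alludes to is the GRIM test, which the book discusses: for integer-valued responses (such as Likert scales), a reported mean is only arithmetically possible if some whole-number total divided by the sample size rounds to it. A minimal sketch, with made-up numbers:

```python
# Minimal GRIM-style consistency check for a reported mean.
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    """True if `mean` (reported to `decimals` places) is achievable
    as total / n for some integer total of integer-valued responses."""
    target = round(mean, decimals)
    # Only the integer totals nearest to mean * n can possibly work.
    for total in (int(mean * n), int(mean * n) + 1):
        if round(total / n, decimals) == target:
            return True
    return False

# With n = 28 participants, a reported mean of 5.19 is impossible:
# no integer total divided by 28 rounds to 5.19.
print(grim_consistent(5.19, 28))  # False -> worth a closer look
print(grim_consistent(5.18, 28))  # True  (145 / 28 = 5.1786 -> 5.18)
```

    Published tools such as statcheck and SPRITE perform more sophisticated versions of this sort of consistency checking on reported statistics.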

  27. 5 out of 5

    Richard Thompson

    This was a terrific book about the problems that exist today in the accuracy of scientific publications, the causes of the problems and some good ideas about what can be done to improve the situation. It could have been written in an alarmist style with much finger pointing, but it wasn't. Sometimes the bare numbers are shocking in disclosing the size of the problem, though the author would be the first to admit that even that has to be taken with a grain of salt since numbers are always selected and presented for impact, even when the story they tell is essentially true. I liked how the author bases his arguments on the underlying values of the scientific method and builds his program around ways to reinforce those values. I would recommend this book to anyone interested in the process and philosophy of science.

  28. 5 out of 5

    Sanjay Mehrotra

    Amazing book. Even if you are not a scientist but have a scientific bent of mind, this book is for you. And if you are allergic to conspiracy theories (most prevalent in non-scientific communities), you will be surprised by the amount of data presented in this book showing that the scientific community is not insulated from the conspiracy-theory bug. That's because scientists and the publishers of scientific journals are also human, and humans love sensational stories. No one loves to read a scientific paper that confirms a null hypothesis; that is boring. This book teaches the scientist to start to love the null hypothesis. Beware of publication bias, and look beyond peer review. Well done, Stuart!

  29. 5 out of 5

    Alex Bond

    Starts as a good reference for those familiar with the subject, although it offers little new information; I imagine it'd be useful as a guide for the layman too. A well-researched summary, in most parts, of the pitfalls of scientific practice, with some solid suggestions for improvement. I did take issue with the harping on about 'null and positive' published papers, as any scientist worth the title does more than read the word 'significant' before citing a paper in future studies. That's less about publication and more about lazy research practice; although this is touched upon briefly and vaguely in the 15-minute epilogue, it is notably missing from the relevant chapters (perhaps an editorial oversight).

  30. 5 out of 5

    Pandit

    Fabulous book - readable, well laid out, and easy enough to follow for the layman. The author is a psychologist, but he is talking about the method of science, and about corruptions in science and its reporting in general. I'm sorry to say that a number of the bad-science episodes in this book are things that I have used or taught (such as 'power poses' before a speech). Really, everyone should read this book! So that they understand to be skeptical when they hear or read about 'settled science', and so they can spot the difference between science and the way science is reported. Big thumbs up.
