Last year, Geoffrey Chang and co-workers retracted five papers that reported faulty protein structures caused by an error in their software (Chang et al, 2006). Although the ensuing debates and arguments about this ‘great pentaretraction’ have slowly dissipated, it is useful to shed some light on the context in which such mistakes occur. We believe that the discussion reflects fundamental differences in scientific philosophy and methodology that cannot, and should not, be explained by current definitions of good or bad science. This is not exclusive to protein crystallography; it is also typical of other large-scale, high-tech research fields, including nanotechnology, systems biology and imaging technologies.
Structural biology in general is a ‘hot’ research field that involves the constant development of analytical approaches and technologies, which combine two specific styles of science (Hacking, 1992): classical ‘wet’ bench work and ‘dry’ computational and mathematical work. Each style, wet and dry, represents a framework for getting at the truth, and comes with its own scientific method, distinct protocols, technologies, theories, language and more general ‘ways of doing’. Consequently, it is possible to make claims within one style that make no sense from the viewpoint of the other. For example, the claim that “MsbA is a member of the MDR–ABC transporter group by sequence homology” (Chang & Roth, 2001) is the result of an in silico comparison of sequences that can neither be performed at the bench nor understood or proven by wet work alone.
This implies that wet and dry science differ in what their proponents regard as ‘proper science’. In an article in Science, Chris Miller, from the Howard Hughes Medical Institute (Waltham, MA, USA), wrote that structures are “just models, not data” and argued that the danger lies in “ignoring biochemical results, conventional but logically solid” (Miller, 2007). He was clearly commenting on the pentaretraction from a wet point of view. From a dry perspective, comments about the error were generally less harsh. Trusting a model or an algorithm and believing its outcome is widely accepted practice, and indeed a necessity, for doing dry science. Consequently, dry scientists have generally attributed the error behind the Chang retractions to bad luck, an honest mistake, or ‘much ado about nothing’. Conversely, the wet community has tended to use harsher terms, including ‘debacle’, ‘fiasco’, ‘monumental blunder’, ‘sloppy science’ and ‘inexcusable’.
In scientific fields such as structural biology, the wet and dry styles are becoming increasingly interdependent. As the complexity of their data far exceeds the processing capacity of the human mind, scientists have no choice but to trust computer models. This interaction has become so commonplace that claims, technologies and tools are no longer either wet or dry; they can only be understood and used within a new framework or style that we call ‘moist’ science: an integration of wet and dry styles. Accordingly, moist science creates a new way of doing ‘proper science’. Critiques directed at Chang and co-workers exclusively from a dry or a wet point of view therefore cannot fully evaluate the ‘properness’ of their research, or fully assess the magnitude of any mistakes. As moist science is a science in the making, some of its technologies are still experimental and its protocols have not yet been unanimously accepted.
The specific criteria for what moist science deems to be ‘proper science’, or what exactly counts as a mistake, have not yet been established. For example, should the code of an algorithm, which in this case created the error, be included in the methods section of a publication, or made available as supplementary material online? This question and others must be settled in order to reach a new consensus on what constitutes ‘proper science’. The lesson, therefore, is not about whom to blame, but about how to learn from and improve this new moist scientific style.