Saturday 27 July 2013

The power of narratives

We've all heard about how science fiction can inspire real science. If you frequent the type of nerdy, techy blogs I do, you'll see with some regularity "6 discoveries that science fiction thought of first", or some variant along those lines. We get it: science fiction gives us context for understanding science and technology and how it fits into our society.

The assumption, though, is that this is done more or less on purpose. That is, the science fiction writers are trying to extrapolate existing technology and imagine how it will affect society. They're doing thorough research into where technology might be headed; they're talking to engineers and scientists; they're making the best predictions they can. While this is true of a number of science fiction writers, many others are far more concerned with the net effects of being able to do a particular thing than with the science that would go into doing it. This latter type of story can still provide powerful narratives for shaping our understanding of and reactions to new science.

An interesting example of this happened last week. In this article in Science, researchers reported being able to implant a memory into a mouse. How they did it is super cool, but I'm not going to go into it here; it's covered in the many articles I'm about to link to.

If, upon hearing that researchers could implant a memory, you thought of the 2010 movie Inception, you're not alone. A lot of other people did too. In fact, one of the authors of the study referred to the movie in talking to the press, and another used the term "incept" to refer to the memory implantation process.

And here's what I find fascinating about this: the movie had nothing to do with the actual science that went into the mouse study. One of the things that I find by turns brilliant and infuriating about Christopher Nolan (who wrote and directed Inception) is that he has a keen sense for paring away details that aren't relevant to the story he's telling. In the case of Inception, the important points are that people can enter other people's dreams, and in doing so extract information or, rarely, implant ("incept") new ideas. Everything else is swept away; we don't find out anything about how people share dreams or where that technology comes from (other than that the military developed it), and we don't find out any psychological reason why moving an inanimate object in someone's dream would cause an idea to germinate and flower in their mind--we're simply told it is so. Nolan gives us only enough details to move the plot forward; the last thing he's trying to do is teach the audience some science.

The study on mice doesn't even use dreams. Nor does it plant the seed of an idea in the mice and allow it to grow. In fact, it's not even really about implanting ideas, but rather a particular remembered fear reaction. The only thing it has in common with the movie is that they both involve someone deliberately changing someone else's conception of reality.

Of course, that in and of itself is scary. The simple idea that there might come a time when our memories and ideas might be deliberately manipulated by someone else has terrifying implications for our sense of self. And that is why we turn to narratives to help us understand what's going on. Humans are story-oriented. A significant part of what makes up a culture, and what distinguishes it from other cultures, is a collection of shared stories. We moderns may have moved away from myths as our explanations for how the world works, but that doesn't mean we don't have a need to frame those explanations in terms of stories. And Inception has given us a powerful, shared story about altering memories.

Put a bunch of science geeks (a category I willingly admit to belonging to) together and ask them about movies, and you'll find out that we love to nitpick. My undergraduate physics society once hosted a showing of The Core precisely because the science in it was so terrible that it gave us hours of enjoyable discussion about how bad it was. Sometimes, though, the details aren't the most important part; sometimes narratives get repurposed in ways the authors couldn't have imagined. And sometimes we in science should remember this, assuming of course that what we remember is really up to us.

Friday 26 July 2013

A particularly bad headline

From Gizmodo: Scientists Just Discovered a New Force That's Stronger Than Gravity

Okay, so many things wrong with this headline. To start with, gravity is the weakest of the four fundamental forces, so "stronger than gravity" applies to just about everything. But that's not the biggest problem here. The headline, by pairing "new force" with "gravity", seems to suggest that someone has discovered a new fundamental force. This isn't what happened.

Quick overview: there are four fundamental forces. In order of increasing strength they are: Gravity, the Weak Nuclear Force, Electromagnetism, and the Strong Nuclear Force. Gravity is weak, but because it only ever adds (nothing has "negative" mass), it ends up being powerful on large scales, like planets and stars. The Weak Nuclear Force is a little obscure, but as the name suggests it is involved in nuclear processes like radioactive decay. The Strong Nuclear Force is what holds nuclei together, and it's what makes nuclear weapons so powerful.

Everything else is electromagnetism. Friction, contact forces, air pressure, water pressure, everything other than gravity that you experience can ultimately be traced back to electromagnetism in its various forms.

The "New Force" in this article is electromagnetism. The researchers looked at the effect of blackbody radiation (which is a type of electromagnetic radiation) on hydrogen atoms in an astrophysical context. (Here's the article; it's behind a paywall though.) They found that it could create an attractive force between the atoms which hadn't been appreciated before. But keep in mind that what they have found is a new way in which the electromagnetic force is expressed, not a "New Force" in the fundamental force sense.

Onto the "Stronger Than Gravity" part. The researchers noted that this blackbody force they discovered could be more important than gravity in the context of star formation. In a nebula, there's a lot of hydrogen, but it's really spread out--it's waaay less dense than air. (Star Trek has led many people astray by depicting nebulae as clouds that ships can hide in. Look at a picture of a nebula. In most of them you can see stars through them, even though they're light-years across. Anything with visibility measured in trillions of kilometres is a bad hiding place.) Somehow, over time the hydrogen in a nebula manages to coalesce into a star, with an enormous density. Exactly how this happens is a subject of ongoing research, and this new blackbody force could play an important role in that. Because at the densities the nebula starts at, the gravitational forces involved are super, super, super small. So even thought this blackbody force is super, super small, it could still be important.

It is not, though, "Stronger Than Gravity" in an every-day sense. You will not be levitated by blackbody forces. They will not explain dark matter or the expansion of the universe.

It's starting to sound like I might be down on this research, but I'm really not. It's a novel idea that is important to an area of ongoing research in astrophysics. That's basically the goal when you write a journal article. The headline writers at Gizmodo simply did a horrible job summarizing it. Good research deserves a better headline than this.

Thursday 25 July 2013

Rosalind Franklin, Social Science

Today's google doodle honours Rosalind Franklin, who was born 93 years ago. To me it's one of the better doodle subjects, as it draws attention to the way women's contributions often get overlooked. Franklin is a famous case; there are many others we don't know about.

So, a quick summary: Franklin was working as a research associate at King's College London. While there she applied x-ray crystallography to DNA in an effort to deduce its structure. X-ray crystallography isn't like taking a picture; the crystals have to be prepared carefully, and the pattern that comes out requires a fair bit of interpretation and deduction to figure out the actual structure of the molecule, particularly in an era without easy access to computers. Franklin produced the best x-ray data on DNA in the world, and was using it to build models of DNA. Watson and Crick were also building their own models at Cambridge, using Franklin's data. But the interaction was one-way: Maurice Wilkins, another researcher at King's College, was showing Franklin's data to Watson and Crick (without her permission), who used it to build their famous model. Franklin, meanwhile, was excluded from the conversations her male colleagues were having, and so was building her DNA model largely on her own. Watson and Crick published their famous paper proposing the double helix, which minimized Franklin's contributions, and later Watson wrote a book that minimized them further (a particularly low blow since, by that point, Franklin had died).

It's an important story to tell, not least because it explodes the myth of an abstract, impersonal something overseeing a meritocratic process in which the best work always rises to the top; I'm going to call that something SCIENCE. It's a myth that underlies the "scientific method" so often taught in primary schools, which presents science as an abstract cycle that can be done anywhere, and notably leaves out such steps as "placing your work in the context of the field" and "convincing other people it's worth their time to read what you've done." It's a myth that underlies the movies and books and comics in which the (usually mad) scientist works in seclusion for years before unveiling their creation to the world, which looks on in awe--while conveniently sweeping away any details about how one gets "the world" to pay attention long enough to look on in awe. It's a myth that many scientists have helped to foster, by extolling the supposed ideal of pure research, unhindered by such mundane realities as "politics", in which the invisible hand of the "marketplace of ideas" selects the most worthy contributions.

This is, of course, not the way science has ever worked. Not least because science isn't an impersonal force, it's a collection of people. Papers get reviewed by people, data and ideas get shared by people, hiring and tenure decisions get made by people. And those people have biases, likes and dislikes, and ideas about what a good scientist looks like. Science, in short, is not SCIENCE.

And while we might like to promote SCIENCE as an ideal to strive for, the reality is science is simply too big to work like that. Here's an example of what I mean: on arxiv.org, which is a repository for physics research articles, there are 63 articles listed under "condensed matter" for July 24, 2013. At a rough 5,000 words per article, that means one subfield of physics, on one day, produced about 300,000 words of research articles. That's about the length of three typical novels (or one George R. R. Martin novel). Someone in the field, then, who wants to keep up with current research, has three novels a day to read. Three novels of physics, which, in my own experience, generally take more time to get through than actual novels. Add to that the articles published in the literature of chemistry or other fields that could be relevant, and older articles the new articles refer to that are necessary to fully understand them, and our hypothetical condensed-matter-physics researcher has a rough estimate of half a million words to read every day to keep up with the research.

Of course no-one can read that much. So scientists, like professionals in every other field, use a collection of heuristics and skimming techniques to sift through the mountain of potentially relevant research and pull out things that actually interest them. And these tools are very dependent on social networks and name recognition. See a big name in your field as the author? Probably worth going through. Oh, that person gave a talk at that conference that was quite good; maybe her paper is worth reading. If you're new to the field, you likely have a supervisor who sends you articles to read; your supervisor's choice of articles is influenced by their professional network.

I'm using journal articles here as a proxy for overall research. Less formal avenues are even more prone to be dependent on social networks; a lot of science happens over beer in settings that blur the lines between friend and colleague. The point is that a) any research you do is only valuable if other people see it and use it in their own research, and b) there isn't a good way of navigating the enormous amount of scientific research out there without relying on professional and social networks. So talking to people, making contacts, and participating in "politics" (a term that scientists seem to use solely to describe social interactions they dislike) is and always will be important. It is also, unfortunately, a major mechanism by which implicit and explicit biases at the individual level are magnified into entrenched systemic bias.

This brings us back to why Rosalind Franklin's story is important to tell. Because if we keep insisting that SCIENCE is the ideal, we don't address the biases that are running wild in science, since they're simply a by-product of the non-ideal aspects that shouldn't be there anyway. If we acknowledge that all science is social, we can look at how to address systemic bias by addressing the individual biases that shape the social network of science. And then maybe we can keep future Rosalind Franklins from being marginalized and ignored while making world-changing discoveries.

Tuesday 23 July 2013

Headlines in (Social) Science: Gender, Politics, and Unreviewed Findings

Often when I see a science story in the news the first thing I do is look up the related research article. That way I can see what was actually done, and evaluate, if not the detailed methods, at least the overall scientific logic of the article. If it's particularly new or controversial I sometimes bookmark the article so I can come back later and see what else has been published in response.

Of course, this requires that there be an article to look at. Since I'm at a university, paywalls aren't a problem, but even my university library subscription doesn't get me access to articles that haven't been published yet. Or even accepted. Which brings us to the story at the centre of today's post: a study funded by the Economic and Social Research Council, a British government funding agency, authored primarily by James Curran, director of the Goldsmiths Leverhulme Media Research Centre at the University of London.

According to various headlines, this study showed that:

Across the world, women know less about politics than men 

Women know less about politics than men, study finds (that goes for Canada, too) 

Women, especially in Canada, are more ignorant of politics and current affairs than men, says UK research

Study: Women Know Less About Politics Than Men

Did the study actually show this? To answer that we need to take a close look at the details of the research. As part of a CBC radio interview here, University of Calgary prof Melanee Thomas points out that there are a lot of ways these types of studies can be misleading. She brings up the important point that there can be biases in what is defined as political knowledge. Often "politics" is restricted in definition to so-called hard news: trade, military issues and conflicts, economic and budget issues, etc. While these are certainly important, the list often leaves out other issues that are inarguably political: health, education, and all manner of issues of local governance. It's hard to argue that health policy isn't a "political" issue, but it is often labelled "soft" news and stuck with the other "human interest" stories. (As an aside, "human interest" is an interesting term: exactly why should I care about anything that's not of interest to humans?)

The point is that the study may be biasing its result by asking questions in the domains that men, on average, know more about, and ignoring domains that are, by any objective definition, equally political and which women know more about.

Does this study fall into those pitfalls? We don't know, and neither does Melanee Thomas, because the study hasn't been released yet. It's not listed on James Curran's research website, and the ESRC site lists the work as "submitted". As anyone in academia knows, a lot can happen between "submitted" and "published".

There's a couple of things to address here. First, we have the headline magnification I've talked about before. Curran holds a press conference in which he summarizes his work; a journalist takes that work and turns it into an article; an editor takes that article and turns it into a headline. Based on other cases of this headline magnification, these levels may have resulted in headlines that bear little resemblance to what was actually shown.

But, and this is the second point, we don't know what was actually shown, because the research is unpublished. And this is where James Curran has, in my opinion, acted reprehensibly. He has used his position as an expert to promote a conclusion without allowing the underlying work to be scrutinized. It will, obviously, be scrutinized eventually, but by the time that happens the original news stories will be months in the past--an eternity in the online news world. No one is going to prominently display a story that adds context and corrections to a relatively minor headline from a few months ago. So prof Curran is putting out a conclusion that can't be verified, but that adds to a narrative that has women as intrinsically less able than men in certain key areas.

I don't think that research should be subordinated to social mores or political considerations. What I'm arguing is that researchers have a responsibility to ensure, to the best of their ability, that their research is reported correctly and in context. Especially when it has the potential to further harm already marginalized groups. We've seen the harm that can be done with headlines such as "Vaccines cause Autism." The harm from "Women know less than men about politics" may not be as obvious, but that doesn't mean it's not there. For that reason, Curran had a responsibility to give as nuanced and context-filled a report as he could, and to allow other researchers the chance to dispute his findings and provide their own insights. He did none of that.

Monday 22 July 2013

I am slowly doing science (1, 2, 3, 4, 5, 6, drop).

Last week, the pitch dropped. The tar pitch, that is. I won't go into all the details of what happened, but here's a summary: A long time ago some people got in an argument about whether or not tar pitch was an extremely slow moving liquid (as opposed to a solid). To resolve this argument, they stuck some tar pitch in a funnel, which they stuck in a jar which went in another jar which went in a cupboard, and the green grass grew all around. The idea was that if the pitch ever dropped through the funnel, it would be proof that the pitch was liquid. If we reached the end of time and the pitch hadn't dropped, it was probably solid (turns out proving non-existence is tough...).

So last week, after various events had kept such a drop from ever being recorded (they happen once every ten years or so), a pitch drop was caught on video, 70 years after the experiment was started. Yay science!

The reason I bring this up is that there's a message in all of this that hasn't made any of the news reports about the pitch drop: Science sometimes takes time. Science is sometimes boring, and tedious. Science is sometimes boring and tedious even for scientists. If that seems like a strange thing for someone who spends at least some of their time as a science communicator, well it is. But it's also an important one.

First off, sometimes is a key word here. Science can be, and often is, exciting. It can blow your mind and change your view of the world in an instant. It can be indescribably cool. And sharing those cool, mind-blowing moments is an important part of inspiring both future scientists and the public at large to learn more about the world around them and what humanity can do with it.

But if that's all we ever focus on, we risk sending the message that doing science is about having a big idea, which is so obviously right that everyone goes, "Wow! You're obviously right," and sees the world in a new way. These moments, though, are few and far between. Far more often, someone proposes an idea that is partially right, and it gets bounced around, and revised, and extended. And, in the most crucial step, it gets tested by experiments. Experiments that can take time, experiments whose results are inconclusive or difficult to interpret, experiments that lead to more questions than answers.

The development of silicon computer chips is a good example of this. Electronic band structure theory, the ideas that eventually allowed people to understand the electronic structure of silicon, started development in the early 1930's. Experimentally testing this, though, was a problem; experiments in silicon contradicted each other, and were generally inconclusive. The problem, it later turned out, was that silicon is both exquisitely sensitive to the presence of impurities (which is why it's so useful) and extremely difficult to purify. It took a decades-long effort of progressively refining the techniques to manufacture pure silicon before its properties could actually be probed. This went hand in hand with refinement of band structure theory. Eventually, the structure was known well enough that the first silicon transistor could be created, which would lead to the computer revolution--decades later.

Even after the silicon transistor was created in 1954, it took scientists and engineers years to get to the desktop computer and the internet. And much of the development was incremental, rather than coming in revolutionary flashes of insight. Each generation of hardware allowed engineers to refine their techniques and build a better one, which is why the CPU in this computer consists of transistors of largely the same design as the 1954 original, except millions of times smaller and faster.

Ignoring this type of incremental (but no less world-changing) science leads to the type of big-idea, insight-driven reporting so brilliantly excoriated in this extended piece by Boris Kachka in New York Magazine, written in the wake of the Jonah Lehrer scandal. It leads to doubt when climate change science isn't as clear-cut and straightforward as people have come to expect real science to be. And it leads to young potential scientists doubting their ability to be scientists when their ideas aren't right, or are incomplete.

So let's keep telling the mind-blowing stories. But let's also remember to occasionally tell the stories of ideas that weren't quite right, experiments that were confusing, and pitch that took a decade to drop.

Curing cancer in everything but humans

The next time you read an article claiming that a cure for cancer is just around the corner, you will be forgiven if you don't rush out to tell your friends and family the good news. After all, it seems like such announcements are made fairly regularly; meanwhile oncology wards aren't exactly closing down due to lack of patients.

You could repeat this example indefinitely; we still don't have cures for Alzheimer's or Parkinson's; we don't know what causes autism (or even if there is a cause); we still can't halt aging. All this despite the regular headlines telling us that such things are just around the corner. What gives?

A big part of the problem here comes from the fact that, when it comes to doing medicine in humans, there are two types of studies: the type we would like to perform, and the type we actually get to. The type we would like to do goes something like this: take two groups of people with a disease. Treat one group with the drug you want to test, and don't treat the other group. Keep everything else the same, to the point of giving the control group fake treatments so that the experience isn't different. At the end, tally up how everyone did, and see if the drug worked. Or, to take a slightly different formulation, take two groups of people. Expose one group to the agent that you think causes a certain disease. Don't expose the other group, but keep everything else the same, to the point of exposing them to a fake agent. At the end, tally up how everyone did, and see if the agent caused the disease.

The problem is, of course, that it is usually completely unethical to do this type of study with actual people. Obviously you can't go around deliberately exposing people to things that you think might cause terrible diseases in the name of science, and you also can't deny people older, at least partially effective treatments because you need a control group to test your newer, better treatment. So scientists fall back on two ways of getting around this. One, you could do the test on animals. Two, you could look back at records of who had radium watches, or smoked cigarettes, or consumed excess vitamin C, and then correlate that with the rates of leukaemia, or lung cancer, or long life.

There are, of course, practical downsides to each. Animal testing, which can in principle be done with our rigorous ideal study design, has the problem that you don't know that humans will react the same way as the animals, and the only way to know is to do another study, which brings us back to the original problem. Correlating patient histories, which involves actual humans, has the disadvantage that you don't have controls; you don't know, for example, that the people who consumed excess vitamin C weren't simply the type of people who followed all kinds of health fads, in which case they may also have been, compared to the general public, less likely to live near power lines, more likely to rub their skin with olive oil, more likely to eat organic foods, etc, etc.
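
As a toy illustration of that last problem, here's a small simulation in Python. All the numbers are invented: a hidden "health-consciousness" trait makes people both more likely to take vitamin C and more likely to live long, while the vitamin itself does nothing, and a naive look-back comparison still finds an "effect":

    import random

    random.seed(0)
    takers = skippers = long_lived_takers = long_lived_skippers = 0

    for _ in range(100_000):
        health_conscious = random.random() < 0.3   # hidden confounder
        takes_vitamin_c = random.random() < (0.7 if health_conscious else 0.1)
        # Longevity depends only on the confounder, not on vitamin C itself.
        long_lived = random.random() < (0.5 if health_conscious else 0.3)
        if takes_vitamin_c:
            takers += 1
            long_lived_takers += long_lived
        else:
            skippers += 1
            long_lived_skippers += long_lived

    print(f"long-lived among vitamin C takers: {long_lived_takers / takers:.2f}")
    print(f"long-lived among everyone else:    {long_lived_skippers / skippers:.2f}")
    # The takers come out noticeably "longer-lived" even though vitamin C does nothing here.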

John Ioannidis has looked at statistical issues around both animal studies and "look-back" studies. His research is, to say the least, a little disturbing, considering how often these types of studies are reported in the media. In a now-classic paper, "Why most published research findings are false," Ioannidis points out that most look-back studies ignore the roads not taken in their analysis. What he means is that if you do a study on, say, the connection between aspartame and Alzheimer's (which made headlines in the '90's), you need to look at all of the other things that you didn't test that were, in principle, just as likely to be connected. This is because study conclusions are typically reported with what's known as the significance, or p-value: roughly, the probability that an effect at least as big as the one observed would show up by chance alone if there were no real connection. Significance is reported basically because it's what we can calculate easily. But the problem with look-back studies is that if there were, say, 50 different things that were as likely as aspartame to be connected to Alzheimer's, then a significance of 0.05 (which is a typical value) becomes inconclusive. A single test at 0.05 has a 1 in 20 chance of coming up positive purely by chance, but with 50 different relationships you could have tested, the odds are you'd get 2 or 3 positive results just by randomness. Unfortunately it's usually pretty difficult to estimate how many different things are as likely to be connected with Alzheimer's (this feeds into what's known as the prior, or prior probability, and estimating it is a pretty endemic problem across science). So most studies don't report it. Which means that they may be drastically overestimating the strength of their conclusions.
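
To make the "roads not taken" point concrete, here's a quick simulation in Python. It assumes, purely for illustration, 50 candidate exposures with no real connection to the disease, each tested at the usual 0.05 threshold:

    import random

    random.seed(1)
    n_candidates = 50   # exposures just as plausible as the one that made headlines
    alpha = 0.05        # the usual significance threshold
    n_batches = 10_000

    false_positives = 0
    for _ in range(n_batches):
        # With no real effects anywhere, each test is "significant" with probability alpha.
        false_positives += sum(random.random() < alpha for _ in range(n_candidates))

    print(f"average 'significant' links per batch of 50 null exposures: "
          f"{false_positives / n_batches:.1f}")
    # comes out around 2.5 -- the 2 or 3 spurious positive results mentioned above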

Ioannidis has also looked at animal studies. In a paper published last week in PLoS Biology, he asks the following question: if you perform some large number of studies, how many do you expect to come back with a positive result (ie, the medicine worked)? He works out a statistical argument for this expected number, then compares it to available databases in which people have reported both positive and negative results from animal tests. What he sees there is that the observed number of positive results is way higher than expected. Hence, researchers are, for whatever reason, more likely to report positive results than negative results. This is a problem for the reason discussed above: in order to get an idea of the prior for a given relationship, we need to know how many similar studies have turned up negative results. If the negative results go unreported, again, studies end up drastically overestimating the strength of their conclusions.
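
Here's a minimal sketch, in Python, of the kind of expected-versus-observed comparison being described. It assumes we already have a statistical power estimate for each study in the database; the numbers are invented for illustration and are not taken from the paper:

    from math import sqrt

    # Hypothetical power of 20 animal studies of the same treatment,
    # and how many of them reported a positive (significant) result.
    powers = [0.2, 0.3, 0.25, 0.4, 0.35, 0.2, 0.3, 0.45, 0.25, 0.3,
              0.35, 0.2, 0.4, 0.3, 0.25, 0.3, 0.35, 0.4, 0.2, 0.3]
    observed_positives = 15

    expected = sum(powers)                       # expected positives if reporting were faithful
    variance = sum(p * (1 - p) for p in powers)
    z = (observed_positives - expected) / sqrt(variance)

    print(f"expected: {expected:.1f}, observed: {observed_positives}, z = {z:.1f}")
    # z comes out over 4: far more positives than the studies' power can account for,
    # which points to selective reporting of positive results.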

This leaves us in rather a bad situation. Not only do look-back and animal studies have built-in limitations, which tend to get glossed over in media reports looking to make an impact, but the studies themselves are overestimating how conclusive the data they report is. The end result is the first paragraph of this post: a relentless stream of articles promising potential breakthroughs that never quite pan out.

What's the solution to this? There may not be a simple one. Better appreciation of study design and limitations, both by researchers and science communicators, would help. Most importantly, we need to design more integrity into biomedical studies. There are places where research teams can register studies before starting them, thus removing an important source of bias. Such efforts are voluntary at the moment; national governments and healthcare bodies could make them compulsory. And scicomm blogs could work to make sure that each time someone reads a headline that says "Cancer cured in ...", they skeptically ask, "But what type of study was it?"

Tuesday 16 July 2013

Headlines in Science: Scientists say...

The last post in this series looked at a specific example of a bad headline. This time, I want to zoom out a little and focus on a class of headlines. Specifically, all those that include "Scientists claim," or "Scientists say," or any equivalent to this, in the title.

This is a dangerous construct. The implication is that all scientists, or at least all scientists in a given field, or at least a majority of scientists in a given field, agree with the statement in the rest of the headline. It's understandable when you consider headlines as a way of generating interest in an article. After all, no one cares if "Joe from accounting totally thinks he gained weight when he stopped sleeping." But give it a title like "Lack of sleep can make you fat, scientists claim" and now we've got something.

However interest-grabbing it may be, problems arise when the implied support of the scientific community collides with that effect we've discussed before, wherein an editor summarizes (and sensationalizes) an article whose author was summarizing (and sensationalizing) a research project. A potentially misleading or flat-out wrong statement has now been given the weight of expert consensus.

Going back to the "lack of sleep can make you fat" example, a look at the article reveals that the scientists didn't actually measure weight, or BMI, or waist size. And they didn't run the experiment long enough to even see a noticeable weight gain. What they did was measure the levels of a particular chemical in the body that is linked to a desire to eat. Now there's nothing wrong with doing that, but as with all research it's important to be clear on what was and wasn't shown.

The next big problem with the "scientists claim" construct is that very often it's applied to the findings of one researcher, or at most a small collaboration. While technically it's true that a paper with three authors is "scientists" saying something, headlines using this construct give the impression of consensus, not just a small group.

The example we've already looked at falls squarely in this category; it's a single group reporting one study they performed. This problem, though, appears to be rife. Searching "scientists claim" and "scientists say" in Google news on 16 July 2013 brought up, in addition to the sleep-fat story:


"Singing And Yoga Might Have Same Health Benefits, Scientists Claim"

"Global warming 'can be reversed', scientists claim"

"Earth had two moons, scientists claim"

"'There is no scientific consensus' on sea-level rise, say scientists"

In every one of these stories, upon actually reading the story you realize that each is based on one paper published by one research group. The research might be borne out, and the headline may actually reflect scientific consensus in a few years (well, except the last one, which is a flatly disingenuous article from the climate-change denial camp). But at the moment they're jumping the gun.

There's one last point I want to make here. Often "scientists claim" headlines do include a qualifier: might, may, can, etc. This in and of itself isn't a bad thing. But I don't think it lets editors off the hook for making the statements they then qualify. Now, I don't actually have any research backing up what I'm about to say (if anyone else knows of any, I'd love to hear about it!). But from personal experience and talking to other people, it seems like qualifiers are the first things forgotten when recalling a headline. I don't generally remember the exact wording, I remember the idea and I paraphrase it, which comes out something like "this study showed that if you get less sleep you gain weight." Qualifier gone.

For this particular problem, there's a pretty easy solution: stop using "scientists claim"! Or any other equivalent construct. At the very least, reserve it for statements that come out of large conferences designed to forge a consensus. But on the whole, editors, please just stop.

Maybe then I can stop losing sleep over terrible headlines. Which I heard was making me fat. It's true; scientists say so.

Monday 15 July 2013

Why E=mc^2 is actually cool

E=mc^2 may be the most famous physics equation in history. Why this is, though, is misunderstood, both by the public at large and even by many physics students (at least ones I've talked to about this).

So, the Public Understanding: Einstein was a super-genius, and he invented E=mc^2. This has something to do with energy. Einstein used this to invent the atomic bomb and win World War II.

Why this is Wrong: Well, Einstein was actually a super-genius. I kind of have a crush on him, to be honest. And he did derive (an important point we'll come to later) E=mc^2. He did not, though, have much to do with inventing the atomic bomb. What's more, E=mc^2 didn't lead straight to the bomb in the sense that most people think it did.

So let's take a step back. The equation we're talking about says that Energy (E) is equal to (=) mass (m) times the speed of light (c) squared (^2). This tells us that (a) mass can be converted to energy, and energy can be converted to mass, and (b) a little bit of mass converts to an enormous amount of energy, since c^2 is a very big number. Now, it's certainly true that the mass of the final nuclei involved in a nuclear bomb is less than the mass of the initial nuclei, and that this change in mass is proportional to the energy released. But that's true of all processes. When I burn gas in my car, the final products are ever so slightly lighter than the initial ones. But I don't credit E=mc^2 with making my car run. So what's up?

The reason we associate E=mc^2 with nuclear (ie, a-bomb) processes and not chemical (ie, gas-burning) ones is basically a matter of technical convenience. When I burn gas, it's easy to measure the energy that came out, but hard to measure the change in mass, because it's incredibly tiny. When I split or collide nuclei, it's hard to measure the energy that comes out, partly because there's so much of it and partly because a bunch of the energy gets carried off by neutrinos, which we can't capture very well. But it's (relatively) easy to measure the mass of the initial and final nuclei, so that's what we do. E=mc^2 is always true, it's just sometimes convenient to use, and other times not.
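
For a sense of the numbers, here's a quick back-of-the-envelope comparison in Python. The energy figures are rough, commonly quoted values (roughly 46 MJ per kilogram of gasoline burned, roughly 8 x 10^13 J per kilogram of uranium-235 fissioned), not precise measurements:

    c = 3.0e8             # speed of light, m/s

    E_gasoline = 4.6e7    # ~46 MJ released per kg of gasoline burned (rough value)
    E_fission = 8.0e13    # ~8 x 10^13 J released per kg of U-235 fissioned (rough value)

    # Rearranging E = m c^2: the mass that disappears is m = E / c^2
    dm_gasoline = E_gasoline / c**2
    dm_fission = E_fission / c**2

    print(f"mass lost burning 1 kg of gasoline:  {dm_gasoline:.1e} kg")  # ~5e-10 kg, half a microgram
    print(f"mass lost fissioning 1 kg of U-235:  {dm_fission:.1e} kg")   # ~9e-4 kg, about a gram

Half a microgram out of a kilogram is hopeless to weigh, which is why nobody bothers with E=mc^2 for chemistry; a gram out of a kilogram is easy to weigh, which is why the equation gets all the nuclear press.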

In any case, most of the effort that went into building the atomic bomb was on rather practical questions like, "How do we separate out the uranium we want from the uranium we don't want?" and "How can we use precision explosives to bring that uranium together in just the right way?" These questions had really nothing to do with E=mc^2.

Now the Common Physics Student Understanding: Einstein was a super-genius, and he derived E=mc^2. This tells us that mass and energy are equivalent, two aspects of the same thing. This changed our view of reality.

Why this is Wrong: Well, it's really not. What it is, though, is incomplete. So to complete it, we have,

Why E=mc^2 is Cool and Important: To understand this, we need to take a look at where the equation came from. Where it came from was two papers Einstein published in 1905 on electricity and magnetism. Einstein starts off this little duology by noting that, at the time, the laws of electricity and magnetism were inconsistent with the laws of motion in a peculiar way. The example he used requires a bit of background, so I'm going to pick a simpler, but equivalent one.

You probably learned at some point that electric current can make magnetic fields--this is how we get electromagnets. In fact, any current, and any electric charge that's moving, creates magnetic fields. This, though, creates a problem. Say I rub a balloon on my head to put some charge on it, then put my charged balloon out in space, a long way away from anything. Now, if the charged balloon is moving, it creates a magnetic field; if it's not, it doesn't. But, how do we know in space what is moving and what is standing still? If one person (normally called Alice) is floating next to the balloon, and another person (normally called Quvenzhané) shoots past, they would disagree on who is moving and who is standing still, and hence they would disagree on whether or not the balloon was producing a magnetic field. But the magnetic field can't both be there and not be there, so we have a problem.

Einstein noted this inconsistency, and found a way to write physics laws in a way that didn't create these disagreements. It was a bit of a weird way, with time slowing down as you sped up, and lengths changing and whatnot, but it worked. And, almost as an aside, it produced the expression E=mc^2.

The details of how that all works aren't really important here. What is important is this: the laws of Electricity and Magnetism (EM), which you can work out with some styrofoam balls and plastic in a high school classroom, imply that every object in the universe has an intrinsic energy that depends only on its mass. Not its internal structure, or what it's made of, just its mass. So E=mc^2, which isn't really about EM, and applies to things that aren't charged or magnetic, and plays a large role in gravity, is embedded in the structure of electricity and magnetism. This should blow your mind. The laws of how electricity works also tell you that everything has an intrinsic energy proportional only to its mass. This is one of the best pieces of evidence so far that there is, in fact, a consistent mathematical structure underlying the universe. That Einstein figured out this implication pretty much cemented his genius status, even if he didn't single-handedly win World War II.
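
(For completeness: E=mc^2 is the rest-energy special case of the full relation that falls out of the same framework, E^2 = (pc)^2 + (mc^2)^2, where p is the object's momentum. Set p to zero, take the square root, and you're back to E=mc^2, which is the sense in which the intrinsic energy of an object at rest depends only on its mass.)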

And THAT is why E=mc^2 is cool.

Saturday 13 July 2013

Headlines in Science: io9 and Objectifying Women

This is the first post in a series on headlines in science communication. This one is from last year, but I'm going to talk about it because it's a particularly egregious example of a terrible headline.

One of the blogs I read is io9, a blog about science, technology, and nerdy things in general. Normally I enjoy frequenting this part of the internet. But in this case they participated in the all-too-common trend of headline inflation - that is, taking some research, adding an interpretation to it that wasn't in the original but that makes it more attention-grabbing, and then making that interpretation the headline. In this case, the research was this, an article (unfortunately behind a paywall) that sought to quantify the mental processes that drive the objectification of women's bodies.

I won't go into all the details, but basically the study looked at how likely people were to notice changes in specific body parts in pictures of men and women, and how easily they identified pictures they saw earlier from a picture of just one part of the person. It was a study looking at the way the minds of the participants worked, while assuming throughout that a) objectification is a bad thing, and b) studying the way the mind works could help us combat it.

Move on over to the Scientific American blog post covering the story. Here the bare facts are reported, and the post stresses that objectification is harmful, and points to an experiment that could lead to a better understanding of how to put your brain into a non-objectifying mode. But the author also wants to get at more than just what is in the study, so we get this quote:

There could be evolutionary reasons that men and women process female bodies differently, Gervais said, but because both genders do it, "the media is probably a prime suspect."

Note that the quote they got from the study author (Gervais) says "the media is probably a prime suspect," but the post author chose to place it in a context that implies that evolutionary reasons are also a possibility, something not mentioned at all in the research article. Obviously, I have no idea what their conversation was, but it seems like the journalist asked whether evolution could have hardwired us for this, and the study author gave a researcher's version of "no." Bear in mind that scientists get good reputations by not being wrong, so they tend to hedge any statements they make, especially ones on which they don't have conclusive data from multiple sources. The journalist here took that "no" and shaped it into a sentence that comes across as a "maybe..." It's a little sad, but not entirely unexpected.

Now we get to the io9 article. The headline is "Both men and women may be hardwired to objectify women's bodies". At least, it was (I'll get to that in a bit.) At this point, my question for io9 is, WHAT THE HELL?!?! With one headline they took an article whose goal was to help reduce objectification, and turned it into something that could be used as an excuse to keep doing it. The comments indicated that a number of people took it exactly as that. (One comment: "I'm staring at your titties because of science baby. Pure science." Presumably not the reaction the study author was hoping for.) I appreciate the fact that io9 wants to increase their page views, since that is how they generate revenue, but completely inverting the subject of research in a way that harms the people the research was trying to help so that more people are directed to their blog is despicable. There's no way to soften that or justify what they've done here.

Just in case that wasn't bad enough, the article title was changed, after a number of the comments pointed out that it was completely at odds with the research it was reporting. Now it reads, "Why both men and women's eyes are drawn to women's bodies." Perhaps no more accurate, but at least less offensive. (The original title is still in the web address.) But there's no mention in the article that this has been edited. So now, the numerous people who made comments suggesting that this was a sexist attempt to increase page views are left sounding like they over-reacted to a straightforward article. If you're going to correct a mistake on a thing like this, the least you could do is fess up to it. Not that that would have undone the damage. Since io9 is a high volume blog, even someone who checks in once a day won't ever see the article again, unless they scroll down the sidebar looking for it. All in all, this amazing double-whammy might be the biggest science journalism fail I've seen in a long time.

This whole episode is a sad illustration of the process that creates terrible headlines, and the damage they can do. To start with, the science being reported on was an exploratory work: an initial study that will presumably be followed by others to give it more nuance and context. Exploratory work often turns out to be incorrect, and when it is correct, the best interpretation is bounced around the research community, often for years, until the work is settled in context with other work and theories, and a story emerges. Any take-home messages that are suggested before then are, at best, the opinion of one researcher regarding the significance of their own work, or at worst, the opinion of one editor regarding work in a field they have only a passing knowledge of. This type of science needs to be considered especially carefully, as it is especially prone to bad headlines.

The next step in this sad process is the multiple layers of sensationalizing. Here there were three levels: the original blog at Scientific American, the article at io9, then finally the headline for that article. Each level pushed the conclusions from the original research into more sensational, and less accurate, territory, until the point was completely lost.

Finally, the damage. Though the body of the article never makes the statement the headline does, it's pretty clear from the comments that a number of people assumed the headline was the conclusion of the research being reported on. And why not? Isn't that the point of a headline? It's difficult to quantify this, but it's safe to say that, for each person that took the time to comment on the article, many more simply saw the headline and added it to their internal "facts I know" database.

The process by which this headline went wrong suggests some things that could be done. First, read the original research. I know that paywalls are a problem for many science enthusiasts, but there's really no excuse for a professional journalist to not get a copy of the original article. Reporting on a report on some research is bound to introduce distortions. Second (and this will be a theme), the headline should be tossed back as far as possible. What I mean by this is that in the worst case scenario, the headline is okayed by the journalist writing the article, and in the best case, by the scientist actually doing the research. This won't always be realistic given news timelines, but throwing the headline back as far as possible to double-check it can only help.

So, thank you, io9, for that wonderful illustration of how to be completely terrible at making headlines. I sincerely hope the rest of the articles in this little series have far less to work with.

Scientists say Headlines Generally Terrible

A headline serves as the title for a news story, but obviously it's more than that. It serves as a guide, letting the reader know what to expect; it forms a context for what they are about to read; and it may be the only part of the article many people look at. Which is to say it's important that a headline be as accurate as one line can be.

Unfortunately, headlines often aren't written by the author of the attached article--they're added by an editor or someone else involved with the publishing process. So there's a double layer of understanding to surmount: the science journalist understanding the material and communicating it clearly in an article, then an editor understanding the article and making a good headline. Sadly, the point of a piece of research or a discovery is often lost in these two translations.

It gets even worse when the subject matter is controversial. Now, instead of just two layers of understanding going into the headline, there are also two layers of sensationalism. I'm not trying to say here that science journalists are irresponsible tabloid writers, just that by nature they look at a story and ask, "What's the excitement here? How can we spice up this story?" Then the editor looks over the story and asks, "What's the excitement here? How can we spice up this headline?" Two layers of this, and it's no wonder that the headline often ends up an extreme exaggeration of the science it purports to describe.

It's helpful to look at some examples to see where headlines can go wrong, and what the damage can be. With that in mind, this post is merely the introduction to what I hope will be an ongoing series looking at headlines in science articles. For the most part it will be focussed on where they go wrong, although if I come across anything that strikes me as a particularly good headline for a subtle or difficult topic, I'll post about that too.

Often I see articles pointing out problems with a format, or institution, or society, without any suggestions as to what can be done. This can be useful, but obviously has its limits. So I'm going to try to think about how headlines can be better generated as I post each article about them. Hopefully I'll leave you with not just "This is a problem," but "This is a problem, and here's how it could have been done better."

So, now for some terrible headlines!

Newton, Gravity, and How People Weren't Total Idiots

There's an unfortunate reality about getting a university degree in science: you end up knowing essentially nothing about the history of science. I was reminded of this recently because there was a relatively large history of science conference happening in the city I live in, sponsored by the school I attend. You would think it might be of some interest to some people in the physics department. You would be wrong. It wasn't mentioned once in the numerous emails I get describing the events of interest going on each day, it wasn't discussed by any of the graduate students I ran into that week, and when I did bring it up, I got strange looks, as if to say, why would someone in physics care about the history of physics?

I'm not saying the state of affairs is all physics' fault; looking over the talks scheduled at this conference made me realize that the academics in the field, like all other fields, are mainly concerned with impressing their colleagues in their subfield, rather than building bridges across related disciplines. Still, it's sad, and these types of divisions mean, among other things, that you can get multiple degrees in science while maintaining a complete lack of understanding or appreciation of how your field came to be.

So hopefully I can do a small part, occasionally, to remedy the situation. Starting with Newton and gravity.

Isaac Newton, as everyone knows, invented gravity. Or discovered gravity. Something to do with gravity. There's two versions of the story. The first one, the one that is vaguely in the heads of non-scientists when they are asked about Newton, goes something like this: Newton was sitting on the ground one day when an apple fell on his head. This made him realize that gravity was a thing, so he told other people about it. They then realized that gravity was a thing and so declared Newton to be a genius.

This version of the story seems to imply that people back then were complete and utter idiots; that no one had ever noticed that things fall down, or commented on it, or thought about why this might be. Clearly, a little bit of thought shows that this cannot be a true story.

The version that you get in first year physics is more like this: Ha ha, normal people are dumb, there was no apple. Newton realized that gravity is a force that is proportional to the inverse of the square of the distance between two objects, and also to the objects' masses. That is why he is famous for gravity.

This, while being closer to the truth, in that Newton did propose an inverse square law, doesn't fully explain Newton's fame and lasting influence. Hooke also proposed an inverse square law independently, and neither was the first person to make mathematical statements about gravity and the planets.

Newton's lasting influence arises from a bold claim he made with this theory (and others): that there is a single law of gravity, which applies to apples, and the Earth, and Mars, and Jupiter, and the Sun, and every single body that we can see, regardless of whether it lives in the heavens or on the earth. It is this universal nature that sets Newton apart from the people who came before him, and it is that attitude that is perhaps his most influential contribution. Even if you don't remember a single law of motion, or how gravity works in a mathematical way, you know that things on Mars obey the same laws of physics as things on earth, and you believe, without needing it to be proven, that if we ever sent a probe to a planet in another star system, the same laws of physics would apply there as well. That you believe that is Newton's most lasting contribution to science.

Why does that matter? Well, for one thing, it's always worth remembering that people in the past didn't necessarily think the same way we do now, and there's a lot we take for granted that would have been foreign to them. Prior to Newton (and yes, I know I'm simplifying things by implying it was all due to him), the idea that the universe operated under a set of consistent rules that applied everywhere wouldn't have occurred to most people. In fact, if you go back far enough, you lose the distinction between the supernatural and the natural completely.

Secondly, in general I think that the more we educate ourselves about how science has worked, and how it works now, the better able we will be to make decisions about the many, many issues that science touches on today.

So there's the history lesson. For more on Newton, I. Bernard Cohen is a place to start. For more on pre-scientific world-views, the opening chapters of The Evolution of God offer a fantastic description.