Friday 13 December 2013

Doing Stuff! (or, why there haven't been new posts in a while)

So it's been a while. And not, as you might suspect, because I've been lazy. Rather my science communication time (which is given by Total Time MINUS time doing enough work to keep my research supervisor happy MINUS time ensuring that the students of Physics 101 [yes, it's actually called that here] get tutorials and marks and such MINUS time spent ensuring that food and clothing and basic cleanliness happens MINUS just enough time with my family to keep my sanity MINUS perhaps six hours of sleep a night) has been taken up by a couple of other projects.

Project 1: More Drawing! I illustrated a series of pages looking at Einstein and E=mc^2. The content is similar to what I wrote here, but modified for visual appeal. These drawings actually got their own physical copies which hung out in the Irving K Barber Learning Centre here at the University of British Columbia, as part of a science-art exhibit called "Inescapable Perspective" put on by the UBC Carl Sagan Association.





Project 2: A Video! I've been teaching myself how to do 3D animations in Blender, and the result is this video, which shows how our sense of smell works at the molecular level, using scenes from Star Wars. I made this for the Faraday Show, an all-ages science show (in accordance with popular usage, all-ages=kids) put on by the UBC Dept. of Physics & Astronomy (there were also some chemists involved). The name comes from Michael Faraday, a British physicist who did a bunch of important work way back in the day, and also put on a physics show for kids every Christmas.



These two projects have been sucking the life out of this blog, but hopefully now I'll have a little more time (ha!) to catch up on some topics I've been dying to write about.

Stay tuned!

Wednesday 23 October 2013

Drawing for UBC Carl Sagan Association

A while back I started a feature called my Scientific Notebook, in which I put up sketches I did that explained science. It didn't really fit with the tone of the rest of the blog, but I did want to post them somewhere.

Well, the people over at the UBC Carl Sagan Association, whose mission is to communicate science, have agreed to post some of my sketches on their blog! So future Scientific Notebook entries will likely wind up over there.

The first one is already up. If you read this blog you'll recognize it as covering some of the same ground as this post on genetics, though it approaches the topic differently and has more pictures.

Enjoy!

Monday 21 October 2013

Evolving Superpowers

Wouldn't it be cool to have super-powers?

That's the premise for a huge number of TV shows, movies, books, and of course comics. Apparently wondering what it would be like to be able to fly, read minds, teleport, or stop time is a pretty universal pastime.

And as every fan of superhero stories knows, one of the most important elements is the origin story--how the hero gets their powers. They could be from an alien planet, or be entrusted with an alien artifact. They could be an ordinary person with access to specialized technology. Radiation could be involved. Or it could be that humanity is evolving to its next state, and that state involves super-powers.

This last one is among the most popular. It's the basis for X-Men, one of the biggest superhero franchises out there, as well as the TV show Heroes, and numerous other series. And it's the basis for the new TV series (which is a reboot of the old TV series) The Tomorrow People.

I've watched the first couple of episodes of The Tomorrow People, and I liked it. I'm hopeful that, by focussing on just three powers, the show will be able to look at their implications with a little more depth than normal and avoid painting itself into a corner (as in, "wait, doesn't Peter Petrelli have, like, every power? Why are there any world-threatening problems he can't solve?"). I also like the whole double-agent angle, and I hope the show has the courage to explore some murky waters and moral ambiguity around whether those with super-powers should be policed, and if so, how.

But what I want to talk about now is the science.

Yes, I'm sure you can already predict where this is going. "Evolution doesn't work like that, there would be many harmful mutations for every beneficial one, evolution can't 'look ahead', how could a single gene cause people to teleport..." Well, I'm not going to complain about those (mostly). I'm willing to cut a TV show some slack in its superhero origin stories; it's primarily entertainment after all. If I wasn't willing to suspend disbelief I wouldn't enjoy much media.

I do want to point out two things, though. First, the leader of the bad-guys is supposed to be an expert genetics researcher. At one point, when threatened by one of the super-mutants, he discusses how the tomorrow people can't kill anyone--they freeze up if they ever try to. Bad-guy-leader talks about how this will eventually be a beneficial mutation, but sadly right now it's a liability.

This is exactly the kind of mutation that can't evolve. Evolution has no "look-ahead"; genes proliferate because they're beneficial right now, not many generations from now. Having the scientist character basically rub the audience's face in bad science took me out of my suspension of disbelief for a moment. It's always better to leave some things unsaid and let viewers' imaginations fill in the details than to over-explain and blunder.

The more important issue is the show repeatedly referring to "the gene" that causes these superpowers. It also refers to the tomorrow people as being a new species, not entirely human--it's something they stress on multiple occasions in the first two episodes. The clear implication here is that a change in one gene can move you from "human" to "not-quite-human". And this, to me, is a problem.

In any species there is a fair bit of genetic variation--humans are no exception. You have different genes from the person standing next to you (unless you're standing next to your identical twin). A large part of that variation comes from sexual mixing--because you get half your genes from each parent (and because there's a lot of genes to pick each half from) you end up with a unique mix of genes.

And of course there are also mutations. Humans have an estimated average mutation rate of 175 mutations per generation--meaning the average person has 175 new mutations they didn't get from their parents. Since each person also inherits the 175 mutations that their parents had, and the 175 their grandparents had, etc, etc, we're all walking around with thousands and thousands of mutations. (Sadly, after extensive testing, I can report that none of my mutations have led to latent super-powers.)

None of this variation makes any of us not human; it just means we're all unique. Humans simply aren't a mono-culture.

Why am I picking on this particular point? Because in our non-fictional world, people have used a small set of genes to argue that certain other people weren't fully human. Genes like the ones controlling skin pigmentation, or hair texture, or even nose width. We have a long, sad history of classifying some people as subhuman, and even today there are still some holdouts to that view.

This may be part of the point in some superhero stories. Certainly in a number of the X-Men stories the parallels between the mutants and oppressed minorities in the real world were intentional. I worry, though, that the emphasis The Tomorrow People puts on a single gene leading to a new species will lead audiences to internalize the message, "different genes always mean different (sub)species". This would be a disservice, regardless of what else the story might say about tolerance.

It would be nice to hear, just once, in a story about mutants with super-powers, someone point out that mutations aren't rare, and that genetic variation doesn't make some people not human.

Can we work on that, TV writers?

Wednesday 9 October 2013

Headlines in Science: Higgs Nobel Prize Edition

It's time to award the prize for best headline about the awarding of the prize for physics by the Nobel committee. And by "best", I mean in the can't-look-away-from-the-train-wreck sense.

Background: as you may have heard, the Nobel Prize for Physics was announced, and surprising no one, it honoured the discovery of the Higgs boson (just the theorists though--no love for the experimentalists who, you know, actually found the thing).

Anyway, there were obviously lots of headlines on this issue, but I hereby award the honour of best headline (by which I mean worst headline) to the following one from The Register, a British technology news website:

"Brit boson boffin Higgs bags Nobel with eponymous deiton"

I don't even know where to begin.

Scratch that, I'll begin with the words. "Boffin" seems to be some sort of British word for expert. That's fine, but the headline implies that Higgs is an expert on "bosons". Bosons, for those who don't know, are one class of particles; the other class is fermions. Every single particle or collection of particles (which includes, well, everything) can be classed as either a boson or a fermion. Higgs isn't really a boson specialist, though--the category is so wide it's hard to know what that would even mean. Rather he used quantum field theory to predict a new particle, which happened to be a boson.

"Deiton" appears to be a new word coined by the good people at the Register. No explanation is given but it would seem to refer to the fact that the Higgs boson has sometimes been called the "God particle".

"Eponymous" seems like a strange choice for a context in which clarity is presumably important, especially when paired with the made-up "deiton".

Beyond the actual words, though, there's a bigger issue here, and it's conceptual. The issue is whether a headline exists to inform, to set context, and to draw in interested viewers; or to show off the cleverness of the headline writer. It seems like the Register opts for the latter.

So, an open letter to the Register (because they totally read this blog...):

Dear Register,

Please be aware that a truly clever headline is one that sets up the reader for what is to come in an accurate, concise, and clear way.

Also, as a rule of thumb, headlines should probably not contain words you just made up. Just saying.

Sincerely,
Everyone who cares about science communication

Monday 7 October 2013

Quick Hits

I've been pretty busy the last week and a bit, which has kept me from writing anything substantial here, but here's a few quick thoughts about some science stories that have struck me:

  • So the Nobel Prize in Physiology or Medicine was awarded today to three scientists for their work on how vesicles move various molecules and substances around cells. The science here is very cool. I will note, without taking anything away from the work done by the recipients, that the prize went to three white men; two born in the US, one in Western Europe. Since its inception more than a hundred years ago, ten women have won the Nobel Prize in Physiology or Medicine (out of 204 winners), four women have won the Nobel Prize in Chemistry (out of 162 winners), and two women have won the Nobel Prize in Physics (out of 193 winners). No Nobel Prize in the sciences has ever been awarded to a black man or woman. We still have a long way to go.
  • If it wasn't enough that industrial fishing has killed off 80% of the biomass of the oceans, the effects of warming and CO2 absorption (which changes the ocean's acidity) are causing the world's largest ecosystem to decline faster than previously thought.
  • This is a great post on Malcolm Gladwell and the danger of oversimplifying science--a perennial concern here.
  • Finally, this is terrible.
Hopefully I'll find a little more time for writing this week. Well, maybe next week. Okay, it's going to be a while.

Monday 23 September 2013

What's Political and What Shouldn't Be: Science in Canada

Last week protesters gathered in a number of Canadian cities to draw attention to the science policies of the current government. Their concerns are, according to the press, that the government is keeping scientists from communicating to the public, and also that it's defunding important scientific projects.

Here's my problem with this: the media reports make two problems of very different magnitude seem equivalent.

First, the money issue. If you read the second article linked to above, it's the main reason for the protests. Scientists aren't happy with the funding for science under the Harper government, particularly with regard to basic research. This is a legitimate complaint for scientists to make; they want to see Canada reap the benefits that come from having a strong research community and they see these cuts as threatening that.

We do live in a democracy, though, and the people of this country elected the Conservatives on a platform to cut government spending. So while it makes sense to argue that the cuts are ultimately going to hurt the country (an argument I am on board with, as it turns out), it's also important to realize that in this regard the government is, in fact, doing what they said they would do during the election campaign.

The muzzling issue (which is the main reason for the protests according to the first article linked to above) is an order of magnitude more serious. This isn't about saving government money. Public money was spent on research, and once the results were in the government demanded that they and they alone see them, before deciding what to pass on to the public after suitable editing.

Selectively releasing results is a form of dishonesty; it's no different than when pharmaceutical companies release studies that show their products in a good light and bury ones that point to potential risks. When certain research outcomes are suppressed the government is, as a whole, giving the public a misleading picture.

This is an issue that should transcend political affiliation. Whether you believe in big government or small, decision makers need the clearest picture they can get from the people the public is paying to investigate some of the most pressing issues facing the country.

It's no secret that the Conservative government and the science community have been at odds. By both occupation and political leaning I am on the side of the science community, but this is a bigger issue than just some professional researchers wanting job security. Ultimately the question is this: Is the government interested in getting the best answer to the questions that matter to policy, whether or not those answers line up with political ideology? The alternative is a government which cares about protecting its image, even if it means wilfully distorting the research the public paid for.

I'm not saying that cutting the funds to science is a good thing, or that scientists are wrong to go out and engage the public in the need for science funding. That is how you build democratic support for your position. But by conflating the funding and the muzzling, these latest protests and the media reporting them have watered down an important message about the way this government treats public research like the property of the Conservative Party. That's unacceptable, and it should have been the focus last week.

Wednesday 11 September 2013

More on individuals and averages

I seem to keep hitting on this idea: the individual may not be well described by the average. In fact, it's possible for no individual to be well described by an average. It's an important point because it really strikes at the heart of where a lot of science reporting goes wrong.

I'm not the only one saying this: Jamil Zaki, a psychologist at Stanford, has a great post on a Scientific American blog going into detail about exactly this idea. His point is that psychology deals with averages, and sometimes there's a lot of variation around those averages that isn't often reported.

Zaki only discusses psychology in his article, but of course the idea extends beyond that. Any science that deals with populations and tries to extract generalized information from them carries the same caveat. "Populations" as I'm using the word don't even have to be people; they could be animals or even stars, or events, or days. In other words, most of science, including all of economics and medicine, is covered here.

So the weather in one season in one part of the world may not be well described by a global average temperature (It was minus 20 here yesterday! What happened to global warming, eh?) Your risk of cardiac disease may not be well described by the average for other people with similar habits and backgrounds to you. It's even true that, if you smoke, the decrease in your lifespan may not be well described by the average decrease in life expectancy for smokers.

The average is simply one measure of a population. It might be a good way of describing things; it might not be. As an example, I could take the average height of my family. Adding myself, my spouse, and our toddler, and dividing by three gives me something around four and a half feet. That's not anywhere near any of our heights; in this case the average is simply an irrelevant measure.

More commonly, the average isn't a bad measure per se, it's just incomplete. What you usually need is the average, plus some indication of how spread out the population is around that average. The standard deviation is one such measure.
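
If you want to see that concretely, here's a minimal Python sketch of the family-height example above (the individual heights are invented to roughly match my numbers):

    import statistics

    # Hypothetical heights in feet (invented to match the example above)
    heights = [6.0, 5.5, 2.2]  # me, my spouse, our toddler

    mean = statistics.mean(heights)     # ~4.6 ft: close to none of us
    spread = statistics.stdev(heights)  # ~2.1 ft: the spread tells the real story

    print(f"average: {mean:.1f} ft, standard deviation: {spread:.1f} ft")

The average alone says almost nothing here; it's the spread that warns you the average describes no one.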

Any reputable scientific paper will have calculated many measures for the population it's looking at. Here's a list of, among other things, various measures that can be applied to a population. It's a little bewildering, which is likely why most media reports focus on one number and strip away the complicating details.

What to make of all this? Well, don't start smoking. Even if there is a certain amount of variance in the data, it's foolish to assume that you'll be an outlier. For well-established health issues, the average is, more often than not, a good guide.

Moving outside of that, if the study is new it's always worth asking, what's the variation around the average they're reporting? We looked at a study a while back in which a connection between autism and induced labour was reported; the actual research paper showed that the variance around their average results was so large that it threatened to undermine the conclusions.

Knowing when an average is a bad measure is a little harder. Often when this happens the person reporting the average is deliberately using a poor measure to make themselves look better. Economic data is a prime example. Whenever you see the GDP per person, unadjusted for inflation, you can safely discount that number as worthless. The average simply isn't a good measure for the typical person's income. Politicians report it, though, because it's a quantity that governments can reliably increase through monetary policy, even if life for the typical person hasn't changed.
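
Here's a toy illustration of that last point (all numbers invented): a group of ten people where one very high earner drags the average far away from what the typical person makes.

    import statistics

    # Nine modest incomes plus one outlier
    incomes = [30_000] * 9 + [1_000_000]

    print(statistics.mean(incomes))    # 127000 -- the "average income"
    print(statistics.median(incomes))  # 30000  -- what the typical person makes

The mean climbs whenever the outlier gets richer, while the typical person's situation hasn't changed one bit.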

The key point here is that populations are complex. Any time you see them reduced to a single number, it's worth asking, "What am I missing here?" And, as Zaki points out, it is not always about you.

Wednesday 4 September 2013

Due to genetics, or, simplicity and determinism

I typed "due to genetics" (complete with the quotation marks) into google news, and this is what I learned: 

Due to genetics, you might have a higher risk of heart attacks.

Due to genetics, you can get dark circles under your eyes.

"Happiness is 50% due to genetics"

Due to genetics, some athletes are simply better at doping without getting caught. (This article was amusing for all the pictures of athletes who did get caught doping, as if somehow this proves their point)

Your enlarged pores may be due to genetics.

Other articles that came up on this search discussed autism, breastfeeding, stretch marks, and Larry Summers' views on women in science (if you're not familiar with them and you just have too much good feeling for humanity, google it). Clearly a lot of our health, in every sense of the term, is presented as being "due to genetics".

Genetics is certainly important, and an understanding of it should be a priority for science educators. It's one of the areas of science that directly impacts people's understanding of themselves and their health. I worry, though, that the dominant perception of genetics is one that is far simpler and more deterministic than the reality. To quote the heart attack risk article linked to above, "When it comes to certain diseases, preventing the onset might be almost impossible. Due to genetics and other factors involved, certain people have a greater risk of developing certain health conditions." Genetics here is simple: genes=heart attack. It's also deterministic: preventing heart attack "might be almost impossible". Genetics, though, is neither simple nor deterministic.

Simple


First, simplicity: genetics is complicated. As genetics prof John H. McDonald points out, a large number of "canonical" examples from high school biology, from eye colour to earlobes to hitchhiker's thumbs, are simply wrong. Two parents with blue eyes can, in fact, have a brown-eyed child.

Why is genetics so complicated? It's a question that is still a subject of academic research. One answer, though, comes from looking at what DNA actually does in a cell. What DNA does is surprisingly simple: DNA codes for proteins. That's it. That's the only thing it does. Each set of three base pairs in your DNA codes for one amino acid, and a string of amino acids forms a protein. Everything else that DNA is responsible for is the result of those proteins interacting with other parts of the cell, such as other proteins, lipids, DNA itself, etc.
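
If it helps to see that mechanism spelled out, here's a minimal Python sketch of "DNA codes for proteins". Fair warning: it uses only a four-entry sliver of the real 64-codon table, and it reads the DNA coding strand directly (real translation goes through an mRNA intermediate):

    # Read DNA three bases at a time; each triplet (codon) maps to an amino acid
    CODON_TABLE = {
        "ATG": "Met",   # methionine (also the "start" signal)
        "TGG": "Trp",   # tryptophan
        "GGC": "Gly",   # glycine
        "TAA": "STOP",  # one of the three stop codons
    }

    def translate(dna):
        protein = []
        for i in range(0, len(dna) - 2, 3):
            amino = CODON_TABLE.get(dna[i:i + 3], "?")
            if amino == "STOP":
                break
            protein.append(amino)
        return protein

    print(translate("ATGTGGGGCTAA"))  # ['Met', 'Trp', 'Gly']

That really is the whole job description. Everything else comes from what those proteins go on to do.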

In order to go from a relatively simple mechanism (DNA-->proteins) to extremely complicated outcomes (the huge variety of cells in your body, which work together to form your nervous system, your immune system, and you) requires a lot of feedback loops. What I mean by that is that DNA codes for proteins, which then interact with the DNA to affect the way it codes for other proteins.

The interaction between your DNA and your environment, in the broad sense, is composed of many, many of these feedback loops. DNA interacts with proteins inside a cell, which interact with proteins embedded in the cell membrane, which interact with proteins outside the cell (and also with the lipids that make up the membrane), which interact with proteins on other cell membranes, in a chain of microscopic links that stretches from deep inside you to the surface of your lungs, skin, or eyes. Is it any wonder this chain of interactions is hard to understand in simple terms?

What these complicated interactions mean, among other things, is that you can't in general apply average results to an individual. So if gene X causes people on average to be 50% more likely to die of a heart attack than people without the gene, that won't hold true across all sub-groups. More concretely, a vegetarian who is also an avid jogger with gene X isn't necessarily 50% more likely to die of a heart attack than a vegetarian jogger who lacks gene X. It could be larger or smaller; the 50% on its own doesn't tell us. The complicated interactions of DNA with the environment--keeping in mind that what you eat and what you do are part of "the environment"--means average results may not apply to a particular group, or worse: they may not apply to any group, and only be relevant when everyone is added together. Still useful, perhaps, but not to you as an individual.
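
To make that concrete, here's a toy numerical example with risks I've invented purely for illustration: gene X raises heart attack risk by roughly 50% averaged over everyone, yet makes no difference at all within the jogger subgroup.

    # Invented heart attack risks per (lifestyle, gene) combination
    risk = {
        ("sedentary", "gene X"): 0.30, ("sedentary", "no gene X"): 0.18,
        ("jogger",    "gene X"): 0.05, ("jogger",    "no gene X"): 0.05,
    }
    frac = {"sedentary": 0.5, "jogger": 0.5}  # population fractions

    def overall(gene):
        # Population-wide risk, averaged over lifestyles
        return sum(frac[g] * risk[(g, gene)] for g in frac)

    print(overall("gene X") / overall("no gene X"))  # ~1.5: 50% higher on average
    print(risk[("jogger", "gene X")] / risk[("jogger", "no gene X")])  # 1.0 for joggers

The population-wide "50% more likely" is true, and it applies to no jogger at all.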

Deterministic


Genetics is also less deterministic than it's made out to be. Part of this is because it's complicated. Another part is because you inherit more than DNA from your parents.

I don't just mean that metaphorically; I'm not saying here that you also pick up a variety of behaviours and ideas from them, though that's obviously true as well. No, I mean that you inherit, in a biological sense, more than just the information encoded in the base pairs of your DNA.

People have known that this is technically true for a long time. The building block of a human isn't just a piece of DNA; it's a cell, with a cell membrane and organelles and structural elements and an environment. What's changing is our appreciation for how that early environment can actually reverberate through a child's development to affect their life as an adult. (The grammarians out there should note that a child's development also effects the life of an adult.)

The interactions DNA has with the cell environment not only change the way it behaves in the short term, they can also change the DNA in ways that are passed on to daughter cells. DNA methylation is one such way. Here a small molecular fragment called a methyl group is attached to part of the DNA. Not only does it change the way the DNA codes for proteins, it's a modification that gets passed on when the cell divides--even, potentially, when the cells in the reproductive system divide, combine, and go on to form a new organism.

What this means is that environmental factors can alter genes for several generations before dying out. Since both the alteration and the time it takes to go away depend on the environment, trying to ascribe a deterministic role to genes is mistaken at best.

The field that looks at this stuff is epigenetics. It's the field that studies the way the environment influences genetic expression, and it's currently a buzzword in science reporting. In many ways it is still a controversial field. What's clear, though, is that the environment does influence expression, even if we're still not sure exactly how it all pans out.

The amount of data available about both individual and group genomes is huge, and only growing larger. Soon people will be asked to make health decisions not only on current diagnosis but also on what their genes say about likely future scenarios. We need to keep this in mind, though: Genetics is complicated. It's not deterministic. Pretending it is only undermines our understanding and our health.




Wednesday 28 August 2013

Lie-to-children

I admit, the first time I saw this phrase in print, I was a little put off. Lie-to-children? That sounds like the kind of thing that gets you a lot of extra time in purgatory.

Then I learned more about it and decided that perhaps it wasn't so bad after all. Well, maybe. It still could be bad.

I should explain all that. The first thing I thought when I saw the phrase "Lie-to-children" was that people were talking about the kind of thing that you tell kids because it's convenient, even if in the long run it doesn't help them at all. That's not what it actually is.

What lie-to-children refers to is the type of simplification that happens when you are teaching someone (anyone, it turns out: doesn't have to be a child) about physics, or math, or chess, or any field of human endeavour with a deep and complicated body of knowledge. The idea is that throwing the full force of, say, quantum electrodynamics at a beginner will only turn them off the field altogether, so you teach them simplified forms (in this case, simplified electricity and magnetism) that you know aren't entirely correct. Since it wouldn't really be good teaching practice to emphasise their incorrectness at each turn, you're sort of lying. Hence, lie-to-children.

In this sense the concept has some merit. I've come to realize, though, that there are two different types of simplification. One is a type that, while simplified, gives students the right intuition about how the more complicated process works. The other type does the opposite: it simplifies a subject in such a way that students either make no progress towards understanding the fuller ideas, or actively build intuition that points in the wrong direction.

Here's an example of the good type. In high school and early undergraduate physics, we teach students a theory of friction. In this theory, friction forces depend on the materials rubbing past each other (eg rubber on concrete or skin on carpet) and the force pushing them together (gravity in most cases)--and the dependence of friction on the force pushing the two objects together is linear (double the one force, double the other). There's no dependence on the size or shape of the contact area, or any other factors.

Clearly this can't be the complete theory of friction. If it were, then all cars with the same material in their tires would have the same stopping distance, and sports cars wouldn't need fat tires or good suspension for good handling--skinny tires would work just as well. But it works as a lie-to-children because it lets students figure out things like force, energy, and work in ways that serve them well as they move on to more complete theories of forces. (As an aside, the wikipedia article about friction is terrible. Please don't read it unless you want to be seriously confused and misled.)
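
You can even see the "tire width doesn't matter" feature directly in the simplified model: the friction force is mu times the weight, so the deceleration is mu times g, and the stopping distance depends only on speed and mu. A quick sketch (the mu value is a rough textbook-style number, not a measurement):

    G = 9.8  # m/s^2

    def stopping_distance(speed_ms, mu):
        # F = mu * m * g  -->  a = mu * g  -->  d = v^2 / (2 * mu * g)
        return speed_ms**2 / (2 * mu * G)

    v = 100 / 3.6  # 100 km/h in m/s
    print(f"{stopping_distance(v, 0.7):.0f} m")  # ~56 m, whatever the tire width

Notice the mass cancels out too--in this model a loaded truck stops as quickly as an empty one, which is another hint that the theory is incomplete.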

An example of the bad type of lie-to-children is how we teach uncertainty estimates. The most common way of introducing students to measurement uncertainty in high-school and first year labs is to tell them to look at the four or five data points we've told them to collect, and subtract the largest from the smallest to get a range of uncertainty.

Why is this so terrible? For starters, it gives students the idea that a range of uncertainty on a reported value means that the true value cannot possibly be outside of that range. That's an unfortunate idea, though one that even professional scientists sometimes seem to have. It's not the worst of it, though. The worst part of calculating uncertainty this way is that the uncertainty goes up the more measurements you take. Taking more measurements gives you a higher chance of having a particularly large or particularly small one, which makes an uncertainty based on max minus min get larger. This is bad; we want students to get an intuitive feel for uncertainty as a measure of the confidence in a set of data, then we give them a way of calculating it that implies that the more data you have, the less confident you are in it.
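
A quick simulation makes the problem vivid. Below, the max-minus-min "uncertainty" grows as we take more measurements of the same quantity, while a more sensible measure (the standard deviation of the mean) shrinks:

    import random

    random.seed(1)

    def measure(n):
        # Simulated measurements: true value 10, measurement noise 0.5
        return [random.gauss(10.0, 0.5) for _ in range(n)]

    for n in (5, 50, 500):
        data = measure(n)
        spread = max(data) - min(data)  # the high-school recipe: grows with n
        mean = sum(data) / n
        sd = (sum((x - mean)**2 for x in data) / (n - 1))**0.5
        sdom = sd / n**0.5              # std. dev. of the mean: shrinks with n
        print(f"n={n:4d}  max-min={spread:.2f}  std. dev. of mean={sdom:.3f}")

More data should mean more confidence; only one of these two recipes behaves that way.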

I'm not going to go into how I think uncertainty should be introduced in high school. All I want to do here is point out that we need to shift the question from "how can we simplify this body of knowledge?" to "does this simplified version build students' (or readers', depending on context) intuition in the right direction?" If we can do that, the lie-to-children will be a little less of a lie.

Friday 23 August 2013

Scientific Notebook: Depletion Forces and Protein Folding



A new feature

In addition to doing science and writing about science (and writing about how we write about science...) I occasionally indulge in my artistic side. It's not much, mind you, but every so often I sit down with an art notebook and sketch out some science-y stuff. Since I'm writing this blog anyway, I've decided that from time to time I'm going to upload those sketches for the world to see.

So far what I've discovered is that it's a lot of work to turn a sketch in a book into something that looks decent on the screen. And I don't mean looks like a professional painting, I just mean good enough that you can read the writing and follow what's going on.

Anyway, this next post is the first entry in My Scientific Notebook. Enjoy!

Wednesday 21 August 2013

The Worlds of Biophysics

The other day I received a brochure from the Biophysical Society, the world's largest association of biophysicists, with a call for papers for their 58th annual meeting. The brochure contained a world map with the number of Biophysical Society members listed for each region. Since I'm far more interested in maps than in whatever society business the rest of the brochure was trying to tell me about, this is the section that I actually paid attention to.

Here's what I learned: there's a lot of biophysicists in the US. Not so much in Africa. The list of Biophysical Society members by region, sorted by which region has the most members, is as follows:

US: 5,823
Europe: 1,790
Asia: 762
Canada: 348
Latin America: 151
Australia/New Zealand: 99
Middle East/Africa: 95

That's right, the Middle East and Africa together have fewer biophysicists than Australia and New Zealand. I suspect that the comparison would be even worse if the Middle East and Africa were considered as separate regions, since Israel and Iran are both relative science powerhouses.

Even as it stands, though, the numbers are stark. Here's a variation on the list above: the number of people in each region per biophysicist:

US:                    1 biophysicist per 54,000 people
Canada:             1 biophysicist per 95,000 people
Australia/NZ:    1 biophysicist per 282,000 people
Europe:             1 biophysicist per 413,000 people
Latin America:  1 biophysicist per 3,900,000 people
Asia:                 1 biophysicist per 5,050,000 people
Africa/ME:       1 biophysicist per 13,700,000 people

When weighted by population, the US has 250 times as many biophysicists as Africa and the Middle East. That's a far larger disparity than the economic one--the richest countries in Africa have about a quarter of the GDP per capita of the US, while the poorest have around a tenth (based on a rough estimate from here and here).
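
(For the curious, here's roughly how the per-capita list falls out of the membership numbers. The regional populations are my own rough circa-2013 estimates, and the society's exact regional boundaries are a guess on my part, so expect the results to differ a little from the list above.)

    members = {"US": 5823, "Europe": 1790, "Asia": 762, "Canada": 348,
               "Latin America": 151, "Australia/NZ": 99,
               "Middle East/Africa": 95}
    population = {"US": 316e6, "Europe": 740e6, "Asia": 3.9e9, "Canada": 34e6,
                  "Latin America": 590e6, "Australia/NZ": 28e6,
                  "Middle East/Africa": 1.3e9}

    # Sort from best-served region to worst-served
    for region in sorted(members, key=lambda r: population[r] / members[r]):
        per_capita = population[region] / members[region]
        print(f"{region:20s} 1 biophysicist per {per_capita:,.0f} people")

    print(13.7e6 / 54e3)  # the headline ratio: ~250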

Now, you may object that I'm conflating "biophysicist" and "member of the Biophysical Society" here. And you'd have a good point; the society makes no effort to survey everyone to make sure they've gotten all the biophysicists, and it's likely that the rate at which working biophysicists pay to join a society is lower in parts of the world where research funds are less available.

This brings up an interesting point. Why wouldn't someone want to be part of an organization like the Biophysical Society? Perhaps because the main point of joining is to gain access to the conferences. There's no professional designation granted by the society, and it's not really something to put on a CV. Sure, you get a subscription to Biophysical Journal, which is probably a motivator for a few people, but not many. No, the main reason to join is that each year the society hosts its annual meeting, the largest gathering of biophysicists in the world, as well as a number of smaller, more focussed conferences.

In principle the organization is international, but the meetings are always held in the US. For researchers in Africa, this means that the society membership and meeting registration are the smallest costs of attending: a couple of hundred dollars isn't much compared to a flight and hotel. Quickly searching United Airlines tells me that a round trip flight from Nairobi to San Francisco, where the next annual meeting is to be held, will run you about $1,800 (and also take about 30 hours each way). That would be small change in the research budget of a large American group, but for researchers at cash-strapped African universities it puts the trip out of reach. And if you can't go to the conferences, there's really no point in becoming a member of the society.

Not being able to attend conferences is important because they are where a large part of scientific networking happens. I've written before about the role of social networks in scientific careers. By being priced out of conferences, African researchers are also being kept out of the social networks that are advancing the careers of their American "colleagues".

Beyond that, conferences are vital for simply keeping up with the field at the level required to contribute to it. Here's an example. This past March, I went to a conference and presented work I had done that we were "submitting next week" to a journal. That work was finally submitted a month later, revised, accepted, and will be published in a couple of months--October, maybe. I'm not under any illusion that anyone in any part of the world is racing to react to this particular work, but it sets a typical timeline: someone who is reliant on published research articles to build off of is about half a year behind someone who went to the conference and saw the work presented there. In some areas this won't matter so much; in others six months is an eternity.

This isn't a problem specific to biophysics. Across disciplines, high profile conferences tend to happen where there are concentrations of high profile researchers, leaving poorer regions on the outside. And obviously underlying this all is a whole host of economic and political systems that the Biophysical Society can hardly be held accountable for fixing. Still, there is one step that they could take. They could hold the occasional meeting in Africa. Or Asia, or Latin America. Anywhere outside the US or Europe. Ostensibly the organization is international, after all. Hold the occasional meeting in Africa and a bunch of researchers (and students!) suddenly have a small bit of access to the professional networks and early results they are typically denied.

Why don't they do this? The cost isn't the biggest concern; I've already mentioned that travel costs would be a small amount of a typical research group's budget. The travel time is probably far more significant--most faculty I've met would far sooner part with a few thousand dollars from their grant than with sixty hours.

The biggest obstacle is apathy. As far as I can tell there is next to no concern among the research communities in the West for those in the developing world. We, they say, are a biophysics group. We are not a development organization, nor is it our job to build a scientific community in countries that don't have the economic base to support one even if they had the talent.

Perhaps. But everyone who works in science should realize that the historical scientific community in Europe and later the US was nurtured by an economy that was deeply exploitative, in a way that the rest of the world has yet to recover from, particularly since in some areas the exploitation never stopped. Extending a hand to researchers in countries on whose backs we built our scientific enterprise is the absolute least we can do.

For the moment, though, it seems like we're not even doing that.

Tuesday 13 August 2013

The "link" Between Autism and Induced Labour Could Be Just Noise (and in any case is being reported poorly)

Again with the autism links. This time, a study out of Duke University (which is unfortunately paywalled) showed a statistical link between autism and induced or augmented childbirth.

There are, of course, a few qualifying comments. The study authors note that their result doesn't imply that induced or augmented childbirth causes autism, even some of the time. There could be other factors at work. As a hypothetical example, if there was an unknown foetal developmental issue that both caused an increased risk of autism and interfered with the signalling that starts labour (which is still not well understood), such an issue would explain the study results--especially since the authors did not separately consider mothers who had labour induced because they went past term and mothers who had labour induced at term or before for other reasons. Most importantly, the authors stress, in such a hypothetical case it would be useless from the point of view of autism prevention and downright harmful from the point of view of general maternal health for parents to refuse a medically recommended induction on the basis of this study.

There's another issue that the authors don't address, and it's worth mentioning: the study could just be picking up noise. The study reports a number of different results for various models, but they are all just barely significant. For example, the odds of an induced-only (as opposed to induced or augmented) baby developing autism are 10% larger than those for an un-induced childbirth--with a confidence interval of 9%. 10+/-9 isn't very precise; the confidence interval is almost as large as the effect. Further, the confidence intervals reported are 95% confidence intervals, meaning there is a 1 in 20 chance of seeing an effect that large by chance.

A 1 in 20 chance is fine for most studies (though, if you think about the hundreds of studies published each year, it means an appalling number are wrong through sheer randomness). It's dangerous, though, in look-back studies of this type. I've written before about the problems here, but it's worth summarizing again. The key point is that if researchers are working on a problem with many possible connections, 1 in 20 suddenly seems quite poor. The question that needs to be asked is, how many other connections with autism were, at the start of the study, as likely as that between autism and induced labour?

Or to phrase it another way, was there some reason to think a priori that induced labour was more likely to be linked to autism than, say, breastfeeding vs formula, or the mother's food choice, or early life air quality, or lack of any number of vitamins, or abundance of any number of vitamins, or certain types of stimulation, etc? If there's not, then 1 in 20 isn't very good. This would be obvious if you actually tested all the possibilities: if you did a study that looked at 40 different possible causes for autism and came back with a positive result for one of them, with a 1 in 20 chance that the result was just due to random fluctuations, that obviously wouldn't mean anything. Not checking the other 39 possibilities doesn't make your statistics better.
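
The arithmetic behind that is short and sobering: with 40 independent tests, each carrying a 1-in-20 chance of a spurious positive, the odds of at least one false "discovery" are

    p_false = 0.05  # chance of a false positive per test
    n_tests = 40    # number of hypotheses examined

    p_at_least_one = 1 - (1 - p_false) ** n_tests
    print(f"{p_at_least_one:.0%}")  # ~87%

Finding one "significant" link among 40 candidates is, in other words, pretty close to the expected outcome of pure noise.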

Of course, we know that look-back studies like this are the most popular type reported in the media, and we've seen that even the study authors were either unaware of or ignored the statistics of such studies. So it's not surprising, though it's still disappointing, to see all of the nuance and caveats go out the window in the way this study has been reported.

“Pregnant women who have procedures to induce or encourage labor might have an increased risk of bearing a child with autism, according to a new study,” reads the opening line of the Wall Street Journal's article. Readers who finish the article will learn that at best this is a gross simplification, but assuming that readers who are both busy and lacking a science background will realize that the opening sentence is at odds with the rest of the article is either a naive estimation of the audience or a deliberate confusion for the sake of an attention-grabbing hook.

The write-up in the NY Daily Mail has this helpful caption below their related picture: "Among autistic boys in a new study, one third of mothers had labor induced or hastened, compared to 29% of boys without autism." This is head-scratching on multiple levels: most obviously, why compare a percent with a fraction? People hate fractions. Second, they're reporting the numbers in the form of the probability the mother had induced labour given that the child has autism. But this is entirely backwards, since what we actually want is the probability a child develops autism given that the mother had induced labour, relative to the probability a child develops autism under any circumstance. The former is a needlessly confusing way of reporting the numbers, given that the latter form is available in the study, and is what's actually discussed in the study's conclusions.
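
For anyone who wants to flip the caption's numbers around properly, Bayes' theorem does the job. Fair warning: the autism base rate below is a number I've assumed for illustration; it is not taken from the study.

    p_autism = 0.015            # assumed base rate among boys (illustrative only)
    p_ind_given_autism = 1 / 3  # from the caption
    p_ind_given_none = 0.29     # from the caption

    # P(induced), averaged over both groups
    p_induced = (p_ind_given_autism * p_autism
                 + p_ind_given_none * (1 - p_autism))
    # Bayes' theorem: P(autism | induced)
    p_autism_given_induced = p_ind_given_autism * p_autism / p_induced

    print(f"relative risk: {p_autism_given_induced / p_autism:.2f}")  # ~1.15

That's the form a parent could actually use--and notice how modest the number looks once it's pointed the right way around.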

(As an aside, the NY Daily Mail also wins the look-back study reporting award, with the following "Related" links scattered throughout their article: "STUDY LINKS IVF TO SMALL RISK OF MENTAL DISABILITY"; "ANOREXIC GIRLS ALSO HAVE AUTISTIC TRAITS: STUDY"; "HIGH LEVELS OF AIR POLLUTION LINKED TO AUTISM RISK".)

We also have the BBC News report. I should mention here that the BBC News story was the most responsible of the ones I looked at in stressing the limitations of the study and the importance of consulting with your doctor about delivery decisions. In it, though, we learn that "Children whose mothers needed drugs to start giving birth are slightly more likely to have autism, US researchers say." Beyond the caveat noted above, that this actually could be just noise, there's an additional, subtle problem. Suppose the statement itself is true. Then is the statement, "If you have an induced childbirth, your child is slightly more likely to have autism," true? The answer is no.

Another example is helpful here. Suppose you and your spouse both have dark hair. Suppose also that your parents both had dark hair, and their parents, all dark hair stretching back to the dawn of recorded history. Suppose this is true of your spouse as well. Now let's imagine that the two of you move to Sweden together for work, and while you're there you have your first child. As you're being taken to the delivery room, the doctor says to you, "Did you know that 85% of children born in Sweden are blond? Since your child is a child born in Sweden, there's an 85% chance he or she will be blond."

Obviously this is ridiculous; moving to another country doesn't change your DNA, and if neither you nor your spouse is carrying blond genes, your baby isn't going to get them, no matter where you live. So even though it's true that 85% of children born in Sweden are blond1, and it's true that your child will be born in Sweden, it's not true that your child has an 85% chance of being born blond.

The exact same structure carries over: Even if it's true that children whose mothers had induced childbirth are 10% more likely to develop autism, and even if it's true that you had an induced childbirth, it is not necessarily true that your child is 10% more likely to develop autism.

This last point needs to be stressed, because it would be tragic if any mothers or infants were harmed out of a belief that inducing labour increased the chance of the child developing autism.  I will most likely come back to it in another blog post, because the conflation of general odds with individual odds is at the heart of many of the most controversial misunderstandings we face.

In the mean time, while this blog doesn't exactly have the readership of the NY Daily Mail, I'll do what I can to spread what should be the take-home message here: at the moment there is no reason to believe that YOUR child is at a greater risk of autism if YOU have an induced labour. It's a message that would be clearer if news outlets and study authors were more focussed on getting the message right than on getting it out.

1Full disclosure: I have no idea what the actual rate of blond babies is in Sweden. It's just an example.

Friday 9 August 2013

Who owns your genes?

There's been a flurry of activity lately around the topic of genes and ownership. There was the Myriad case before the US Supreme Court earlier this year centring around the issue of whether genes can be patented. (The answer: not really, but sort of). The same case is now being appealed in Australia.

Another aspect of genes and ownership has arisen with the publication of the HeLa genome.

In case you're unfamiliar with the story, Henrietta Lacks was a woman who lived in the US and died of cervical cancer in 1951. Before her death, some of her cells were taken from her, and unlike almost every other set of human cells they kept multiplying, to the point where people are still using them today. They've been instrumental in a hugely wide range of biomedical research.

A few years ago Rebecca Skloot, a science journalist, tracked down the Lacks family, and told their and Henrietta's story in one of the most well-received and read books of 2010, The Immortal Life of Henrietta Lacks. Lacks' story has multiple layers: the cells were taken from her without her knowledge or consent; she was an African-American woman; she was poor. The US has a long and troubled history around medicine, consent, poverty, and race. There's far too much to summarize here, so I would just encourage you to read the book.

Fast forward to this past March, and we have yet another chapter of Henrietta Lacks and consent. A German team sequenced the HeLa genes and published their data. When the Lacks family learned of this, they felt that, yet again, their consent had not been sought before work was done with their family's cells.

Fast forward again to last week, when Francis Collins, the director of the NIH, met with the Lacks family and, after explaining the various ways in which publishing the genome would benefit medical research, obtained their consent for a limited publication of the genetic data. The data will be available, but only by application, to prevent it from being completely public.

It is, obviously, good that Collins was willing to meet with the Lacks family, and that the family was willing to meet with Collins. Ignoring the family's wishes would be unconscionable given the history of denied consent surrounding the HeLa cells. It's worth noting, though, that this is the beginning of a discussion about genes, ownership, and consent, and not the end.

The way the situation was handled has been presented as a precedent, but if so it's an extremely limited one. For one thing it's completely unclear what would have happened had the family not given consent at all, and insisted that the genome data remain private. Even if the NIH had agreed to that, they don't have the force of law, only funding. I don't know the law well enough to say if the family could have launched a lawsuit in the US to stop publication of the data, but they certainly wouldn't have the resources to launch similar challenges in every country with labs in possession of HeLa cells and the capacity to sequence them.

Hopefully tissue samples at stake in the future will be from consenting donors, but this brings up another issue. As Michael Eisen asked, if the donor consented, does their family still have the right to withhold their consent to the genome being published later on the basis that the data affects them as well? Rebecca Skloot seems to think so, judging by the New York Times editorial she wrote about the HeLa genome publication.

The issue of gene ownership isn't going to go away. The capacity to sequence someone's genome is steadily coming down in price, and the data processing power to analyse the sequence (and identify who it came from, even if it's "anonymous") is becoming ever more widely available.

If we're going to address this we need a framework with two qualities: the force of law, and international scope.

Force of law is needed because the cheaper it is to acquire genomic data the less effective large organizations like the NIH will be at setting policy. Someone biohacking in their garage is presumably a lot less concerned with being on an NIH blacklist than a large university lab. The only way to realistically enforce a gene policy in a world where individuals can acquire the means to sequence and process genomes is to give those who feel they've been wronged an avenue for legal redress.

International scope is needed because research ties stretch across borders, and information doesn't even notice them. What good is a guarantee from the Canadian government that an unconsenting individual's genome will be protected if groups in the US or Germany can publish it with impunity?

Beyond these requirements are a whole host of discussions around access and consent. The question "who owns your genes?" may not have one answer. Even if I do have ultimate control over any tissues in my body, what about the genetic information itself? Generally information about my body isn't protected. (Doctors are bound by confidentiality, but if a journalist somehow found out about my heart condition, for example, the law wouldn't stop them from publishing it). And speaking of tissue, every day I shed small amounts of skin and hair everywhere I go, not to mention the chunks I might leave on the floor of a barber shop. Does that abandonment of tissue imply that someone else can pick it up and sequence any DNA they find in it?

There's a lot to discuss, and unfortunately the discussion that has come about from the HeLa genome has so far been somewhat muddled. We already saw that Skloot seemed to be confusing the issue about whether the consent of the Lacks family was needed because Henrietta's consent was never obtained (a situation specific to the HeLa cells) or whether it was needed because HeLa DNA contains information about her descendants as well (a situation that would apply to all genome sequences).

Collins' interview contained an answer that I found even more worrisome:
Why not ban all research on tissues from unconsenting donors?

The goal of medical research is public benefit, to try to make discoveries that are going to help people. And although the use of archived specimens is limiting in certain ways, [those tissues still offer] an incredible trove of material. If you shut off access to them, you would undoubtedly slow research right now, in terms of diseases such as cancer. The trade-off would not justify that extreme position.
Collins seems to be saying that banning research on tissue from unconsenting donors wouldn't be worth it because it would slow research down. But this in and of itself isn't a good argument. If continuing to use tissue from unconsenting donors wasn't any faster than banning it, it wouldn't be an ethical dilemma: banning the research would be the best thing to do from all angles. Collins doesn't show any indication that he's grappling with the balance between collective good and individual harm.

And perhaps he doesn't have to. He is, after all, not the medical world's Ethics Czar; he's the head of the NIH. Someone needs to be talking about this, though; preferably a lot of someones. And if the public doesn't have the framework to do so now, then we in the science communication community have a mission ahead of us.

Monday 5 August 2013

Humans vs Robots, in Space!

I noticed a couple of stories popping up over the weekend.

First, there was this one, about a Japanese robot that is heading to the International Space Station as part of a study to see if talking robots can provide emotional support to astronauts on long missions.

The second story was Chris Hadfield's official retirement, and the angst this prompted about the future of Canada's human spaceflight program, which is set to be overhauled soon.

It seems like there's something about the humanoid shape. After all, there are tons of robots in space; there's a giant one crawling around Mars right now (that just celebrated its first birthday there!). Given that almost any human-less space probe fits the definition of "robot", robots have done by far the bulk of the space exploration to date. But now that we're sending a human-shaped robot into space, well isn't that something?

The angst over human spaceflight is also a little puzzling. After all, there is the aforementioned giant robot crawling around Mars. There have also been high-profile missions to Jupiter and Saturn in the past few years. Looking at the enormous recent progress in miniature robots and the interfaces we might use to control them, and at the nascent rise of private space flight, it's tempting to conclude that we're on the cusp of a golden age of space exploration. The angst is understandable for those people who have dedicated their careers to getting bodily into space, but for the rest of us the future looks exciting.

Perhaps it's a narrative failure. After all, we have all kinds of stories of the intrepid space pilot piloting their intrepid spaceship (more than a few times the stories have even gone so far as to name the spaceship "Intrepid"). Not so much with the autonomous robot probes.

It's worth noting, though, that the intrepid space pilot largely came out of a piece of government propaganda. As Tom Wolfe notes in The Right Stuff, when NASA originally looked for people to send into space, they weren't looking for pilot skills. They knew better than anyone that whoever they sent up needed primarily to be able to keep their cool and occasionally press the right buttons; there was no actual flying involved. The decision to pick astronauts from the test pilot ranks was due to simple expedience: the test pilots had already signed up for dangerous work, had already passed security checks, had already passed fitness checks, etc. Once that decision was made, the government started putting it around that these were the best pilots on earth, and that they were chosen for their supreme piloting skills.

As for the humanoid robots? The main reason to tell stories about them is the technical issue that it's a lot easier on a TV or movie screen to portray a sentient robot by putting makeup on a human actor than by creating something guided by ideas about what functional future robots might actually look like.  Humanoid robots really only make sense in a limited number of situations--like if they have to operate in an environment designed for humans. Floating freely in space, or even crawling, flying, jumping, or swimming around an unknown world, opens up possibilities for form limited only to the imagination. 

So given that astronauts never actually needed mad pilot skills (well, except for landing the Shuttle, but that's a whole different kettle of WHY!?!?!), and robots don't need to look like humans, maybe it's time to build some new narratives. Unless the structure of space, time, and energy is very different from what we now think it is, non-humanoid robotic exploration is the future. I refuse to believe that we as a species might have the imagination to build machines to explore the cosmos, but not the imagination to put them in a compelling narrative.

So bring on the robot stories!

Thursday 1 August 2013

Animals, people, and choices

According to a new study, guys stopped sleeping around and settled down in respectable families once their friends started killing their kids.

Okay, that's not actually what it said. The study, published in Proceedings of the National Academy of Sciences (PNAS), correlated various traits in primates with the emergence of monogamy, to better understand how it might have evolved. The data showed that monogamy correlates better with infanticide than with any other behaviour that had been hypothesized to lead to monogamy. Infanticide is presumed to be an offensive strategy: a male in a given group increases the chance of his own offspring surviving by killing off other males' young. So the authors conclude that monogamy evolved as a defence mechanism against infanticide.

Another study, published in Science, disagrees. Its authors see monogamy arising from females spreading out and becoming more territorial, which reduces the benefit a male gains by moving from female to female, and hence enhances the relative benefit of sticking with a single mate.

One point that both studies make, though, is that their results are for non-human mammals. It's an open question as to whether humans are "naturally" monogamous, or even whether the question is one that can have an answer, given the dominant role that culture plays in human sexual and child-raising arrangements, and given that just about every type of arrangement has been observed in various human cultures through the ages.

Which is why it's curious that many news reports on this story start out with a hook along the lines of, "Why are humans monogamous?" The summary of the research on the Science news site even includes a picture of Will, Kate, and the royal baby at the top of the article, subtly suggesting that perhaps the pomp and pageantry of a British royal wedding all springs from His Royal Highness' instinctual desire to keep the Duke of Somerset from killing wee Prince George[1].

It's an inaccuracy, to be sure, but on the whole it's forgivable; the research does relate, even if it's more tangential than the headlines would imply. (As an aside, it's interesting that the report from the CBC, a government-funded organization that hence isn't as reliant on page-views for revenue, is much more cautious--and accurate--than the reports from profit-driven private media companies.)

The bigger problem is one that plagues almost every story about evolutionary research. We can't seem to get away from the language of choices (and hence morality) when talking about animal behavioural strategies. As an example, CNN summarizes the infanticide-driven pairing by noting that, "a male that lives with a female mate can protect their offspring from other males, that might want to kill these children." The author is trying to be neutral, but it's hard to see phrases like "protect their offspring" and "want to kill these children" without overlaying human notions of choice and morality on them.

To get away from this, it's important to note that evolution in the genetic sense has nothing to do with choice. Evolution acts on behaviours that are hard-coded into genes, because ultimately it's the genes that evolve, not individuals.

As an example, let's consider a pair of identical, early human twins. One of our primordial twins, whom we'll call the Fonz, doesn't let anyone tie him down. He has a girl in every cave, and no idea of how many kids he may have fathered, let alone their names. The other, whom we'll call Ward, is a devoted father, marrying his cave-sweetheart and dedicating himself to raising his little cave-children. Now we could argue about which strategy is more successful: perhaps the Fonz's promiscuous strategy will pay off and he'll have dozens of children that go on to have children of their own. Or perhaps he'll have more kids but, without a father, they won't do well and Ward's six well-raised children will ultimately out-breed them. Since we started with the assumption that they're twins, though, none of this matters. They carry the same genes, so there can't be any selective reproductive success, which is the driver of evolution. As soon as individuals can choose one strategy or another, all bets are off. Evolution does not select between strategies that individuals choose.
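
Here's a toy version of that argument (a minimal sketch in Python; the strategy labels and the offspring numbers are invented for illustration, not taken from either study). When the strategy is written into the genes, the more successful gene takes over the population; when individuals choose a strategy independently of their genes, reproductive success no longer tracks genotype and selection has nothing to act on:

    import random

    def simulate(generations=50, pop_size=1000, heritable=True):
        # Each individual carries one "strategy gene": 'fonz' or 'ward'.
        population = ['fonz'] * (pop_size // 2) + ['ward'] * (pop_size // 2)
        # Invented payoffs: average number of surviving offspring per strategy.
        offspring = {'fonz': 1.0, 'ward': 1.2}
        for _ in range(generations):
            weights = []
            for gene in population:
                # Heritable: the gene dictates the behaviour.
                # Chosen: the behaviour is a coin flip, independent of the gene.
                strategy = gene if heritable else random.choice(['fonz', 'ward'])
                weights.append(offspring[strategy])
            # Next generation: offspring inherit their parent's gene.
            population = random.choices(population, weights=weights, k=pop_size)
        return population.count('ward') / pop_size

    print("strategy in the genes: ", simulate(heritable=True))
    print("strategy freely chosen:", simulate(heritable=False))

In the first run the 'ward' fraction climbs toward 1.0; in the second it just drifts around 0.5, because the genes have become invisible to selection--which is the whole point.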

This is, of course, getting into deep philosophical waters. How much choice do any of us actually have, and how much is determined by genetics? It's not a debate I really want to get into here. The point is that evolution only acts on behaviours that are determined by genes; in that sense, it's incompatible with choice. If we want the public to understand evolutionary science, then, we need to come up with a new vocabulary. One that doesn't rely on words that imply thought-out strategies, or foresight, or choice. How do we do that? I don't know, but if we take science communication seriously it's worth the effort to try.

[1] I am in no way suggesting that the Duke of Somerset actually wants to kill wee Prince George. I'm sure the Duke has no baby-killing tendencies whatsoever and is a very nice person, even though he has a huge amount of money and power for no better reason than that his great-great-...-great-great-grandmother was the only one of Henry VIII's wives to die before he got tired of her.

Saturday 27 July 2013

The power of narratives

We've all heard about how science fiction can inspire real science. If you frequent the type of nerdy, techy blogs I do, you'll see with some regularity "6 discoveries that science fiction thought of first", or some variant along those lines. We get it: science fiction gives us context for understanding science and technology and how it fits into our society.

The assumption, though, is that this is done more or less on purpose. That is, the science fiction writers are trying to extrapolate existing technology and imagining how it will affect society. They're doing thorough research into where technology might be headed; they're talking to engineers and scientists; they're making the best predictions they can. While this is true of a number of science fiction writers, many others are far more concerned with the net effects of being able to do a particular thing than with the science that would go into doing it. This latter type of story can still provide powerful narratives for shaping our understanding of and reactions to new science.

An interesting example of this happened last week. In this article in Science, researchers reported being able to implant a memory into a mouse. How they did it is super cool, but I'm not going to go into it here; it's covered in the many articles I'm about to link to.

If, upon hearing that researchers could implant a memory, you thought of the 2010 movie Inception, you're not alone. A lot of other people did too. In fact, one of the authors of the study referred to the movie in talking to the press, and another used the term "incept" to refer to the memory implantation process.

And here's what I find fascinating about this: the movie had nothing to do with the actual science that went into the mouse study. One of the things that I find by turns brilliant and infuriating about Christopher Nolan (who wrote and directed Inception) is that he has a keen sense for paring away details that aren't relevant to the story he's telling. In the case of Inception, the important points are that people can enter other people's dreams, and in doing so extract information or, rarely, implant ("incept") new ideas. Everything else is swept away; we don't find out anything about how people share dreams or where that technology comes from (other than that the military developed it), and we don't find out any psychological reason why moving an inanimate object in someone's dream would cause an idea to germinate and flower in their mind--we're simply told it is so. Nolan gives us only enough details to move the plot forward; the last thing he's trying to do is teach the audience some science.

The study on mice doesn't even use dreams. Nor does it plant the seed of an idea in the mice and allow it to grow. In fact, it's not even really about implanting ideas, but rather a particular remembered fear reaction. The only thing it has in common with the movie is that they both involve someone deliberately changing someone else's conception of reality.

Of course, that in and of itself is scary. The simple idea that there might come a time when our memories and ideas could be deliberately manipulated by someone else has terrifying implications for our sense of self. And that is why we turn to narratives to help us understand what's going on. Humans are story-oriented. A significant part of what makes up a culture, and what distinguishes it from other cultures, is a collection of shared stories. We moderns may have moved away from myths as our explanations for how the world works, but that doesn't mean we don't have a need to frame those explanations in terms of stories. And Inception has given us a powerful, shared story about altering memories.

Put a bunch of science geeks (a category I willingly admit to belonging in) together and ask them about movies, and you'll find out that we love to nitpick. My undergraduate physics society once hosted a showing of The Core precisely because the science in it was so terrible that it gave us hours of enjoyable discussion about how bad it was. Sometimes, though, the details aren't the most important part; sometimes narratives get repurposed in ways the authors couldn't have imagined. And sometimes we in science should remember this, assuming of course that what we remember is really up to us.

Friday 26 July 2013

A particularly bad headline

From Gizmodo: Scientists Just Discovered a New Force That's Stronger Than Gravity

Okay, so many things wrong with this headline. To start with, gravity is the weakest of the four fundamental forces, so "stronger than gravity" applies to just about everything. But that's not the biggest problem here. The headline, by pairing "new force" with "gravity", seems to suggest that someone has discovered a new fundamental force. This isn't what happened.

Quick overview: there are four fundamental forces. In order of increasing strength they are: Gravity, the Weak Nuclear Force, Electromagnetism, and the Strong Nuclear Force. Gravity is weak, but because it only ever adds (nothing has "negative" mass), it ends up being powerful on large scales, like planets and stars. The Weak Nuclear Force is a little obscure, but as the name suggests it is involved in nuclear processes like radioactive decay. The Strong Nuclear Force is what holds nuclei together, and it's what makes nuclear weapons so powerful.
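
For a sense of scale, here's a quick back-of-the-envelope comparison (a Python sketch using standard values of the physical constants) of the electric and gravitational forces between two protons. Both forces fall off as the square of the distance, so the separation cancels out of the ratio:

    # Electric vs gravitational force between two protons.
    # Both fall off as 1/r^2, so the separation cancels out of the ratio.
    G = 6.674e-11     # gravitational constant, N m^2 / kg^2
    k_e = 8.988e9     # Coulomb constant, N m^2 / C^2
    m_p = 1.673e-27   # proton mass, kg
    q = 1.602e-19     # elementary charge, C

    ratio = (k_e * q**2) / (G * m_p**2)
    print(f"electromagnetism / gravity ~ {ratio:.1e}")  # ~1.2e36

Electromagnetism wins by a factor of about 10^36; gravity only dominates at planetary scales because mass, unlike charge, never cancels out.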

Everything else is electromagnetism. Friction, contact forces, air pressure, water pressure, everything other than gravity that you experience can ultimately be traced back to electromagnetism in its various forms.

The "New Force" in this article is electromagnetism. The researchers looked at the effect of blackbody radiation (which is a type of electromagnetic radiation) on hydrogen atoms in an astrophysical context. (Here's the article; it's behind a paywall though.) They found that it could create an attractive force between the atoms which hadn't been appreciated before. But keep in mind that what they have found is a new way in which the electromagnetic force is expressed, not a "New Force" in the fundamental force sense.

On to the "Stronger Than Gravity" part. The researchers noted that this blackbody force they discovered could be more important than gravity in the context of star formation. In a nebula, there's a lot of hydrogen, but it's really spread out--it's waaay less dense than air. (Star Trek has led many people astray by depicting nebulae as clouds that ships can hide in. Look at a picture of a nebula. In most of them you can see stars through them, even though they're light-years across. Anything with visibility measured in trillions of kilometres is a bad hiding place.) Somehow, over time the hydrogen in a nebula manages to coalesce into a star, with an enormous density. Exactly how this happens is a subject of ongoing research, and this new blackbody force could play an important role in that. Because at the densities the nebula starts at, the gravitational forces involved are super, super, super small. So even though this blackbody force is super, super small, it could still be important.
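
To put some numbers on "super, super, super small", here's an order-of-magnitude sketch; the density is an assumed, illustrative figure of around 10^4 hydrogen atoms per cubic centimetre, typical of a dense cloud:

    # Gravitational pull between two neighbouring hydrogen atoms in a
    # molecular cloud. The density is an assumed, illustrative figure.
    G = 6.674e-11    # gravitational constant, N m^2 / kg^2
    m_H = 1.67e-27   # mass of a hydrogen atom, kg
    n = 1e10         # assumed density: 1e4 atoms/cm^3 = 1e10 atoms/m^3

    r = n ** (-1/3)          # typical neighbour separation, ~0.5 mm
    F = G * m_H**2 / r**2    # Newton's law of gravitation
    print(f"separation ~ {r:.1e} m, force ~ {F:.1e} N")  # ~9e-58 N

Against a pull of order 10^-57 newtons, even a force that sounds laughably feeble by everyday standards can be a serious competitor.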

It is not, though, "Stronger Than Gravity" in an every-day sense. You will not be levitated by blackbody forces. They will not explain dark matter or the expansion of the universe.

It's starting to sound like I might be down on this research, but I'm really not. It's a novel idea that is important to an area of ongoing research in astrophysics. That's basically the goal when you write a journal article. The headline writers at Gizmodo simply did a horrible job summarizing it. Good research deserves a better headline than this.

Thursday 25 July 2013

Rosalind Franklin, Social Science

Today's Google doodle honours Rosalind Franklin, who was born 93 years ago. To me it's one of the better doodle subjects, as it draws attention to the way women's contributions often get overlooked. Franklin is a famous case; there are many others we don't know about.

So, a quick summary: Franklin was working as a research assistant at King's College London. While there she applied x-ray crystallography to DNA in an effort to deduce its structure. X-ray crystallography isn't like taking a picture; the crystals have to be prepared carefully, and the pattern that comes out requires a fair bit of interpretation and deduction to figure out the actual structure of the molecule, particularly in an era without easy access to computers. Franklin produced the best x-ray data on DNA in the world and was using it to build models of DNA. Watson and Crick were also building their own models at Cambridge, using Franklin's data. But the interaction was one-way; Maurice Wilkins, another researcher at King's College, was showing Franklin's data to Watson and Crick (without her permission), while Franklin was excluded from the conversations her male colleagues were having and so was building her DNA model largely on her own. Watson and Crick published their famous paper proposing the double helix, which minimized Franklin's contributions, and later Watson wrote a book that minimized them further (a particularly low blow since, by that point, Franklin had died).

It's an important story to tell, not least because it explodes the myth of an abstract, impersonal something overseeing a meritocratic process in which the best work always rises to the top; I'm going to call that something SCIENCE. It's a myth that underlies the "scientific method" so often taught in primary schools, which presents science as an abstract cycle that can be done anywhere, and notably leaves out such steps as "placing your work in the context of the field" and "convincing other people it's worth their time to read what you've done." It's a myth that underlies the movies and books and comics in which the (usually mad) scientist works in seclusion for years before unveiling their creation to the world, which looks on in awe--while conveniently sweeping away any details about how one gets "the world" to pay attention long enough to look on in awe. It's a myth that many scientists have helped to foster, by extolling the supposed ideal of pure research, unhindered by such mundane realities as "politics", in which the invisible hand of the "marketplace of ideas" selects the most worthy contributions.

This is, of course, not the way science has ever worked. Not least because science isn't an impersonal force; it's a collection of people. Papers get reviewed by people, data and ideas get shared by people, hiring and tenure decisions get made by people. And those people have biases, likes and dislikes, and ideas about what a good scientist looks like. Science, in short, is not SCIENCE.

And while we might like to promote SCIENCE as an ideal to strive for, the reality is that science is simply too big to work like that. Here's an example of what I mean: on arxiv.org, which is a repository for physics research articles, there are 63 articles listed under "condensed matter" for July 24, 2013. So for one subfield in physics, on one day, scientists produced about 300,000 words of research articles. That's about the length of three typical novels (or one George R. R. Martin novel). Someone in the field who wants to keep up with current research, then, has three novels a day to read. Three novels of physics, which, in my own experience, take more time to get through than actual novels. Add to that the relevant articles published in the literature of chemistry or other fields, and the older articles the new ones refer to that are necessary to fully understand them, and our hypothetical condensed-matter-physics researcher has, by a rough estimate, half a million words to read every day just to keep up.
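
The arithmetic behind that estimate is nothing fancy (a sketch; the per-article and per-novel word counts are my own round-number assumptions, not measured figures):

    # Back-of-the-envelope reading load for one physics subfield.
    articles_per_day = 63      # cond-mat listings on arxiv.org, July 24, 2013
    words_per_article = 5000   # assumed typical article length
    words_per_novel = 100000   # assumed typical novel length

    daily_words = articles_per_day * words_per_article
    print(daily_words)                    # 315,000 words per day
    print(daily_words / words_per_novel)  # ~3 novels' worth, every day

Add the adjacent fields and the back-references, and you land near the half-million-word figure.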

Of course no-one can read that much. So scientists, like professionals in every other field, use a collection of heuristics and skimming techniques to sift through the mountain of potentially relevant research and pull out things that actually interest them. And these tools are very dependent on social networks and name recognition. See a big name in your field as the author? Probably worth going through. Oh, that person gave a talk at that conference that was quite good; maybe her paper is worth reading. If you're new to the field, you likely have a supervisor who sends you articles to read; your supervisor's choice of articles is influenced by their professional network.

I'm using journal articles here as a proxy for overall research. Less formal avenues are even more dependent on social networks; a lot of science happens over beer, in settings that blur the lines between friend and colleague. The point is that a) any research you do is only valuable if other people see it and use it in their own research, and b) there isn't a good way of navigating the enormous amount of scientific research out there without relying on professional and social networks. So talking to people, making contacts, and participating in "politics" (a term that scientists seem to use solely to describe social interactions they dislike) is and always will be important. It is also, unfortunately, a major mechanism by which implicit and explicit biases at the individual level are magnified into entrenched systemic bias.

This brings us back to why Rosalind Franklin's story is important to tell. If we keep insisting that SCIENCE is the ideal, we never address the biases running wild in science; they get written off as a by-product of non-ideal aspects that shouldn't be there anyway. If we acknowledge that all science is social, we can look at how to address systemic bias by addressing the individual biases that shape the social network of science. And then maybe we can keep future Rosalind Franklins from being marginalized and ignored while making world-changing discoveries.

Tuesday 23 July 2013

Headlines in (Social) Science: Gender, Politics, and Unreviewed Findings

Often when I see a science story in the news the first thing I do is look up the related research article. That way I can see what was actually done, and evaluate, if not the detailed methods, at least the overall scientific logic of the article. If it's particularly new or controversial I sometimes bookmark the article so I can come back later and see what else has been published in response.

Of course, this requires that there be an article to look at. Since I'm at a university, paywalls aren't a problem, but even my university library subscription doesn't get me access to articles that haven't been published yet. Or even accepted. Which brings us to the story at the centre of today's post: a study funded by the Economic and Social Research Council, a British government funding agency, authored primarily by James Curran, director of the Goldsmiths Leverhulme Media Research Centre at the University of London.

According to various headlines, this study showed that:

Across the world, women know less about politics than men 

Women know less about politics than men, study finds (that goes for Canada, too) 

Women, especially in Canada, are more ignorant of politics and current affairs than men, says UK research

Study: Women Know Less About Politics Than Men

Did the study actually show this? To answer that we need to take a close look at the details of the research. As part of a CBC radio interview here, University of Calgary prof Melanee Thomas points out that there are a lot of ways these types of studies can be misleading. She brings up the important point that there can be biases in what is defined as political knowledge. Often "politics" is restricted in definition to so-called hard news: trade, military issues and conflicts, economic and budget issues, etc. While these are certainly important, the list often leaves out other issues that are inarguably political: health, education, and all manner of issues of local governance. It's hard to argue that health policy isn't a "political" issue, but it is often labelled "soft" news and stuck with the other "human interest" stories. (As an aside, "human interest" is an interesting term: exactly why should I care about anything that's not of interest to humans?)

The point is that the study may be biasing its result by asking questions in the domains that men, on average, know more about, and ignoring domains that are, by any objective definition, equally political and which women know more about.

Does this study fall into those pitfalls? We don't know, and neither does Melanee Thomas, because the study hasn't been released yet. It's not listed on James Curran's research website, and the ESRC site lists the work as "submitted". As anyone in academia knows, a lot can happen between "submitted" and "published".

There are a couple of things to address here. First, we have the headline magnification I've talked about before. Curran holds a press conference in which he summarizes his work; a journalist turns that summary into an article; an editor turns that article into a headline. If other cases of headline magnification are any guide, each step may have pushed the headlines further from what was actually shown.

But, and this is the second point, we don't know what was actually shown, because the research is unpublished. And this is where James Curran has, in my opinion, acted reprehensibly. He has used his position as an expert to promote a conclusion without allowing the underlying work to be scrutinized. It will, obviously, be scrutinized eventually, but by the time that happens the original news stories will be months in the past--an eternity in the online news world. No one is going to prominently display a story that adds context and corrections to a relatively minor headline from a few months ago. So Prof Curran is putting out a conclusion that can't be verified, but that adds to a narrative that paints women as intrinsically less able than men in certain key areas.

I don't think that research should be subordinated to social mores or political considerations. What I'm arguing is that researchers have a responsibility to ensure, to the best of their ability, that their research is reported correctly and in context. Especially when it has the potential to further harm already marginalized groups. We've seen the harm that can be done with headlines such as "Vaccines cause Autism." The harm from "Women know less than men about politics" may not be as obvious, but that doesn't mean it's not there. For that reason, Curran had a responsibility to give as nuanced and context-filled a report as he could, and to allow other researchers the chance to dispute his findings and provide their own insights. He did none of that.

Monday 22 July 2013

I am slowly doing science (1, 2, 3, 4, 5, 6, drop).

Last week, the pitch dropped. The tar pitch, that is. I won't go into all the details of what happened, but here's a summary: A long time ago some people got in an argument about whether or not tar pitch was an extremely slow-moving liquid (as opposed to a solid). To resolve this argument, they stuck some tar pitch in a funnel, which they stuck in a jar which went in another jar which went in a cupboard, and the green grass grew all around. The idea was that if the pitch ever dropped through the funnel, it would be proof that the pitch was liquid. If we reached the end of time and the pitch hadn't dropped, it was probably solid (turns out proving non-existence is tough...).

So last week, after various events had kept such a drop from ever being recorded (they happen once every ten years or so), a pitch drop was caught on video, 70 years after the experiment was started. Yay science!

The reason I bring this up is that there's a message in all of this that hasn't made any of the news reports about the pitch drop: Science sometimes takes time. Science is sometimes boring, and tedious. Science is sometimes boring and tedious even for scientists. If that seems like a strange message coming from someone who spends at least some of their time as a science communicator, well, it is. But it's also an important one.

First off, sometimes is a key word here. Science can be, and often is, exciting. It can blow your mind and change your view of the world in an instant. It can be indescribably cool. And sharing those cool, mind-blowing moments is an important part of inspiring both future scientists and the public at large to learn more about the world around them and what humanity can do with it.

But if that's all we ever focus on, we risk sending the message that doing science is about having a big idea, which is so obviously right that everyone goes, "Wow! You're obviously right," and sees the world in a new way. These moments, though, are few and far between. Far more often, someone proposes an idea that is partially right, and it gets bounced around, and revised, and extended. And, in the most crucial step, it gets tested by experiments. Experiments that can take time, experiments whose results are inconclusive or difficult to interpret, experiments that lead to more questions than answers.

The development of silicon computer chips is a good example of this. Electronic band structure theory, the ideas that eventually allowed people to understand the electronic structure of silicon, started development in the early 1930s. Experimentally testing this, though, was a problem; experiments on silicon contradicted each other and were generally inconclusive. The problem, it later turned out, is that silicon is both exquisitely sensitive to the presence of impurities (which is why it's so useful) and extremely difficult to purify. It took a decades-long effort of progressively refining the techniques for manufacturing pure silicon before its properties could actually be probed. This went hand in hand with refinement of band structure theory. Eventually, the structure was known well enough that the first silicon transistor could be created, which would lead to the computer revolution--decades later.

Even after the silicon transistor was created in 1954, it took scientists and engineers decades to get to the desktop computer and the internet. And much of the development was incremental rather than driven by revolutionary flashes of insight. Each generation of hardware allowed engineers to refine their techniques and build a better next generation, which is why the CPU in this computer consists of transistors of largely the same design as the 1954 original, except millions of times smaller and faster.
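
As a rough illustration of how incremental improvement compounds (Moore's law strictly describes transistor counts and dates from 1965, so treat the steady two-year doubling period here as an assumption, not history):

    # Steady doubling compounds into "millions of times" surprisingly fast.
    years = 2013 - 1954    # from the first silicon transistor to this post
    doublings = years / 2  # assumed doubling period of two years
    print(f"improvement factor ~ {2 ** doublings:.1e}")  # ~7.6e8

No single two-year doubling looks revolutionary; sixty-ish years of them multiply out to a factor of nearly a billion.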

Ignoring this type of incremental (but no less world-changing) science leads to the kind of big-idea, insight-driven reporting so brilliantly excoriated in this extended piece by Boris Kachka in New York Magazine, written in the wake of the Jonah Lehrer scandal. It leads to doubt when climate change science isn't as clear-cut and straightforward as people have come to expect real science to be. And it leads to young potential scientists doubting their ability to be scientists when their ideas aren't right, or are incomplete.

So let's keep telling the mind-blowing stories. But let's also remember to occasionally tell the stories of ideas that weren't quite right, experiments that were confusing, and pitch that took a decade to drop.