Image Credit: NASA/ESA/ASU/J. Hester
The subject of the press conference was "Not-So-Standard Candles." Before 10 am this morning, I didn't know what a standard candle was, let alone a non-standard one. (Ok, this isn't strictly true: Sarah tried to explain to me what a standard candle was a few days ago, but I didn't listen very closely. Sorry, Sarah!) But what a standard candle is isn't as important as the purpose it serves. It's something astronomers use to measure how far away things are: a cosmic yardstick.
If astronomers know how intrinsically bright something is, they can compare that with how bright it appears and work out how far away it is. Astronomers use "standard candles" - classes of stars such as Cepheids - to calibrate their instruments. They use Cepheids not because those stars shine with a steady, unvarying intensity, but because their intensity varies in a predictable way: the rhythm of a Cepheid's pulsing tells you its true brightness.
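To make the yardstick idea concrete, here's a rough back-of-the-envelope sketch in Python (my own illustration with made-up round numbers; nothing here came from the press conference):

```python
import math

# Sketch of the standard-candle idea: if you know a candle's true brightness
# (luminosity L) and measure how bright it appears (flux F), the inverse-square
# law F = L / (4*pi*d^2) tells you the distance d.

def distance_from_candle(luminosity_w, observed_flux_w_per_m2):
    """Distance in meters implied by a candle of known luminosity."""
    return math.sqrt(luminosity_w / (4 * math.pi * observed_flux_w_per_m2))

L_SUN = 3.828e26                    # solar luminosity, in watts
luminosity = 1e4 * L_SUN            # a candle ~10,000 Suns (roughly Cepheid territory)
flux = 1e-12                        # hypothetical measured flux, in W/m^2

d_meters = distance_from_candle(luminosity, flux)
d_lightyears = d_meters / 9.461e15  # meters per light-year
print(f"Implied distance: about {d_lightyears:,.0f} light-years")
```

If the candle isn't as "standard" as you assumed (say, its true brightness has drifted), the distance you compute drifts right along with it.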
Or, at least, that's what everyone thought. At this morning's press conference,
- Massimo Marengo of Iowa State described a star, identified as a standard candle, that is losing mass due to a strong stellar wind. Basically, it's a standard candle that is shrinking over time.
- Scott Engle of Villanova described a standard candle, identified in 1926 as a Cepheid, that has essentially stopped being one: over the past 80 years, it has stopped varying in that useful, predictable way.
- Marco Tavani of the University of Rome and Colleen Wilson-Hodge of NASA's Marshall Space Flight Center spoke about the way the Crab Nebula - a standard candle so commonly used that a unit of measurement, the milli-Crab, is named after it - fluctuates in not-so-standard ways on both short and long timescales. It has short-term gamma-ray flares, and since 2008 its intensity has dropped by 7%, which, as Wilson-Hodge put it in nicely metaphoric terms, means "X-ray astronomy's 'meter stick' is shorter by 7 centimeters." For more, check out the NASA press release. (They also have a really slick video.) A rough illustration of the milli-Crab arithmetic follows below.
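Since a milli-Crab is just a thousandth of the Crab's brightness, a little arithmetic shows why a dimming Crab matters. Again, this is my own illustrative sketch with invented numbers, not anything from the briefing:

```python
# Illustrative only: how a dimming reference skews a flux quoted in Crab units.
crab_flux_2008 = 1.00                         # the Crab's flux, normalized to itself in 2008
crab_flux_now = crab_flux_2008 * (1 - 0.07)   # the reported ~7% decline

source_flux = 0.250                           # some other source, in 2008 Crab units (250 mCrab)

# Quoting that same source against the *current* Crab changes the number,
# even though the source itself hasn't changed at all:
print(f"Quoted against the 2008 Crab: {source_flux / crab_flux_2008 * 1000:.0f} mCrab")
print(f"Quoted against today's Crab:  {source_flux / crab_flux_now * 1000:.0f} mCrab")
```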
What's interesting about this morning's announcement is that astronomers found these flickering candles while they were trying to calibrate and use their standard instruments. They kept getting "weird" results, and at first they thought, "Uh oh, we're doing something wrong here." So they explored those "weird" results, and now they're finding interesting things.
This is something you've probably heard before, but it's something that you can never hear too many times: Science is never finished or definitive. And that's ok. That's not some problem with science or the scientific process. It's actually the strength of the scientific process.
There have been a lot of interesting articles recently that examine the way science actually happens - see Jonah Lehrer's New Yorker piece from a few weeks ago and Benedict Carey's piece from the New York Times yesterday on the use of statistics. I love this type of analysis and self-reflexiveness, and I hope we'll see more of it. It's important to realize that examinations and explorations of the scientific process aren't attacks on science. They are just the opposite.