Q:Thanks for running this blog, it makes me so certain I've chosen the right life path in becoming a scientist.
No, thank YOU!
The wonderful thing about becoming a scientist is that you don’t have to become a scientist to become a scientist.
Q:Hi Joe!! Right now, I have a really horrible Calculus teacher, and I was wondering if you knew of any websites that can help me learn everything he didn't teach us before the midterm!! Thanks!
Eesh, I haven’t thought much about calculus since calculus class. Sorry your teacher can’t derive their way out of a paper bag.
Khan Academy, obvs, but I also recommend everyone check out Open Culture’s list of free online courses (scroll down for the math courses). It puts a cornucopia of learning at your fingertips. Bookmark that page. It’s glorious.
Q:Hi Joe! Love your blog, and I wanted to ask: What year is it? Not in the Gregorian calendar but what actual scientific year for the earth is it? And if it's too hard to calculate, do we have an estimate? Thanks!
Kinda depends on where we set year zero, eh?
My first inclination was to answer this in relation to the Big Bang, calculating today’s date based on the age of the universe. When we average together the results of all the different scientific experiments that have sought to calculate that number, we get 13.798 ± 0.037 billion years, or an uncertainty of 37 million years. That’s less than 0.3% uncertainty, but still pretty fuzzy.
But wait! The idea of a “year” is based on the Earth’s orbit around the sun (and scientists have many ways of defining a year, as it turns out), so you can’t have “years” without Earth. I think Earth’s age is a better starting point.
Based on radiometric dating of ancient meteorites and other really old rocks, scientists peg Earth’s age at 4.54 ± 0.05 billion years, an uncertainty of 50 million years. Sheesh, 1% error? Are we sure about anything?
That means it’s somewhere between year 4,490,000,000 APF* and 4,590,000,000 APF. Kind of a broad estimate, unfortunately, but it means that next time someone tells you to turn something in or finish a project at work by a certain date, you can just stare at them for a few seconds and say “But we don’t even know what YEAR it is, man…”** and just walk away.
* “APF” stands for “After Planet Formation” and is an abbreviation I literally just now made up so it should not be deemed scientific, although I AM a scientist, so maybe just say it with conviction and everyone will believe you.
** I recommend using your best Spicoli impression while saying this.
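If you want to check the arithmetic yourself, here’s a quick back-of-the-envelope sketch in Python (the figures are just the ones quoted above; nothing here is an official calculation):

```python
# Back-of-the-envelope check of the numbers above.

universe_age = 13.798e9   # years, averaged across measurements
universe_unc = 0.037e9    # +/- uncertainty

earth_age = 4.54e9        # years, from radiometric dating
earth_unc = 0.05e9        # +/- uncertainty

# Relative uncertainty, as a percentage of each age
print(f"Universe: {100 * universe_unc / universe_age:.2f}% uncertainty")  # ~0.27%
print(f"Earth:    {100 * earth_unc / earth_age:.2f}% uncertainty")        # ~1.10%

# The possible range of the current year "APF" (After Planet Formation)
print(f"Year APF: between {earth_age - earth_unc:,.0f} and {earth_age + earth_unc:,.0f}")
```

Run it and you get the same range as above: somewhere between year 4,490,000,000 and 4,590,000,000 APF.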
Q:Hello. Since you're the only science tumblr I follow I thought I would ask you this question. If an interracial couple were to marry, and have children, and their interracial children had interracial children, and so on, how many generations would it take before either the maternal or paternal ethnicity would be completely eliminated? (i.e. if it was a black and white couple and their mixed child married an Asian, and their mixed child married a Hispanic, and so on.)
Hi there! Thanks for your question. Unforch, this question isn’t really answerable.
Ethnicity and race are social constructs, not useful genetic traits that we can (or should) use to differentiate people. Ethnicity and race can’t “dilute” out (in a genetic sense), because you can’t point to a genome and say “that’s the Hispanic gene” or “There’s the sequence that makes you Asian.” Yeah, we can point to genes that influence skin color or facial features, but that’s not race. It’s biology.
That doesn’t mean that we can’t track genetic differences based on geography and its associated populations, though. We can, and we do. For instance, if we compared the genome sequences of indigenous North, Central and South American populations to, say, Asian and European genome sequences, we would see that the original Americans are more closely related to Asian populations. This matches up with geological studies suggesting that a land bridge once connected Siberia and Alaska, and it allows us to make hypotheses about human migration patterns across the Earth (not all of those migrations have been voluntary, mind you).
We can, and have, done the same analysis by comparing modern and ancient samples from place X with modern and ancient African DNA, which is how we know that the first members of our species left Eastern Africa about 70,000 years ago to settle the four corners of the Earth (which has no actual corners, of course).
However, like quick-drying cement, this analysis gets really hard, really fast (insert your own dirtier joke there if you like). Genetic fingerprints get jumbled thanks to the huge amount of genetic crossover that happens as part of our meiotic sexytime, and because humans have interbred … a lot. Not in a gross (and genetically dangerous) “banjo player in Deliverance” way, but in a “we’re all related” way. We only have to go back 2,000-4,000 years before we find a person who is a common ancestor of every single human alive on Earth, and, for Europeans at least, anyone who was alive and had children 1,000 years ago is an ancestor of every person of European descent alive today.
So it only takes a few dozen generations before analysis of our crossed-over, interbred nuclear genomes gets so messy that we’re tracing complex statistics instead of neat and tidy family trees. So to make it easier, instead of nuclear genomes, we often compare the tiny, circular genomes that persist within our mitochondria.
You’ll recall from biology class (you were paying attention, right?!) that our mitochondria used to be free-living bacteria, complete with circular, prokaryotic genomes. While most of that ancient genome has disappeared (or migrated to our own nuclear genome), our cellular energy factories still hold a circle of DNA that gets passed down to baby mitochondria when a cell divides and when a mommy and daddy lie down (or stand up, or whatever page of the Kama Sutra they’re on) and do Grown Up Stuff™. What’s weird is that (probably because eggs are big and sperm are small) every one of your mitochondria came from your mom, not your dad.
By comparing mitochondrial genomes from the past with mitochondrial genomes from around the world today, we are fairly certain that one single female of the Homo sapiens crew, living in Africa about 100,000-200,000 years ago, is the ancestor of every living human being today. We call her Mitochondrial Eve. She wasn’t the only human female alive then, and she wasn’t the only human with mitochondria. She’s just the one whose kids ended up covering the Earth.
Yeah, people whose recent ancestors come from South Asia look different from people whose recent ancestors come from Sweden. But that’s just human genetic variation, the same way that I have blonde hair and my friends Jamie and Eric are orange-haired gingers.
People have grouped together (and often excluded other groups) throughout history for a variety of reasons, some of them good, and many of them unthinkably horrible. Because of this, our ancestors often bred with those close to them in geography as well as culture, reinforcing bits of human genetic variation in traits like skin color and facial features. We invented “race”. Evolution just made different kinds of people.
All of this is a long way of saying that while your original question doesn’t have an answer, studying genetic differences based on geography and culture is still important to science. Not because it shows us how we are different, but because it highlights our human connections, and reminds us of our shared experience and common origin in a world that could always use a bit more of that kind of thinking.
Q:So I see your post about Evolution with NDT. But Joe, you have to understand, as the devil's advocate right now (being me), how do you explain the semantics of this argument? If it is fact, why not call it so? Gravity isn't a theory. It is a law because it is observable. The Law of Gravity. The laws of thermodynamics. These aren't theories, they are postulates. If the scientific community is so forthright about gravity, why can't they accept evolution as fact, as with these other observable laws?
(FYI, we’re talking about this post)
Thanks for being the devil’s advocate. Nobody ever stands up for that guy!
You ask an important question about the difference between a scientific theory, a scientific fact, and a scientific law, and in doing so you may have inadvertently caught a mistake in Cosmos. We’ll get to that, but first, let’s untangle these confusing terms.
A scientific theory begins life as a hypothesis. And a hypothesis is born when an observation comes together with a possible explanation in the womb of the mind. That hypothesis is fed further observations, and if all remains correct, one day it grows up into a theory. The more a theory can explain, the stronger it is. It can be modified or proven wrong by future observations. What is special about a theory is that it ultimately allows us to predict what will happen and also explain why it is happening.
A scientific law is fairly similar to a theory, except that it doesn’t explain the why. Let’s take the Law of Gravity as an example. It has been incredibly well supported by observation, and it has been revised over time to adapt to new observations (like spacetime), but nothing about the Law of Gravity explains why gravity does its gravitational things. (Incidentally, we usually capitalize laws because it makes them look more important.)
A scientific fact, the way I interpret it (its philosophical definition has been debated many times), is an observation that no one has been able to disprove and that we expect two people would observe in exactly the same way regardless of when or where or how they observed it. For instance, it is a scientific fact that the jellyfish green fluorescent protein emits light at a wavelength of 509 nm when it is excited by 395 nm light. This is just a thing that happens. It is an observation that can then be applied to a more general theory of fluorescence, where other observations combine with this observation to tell us both what is happening when a jellyfish glows and most of the sciencey reasons why it happens. Got it? Good.
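Fun aside: you can see the physics bookkeeping of fluorescence in those two numbers. A 509 nm photon carries less energy than a 395 nm photon (the difference is shed inside the protein before emission), which is why the emitted light is always redder than the absorbed light. A quick sketch, using nothing but the photon energy formula E = hc/λ:

```python
# Photon energies at GFP's excitation and emission peaks, via E = h*c / wavelength.
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    """Energy of a photon of the given wavelength, in electronvolts."""
    joules = h * c / (wavelength_nm * 1e-9)
    return joules / EV

absorbed = photon_energy_ev(395)  # ~3.14 eV going in
emitted = photon_energy_ev(509)   # ~2.44 eV coming out
print(f"Absorbed: {absorbed:.2f} eV, emitted: {emitted:.2f} eV")
```

Same jellyfish, every time, anywhere in the universe. That’s what makes it a fact.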
So what is evolution? It’s a scientific theory. It is a thing that we can see happening (yes, I mean actually observe it happening) and it also allows us to explain why it is happening. The theory of evolution, when we take it all together, encompasses all the chemistry of DNA, the random action of mutations, the shared (or unique) anatomy of distant species, and the mathematics of selection. It’s a what and a why.
What about gravity? Why did Neil call it a “theory”? Here’s the mistake in Cosmos that I think you’ve identified. He shouldn’t have called gravity a theory. It’s a law. We know a lot about the what of gravity, from how mass interacts at a distance to curvatures in the fabric of spacetime, but we don’t know why gravity gravities. So you’re right that gravity is a law. Neil was wrong, at least on this week’s show.
By this time you’re all probably thinking “Joe, this is a load of semantic bulls**t!!” You are absolutely right. It is a load of semantic bulls**t. It’s actually the very definition of semantics, the study of meaning. I’d forgive some of you for thinking this is all a worthless waste of verbal and cognitive energy, because what’s wrong with just saying something is or isn’t?
Well, that all depends on what your definition of “is” is.
Q:Someone recently asked me if there were other grad students/scientists blogging pictures from their daily life in the lab. I realized that I only know of a couple scientist bloggers who focus on original content, generated by their own experiences in the lab/field. Do you know of any others, or even of a list somewhere?
Huh. You know, I don’t actually know of that many bloggers off the top of my head who blog about their own science and their own daily life in the lab and field. I mean, I know plenty of them exist, and I’ve come across their work from time to time (on Tumblr and beyond), but few of them have really stuck with me.
I think many (but not all) of us who start writing while we are doing science have a tendency to write about things that may be in our field, but are not what we work on, because we spend enough damn time thinking about our work as it is! That might be my personal bias from grad school coming through, though.
But I also think that scientists writing about their own work, whether they are tenured profs or first-year grad students, is enormously important, both for communicating science in general and communicating the science that you are doing. Because if you don’t talk about it, maybe no one will? Or worse, they may talk about your science in a way you don’t like.
Maybe we can crowdsource a list of grad students and young scientists who blog their own work? Leave yours in a reblog, reply, or comment!
Here are just a few I know of to get the ball rolling:
- Danielle Lee - The Urban Scientist (she was recently recognized by the White House for her outreach!)
- Christina Agapakis - Oscillator (synthetic biology)
- The Southern Fried Science team writes about their own marine biology research alongside big ocean news
- Christie Wilcox also pokes around in the ocean and writes about it, always very well.
- Jane Hu writes about cognitive psychology here on the Tumblr
There are hundreds… likely thousands more out there. What are your favorites?
Q:As for this week’s IOTBS episode – can I add Stanisław Lem to the list? He predicted many things we all know – like the Internet, e-books and audiobooks, a device resembling the Kindle (he named it ‘opton’; there’s also the ‘lectan’ for audiobooks), the USB flash drive (Lem named it the ‘trion’), nanotechnology (he wrote about ‘smart dust’ in his novel ‘The Invincible’), and military robots like drones – and (unfortunately) terrorism, though that’s not a science prediction. Anyway, I loved the video. Great topic!
Q:Most science fiction writers (myself included) would tell you that science fiction isn't about predicting the future. We write about the future to show people how things could be different so that we can affect their impressions of the present. Isaac Asimov didn't write about robots because he really believed that by the 21st century we'd have AI servants; he wrote about robots because they were a social commentary on racism. And in truth, *that* message inspires scientists (myself included).
It’s nice to get an “insider’s” perspective on science fiction inspiring science, that perhaps it’s not the other way around, at least not all the time. Although I still think those ideas must extend a root to some deep, careful knowledge of science in order for the seed to fully blossom and bring the adjacent possible into being.
(continued from here)
Q:So, I just have a comment about your Science Fiction as Science Fact, and I just want to preface it with the fact that I loved the video, etc. When you relate the idea that the early science fiction writers were so accurate at predicting the future was there any thought given to the duration before the prediction becomes accurate? Could the fact that the change in technologies, with more things being plausible be something to look into?
(question is about this week’s video, in case you are confused)
I accept that we certainly have a bias of time separation when it comes to older science fiction and its predictions. For instance, with the works of H.G. Wells, we have had a lot more time to see things come to fruition than with someone more recent, like William Gibson.
My knee-jerk hypothesis would be that our pace of realizing sci-fi technologies is increasing along with the pace of innovation itself, but I am not sure that’s actually true. Many of the predictions of the early- and mid-20th century sci-fi authors were realized within a couple decades of their writing (like Wells and tanks/airborne warfare, Clarke and geostationary satellites, Asimov with robotic Mars explorers). On the other hand, some took a lot longer (Twain’s internet, Wells’ genetic engineering).
Does a longer time between prediction and realization mean it’s a worse prediction? I don’t know. I don’t know how to fairly judge how long is “long” between sci-fi idea and scientific invention, and I’m not even totally certain it’s a valid question.
I mean, whether it took 20 years or 100 (it was closer to the latter), the fact that H.G. Wells’ The Invisible Man described a technology for invisibility that depended on metamaterials, something that had never even been dreamed of, is amazing on the level of flabbergastification! I mean, check out what Wells wrote about metamaterials, in 1897:
"…it was an idea… to lower the refractive index of a substance, solid or liquid, to that of air — so far as all practical purposes are concerned.”
That’s pretty much exactly how today’s metamaterials work! Sure, today’s cloaking devices don’t evanesce in the visible range of light like Wells’ did, but so what.
Something I didn’t talk about in the video (on purpose) was the extent to which the sci-fi creations act as inspiration for actual scientists. Many people have caught on to that idea in the comments, which is exactly what I hoped they would do. Scientists read and watch science fiction. They are humans who are subject to human influences. But I wonder if there’s a way to ever really know to what extent they were consciously or unconsciously driven by works of fiction.
I guess this whole answer is a long way of saying I’m not really sure. Does the duration between prediction and reality relate to the quality of prediction? How does the whole process even work?