Climate scientists can be divided into two large, interacting groups: Experimentalists, who go out into the world and collect climate data (e.g., carbon dioxide levels, methane concentrations, seasonal temperatures, snowfall rates); and Modelers, who build computer simulations based on that data (called “climate models” by those in the know) to estimate how climate variables affect one another (e.g., does increasing CO2 raise temperatures enough to melt the polar ice caps and lift sea levels so high that Miami becomes the next Atlantis?). The Modelers depend on data from the Experimentalists to direct, refine, and validate their simulations, and in turn the Experimentalists depend on simulations from the Modelers to formulate hypotheses, define experiments, and guide their data collection missions. Currently, there are many more Modelers than Experimentalists, probably because it is far more cost-effective and less physically demanding to devise simulations on a desktop than to head to the Arctic for weeks of measurements. This is not to say that the Modeler’s job is easy. The global climate is an extremely complex system (innumerable feedback loops separate that butterfly from the hurricane it started), and simulating it can be a Herculean intellectual task.

The process of creating a climate simulation starts with a question. Often it’s a seemingly simple question like “Do rapid changes in wind speed affect CO2 levels?” Once the question is asked, the Modeler must find data sets containing CO2 levels and wind speeds measured by some Experimentalist somewhere (these data sets exist; you just have to know where to find them). Simple so far… right? Now things start to get complicated, because a useful data set must meet a long list of criteria:

- The data must include both CO2 and wind speed, ideally collected at the same time.
- The data should cover a large enough geographical area (data covering all of North America is much more useful than data collected at a single location).
- There should be geographic metadata (you want to know where samples came from, to see whether your result is universal).
- The data set must cover a long enough period of time (ideally years; the more the better).
- The data set must contain enough data (“enough” can mean a lot of things, and answering that question is a science unto itself).
- The data must be of good quality (ditto).
- There should be more than just CO2 and wind in the data (e.g., total humidity, precipitation, other atmospheric gases; you want to catch anything else that might influence CO2 or be affected by wind speed).

Once the data set is chosen and parsed, the Modeler applies finely tuned logic and computer programming skills, with just a touch of mathemagic, and voilà, the simulation gives a result! This can take anywhere from a few minutes to several months, depending on the detail the simulation requires. Once the result is in (let’s say it is “YES! Wind speed does affect CO2 levels, and it makes them lower!”), the Modeler excitedly announces the result in a scientific forum, usually a journal or a conference, for the World to see. Inevitably, other Modelers will ask the same question, and their results will be completely different (“Your model is erroneous! My model clearly shows rapidly increasing CO2 with wind!”). And still others will decide both are crackpots whose results are irrelevant.[i] At this point the Modelers will argue over who is correct, and eventually agree to disagree until more data becomes available. This is when the Experimentalist steps in.
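As an aside on the vetting step above: much of it is scriptable. Here is a minimal sketch (in Python with pandas; the column names and every threshold are entirely hypothetical, not standards from any archive) of the kind of screening a Modeler might run before committing to a candidate data set:

```python
import pandas as pd

# Hypothetical column names; real archives each use their own conventions.
REQUIRED = ["timestamp", "latitude", "longitude", "co2_ppm", "wind_speed_m_s"]

def screen_dataset(df: pd.DataFrame, min_years: float = 3.0,
                   min_sites: int = 10, max_gap_fraction: float = 0.1) -> bool:
    """Return True if a candidate data set meets some illustrative minimums."""
    # Both variables must be present and measured at the same timestamps.
    if not all(col in df.columns for col in REQUIRED):
        return False
    paired = df.dropna(subset=["co2_ppm", "wind_speed_m_s"])

    # Long enough record (the thresholds here are placeholders).
    t = pd.to_datetime(paired["timestamp"])
    span_years = (t.max() - t.min()).days / 365.25
    if span_years < min_years:
        return False

    # Broad enough spatial coverage: count distinct sampling locations.
    if paired.groupby(["latitude", "longitude"]).ngroups < min_sites:
        return False

    # "Enough" data: tolerate only a small fraction of unpaired records.
    return (1 - len(paired) / len(df)) <= max_gap_fraction
```

In practice, every archive has its own format and conventions, so a screening script like this gets rewritten for nearly every study.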

The Experimentalist (often not the one who collected the data the Modelers used in the first place) will evaluate the data behind the simulations. They will look at the quality through a different lens than the computer- and math-savvy Modelers, asking questions about how the data was collected and analyzed in the first place: What methods were used to make the measurements (use the wrong method and you get the wrong data)? What quality controls ensured the measurements were accurate (without the right quality control, the data could be wrong)? Other questions the Modelers often ask alongside the Experimentalists include:

- How large is the geographical area of the data set, and were enough locations within that area sampled? (This is what we call “spatial distribution”; the more locations the better.)
- How often was data collected, and over what time span? (This is “sampling frequency”; again, the more often you sample over a long period of time, the better.)
- Are there other variables that should be measured in more detail? (The climate is complex and lots of things can cause changes; you want to be able to account for this.)
- Finally, is there enough data? (That’s usually the big one: without enough good-quality data, you cannot be sure of your result.)
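Some of these checks are easy to automate. Here is a hedged sketch, again in Python with hypothetical column names and placeholder plausibility bounds (real QC limits come from the measurement method and the instrument’s specifications, not from this sketch), of the basic quality flags and coverage statistics such an audit might compute:

```python
import pandas as pd

# Illustrative plausibility bounds only.
BOUNDS = {"co2_ppm": (300, 600), "wind_speed_m_s": (0, 75)}

def audit(df: pd.DataFrame) -> pd.DataFrame:
    """Flag suspect records and summarize coverage for a borrowed data set."""
    out = df.copy()
    for col, (lo, hi) in BOUNDS.items():
        # Values outside a physically plausible range hint at a method or
        # calibration problem worth chasing down.
        out[f"{col}_suspect"] = ~df[col].between(lo, hi)

    t = pd.to_datetime(df["timestamp"]).sort_values()
    print("median sampling interval:", t.diff().median())  # sampling frequency
    print("record span:", t.max() - t.min())                # time coverage
    print("distinct sites:",
          df.groupby(["latitude", "longitude"]).ngroups)    # spatial distribution
    return out
```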

Having addressed these issues, the Experimentalist now sets out to design the measurements. This includes deciding where to sample, what to sample, when to sample, and, most crucially, how to sample. Defining how to collect and analyze the samples is the nuts and bolts of the operation: the equipment needed to make the measurements in turn determines where, what, and when the sampling can occur. And the equipment list grows quickly. To know the precise location of your sampling, you need a GPS; to know the precise wind speed at the time of sampling, you need what’s called a sonic anemometer; less obvious, but absolutely necessary if you want to accurately measure small changes in gases, is a way to measure relative humidity at the time of sampling; and, of course, you need analyzers for CO2 and other relevant gases such as methane and carbon monoxide. Then comes data clean-up: lining up all the data streams so that the bits recorded at the same time are tied together. A lot of data is often parsed away because of time incongruencies or instrument “warm-up” periods, and then there is quality control on top of that.
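That alignment step is tedious but very scriptable. A minimal sketch, assuming pandas and made-up stream and column names (the warm-up duration and matching tolerance are illustrative, not recommendations):

```python
import pandas as pd

def align_streams(gas: pd.DataFrame, gps: pd.DataFrame, wind: pd.DataFrame,
                  warmup: str = "30min", tolerance: str = "1s") -> pd.DataFrame:
    """Tie together records taken at (nearly) the same time, dropping the
    analyzer's warm-up period. Assumes each frame has a datetime
    'timestamp' column."""
    # Discard the warm-up period at the start of the analyzer record.
    gas = gas[gas["timestamp"] >= gas["timestamp"].min() + pd.Timedelta(warmup)]

    # An as-of merge requires every stream to be sorted by time.
    gas, gps, wind = (d.sort_values("timestamp") for d in (gas, gps, wind))

    # Match each gas measurement to the nearest-in-time GPS fix and wind
    # reading, rejecting matches farther apart than `tolerance`.
    merged = pd.merge_asof(gas, gps, on="timestamp",
                           tolerance=pd.Timedelta(tolerance),
                           direction="nearest")
    merged = pd.merge_asof(merged, wind, on="timestamp",
                           tolerance=pd.Timedelta(tolerance),
                           direction="nearest")
    return merged
```

The as-of merge quietly drops anything that cannot be matched within the tolerance, which is exactly the “parsing down because of time incongruencies” described above.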

That’s a lot of instruments and a lot of data over time, and intuitively one would think the Experimentalist needs a variety of expensive, complicated equipment, each piece requiring a power source and some sort of data logger to save the data, not to mention the computer programming skills to make sure everything is recorded properly. If you’re keeping count, that’s at least six analyzers with power sources (often car batteries), likely a data logger or two, and most likely a laptop, amounting to 14-15 individual pieces of equipment for six types of data. Wouldn’t the Experimentalist’s scientific life be easier if these numbers could be reduced?

At Picarro, we think that if the Experimentalist’s scientific life is easier, they will be able to shift focus from the nuts and bolts to the real meaning of their measurements, which means better organized, higher-impact science. That is why we work with scientists from around the globe to design analyzers that not only measure multiple greenhouse gases simultaneously (and continuously) at the highest precision and in as simple a manner as possible, but also seamlessly integrate data from other instruments such as GPS receivers and sonic anemometers. This way, the Experimentalist only has to bring along a single instrument to measure carbon dioxide, carbon monoxide, methane, and humidity levels, plugged straight into the GPS and sonic anemometer. Six types of data, one analytic node. The number of individual pieces of equipment is greatly reduced, the data is top-notch and trustworthy, and, best of all, the Experimentalist can analyze the data instantly, without all the lengthy parsing and aligning required in sampling campaigns past.

So, when the Experimentalists and Modelers next meet, everyone can be sure of the data quality, and the Modelers can confidently use the new data to test their simulations: if a simulation is correct, it should have accurately predicted what was measured; if it is incorrect, the Modeler must go back to the good ol’ drawing board. Either way, Climate Science as a whole moves forward in helping us better understand our World.
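For the curious, the test itself can be as simple as an error statistic. One hedged illustration in Python (with made-up numbers, and not any particular group’s validation protocol):

```python
import numpy as np

def rmse(predicted: np.ndarray, observed: np.ndarray) -> float:
    """Root-mean-square error between simulated and measured values."""
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Purely illustrative values: the simulation "accurately predicted what was
# measured" if its error is small compared with the measurement uncertainty.
simulated = np.array([400.1, 401.3, 402.0])
measured = np.array([400.4, 401.1, 402.5])
print(rmse(simulated, measured))
```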

[i] Sidenote: this affects all sciences that rely on analyzing complex systems with very large data sets. It is partly why one year you’ll read that “scientists announce a glass of wine a day is healthy for you,” and the next year that “scientists announce a glass of wine a day is bad for you.” Complex interactive systems are not straightforward.