Virtual Observatories and Snakes in the Grass


I ended my last post, “Amazing GRACE,” with a segue to virtual observatories and a call to “… start thinking about our methods of measurement and data management to figure how to combine what remote sensing does well (extensive coverage) with what field observations do well (high resolution).”

Virtual observatories are a blend of the things that we observe with the things that we predict, or assume, to be true.

The things that we predict, or assume, to be true are derivatives of observational systems. There are many sources of satellite remote sensing information, and almost all of these products are based on some measure of the energy either emitted by, or reflected from, the earth’s surface. Algorithms transform these spectral signatures into some other geophysical variable of interest. Similarly, many models use algorithms to derive the spatial-temporal distribution of a variable of interest from a representation of how we believe that variable responds to other information we believe to be true.
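
As a minimal sketch of such a transformation, consider the widely used Normalized Difference Vegetation Index (NDVI), which converts two reflectance bands into a proxy for vegetation vigour. The band values below are hypothetical, and real products layer calibration, atmospheric correction, and quality flags on top of this arithmetic.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index.

    NDVI = (NIR - Red) / (NIR + Red): a spectral signature is
    transformed into a geophysical variable of interest.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps guards against zero division

# Hypothetical surface reflectance for three pixels:
# dense vegetation, sparse vegetation, open water.
print(ndvi([0.45, 0.30, 0.05], [0.10, 0.25, 0.04]))
```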

Dr. Keith Beven and his co-authors raise some serious concerns about virtual observatories in the commentary “On virtual observatories and modeled realities (or why discharge must be treated as a virtual variable)” (Beven et al. 2012). The basic problem is that virtual observatories conflate the aleatoric errors of observation systems with the epistemic errors of derivation systems. Aleatoric and epistemic are adjectives that rarely come up in everyday conversation (e.g. “I am sorry, Honey; it was a simple epistemic error. I did not predict that there would be drinking involved when I went out with my Water Survey buddies last night.”), so it might be useful to use the terms in context.
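
A toy numerical sketch may help fix the distinction (the 15% bias and the noise level here are invented purely for illustration): aleatoric error averages away as observations accumulate, while epistemic error, a flaw in the derivation model itself, does not.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 10.0
n = 10_000

# Aleatoric error: random scatter around the truth.
# Its effect on the mean shrinks as observations accumulate.
noisy = true_value + rng.normal(0.0, 1.0, size=n)

# Epistemic error: a flawed derivation model (a hypothetical 15% bias).
# No amount of averaging makes it go away.
biased = 1.15 * true_value + rng.normal(0.0, 1.0, size=n)

print(f"aleatoric only:      mean error = {noisy.mean() - true_value:+.3f}")
print(f"plus epistemic bias: mean error = {biased.mean() - true_value:+.3f}")
```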

Imagine a hawk hunting over an open field

The hawk is looking for movements in the grass that indicate the location of tasty field mice. Errors in detecting movement tend to be aleatoric as a result of sensory imprecision, particularly when the observations are near the limit of focal resolution. However, the hawk must also interpret patterns of movement that may indicate a gust of wind, the slithering of a venomous snake, or the scurrying of a tasty treat. In nature, the calibration of predictive models is a brutal process. If the model is flawed (i.e. has epistemic error) then the hawk will either die quickly from snake venom or slowly from starvation. If the model is valid then the hawk grows fat and raises many healthy chicks.

The distributions of geophysical variables (from either space-based monitoring or from gridded model output) in virtual observatories are based on predictive algorithms that have never been tested in a survival-of-the-fittest selection process. This is because we do not make predictions at a scale that we can validate with field observations. Proponents of virtual observatories argue that it comes down to fitness for purpose; they would never claim that virtual data are fit for every local-scale purpose, but the information is invaluable for global-scale investigations. The problem is that the truism “if the only tool you have is a hammer, then every problem starts to look like a nail” is never more apt than for hydrological investigations. Beven warns us of the danger of being indiscriminate in our desperation to find relevant data for hydrological studies.

Beven also raises the concern that discharge is a virtual variable that contains epistemic errors inherent in the transformation of stage to discharge.

This brings the conversation about virtual observatories a bit closer to home. Discharge is the most trusted variable in water balance calculations, so other variables are typically forced into agreement with it to close the water balance. Epistemic errors in discharge will therefore alter our perception of watershed function and foster a pathological understanding of hydrological processes. These concerns deserve a response from the hydrometric monitoring community.
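
To make the stage-to-discharge concern concrete: discharge is rarely measured directly; it is usually derived from measured stage through a fitted rating curve, commonly a power law of the form Q = c(h − h0)^b. The sketch below uses invented parameter values to show how a small miscalibration of the exponent produces a systematic, stage-dependent bias that no amount of averaging will remove, which is what makes the error epistemic rather than aleatoric.

```python
import numpy as np

def rating_curve(stage, c, h0, b):
    """Power-law stage-discharge rating: Q = c * (stage - h0)**b."""
    return c * np.maximum(stage - h0, 0.0) ** b

stage = np.linspace(0.5, 3.0, 6)                    # gauge heights, m (hypothetical)
q_true = rating_curve(stage, c=5.0, h0=0.2, b=1.6)  # "true" relation
q_fit = rating_curve(stage, c=5.0, h0=0.2, b=1.5)   # miscalibrated exponent

# The bias changes sign and grows with stage: systematic, not random.
for h, qt, qf in zip(stage, q_true, q_fit):
    print(f"h={h:4.2f} m  Q_true={qt:7.2f}  Q_fit={qf:7.2f}  "
          f"bias={100 * (qf / qt - 1):+6.1f}%")
```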

Beven, K., W. Buytaert, and L.A. Smith. 2012. On virtual observatories and modeled realities (or why discharge must be treated as a virtual variable). Hydrological Processes. doi:10.1002/hyp.9261.
http://onlinelibrary.wiley.com/doi/10.1002/hyp.9261/full


One response to “Virtual Observatories and Snakes in the Grass”

I’ll try to reply to Dave and Andrew at the same time.

Dave is talking about technology that is capable of both creating new, useful information from the raw sensor readings and storing all of the metadata required to understand and interpret this information. Andrew is talking about the ability to verify information independently. When you get to the checkout at the grocery store, you can trust the till receipt with more confidence if you have already done the summation in your head.

My understanding of what is possible is:

1. Logical tests can be conducted on the information, resulting in machine-generated metadata based on pass/fail criteria. These are binary measures of some quality of the data and as such cannot logically be combined with any other measures of data quality. In other words, a value at any given time-stamp can accumulate several of these indicators, all of which are relevant for forensic analysis of pathological data. The nature of the test may determine whether the data are still usable; if not, the data value may be censored from visibility further downstream.

2. Human grading of the data can be based on conformance with an a priori model of what seems reasonable. This model may be informed by: other variables (e.g. what an upstream gauge is doing); evidence that the gauge is working correctly (e.g. inspection of on-board diagnostics); and evidence that the gauge has been operated in conformance with standard operating procedures (e.g. inspection of the written log book). Supervised grading is inherently rational and done a posteriori, so a different set of rules is required for this type of metadata.

3. Contextual metadata (e.g. the algorithms programmed into the sensor and logger) can be stored in a text format and associated with blocks of data. Unfortunately, this metadata tends not to be machine readable, so a receiving system does not know what to do with it. It is also difficult to link this metadata to the data in a way that will remain discoverable in the future.

This all sounds good, but there are several aspects that I struggle with. One is the case of a shaft encoder giving the max, min, and mean for a 30-second sample of 1-second readings. This provides a precise mean value as well as a measure of dispersion, so you lose neither information about the water level nor about the turbulence. What time-stamp do you give this information: the beginning, middle, or end of the sample period? How do you aggregate a series of these readings for an hour, or for the duration of a rating measurement? You need to be very careful about averaging a series of averages if you are interested in making inferences about the value during the unsampled time-frame. Unbundling this context efficiently would, I think, require machine-readable metadata. What would be helpful is some sort of industry-wide standard for encoding sensor and logger programming metadata.
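
One way to sidestep the averaging-of-averages trap described above is to carry the raw reading count along with each burst summary, so an hourly mean can be weighted correctly even when bursts are incomplete. The following Python sketch is a hypothetical illustration of that bookkeeping, not any vendor’s actual format.

```python
from dataclasses import dataclass

@dataclass
class Burst:
    """Summary of one 30-second burst of 1-second readings."""
    mean: float
    minimum: float
    maximum: float
    count: int  # number of raw readings behind the mean

def combine(bursts):
    """Aggregate burst summaries without averaging averages naively.

    Means are weighted by reading count, so an incomplete burst
    does not distort the result; min and max combine exactly.
    """
    total = sum(b.count for b in bursts)
    return Burst(
        mean=sum(b.mean * b.count for b in bursts) / total,
        minimum=min(b.minimum for b in bursts),
        maximum=max(b.maximum for b in bursts),
        count=total,
    )

# Hypothetical stage bursts (metres); the second burst lost half its readings.
print(combine([Burst(1.20, 1.15, 1.26, 30), Burst(1.40, 1.33, 1.48, 15)]))
# Weighted mean is 1.2667 m, not the naive mean-of-means 1.30 m.
```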
