I had the great pleasure of re-connecting with colleagues at the NASH symposia at the AWRA 2014 national conference. There were three well-attended NASH sessions and a panel discussion that were all great starting points for conversations.
One such conversation about discharge measurement uncertainty, with Tim Cohn, resulted in the statement that is the title of this post. We were discussing the reasons for studying the problem of quantifying discharge measurement uncertainty.
Most of us intuitively believe that understanding, quantifying and communicating uncertainty is inherently important.
However, we were exploring the value added by a measure of uncertainty on data. After all, the data are the best estimate of the truth and are the values that will be used in any design, planning or operational decision. Any uncertainty in the data is passed through and incorporated in the uncertainty of the decision.
Herein lies the problem. In the absence of information about uncertainty, the ‘best’ decision will respect the ‘best’ estimate of the truth. The decision is defensible because that is what the data indicate is true. If there is a problem with the outcome of the decision, then accountability for the problem will trace back to the data provider if, in fact, it turns out that the data were disinformative.
The defense that the decision-maker was unwise to trust the data is weak. If the data are untrustworthy then why was that not communicated with the data?
On the other hand, if uncertainty is communicated with the data then the decision-maker is obliged to interpret this uncertainty in the context of the risk of adverse consequences. It is this interpretation that locks the door to unwanted surprises.
Over the course of the day many other use-cases for investigation, quantification and communication of measurement uncertainty were revealed. One use-case that I think is most important is that measuring and monitoring uncertainty reduces uncertainty.
This recurring theme was demonstrated over the course of several presentations. Specifically, when uncertainty is discovered by use of systematic field procedures, replication or redundancy, field experiments, quality control procedures, or lab experiments, then systematic sources of uncertainty can be systematically eliminated from the equipment, from techniques and procedures, and from analysis.
One could readily imagine a future where objective quantification of uncertainty is intrinsic to data production. The time-series of uncertainty would be as valuable to the stream hydrographer as the data are to an end-user. Patterns of uncertainty would be revealed as noise, trends, transients, and steps, and each of these patterns could be investigated for root cause to inform a needed change in the monitoring plan.
We are not there yet.
It is a difficult problem. An important challenge is to understand uncertainties due to the mismatch between actual and ideal conditions for the deployed methods, techniques, and technologies. The initial phase of flow regattas to study uncertainty has been conducted in near-ideal conditions, but these are not the locations where uncertainty variance is greatest.
Another challenge that seems to be particularly intractable is the uncertainty due to inter-personal variability. Training, skill, diligence, and experience are not uniform across the streamflow monitoring community. The sophistication of modern flow measurements requires expert assessment of the monitoring conditions to adjust the data acquisition accordingly. Whether those adjustments are made skillfully can make a huge difference in the truthfulness of the result.
The third challenge that is close – but not quite – within reach is the propagation of measurement uncertainty through rating curves. There are some really, really smart people working on this problem. I am confident they will solve it, but I am not smart enough to guess how they will do it.
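To give a sense of what "propagation through a rating curve" involves, here is a minimal Monte Carlo sketch. Everything in it is an illustrative assumption, not the method those researchers are developing: the power-law rating form Q = a(h − h0)^b, the parameter values, and the normally distributed stage error are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative power-law rating curve: Q = a * (h - h0)^b
# (parameter values are invented for this sketch)
a, b, h0 = 5.0, 1.6, 0.2

def rating(h):
    return a * (h - h0) ** b

# A stage reading with measurement uncertainty (metres),
# modelled here as a normal distribution around the observed value.
h_obs, h_sd = 1.50, 0.02

# Propagate: sample many plausible stages, push each through the curve.
h_samples = rng.normal(h_obs, h_sd, 10_000)
q_samples = rating(h_samples)

q_mid = rating(h_obs)
q_lo, q_hi = np.percentile(q_samples, [2.5, 97.5])
print(f"Q = {q_mid:.2f} m^3/s, 95% interval [{q_lo:.2f}, {q_hi:.2f}]")
```

Note that this only propagates stage uncertainty through a fixed curve; the hard part of the real problem is that the curve's own parameters are uncertain too, because they are fitted to uncertain discharge measurements.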
Another problem that was discussed was developing an understanding of the uncertainty of the uncertainty estimate. The existing ISO method of calculating the measurement uncertainty of panel measurements (the equation is at top of page) is little more than an index for the number of panels. If there are a lot of panels the uncertainty is deemed to be low; if there are few panels the uncertainty is deemed to be high. No other factors related to site, equipment, or inter-personal variability enter into the calculation.
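The point that the result is essentially an index of panel count can be illustrated with a simplified stand-in for an ISO-748-style combined uncertainty. The function name and the percentage constants below are illustrative assumptions, not the standard's tabulated values; what matters is the structure, where the per-vertical terms are divided by the number of panels m.

```python
import math

def iso_style_uncertainty(m, u_s=1.0, u_m=2.5, u_each=6.0):
    """Simplified stand-in for an ISO-748-style combined uncertainty (%).

    m      -- number of panels (verticals) in the measurement
    u_s    -- systematic (calibration) uncertainty, percent
    u_m    -- uncertainty attributed to the number of verticals, percent
    u_each -- combined per-vertical uncertainty (width, depth,
              velocity sampling), percent

    The constants are invented for illustration only.
    """
    return math.sqrt(u_s**2 + u_m**2 + (u_each**2) / m)

# The output is driven almost entirely by m: more panels, lower number.
for m in (5, 10, 20, 35):
    print(m, round(iso_style_uncertainty(m), 2))
```

However the site behaves and whoever holds the instrument, the only input that varies from one measurement to the next is m, which is exactly the criticism made above.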
Nonetheless, it was the consensus of this NASH forum that it should not be difficult to develop methods that outperform the ISO method, and that even uncertain estimates of uncertainty are better than no estimate of uncertainty at all.