Meta-monitoring – Exploiting the Increasing Self-Awareness of Smart Sensors

Dave Gunderson, a frequent contributor to this blog, has been active in designing his monitoring program to be highly self-aware. Achieving this requires quite a high level of customization and programming of the field hardware and a fairly high bandwidth of communication. This is in stark contrast to many monitoring programs where highly sophisticated hardware and software have replaced a simple float-and-pulley system connected to a clock-driven chart recorder, with no net change in the information collected, communicated, and processed.

We used to be limited by our technology to a single value per time-stamp from our sensing device. We became accustomed to this limitation and developed our operating procedures around that very basic constraint. We have since replaced the limiting technology, but there is a lag in the development of operating procedures that can fully exploit the multi-channel, multi-threaded capability of modern instruments.

There is a wealth of opportunity afforded by the ever-increasing sophistication of modern monitoring technology.

I believe that much of the capability of modern sensors is currently sitting idle for want of guidance on best practices and standards for using this information. Dave has set up his gauges to improve the decisions made by end-users who depend on his data. A variety of performance metrics are now used in real time to improve the interpretation of the data and to identify, and provide early intervention for, potential data faults. It is far better to be proactive and prevent data problems from occurring than to become aware of faults only when the data become unusable.
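
Dave's exact metrics aren't spelled out here, but a minimal sketch in Python conveys the flavour of this kind of real-time screening. The checks, thresholds, and function name below are illustrative assumptions on my part, not his actual configuration.

```python
# A minimal sketch of real-time fault screening for stage readings.
# The checks and thresholds are illustrative assumptions, not any
# particular gauge's configuration.

def screen_reading(value, history, low=0.0, high=10.0, flatline_n=12):
    """Return a list of potential fault flags for a new stage reading (metres)."""
    flags = []
    if not low <= value <= high:
        flags.append("out_of_range")   # outside plausible sensor limits
    if history and abs(value - history[-1]) > 0.5:
        flags.append("spike")          # implausible jump between samples
    if len(history) >= flatline_n and all(v == value for v in history[-flatline_n:]):
        flags.append("flatline")       # stuck sensor or frozen intake
    return flags

history = [1.21, 1.22, 1.22, 1.23]
print(screen_reading(1.24, history))   # [] -> nothing suspicious
print(screen_reading(3.80, history))   # ['spike'] -> flagged before the data become unusable
```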

For any time-stamp, for any given variable of interest, there is no longer just a single value but also an extensible array of useful information about the value. I originally thought that this array of information would fall under the category of metadata – that is, data about data – but a quick review of the metadata concept on Wikipedia disabused me of the notion of adding yet another straw to the donkey's back of the already over-burdened terms metadata and meta-content.

Instead, I propose the term meta-monitoring, i.e. monitoring about monitoring.
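
To make the idea concrete, here is one hypothetical shape such a record could take, sketched in Python. Every field name under meta_monitoring is an illustrative assumption, not a proposed standard.

```python
# One hypothetical shape for a meta-monitored observation: the value
# plus an extensible set of "monitoring about monitoring" fields.
# The field names are illustrative, not a standard.

reading = {
    "timestamp": "2014-03-07T14:45:00Z",
    "variable": "stage",
    "value": 1.234,                    # metres, the single number we used to settle for
    "units": "m",
    "meta_monitoring": {
        "battery_voltage": 12.6,       # logger health
        "signal_strength_dbm": -71,    # telemetry health
        "sensor_temperature_c": 4.2,   # context for interpreting drift
        "samples_averaged": 30,        # burst size behind this value
        "stddev_of_burst": 0.002,      # within-burst noise
        "qa_flags": [],                # output of real-time screening
    },
}
```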

In addition to the real-time decision-making and preventive-action purposes that Dave has been working on, I am very excited about meta-monitoring for data traceability and auditability. For example, most sensors are designed to be field calibrated to the river stage or water level. This makes perfect sense from an operational point of view but is a bit problematic from a data management perspective. Two transformations between dimensions occur on board the sensor, and both are completely opaque to downstream scrutiny. The first is from some measure of electrical current to pressure, and the second is from pressure to the length of a column of water. Treating these transforms as a black box has become standard practice. I would far sooner see us program our sensors to output all three values, in all three dimensions, as a routine product. We would continue to work with the data in the dimension of length, but the lid of the black box would be opened so that we always have full traceability of the data to source.
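
As a sketch of what that opened black box might look like, the following Python assumes a 4-20 mA pressure transducer spanning 0-100 kPa and the hydrostatic relation between pressure and water-column height. The constants are illustrative, not any vendor's factory calibration.

```python
# A sketch of the opened black box: log the raw electrical measurement,
# the derived pressure, and the water-column length together so every
# stage value stays traceable to source. The 4-20 mA span, 0-100 kPa
# range, and constants below are illustrative assumptions.

def to_pressure_kpa(current_ma, lo_ma=4.0, hi_ma=20.0, lo_kpa=0.0, hi_kpa=100.0):
    """First on-board transform: loop current (mA) to pressure (kPa)."""
    return lo_kpa + (current_ma - lo_ma) * (hi_kpa - lo_kpa) / (hi_ma - lo_ma)

def to_stage_m(pressure_kpa, rho=1000.0, g=9.81):
    """Second on-board transform: hydrostatic relation h = P / (rho * g)."""
    return pressure_kpa * 1000.0 / (rho * g)

current_ma = 5.94                       # what the transducer actually measured
pressure = to_pressure_kpa(current_ma)  # ~12.1 kPa
stage = to_stage_m(pressure)            # ~1.24 m
record = {
    "current_mA": current_ma,
    "pressure_kPa": round(pressure, 3),
    "stage_m": round(stage, 3),
}
print(record)  # all three dimensions reported as a routine product
```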

Not many of us have the technological savvy of Dave Gunderson; I know I don't. However, as he explores the potential of smart sensors and advanced data loggers, hardware vendors are paying attention and will be making it ever easier to achieve similar outcomes. In the meantime, we should all be thinking about how monitoring our monitoring will reduce uncertainty, increase network reliability, and generally improve the credibility and integrity of our data.
