Aquatic Informatics is a software company. Software is a set of algorithms that takes inputs and produces outputs according to encoded rules. To be most efficient, the human-machine interface needs to be designed around a workflow of 'best practice'.
The Aquatic Informatics user group meeting is one of our primary methods for ensuring that our software both 'does the right thing' and 'does the thing right'. These sessions are designed to be more of a conversation than training per se. The feedback we receive may validate our alignment with best practices, or it may identify opportunities to improve the efficiency and effectiveness of the user experience.
It is sometimes necessary to get the conversation started.
There is currently no community-wide consensus on best practices for building better rating curves. I have been working with Marianne Watson from New Zealand to address this deficiency.
If you are interested in this perspective, please view my presentation from last week's User Group meeting. I would like to hear back from you on whether or not we are on the right track. Marianne and I are collaborating on a paper on the subject. We intend to follow this up with a series of publications, including: a condensed version of this paper as a whitepaper; a technical note on each of the best practices; and a series of use cases demonstrating how a best-practice approach succeeds when applied to difficult rating problems.
A reliable rating curve is one that is credible, defensible, and minimizes re-work. This paper outlines five modern best practices used by highly effective hydrographers.