Whitepaper - Stage-Discharge Rating Curves.

Best Practice Approach to Stage-Discharge Rating Curve Development

The rate of downloads of the paper “5 Best Practices for Building Better Stage-Discharge Rating Curves” indicates a very high latent demand for guidance on how to develop rating curves.

A stage-discharge rating curve represents the relation of water level at a given point in a stream to a corresponding volumetric rate of flow.

The shape of a curve can be discovered by conducting synchronized measurements of stage and discharge and investigating the pattern of points on a scatter plot. Any physical change to the stream channel will alter this relation and these changes must be accounted for in the derivation of a discharge hydrograph from a time series of stage data.
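For the common case of a single hydraulic control, the relation is often approximated by a power law, Q = C·(h − h0)^b, where h0 is the stage of zero flow. As a minimal sketch (not any agency's official method; the offset h0 is assumed known, and the function name is hypothetical), the remaining parameters can be estimated from paired gaugings by least squares in log-log space:

```python
import math

def fit_rating_curve(gaugings, h0=0.0):
    """Fit Q = C * (h - h0)**b to paired (stage, discharge) gaugings
    by linear least squares in log-log space. h0 is the assumed
    stage of zero flow (cease-to-flow offset)."""
    xs = [math.log(h - h0) for h, q in gaugings]
    ys = [math.log(q) for h, q in gaugings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - b * mx)
    return c, b

# Synthetic gaugings generated from a known curve Q = 2.5 * (h - 0.2)**1.6
gaugings = [(h, 2.5 * (h - 0.2) ** 1.6) for h in (0.5, 0.8, 1.2, 1.7, 2.3)]
c, b = fit_rating_curve(gaugings, h0=0.2)
```

In real channels the control is usually compound, so the rating is built from several such segments joined at breakpoints; the sketch only illustrates how the curve shape emerges from the scatter of gaugings.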

In an ‘ideal’ world there would always be enough measurements to empirically characterize the shape of the curve for every channel configuration. In the ‘real’ world obtaining ‘enough’ measurements would require an enormous investment in stream gauging.

There are often only enough measurements to hint at what the true shape of the curve is and how it changes over time. The real skill in rating curve development is in interpreting these clues to disclose the truth.

The street-smarts method of detective work relies on a senior hydrographer with powers of deduction informed by many years of experience. The forensic method of detective work is an evidence-based approach that uses the scientific method to infer the truth from sparse evidence.

Both methods are valid but there are fewer and fewer hydrographers with the Columbo-like eye for detail and intuition to ask ‘just one more question’ needed to get at the underlying truth. Use of the scientific method can help hydrometric agencies cope with a changing demographic.

A poorly-conceived rating curve can produce discharge data that does not pass the test of hydrologic reason.

Extreme values from curve extrapolation may be too high (e.g. a runoff-to-rainfall ratio > 1) or too low. The early years of record may have peaks defined by a badly extrapolated curve relative to ‘better’, more mature extrapolations, resulting in an apparent (but not real) trend in peak flows. The seasonal water balance may be skewed by poorly modeled backwater effects.

Developers of hydrologic models are particularly attuned to whether the discharge time series make hydrologic sense or not.

Calibration of model parameters to force a fit to unreasonable data results in a loss of performance in the validation dataset relative to the calibration performance. Mistrust of hydrometric data limits the advancement of hydrologic science because an improved process understanding requires data that are trusted to be close to the truth.

It is a very good thing for the science and practice of hydrology that there are so many practitioners interested in learning how to develop better rating curves that are reliable, credible and defensible.


Free Whitepaper: 5 Best Practices for Building Better Stage-Discharge Rating Curves

A reliable rating curve is one that is credible, defensible, and minimizes re-work. This paper outlines 5 modern best practices used by highly effective hydrographers.

  • Dick Allison
    Posted at 6:03 pm, February 4, 2014


    Thanks for the e-mail and the Whitepaper. The paper is right to the point and doesn’t elaborate unnecessarily. It is of particular interest to me at this time, as I’m wrestling with most of the problem areas you write about when it comes to drawing up an S-D curve. As you are well aware, the Flood of 2013 in the Bow/Elbow River basins in Alberta was of such a magnitude that the subsequent high flows blew out not only natural controls in many streams, but in some cases, completely altered the channels near a lot of the water level recording sites.

    Personally, I do flow metering each winter for all 3 ski areas in Banff. The ski areas have water extraction permits with Parks Canada for snow making. To comply with the water permit restrictions, each site requires a flow measurement every 2 weeks from October 15 to the end of snowmaking, which can be into March or early April. Each of the monitoring sites has a staff gauge and a corresponding S-D curve and table that has been drawn up from up to 20 years of data.

    As mentioned above, the channels at all 3 sites have been completely changed, either by being re-constructed after the flood or left in their newly scoured state. On top of that, any previous bench marks have been totally destroyed, and the only method of computing shifts for the period was to use the latest pre-flood curve. Before October 15, 2013, new staff gauges were installed as close to the original sites as possible. For a starting point, and to try to use the last S-D table, a flow measurement was taken at each site on the day of the install. The measured flow, in m³/s, was looked up in the S-D table to get the corresponding water level, and the gauge plate was set to that level in hopes that the flow recession would track similarly to the previous gauge.

    This was also the only way to compute a backwater shift when ice formed in the channel. The ski areas require a flow calculation each day before operating under ice conditions, and of course, without making a flow measurement, a fairly current shift is necessary to get a flow number from the S-D table.
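    (An aside for readers: the table-inversion step described above, i.e. finding the stage that a rating table associates with a measured discharge, can be sketched as a simple linear interpolation. The table values below are invented for illustration and the function name is hypothetical.)

```python
def stage_for_discharge(table, q):
    """Invert a stage-discharge lookup table by linear interpolation:
    given a measured discharge q (m3/s), return the stage the table
    associates with it. `table` is a list of (stage, discharge) pairs
    sorted by increasing discharge."""
    for (h1, q1), (h2, q2) in zip(table, table[1:]):
        if q1 <= q <= q2:
            return h1 + (h2 - h1) * (q - q1) / (q2 - q1)
    raise ValueError("discharge outside table range")

# Hypothetical excerpt of an S-D table: (stage in m, discharge in m3/s)
table = [(0.30, 0.10), (0.45, 0.25), (0.60, 0.50), (0.80, 1.00)]
stage = stage_for_discharge(table, 0.75)
```

    Measuring outside the range of the table (or after the control has changed, as in the flood case above) is exactly where this lookup breaks down, which is why a fresh measurement on the day of the gauge install was needed.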

    Taking all the above into consideration, I’m having some problems drawing up a credible S-D curve at one of the sites. The channel is still in a state of flux due to the gravel and boulders in the control area. That, plus ice conditions that come and go, will make it hard to come up with a reliable set of flow data for the winter of 2013-14. That’s where your five best practices are going to be stretched to the limit. You’ve given me a renewed vigor to get some reasonable data for the annual water use report, and I thank you for that. In my 45 years of flow metering and hydrology I can’t remember having a bigger challenge – except possibly computing daily flows in the muskeg of the Horn River Basin in NE B.C. from 2011-13.

    Anyway, thanks for listening – and keeping on top of water quantity flow computations.

    Dick Allison, Hydrologist,
    Lethbridge, Alberta.

    • Stu Hamilton
      Posted at 6:04 pm, February 4, 2014

      Hi Dick,

      I believe you are on the right track to providing the best available data under the circumstances. A key point of the best practices approach is timely awareness of information requirements and appropriate adjustment of monitoring operations to collect the needed information.

      There are several problems with the use of a stage-discharge relation during episodes of rapid change in channel morphology during, and following, a disruptive event. Having said that, you really don’t have a viable alternative, because every other conceivable method is at least as challenging under these circumstances, if not more so.

      The loss of bench marks, staff gauges, ice conditions, and possible loss of bed armoring are all conditions that require adaptations to your monitoring plan. Without knowledge of local conditions and other constraints on field work I can’t comment on what specifically will work best for you. However, it is pretty clear to me that this is a rating development problem that requires a solution based on good field work rather than fancy analytics.


      • Dick Allison
        Posted at 6:04 pm, February 4, 2014

        Thanks for getting back, Stu. Another thing I run up against is clients who want good data, or are forced to have good data because of a license requirement, but don’t want to pay for the time and equipment needed. Luckily, we still have Government and forward-thinking private companies like yours that keep moving the ball forward and keep expanding the research part of the industry. Mother Nature just comes along once in a while to help us see if the new technologies work and what adjustments have to be made. Sure is a dynamic field to be working in!

        Thanks again.

  • Jérôme Le Coz
    Posted at 3:12 pm, February 5, 2014

    Dear Stu,

    I enjoyed your last whitepaper on rating curves: you provided a very useful and insightful summary of the major requirements for good management of rating curves. I fully agree that understanding hydraulic controls, applying systematic procedures and managing non-stationary effects are indeed key.

    I’d like to share a few more thoughts after reading your document:

    * You state that ‘The curve can be no better than the field observations’. Personally, I don’t think this is always true: some gaugings can be more uncertain than the established curve, especially when a dense set of uncertain gaugings is available to establish a stationary stage-discharge relation, or when a precise structure such as a thin-plate weir is used. I have exactly the same kind of discussion with French colleagues who agree with your statement (hydrometrists are often very proud of their measurements!), while I fear they may forget that their gaugings are also uncertain, rather than an ideal ground truth.

    * That the gaugings, especially those conducted consecutively in similar conditions, may present errors that are not mutually independent is a very interesting point. We have to think more about that, especially for automated streamgauging stations such as our video-based stations, which provide a lot of successive gaugings during floods, with potentially correlated errors.

    * I think that the Bayesian approach brings practical solutions to some important issues you raise in Best Practices 2 to 5. Is the curve fitting in Aquarius based on Bayesian inference? FYI, we developed such a Bayesian tool (BaRatin) for stationary rating curves, and a PhD was recently launched to extend BaRatin to time-varying curves. A recent article introduces the method.

    * I think that ‘data grading’ is not sufficient to efficiently qualify the derived discharge results (BP5), because end-users usually do not know how to proceed with data grading or approval levels in their applications. Most often, they simply ignore the quality grading… Uncertainty analysis is arguably the only way to deliver quantitative information on data quality that can be used further on a non-subjective basis. Uncertainty analysis would also help with the BP3 and BP4 objectives.

    Best regards

    Jérôme Le Coz
    Irstea (nouveau nom du Cemagref)
    Unité de Recherche Hydrologie-Hydraulique

    • Stu Hamilton
      Posted at 3:13 pm, February 5, 2014

      Hi Jerome,
      I will stick by my argument that the curve can be no better than the field measurements for the ‘normal’ case. I would agree with you that the curve can be ‘better’ than the measurements for the extraordinary case where errors are aleatoric; there is adequate density of measurements over the full range of stage; and all hydraulic factors affecting the control are stable through time. Even for the special case of a thin-plate weir there can be bypass leakage that is only detectable by accurate measurement.

      I don’t think our disagreement on this point is based on pride in measurement but rather firsthand experience with the difficulties of measurement in natural channels. Measurement errors, particularly for influential measurements made under adverse conditions (e.g. flood stage, small flow, aquatic vegetation), are qualitatively understandable but quantitatively uncorrectable.

      I confess to not being able to fully comprehend the Bayesian approach. My motivation for further study of advanced statistical methods for curve fitting is limited by the certainty that most hydrometric programs will never be funded to the level where there is sufficient quality and frequency of measurements to justify an IID assumption. I will get more interested when statistical methods capture relevant information obtained from hydrographs, cross sections, photographs, sketches, notes, hydraulic geometry, wind speed and direction, and equipment diagnostics.

      The reality is that most rating curves are developed with insufficient information to fully constrain the solution. The best practices approach was developed to address this reality. There is a wealth of observational information available to the hydrographer that can provide constraints to the rating solution. The key is to recognize the value of this information and to deliberately seek and document relevant information during field visits.

      I agree that a simple grade on data is inadequate, especially without any standardization of how grading should be implemented. My argument is for a comprehensive explanation of data quality. Early efforts at quantifying hydrometric uncertainty have been disinformative, and I fear it may be some time before uncertainty estimates can be produced with low uncertainty. Data users may, and many will, ignore the explanations of data quality. Their right to produce substandard work should not be a reason for hydrographers to do substandard work.


      • Jérôme Le Coz
        Posted at 1:22 pm, February 6, 2014

        Dear Stu,

        Thanks for sharing these views. I definitely think that we fully agree on the hydrometric facts and are only discussing the way things could be improved in practice. This is just a short follow-up, hoping I don’t waste too much of your time.

        I totally agree with you that fitting and grading rating curves cannot be reduced to ‘blind’ statistical analysis of gaugings, ignoring the necessary professional judgment, site knowledge and hydraulic understanding that skilled hydrographers are able to mobilize. That’s exactly why I like the Bayesian approach (which, as a simple hydraulician, I would not be able to comprehend without the help of some more advanced colleagues, by the way!): it is a statistical analysis (which is necessary for uncertainty analysis) that takes as an input the prior knowledge of the hydrographer, in order to constrain the solution accordingly. We simplified this operation in BaRatin so that field operators are able to use it and input their prior knowledge through very simple mathematical parameters. One advantage is that you can formally separate the observational and the conceptual sources of information that led to the resulting curve.

        Best wishes

        • Stu Hamilton
          Posted at 2:20 am, February 12, 2014


          I am not sure I will be able to fully understand BaRatin the way you have explained it by simply re-reading the paper. I think I need to experience it in the context of a hydrometric workflow before making any further comment.

          From a workflow perspective I don’t understand how you can separate the function of deriving/re-evaluating a rating curve from the function of modeling departures from the curve. Is BaRatin clever enough to transition amongst curves as sediment pulses move downstream past the gauge?

          One thing I would eventually like to do is investigate the influence tools have on analysis (i.e. if the only tool you have is a hammer then every problem starts to look like a nail).

          I have talked to Marianne Watson about comparing hydrographic analyses of dynamic channels by hydrographers who come from a background of blending rating curves with analyses by those who come from a background of USGS-style shift corrections. We haven’t done that yet because a robust implementation of the experiment would require a lot more time (and careful contemplation) than either of us currently has available.

          We share the opinion that both methods have merit within the relevant context. It is the hydrologic/hydraulic context that should drive the choice of method, not the cultural/technological legacy context. However, in the absence of evidence from an investigative study, any opinion we may share is inconsequential to the larger hydrometric community.

          What would indeed be fun would be to throw BaRatin into the mix. Man vs. machine, Kasparov vs. Deep Blue. Can we design an experiment that would evaluate human interpretation (using analytical paradigms developed independently in different regions of the world) against Bayesian statistical interpretation? There are a number of relevant performance metrics that could, potentially, be quantified with careful experimental design.

          As stated earlier, neither Marianne nor I have a ton of time readily available but your engagement in the conversation may open up new opportunity.

          If nothing else, I would be interested in getting some training in the use of BaRatin so that I can play around with it using datasets I am familiar with. Perhaps the next step could be a webex meeting where you can explain and demonstrate BaRatin to us.


          • Jérôme Le Coz
            Posted at 2:30 am, February 12, 2014

            Dear all,

            These are interesting perspectives. However, I’d just like not to oversell the current version of BaRatin, which is not and does not aim at being the hydrometric Deep Blue (rather Kasparov’s notepad):

            – the results from BaRatin are dependent on the expertise and knowledge that the user puts into the user-defined RC formula (a fixed number of segments defined by power functions) and into the prior values and uncertainties of its parameters. There is no artificial intelligence here, rather a convenient mathematical formalization of the hydrographer’s judgement. Since the procedure is systematic, two different users with the same knowledge of the station should obtain similar results in the end. In turn, a third user with additional or different experience should be able to improve the results by using more appropriate or more precise priors. In any case, the assumptions made are easy to explain and discuss on a formal basis.

            – so far, BaRatin does not address non-stationary stage-discharge relations: the ideal RC is assumed not to vary with time. Of course, this is a severe limitation, but we had to start from there. A PhD student has just started extending BaRatin to time-varying RCs. Here again, we want to formalize the physical assumptions made about the non-stationarity, and to get information from the gaugings separately.

            – anyway, using BaRatin for testing the impact of the tool on the results may indeed be of high interest. A demo would help, but I suggest waiting for the next release, which should come with a slightly different option for the uncertainty assessment and an English version of the user guide. A better GUI will be developed in 2014. In brief, even if it is already used by some real hydrographers, it is still a work in progress! Any suggestions to improve the tool are welcome, of course.
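            (An aside for readers: the prior-constrained fitting described above can be illustrated with a toy sketch. This is not BaRatin itself, just a minimal grid approximation of a Bayesian posterior for a single-segment power-law rating, with synthetic gaugings, made-up priors and a known offset; all names and numbers are hypothetical.)

```python
import math

# Synthetic gaugings from a known power-law control Q = 2.5 * (h - 0.2)**1.6
H0 = 0.2
gaugings = [(h, 2.5 * (h - H0) ** 1.6) for h in (0.5, 0.8, 1.2, 1.7, 2.3)]

def log_post(c, b, sigma=0.1):
    """Unnormalized log-posterior: Gaussian priors on ln(C) and b
    (standing in for the hydrographer's hydraulic judgement), plus a
    Gaussian likelihood on the log-residuals of the gaugings."""
    lp = -0.5 * ((math.log(c) - math.log(2.0)) / 1.0) ** 2  # prior on ln C
    lp += -0.5 * ((b - 1.7) / 0.5) ** 2                     # prior on exponent b
    for h, q in gaugings:
        r = math.log(q) - (math.log(c) + b * math.log(h - H0))
        lp += -0.5 * (r / sigma) ** 2
    return lp

# Grid approximation of the posterior over (C, b)
cs = [1.5 + 0.02 * i for i in range(101)]
bs = [1.0 + 0.012 * j for j in range(101)]
w = [[math.exp(log_post(c, b)) for b in bs] for c in cs]
total = sum(map(sum, w))
mean_c = sum(c * sum(row) for c, row in zip(cs, w)) / total
mean_b = sum(b * wij for row in w for b, wij in zip(bs, row)) / total
```

            With dense, informative gaugings the posterior mean lands close to the parameters that generated them; where gaugings are sparse or uncertain, the priors (the formalized judgement) dominate, which is the behavior the comment above describes.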

