The Heidelberg Tributary Loading Program
What is the Heidelberg Tributary Loading Program?
A tributary loading program measures the “load,” that is, the amount of pollutants that move downstream past a sampling station on a river or creek each year. The tons of pollutants moving past a sampling station represent both the pollutant load to downstream receiving waters, such as Lake Erie or the Ohio River, and the pollutant export from the watershed upstream from the sampling station. Accurate pollutant loading measurements require information on stream flows (cubic feet per second) and frequent pollutant concentration measurements (milligrams per liter) in the water flowing past the sampling station.
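The flow-times-concentration calculation described above can be sketched in a few lines of code. This is an illustrative unit conversion only, not the NCWQR's actual load-estimation procedure, which involves frequent sampling and integration over time; the function and constant names are our own.

```python
# Illustrative sketch of a daily pollutant-load calculation:
# load = stream flow x concentration, with unit conversions.
# This is NOT the NCWQR's actual estimation method.

CFS_TO_L_PER_S = 28.3168      # 1 cubic foot = 28.3168 liters
SECONDS_PER_DAY = 86_400
MG_PER_METRIC_TON = 1e9

def daily_load_metric_tons(flow_cfs, conc_mg_per_l):
    """Pollutant load for one day, in metric tons.

    flow_cfs       -- mean daily stream flow, cubic feet per second
    conc_mg_per_l  -- pollutant concentration, milligrams per liter
    """
    liters_per_day = flow_cfs * CFS_TO_L_PER_S * SECONDS_PER_DAY
    return conc_mg_per_l * liters_per_day / MG_PER_METRIC_TON

# Example: a day at 1,000 cfs and 0.1 mg/L of phosphorus moves about
# 0.24 metric tons past the station; an annual load is the sum of
# such daily loads over the year.
```

Summing this daily quantity over a year (with flows from USGS gauges and concentrations from the frequent sampling described above) gives an annual load of the kind the HTLP reports.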
All stream flow measurements used by the Heidelberg Tributary Loading Program (HTLP) are provided by the United States Geological Survey (USGS). Heidelberg University's National Center for Water Quality Research (NCWQR) collects and analyzes approximately 450 to 500 water samples for pollutants at each monitoring station each year. From these data the NCWQR calculates annual pollutant loads from each station, including the loads of nutrients, sediments and pesticides delivered to Lake Erie or the Ohio River. The loads from tributaries largely determine the nutrient (especially phosphorus) concentrations in lakes that cause harmful algal blooms (HABs). As of 2016, the HTLP includes 18 sampling stations in Ohio and southeastern Michigan. Together, those stations permit the calculation of the pollutant export (loads) from over 50% of Ohio's land area.
The HTLP is funded by a combination of state and federal agencies, foundations and industries, and the resulting data for most of the monitoring stations are publicly available.
What is important about the Heidelberg Tributary Loading Program?
Detailed knowledge of concentrations and loads of nutrients and suspended sediment exported through these river systems has added greatly to our understanding of the impacts of rural, largely agricultural land management practices on stream water quality and ultimately the quality of both the Ohio River and Lake Erie. This information has also permitted detection of trends in water quality, especially changes in loads of several forms of phosphorus and nitrogen that greatly influence the development of harmful algal blooms and oxygen-devoid “dead zones” in Lake Erie, inland lakes and reservoirs, and the Gulf of Mexico.
The HTLP is the only program of its type in the Great Lakes Basin. It has made possible the following initiatives and accomplishments:
- The HTLP comprises the primary source of data used to estimate the total annual loading of phosphorus to all of Lake Erie.
- Data from the HTLP were used in the establishment, under the Great Lakes Water Quality Agreement, of the target annual phosphorus load of 11,000 metric tons for Lake Erie, and provided the evidence in the 1980s that the target annual load was being met except in years of high tributary runoff.
- Evidence from the HTLP showed that the loading of dissolved reactive phosphorus into Lake Erie began to increase in the mid-1990s, at the same time that harmful algal blooms also increased. In response, the interagency Ohio Lake Erie Phosphorus Task Force was formed in 2007 to investigate the causes of the increased loading and to recommend countermeasures to federal and state agencies.
- HTLP data enable direct assessment and evaluation of the watershed-scale effectiveness of best management practice (BMP) implementation programs aimed at nutrient load reductions from watersheds to Lake Erie, Grand Lake St. Marys, and the Ohio River.
- Scientists at numerous universities who are developing models linking land uses to Lake Erie water quality rely on HTLP data to calibrate and validate their models. These universities include the University of Michigan, Ohio State University, University of Toledo, Tufts University, University of California Berkeley, University of Wisconsin Green Bay, University of Waterloo in Canada, and others.
- The HTLP network is a crucial component of the newly formed Eastern Corn Belt node of the Long-Term Agro-ecosystem Research program of the USDA Agricultural Research Service (ARS). The node is a partnership of the NCWQR and ARS’s Soil Drainage Research Unit and National Soil Erosion Research Laboratory.
- HTLP data sets are widely accessed by government agencies, educational institutions, commercial organizations and non-profits, and they attract numerous research and implementation grants to Ohio.
2017 Project Study Plan and Quality Assurance Plan
Data quality control and data screening
The data provided in the River Data files have all been screened by NCWQR staff. The purpose of the screening is to remove outliers that staff deem likely to reflect sampling or analytical errors rather than outliers that reflect the real variability in stream chemistry. Often, in the screening process, the causes of the outlier values can be determined and appropriate corrective actions taken. These may involve correction of sample concentrations or deletion of those data points.
This micro-site contains data for approximately 126,000 water samples collected beginning in 1974. We cannot guarantee that each data point is free from sampling bias/error, analytical errors, or transcription errors. However, since its beginnings, the NCWQR has operated a substantial internal quality control program and has participated in numerous external quality control reviews and sample exchange programs. These programs have consistently demonstrated that data produced by the NCWQR are of high quality.
A note on detection limits and zero and negative concentrations
It is routine practice in analytical chemistry to determine method detection limits and/or limits of quantitation, below which analytical results are considered less reliable or unreliable. We do this as part of our standard procedures. Many laboratories, especially those associated with agencies such as the U.S. EPA, do not report individual values that are less than the detection limit, even if the analytical equipment returns such values. This is in part because individual measurements below the detection limit may not be considered valid in litigation.
The measured concentration consists of the true but unknown concentration plus random instrument error, which is usually small compared to the range of expected environmental values. In a sample for which the true concentration is very small, perhaps even essentially zero, it is possible to obtain an analytical result of 0 or even a small negative concentration. Results of this sort are often “censored” and replaced with the statement “<DL” or “<2”, where DL is the detection limit, in this case 2. Some agencies now follow the unfortunate convention of writing “-2” rather than “<2”.
Censoring these low values creates a number of problems for data analysis. How do you take an average? If you leave out these numbers, you get a biased result because you did not toss out any other (higher) values. Even if you replace negative concentrations with 0, a bias ensues, because you’ve chopped off some portion of the lower end of the distribution of random instrument error.
For these reasons, we do not censor our data. Values of -9 and -1 are used as missing value codes, but all other negative and zero concentrations are actual, valid results. Negative concentrations make no physical sense, but they make analytical and statistical sense. Users should be aware of this, and if necessary make their own decisions about how to use these values. Particularly if log transformations are to be used, some decision on the part of the user will be required.
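A minimal sketch of how a user might prepare these data for analysis, assuming only the -9/-1 missing-value codes described above; the log-transform floor shown is an arbitrary user choice, not an NCWQR recommendation:

```python
MISSING_CODES = {-9, -1}   # missing-value codes used in the HTLP data files

def split_valid(concentrations):
    """Drop missing-value codes; keep everything else.

    Zero and other negative values are actual, valid results and are
    retained, so that averages remain unbiased (no censoring).
    """
    return [c for c in concentrations if c not in MISSING_CODES]

def log_ready(concentrations, floor=0.001):
    """One possible pretreatment before a log transformation: clamp
    valid values to a small positive floor. The floor value here is a
    hypothetical user choice and affects the results; other strategies
    (e.g., substitution schemes for nondetects) also exist."""
    return [max(c, floor) for c in split_valid(concentrations)]
```

For example, `split_valid([-9, 0.5, -0.02, 0, -1])` keeps the zero and the small negative result while discarding the two missing-value codes.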