obs4MIPs has defined a set of dataset indicators that provide information on a dataset's technical compliance with obs4MIPs standards and its suitability for climate model evaluation. The motivation for including this information is twofold.
First, the indicators provide an overview of key features of a given dataset's suitability for model evaluation. For example, does the dataset adhere to the key requirements of obs4MIPs (e.g., having a technical note and meeting the format specifications required to enable ESGF searching)? Similarly, is the model–observation comparison expected to be straightforward (e.g., direct comparison and/or differencing), or will it require applying a simulator to the model output? Another relevant consideration is the degree to which the dataset has been used for model evaluation and has publications that document that use.
Second, implementing this small suite of indicators allows a wider spectrum of observations to be included in obs4MIPs. In the initial stages of obs4MIPs, a basic tenet of the activity was to include only relatively mature datasets that had been exercised often by the climate model evaluation community. While this helped ensure the contributions were relevant for model evaluation, it also limited the opportunity for other or newer datasets to be exposed for potential use in model evaluation. The establishment of the new indicators will facilitate the monitoring and characterization of the increasingly broad set of obs4MIPs products hosted on the ESGF.
There are six indicators grouped into three categories: two indicators are associated with obs4MIPs technical requirements, three indicators are related to measures of dataset suitability and maturity relative to climate model evaluation, and one indicator is a measure of the comparison complexity associated with using the observations for model evaluation. The indicators, grouped by these categories, along with their possible values, are given in Table 1 below. Each value is color-coded so that the indicators can be readily displayed in a dataset search, as illustrated in Figure 1 below.
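To make the grouping concrete, the 2 + 3 + 1 category structure described above can be sketched as a simple data structure. This is only an illustrative sketch: the category structure follows the text, but the individual indicator names and the color-coded values are hypothetical placeholders, not the official definitions in Table 1.

```python
# Hypothetical sketch of the indicator grouping described in the text.
# Category structure (2 + 3 + 1 indicators) comes from the document;
# indicator names and color codes below are illustrative assumptions.

INDICATORS = {
    "technical_requirements": [       # two indicators
        "technical_note_provided",
        "format_compliance",
    ],
    "suitability_and_maturity": [     # three indicators
        "use_in_model_evaluation",
        "documenting_publications",
        "closeness_to_observed_reference",
    ],
    "comparison_complexity": [        # one indicator
        "comparison_complexity",
    ],
}

# Placeholder color coding, standing in for the scheme shown in Figure 1.
COLOR_CODES = {
    "green": "meets criterion",
    "yellow": "partially meets criterion",
    "red": "does not meet criterion",
}

def summarize(assignments):
    """Map every indicator to its assigned color for a search display.

    Indicators not yet assigned by the Task Team show as 'unassigned',
    reflecting that values can change as a dataset matures.
    """
    return {
        indicator: assignments.get(indicator, "unassigned")
        for category in INDICATORS.values()
        for indicator in category
    }
```

A search interface could then render `summarize(...)` for each dataset as the row of colored badges suggested by Figure 1.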
The values of the indicators for a given dataset are assigned by the obs4MIPs Task Team in consultation with the dataset provider. Note that the values can change over time as a dataset and/or its use for model evaluation matures, or as the degree to which the dataset meets obs4MIPs technical requirements improves. One of the more challenging indicators to assign is the "Closeness or robustness of measurement to the observed reference evaluation". To help guide the assignment of this indicator for a given dataset, see the examples outlined in Figure 2 below.
In brief, these indicators are meant to serve as approximate measures that provide an overall summary of a dataset's suitability for climate model evaluation. They do not represent an authoritative or in-depth scientific evaluation of particular products, such as that accomplished by the comprehensive efforts of the GEWEX Data Assessment Panel (GDAP).