From BAWiki

Revision as of 07:58, 6 September 2021 by Anton Rosenhagen (talk | contribs)

Basic Information

Name of Program

NCDELTA

Version

August 2021
• differences of synoptic data (with optional restriction of the time period)
• differences of characteristic numbers
• differences for extensive quantities
• input data for Taylor diagrams
• median and percentiles (Q01, Q05, Q95, Q99)
• skill score according to Murphy (1988), equation 4
• skill score according to Taylor (2001), equations 4 and 5
• skill score according to Willmott (1981), index of agreement (d)
• parallelization using OpenMP
• automatic quality assurance (value range)
• automatic adjustment of the number of READ data to the chunk size of the data
• automatic setting of WRITE chunk sizes for written variables
• storage of the content of the ASCII input control files in the result file (as a variable)
• storage of MD5 hash values of input files in the result file (as a variable)
• optional use of the Message Passing Interface (MPI, MPI Forum)
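As a minimal sketch of the percentile statistics listed above (median, Q01, Q05, Q95, Q99), the following uses linear interpolation between order statistics; the exact percentile definition used by the program is not documented here, so this is an illustrative assumption.

```python
# Hedged sketch: percentiles via linear interpolation between order
# statistics. The program's exact percentile convention may differ.
def percentile(values, q):
    """Return the q-th percentile (0 <= q <= 100) of values."""
    s = sorted(values)
    if len(s) == 1:
        return s[0]
    pos = (len(s) - 1) * q / 100.0
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    frac = pos - lo
    return s[lo] + (s[hi] - s[lo]) * frac

data = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
median = percentile(data, 50)   # 50th percentile = median
q05 = percentile(data, 5)       # lower tail
q95 = percentile(data, 95)      # upper tail
```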

Acknowledgment: This project took advantage of netCDF software developed by UCAR/Unidata.

Short Description of Functionality

The program computes differences for comparable variables (primary variable pairs) as well as additional statistical data derived from those primary differences; for some types of data it can also compute input data for Taylor diagrams (for details see Differences of Calculated Results). Primary variable pairs are found essentially automatically, but the pairing can be altered to some extent by the user (see ncdelta.dat). Primary differences are computed as variant data minus reference data.
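The sign convention above (variant minus reference) can be sketched element-wise for one matched primary variable pair; the data values here are invented for illustration.

```python
# Hedged sketch of the primary-difference convention: variant minus
# reference, element-wise per time step of a matched variable pair.
reference = [10.0, 12.5, 11.0]   # reference data, one value per time step
variant   = [10.4, 12.0, 11.3]   # variant data on the same time axis

primary_difference = [v - r for v, r in zip(variant, reference)]
```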

Requirements for time-dependent primary variable pairs with individual constant time step:

  1. the two data sets may contain different numbers of data items (time steps), but the lengths of the time periods to be compared must be identical, while the periods themselves may differ;
  2. the constant time steps of the two data sets may differ, but the larger time step must be an integer multiple of the smaller one.
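The two requirements above can be sketched as simple compatibility checks; time steps are taken in seconds, the number of data items gives a period length of (n − 1) steps, and the function names are illustrative.

```python
# Hedged sketch of the compatibility rules for constant-time-step pairs.
def steps_compatible(dt_ref, dt_var):
    """True if the larger constant time step is an integer multiple of the smaller."""
    big, small = max(dt_ref, dt_var), min(dt_ref, dt_var)
    return big % small == 0

def periods_compatible(n_ref, dt_ref, n_var, dt_var):
    """True if both data sets cover compared periods of identical length."""
    return (n_ref - 1) * dt_ref == (n_var - 1) * dt_var
```

For example, a 300 s series with 13 items and a 600 s series with 7 items both span 3600 s and have compatible steps.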

Requirements for time-dependent primary variable pairs with variable time step:

  1. both data sets must contain an identical number of data items (time steps); the periods are allowed to differ.

Remarks concerning the spatial location of data:

  1. data sets may differ with respect to their geographical location;
  2. areas for which data are defined must overlap to some degree;
  3. coordinates may be given in different coordinate systems, e.g. Gauß-Krüger and UTM;
  4. data for a location are compared with data for the nearest location, provided the distance between the two locations does not exceed a prescribed maximum (see ncdelta.dat).
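The nearest-location pairing in remark 4 can be sketched as follows; in the real program the maximum distance comes from ncdelta.dat, while here it is simply a parameter, and the coordinates are assumed to already be in one common system.

```python
# Hedged sketch: pair each variant location with its nearest reference
# location, rejecting pairs beyond a prescribed maximum distance.
import math

def match_locations(var_pts, ref_pts, max_dist):
    """Return {variant index: reference index} for pairs within max_dist."""
    pairs = {}
    for i, (xv, yv) in enumerate(var_pts):
        j_best, d_best = None, None
        for j, (xr, yr) in enumerate(ref_pts):
            d = math.hypot(xv - xr, yv - yr)
            if d_best is None or d < d_best:
                j_best, d_best = j, d
        if d_best is not None and d_best <= max_dist:
            pairs[i] = j_best
    return pairs
```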

Remarks concerning the comparison of extensive variables:

  1. for extensive variables the respective sizes of the computational cells (area, length) are taken into account.
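One way to read the remark above is that extensive quantities are compared per unit cell size (area or length) rather than as raw cell totals; this sketch assumes that reading, and all values are invented.

```python
# Hedged sketch: compare extensive quantities per unit cell size, so cells
# of different area/length become comparable. The program's exact weighting
# scheme is not documented here.
def extensive_difference(var_vals, var_sizes, ref_vals, ref_sizes):
    """Per-cell difference of variant and reference values per unit size."""
    return [v / sv - r / sr
            for v, sv, r, sr in zip(var_vals, var_sizes, ref_vals, ref_sizes)]
```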


Input Files

  1. general input data (file type ncdelta.dat);
  2. reference data (file type CF-NETCDF.NC);
  3. variant data (file type CF-NETCDF.NC);
  4. data for automatic quality assurance (file type bounds_verify.dat).


Output Files

  1. results (file type CF-NETCDF.NC);
  2. (optional) informative printer file of the program execution (file type ncdelta.sdr);
  3. (optional) trace of the program execution (file type ncdelta.trc).


The program is subdivided into the following sections:

  1. read, check and print steering data prescribed by the user;
  2. read metadata for reference data;
  3. read metadata for variant data;
  4. copy metadata for reference as well as variant data to program specific data structures;
  5. compare metadata for substantial discrepancies (e.g. reference locations) between data sets;
  6. classify all reference as well as variant data;
  7. find primary variable pairs: each variant variable has one partner reference variable; primary computational results will be later derived from primary variable pairs;
  8. derive variables to be copied from reference as well as variant data file into the result file;
  9. compute interpolation matrices required for interpolation between reference data locations and variant data locations;
  10. create all metadata for the result file; essentially they stem from variables to be copied, from primary result variables, newly generated coordinate variables (time, vertical), as well as newly derived or to be copied measure and auxiliary variables;
  11. copy selected variables from the reference as well as the variant file to the result file;
  12. compute all primary variables, (new) time and vertical coordinate variables, as well as weights and auxiliary variables. For primary variables, optionally available auxiliary variables with the standard_name modifier status_flag are taken into account in case the meaning good is represented by an appropriate flag (bit);
  13. references for the different skill scores used and computed by the program:
    • Murphy, Allan H. (1988): "Skill Scores Based on the Mean Square Error and Their Relationship to the Correlation Coefficient". Monthly Weather Review, December 1988, pp. 2417–2424.
    • Taylor, Karl E. (2001): "Summarizing multiple aspects of model performance in a single diagram". Journal of Geophysical Research, Vol. 106, No. D7, April 16, 2001, pp. 7183–7192.
    • Willmott, Cort J. (1981): "On the validation of models". Physical Geography, pp. 184–194.
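Two of the referenced scores can be sketched compactly. Willmott's (1981) index of agreement d is implemented as commonly stated; for Murphy (1988) the sketch uses the MSE skill score against a climatology (observed-mean) reference, which is one common reading of his equation 4 — consult the papers for the program's exact definitions.

```python
# Hedged sketch of two skill scores; both equal 1.0 for a perfect match.
def willmott_d(pred, obs):
    """Willmott (1981) index of agreement d, in [0, 1]."""
    obar = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for p, o in zip(pred, obs))
    den = sum((abs(p - obar) + abs(o - obar)) ** 2 for p, o in zip(pred, obs))
    return 1.0 - num / den

def murphy_ss(pred, obs):
    """MSE skill score relative to the observed mean (climatology)."""
    obar = sum(obs) / len(obs)
    mse = sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)
    mse_clim = sum((o - obar) ** 2 for o in obs) / len(obs)
    return 1.0 - mse / mse_clim
```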

If an HDF error is detected while reading a data record, the program tries to reconstruct the wanted data from records adjacent in time for the same variable. This type of repair works for time-dependent variables only.
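The repair described above can be sketched as follows; the source only states that adjacent records are used, so the linear interpolation between the nearest readable neighbours is an assumption for illustration.

```python
# Hedged sketch: reconstruct an unreadable record (None) from its nearest
# readable neighbours in time, here by averaging or falling back to the
# single available neighbour.
def repair_record(records, k):
    """Replace a None record k using its nearest readable neighbours."""
    prev = next((records[i] for i in range(k - 1, -1, -1)
                 if records[i] is not None), None)
    nxt = next((records[i] for i in range(k + 1, len(records))
                if records[i] is not None), None)
    if prev is not None and nxt is not None:
        return (prev + nxt) / 2.0
    return prev if prev is not None else nxt
```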

Program(s) to run before this Program


Program(s) to run after this Program


Additional Information



Additional software


Original Version

G. Lang, S. Spohr


G. Lang, S. Spohr


back to Program Descriptions