
Basic Information

Name of Program

NCCHUNKIE

Version-Date

April 2022

Description-Date

September 2022

Catchwords

data conversion
postprocessor
automatic adjustment of the number of data records read at a time to the chunk size of the input data
automatic computation of chunk sizes for result variables to support orthogonal data access
parallelization (collective I/O) using MPI (see the sketch below)
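
The implementation language of NCCHUNKIE is not stated on this page; the sketch below only illustrates what collective parallel I/O on a netCDF-4 file looks like through the netCDF4 Python bindings built with MPI support. File and variable names are invented for the example.

    # Illustrative only: collective (parallel) I/O into a netCDF-4 file using
    # the netCDF4 Python bindings compiled against MPI; run e.g. with
    #   mpirun -np 4 python this_script.py
    from mpi4py import MPI
    import numpy as np
    from netCDF4 import Dataset

    comm = MPI.COMM_WORLD
    ds = Dataset("parallel_example.nc", "w", parallel=True,
                 comm=comm, info=MPI.Info())
    ds.createDimension("node", comm.size)
    var = ds.createVariable("rank_id", np.int64, ("node",))
    var.set_collective(True)        # switch from independent to collective I/O
    var[comm.rank] = comm.rank      # each MPI rank writes its own element
    ds.close()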


Acknowledgment: This project took advantage of netCDF software developed by UCAR/Unidata (www.unidata.ucar.edu/software/netcdf/).

Short Description of Functionality

Program NCCHUNKIE can be used to chunk data stored in CF-NETCDF.NC files (see the sketch after the following list):

  1. Chunk sizes are computed automatically, and all dimensions are chunked to support orthogonal data access;
  2. Resulting chunk sizes lie between Disc Block Size and Chunk Buffer Size;
  3. Online compression is applied while the data are stored (low compression level, level 1);
  4. A netCDF-4 file is created (the serial version creates NetCDF-4 classic model format);
  5. The parameters cache size and cache nelems used within the netCDF-4 API are determined automatically.
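
NCCHUNKIE itself is not a Python program as far as this page says; the following minimal sketch only demonstrates the netCDF-4 features listed above (chunking of every dimension, deflate level 1, per-variable chunk cache parameters) through the netCDF4 Python bindings. File, dimension and variable names as well as the chunk shape are invented for the example.

    # Illustrative sketch, not part of NCCHUNKIE: write a chunked, level-1
    # deflated variable into a netCDF-4 classic-model file.
    import numpy as np
    from netCDF4 import Dataset

    with Dataset("chunked_example.nc", "w", format="NETCDF4_CLASSIC") as ds:
        ds.createDimension("time", None)          # unlimited record dimension
        ds.createDimension("node", 10000)

        # Chunk both dimensions (orthogonal access), compress online (level 1).
        var = ds.createVariable(
            "waterlevel", "f4", ("time", "node"),
            chunksizes=(32, 2048),                # invented chunk shape
            zlib=True, complevel=1, shuffle=True,
        )

        # Per-variable chunk cache: size in bytes and number of chunk slots
        # (the "cache size" / "cache nelems" parameters mentioned above).
        var.set_var_chunk_cache(size=32 * 1024 * 1024, nelems=1009)

        var[0, :] = np.zeros(10000, dtype="f4")   # write one time record
        print(var.chunking())                     # -> [32, 2048]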

If an HDF error is detected during the read of a data record, the program tries to reconstruct the requested data set from adjacent (in time) records of the same variable. This type of repair works for time-dependent variables only.
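
How the reconstruction is carried out is not specified here; the sketch below assumes, purely for illustration, that a record that cannot be read is replaced by the mean of its two neighbouring time records (or by the single available neighbour at the ends of the time axis). File and variable names are again invented.

    # ASSUMED repair strategy, not NCCHUNKIE's documented method: reconstruct a
    # damaged time record from the adjacent records of the same variable.
    from netCDF4 import Dataset

    def read_record_with_repair(var, i):
        """Read time record i; on a read error, try to reconstruct it from
        records i-1 and i+1 of the same (time-dependent) variable."""
        try:
            return var[i, ...]
        except (OSError, RuntimeError):            # errors reported by netCDF/HDF5
            before = var[i - 1, ...] if i > 0 else None
            after = var[i + 1, ...] if i + 1 < var.shape[0] else None
            if before is not None and after is not None:
                return 0.5 * (before + after)      # assumed: mean of neighbours
            return before if before is not None else after

    with Dataset("input.nc") as ds:                # invented input file
        record = read_record_with_repair(ds.variables["waterlevel"], 5)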

Input-Files

  1. No input steering data file is required (parameters are given on the command line or specified interactively);
  2. UGRID CF NetCDF data set (file type CF-NETCDF.NC).

Output-Files

  1. UGRID CF NetCDF data set (file type CF-NETCDF.NC);
  2. informative printer file (file type NCCHUNKIE.sdr) with information on program execution, the time required for READ and WRITE of the data, and the effective data transfer rates;
  3. trace of program execution (file type NCCHUNKIE.trc).

Methodology

Some concepts published in https://support.hdfgroup.org/pubs/papers/2008-06_netcdf4_perf_report.pdf were used.
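
The report discusses, among other things, how chunk shapes relate to disc block size and to the chunk cache. The following is a hypothetical heuristic, not NCCHUNKIE's actual algorithm: it halves the largest chunk extent until the chunk fits into a given cache budget, while keeping every dimension chunked so that orthogonal access remains possible. Block size and cache budget values are assumptions.

    # Hypothetical chunk-shape heuristic (NCCHUNKIE's real algorithm is not
    # published here): keep every dimension chunked and shrink the chunk until
    # its byte size lies between the disc block size and the cache budget.
    import math

    def compute_chunk_sizes(shape, item_size,
                            block_size=4096,              # assumed disc block size
                            cache_size=4 * 1024 * 1024):  # assumed cache budget
        chunks = [max(1, int(n)) for n in shape]
        while math.prod(chunks) * item_size > cache_size and max(chunks) > 1:
            i = chunks.index(max(chunks))                 # halve the largest extent
            chunks[i] = (chunks[i] + 1) // 2
        if math.prod(chunks) * item_size < block_size:
            # Variable smaller than a disc block: use its full shape as one chunk.
            chunks = [max(1, int(n)) for n in shape]
        return chunks

    # Example: a 4-byte variable with 100000 time steps on 250000 nodes.
    print(compute_chunk_sizes((100000, 250000), 4))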

Program(s) to run before this Program

{{{preprocessor}}}

Program(s) to run after this Program

{{{postprocessor}}}

Additional Information

Language

{{{language}}}

Additional software

{{{add_software}}}

Original Version

{{{contact_original}}}

Maintenance

{{{contact_maintenance}}}

Documentation/Literature

{{{documentation}}}


back to Program Descriptions

