Usage

Using a set of YAML configuration files, fre make compiles an FMS-based model, and fre pp postprocesses the history output and runs diagnostic analysis scripts. Please note that model running is not yet supported in FRE 2025; continue to use FRE Bronx frerun.

YAML Framework

In order to utilize these FRE tools, a distributed YAML structure is required. This framework includes a main model yaml, a compile yaml, a platforms yaml, and post-processing yamls. Throughout the compilation and post-processing steps, combined yamls that will be parsed for information are created. Yamls follow a dictionary-like structure with [key]: [value] fields.

Yaml Formatting

Helpful information and format recommendations for creating yaml files.

  1. You can define a block of values as well as individual [key]: [value] pairs:

section name:
  key: value
  key: value

  2. [key]: [value] pairs can be made into a list by utilizing a -:

section name:
  - key: value
  - key: value

  3. If you want to associate information with a certain listed element, follow this structure:

section name:
  - key: value
    key: value
  - key: value
    key: value

Each dash indicates one list element.

  4. Yamls also allow for reusable variables. These variables are defined by:

&ReusableVariable Value

  5. Users can apply a reusable variable to a block of values. For example, everything under “section” is associated with the reusable variable:

section: &ReusableVariable
  - key: value
    key: value
  - key: value

  6. To reference a reusable variable elsewhere, in either the same or another yaml, use *:

*ReusableVariable

  7. If the reusable variable must be combined with other strings, the `!join` constructor is used. Simplified example:

&name "experiment-name"
...
pp_dir: !join [/archive/$USER/, *name, /, pp]

In this example, the variable pp_dir will be parsed as /archive/$USER/experiment-name/pp.
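The `!join` constructor is not part of the YAML specification; FRE registers it with its YAML loader. As a simplified sketch only (not FRE's actual implementation), registering such a constructor with PyYAML might look like:

```python
import yaml

def join_constructor(loader, node):
    # construct_sequence resolves each element, including *alias references,
    # before we concatenate them into a single string
    parts = loader.construct_sequence(node)
    return "".join(str(p) for p in parts)

yaml.SafeLoader.add_constructor("!join", join_constructor)

doc = """
name: &name "experiment-name"
pp_dir: !join [/archive/$USER/, *name, /, pp]
"""
config = yaml.safe_load(doc)
print(config["pp_dir"])  # /archive/$USER/experiment-name/pp
```

The alias `*name` is resolved by the YAML composer before construction, so the constructor only sees the final scalar values.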

Model Yaml

The model yaml defines reusable variables and paths to compile, post-processing, analysis, and cmor yamls. Required fields in the model yaml include: fre_properties, build, and experiments.

  • fre_properties: Reusable variables

    • list of variables

    • these values can be extracted from fre_properties in a group’s XML, if available

    • value type: string

  • build: paths to information needed for compilation

  • experiments: list of post-processing experiments

The model.yaml can follow the structure below:

fre_properties:
  - &variable1  "value1"  (string)
  - &variable2  "value2"  (string)

build:
  compileYaml: "path to the compile yaml in relation to model yaml"  (string)
  platformYaml: "path to platforms.yaml in relation to model yaml"  (string)

experiments:
  - name: "name of post-processing experiment"                                       (string)
    pp:
      - "path/to/post-processing/yaml for that experiment in relation to model yaml" (string)
    analysis:
      - "path/to/analysis/yaml for that experiment in relation to model yaml"        (string)
    cmor:
      - "path/to/cmor/yaml for that experiment in relation to model yaml"            (string)

Compile Yaml

To create the compile yaml, reference the compile section of an XML. Certain fields should be included under “compile”: experiment, container_addlibs, baremetal_linkerflags, and src.

The experiment can be explicitly defined, or can be used in conjunction with fre_properties defined in the model yaml, as seen in the code block below.

The compile yaml can follow the below structure:

compile:
  experiment: !join [*group_version, "_compile"]
  container_addlibs: "list of libraries and packages needed for linking in container" (string)
  baremetal_linkerflags: "list of linker flags of libraries and packages needed"      (array with string elements)
  src: (information about each component)
    - component: "component name"                                                     (string)
      requires: ["list of components that this component depends on"]                 (list of string)
      repo: "url of code repository"                                                  (string)
      branch: "version of code to clone"                                              (string / list of strings)
      paths: ["paths in the component to compile"]                                    (list of strings)
      cppdefs: "CPPDEFS to include in compiling the component"                        (string)
      makeOverrides: "overrides openmp target for MOM6"                               ('OPENMP=""') (string)
      otherFlags: "Include flags needed to retrieve other necessary code"             (string)
      doF90Cpp: True if the preprocessor needs to be run                              (boolean)
      additionalInstructions: additional instructions to run after checkout           (string)

Platform Yaml

The platform yaml contains user-defined information for both bare-metal and container platforms. Information includes the platform name, the compiler used, environment setup commands, the mkmf make template, and container build and run settings. This yaml file is not model specific.

platforms:
  - name: the platform name
    compiler: the compiler you are using
    envSetup: ["array of additional shell commands that are needed to compile the model" (this can include loading/unloading modules)]
    mkTemplate: The location of the mkmf make template
    modelRoot: The root directory of the model (where src, exec, experiments will go)
  - name: container platform name (FOR ONE STAGE BUILD)
    compiler: compiler you are using
    RUNenv: Commands needed at the beginning of a RUN in dockerfile
    modelRoot: The root directory of the model (where src, exec, experiments will go) INSIDE of the container (/apps)
    container: True if this is a container platform
    containerBuild: "podman" - the container build program
    containerRun: "apptainer" - the container run program
    containerBase: the base image used for the container
    mkTemplate: path to the mk template file
    containerOutputLocation: The path (str) to where the output model container will be located
  - name: container platform name (FOR TWO STAGE BUILD)
    compiler: compiler you are using
    RUNenv: Commands needed at the beginning of a RUN in dockerfile
    modelRoot: The root directory of the model (where src, exec, experiments will go) INSIDE of the container (/apps)
    container: True if this is a container platform
    containerBuild: "podman" - the container build program
    containerRun: "apptainer" - the container run program
    containerBase: the base image used for the container
    mkTemplate: path to the mk template file
    container2step: True/False if creating a 2 step container build
    container2base: the base image used for the second build step
    containerOutputLocation: The path (str) to where the output model container will be located

Post-processing Yamls

The post-processing yamls include information specific to experiments, such as component information. They can also define additional, experiment-specific fre_properties. If any reusable variables are repeated, the ones set in these yamls overwrite those set in the model yaml.

Post-processing yamls

The post-processing yamls include pp experiment yamls, along with a settings.yaml that applies to all pp yamls. Users can add as many components as needed, as well as define any experiment-specific fre_properties. The pp experiment yamls can follow the structure below:

postprocess:
  components:
    - type: "component name"                                                          (string)
      sources:
        - history_file: "history file to include with component"                      (string)
          variables: "specific variables to postprocess associated with component"    (array with string elements)
      xyInterp: "lat, lon grid configuration"                                         (string)
      interpMethod: "interpolation method"                                            (string)
      sourceGrid: "input grid type"                                                   (string)
      inputRealm: "domain of component"                                               (string)
      static:
        - source: "static history file to include with component"                     (string)
          variables: "specific static variables to postprocess"                       (array with string elements)
        - offline_diagnostic: "path to static offline diagnostic"                     (string)
          variables: "specific static variables to postprocess"                       (array with string elements)
      postprocess_on: "switch to postprocess this component or not"                   (boolean)

Required keys include:

  • type

  • sources

  • postprocess_on

Settings yaml

To define post-processing settings, a settings.yaml must also be created. This configuration file will include post-processing settings and switches and will be listed as the first yaml under the pp section of experiments.

This file can follow the format below:

directories:
  history_dir:
  pp_dir:
  analysis_dir:
  ptmp_dir:

postprocess:
  settings:
    site: "site name from file that defines parameters that can be specific to where the workflow is being run"        (string)
    history_segment: "amount of time covered by a single history file (ISO8601 datetime)"                              (string)
    pp_start: "start of the desired postprocessing (ISO8601 datetime)"                                                 (string)
    pp_stop: "end of the desired postprocessing (ISO8601 datetime)"                                                    (string)
    pp_chunks: "array of ISO8601 datetime durations, specifying the interval of simulated time per postprocessed file" (string)
    pp_grid_spec: "path to FMS grid definition tarfile"                                                                (string)
  switches:
    do_timeavgs: "switch to turn on/off time-average file generation"                                                  (boolean)
    clean_work: "switch to remove intermediate data files when they are no longer needed"                              (boolean)
    do_refinediag: "switch to run refine-diag script(s) on history file to generate additional diagnostics"            (boolean)
    do_atmos_plevel_masking: "switch to mask atmos pressure-level output above/below surface pressure/atmos top"       (boolean)
    do_preanalysis: "switch to run a pre-analysis script on history files"                                             (boolean)
    do_analysis: "switch to launch analysis scripts"                                                                   (boolean)
    do_analysis_only: "switch to only launch analysis scripts"                                                         (boolean)

Required keys include:

  • history_dir

  • pp_dir

  • ptmp_dir

  • site

  • history_segment

  • pp_chunks

  • pp_start

  • pp_stop

  • pp_grid_spec

  • clean_work

  • do_timeavgs

  • do_refinediag

  • do_atmos_plevel_masking

  • do_preanalysis

  • do_analysis

  • do_analysis_only

Build FMS model

fre make can compile a traditional “bare metal” executable or a containerized executable using a set of YAML configuration files.

Through the fre-cli, fre make can be used to create and run a checkout script, makefile, and compile a model.

Capabilities

Fremake Canopy supports:

  • multiple target use; the -t flag defines each target (for multiple platform-target combinations)

  • bare-metal build

  • container creation

  • parallel checkouts for bare-metal build

  • parallel model builds

  • one yaml format

  • additional library support if needed

Note: Users will not be able to create containers without access to podman. To get access, submit a helpdesk ticket.

Required configuration files:

  • Model Yaml

  • Compile Yaml

  • Platforms yaml

These yamls are combined and further parsed by the fre make tools (see the “Guide” section for the step-by-step process).

The final combined yaml includes the name of the compile experiment, the platform and target passed on the command line, as well as compile and platform yaml information. The platform that was passed corresponds to one defined in the platforms YAML file, which details essential configuration such as the runtime environment setup, the compiler to use, and the container applications to use. These configurations vary based on the site where the user is building the model executable or container. Additionally, the platform and target passed are used to construct the build directory in which the compile script is created and run.

Regarding container platforms in the YAML file, fre make supports one- and two-stage builds:

  • one-stage build:

    • contains the Intel compiler

    • this container CANNOT be shared, as we do not have the license to distribute the compiler

  • two-stage build:

    • includes platform information with a second container base to build from

    • strips out the Intel compiler, reducing the size of the container and making it shareable and easier to store.

To perform the two-stage build, choose the appropriate container platform when using the fre make subtools.

Guide

  1. Bare-metal Build:

# Create checkout script
fre make checkout-script -y [model yaml file] -p [platform] -t [target]

# Or create and RUN checkout script
fre make checkout-script -y [model yaml file] -p [platform] -t [target] --execute

# Create Makefile
fre make makefile -y [model yaml file] -p [platform] -t [target]

# Create the compile script
fre make compile-script -y [model yaml file] -p [platform] -t [target]

# Or create and RUN the compile script
fre make compile-script -y [model yaml file] -p [platform] -t [target] --execute

Users can also run all fre make commands in one subtool:

# Run all of fremake: creates checkout script, makefile, compile script, and model executable
fre make all -y [model yaml file] -p [platform] -t [target] [other options...] --execute
  2. Container Build:

For the container build, parallel checkouts are not supported. In addition, the platform must be a container platform.

Gaea users will not be able to create containers unless they have requested and been given podman access.

# Create checkout script
fre make checkout-script -y [model yaml file] -p [CONTAINER PLATFORM] -t [target]

# Create Makefile
fre make makefile -y [model yaml file] -p [CONTAINER PLATFORM] -t [target]

# Create a Dockerfile
fre make dockerfile -y [model yaml file] -p [CONTAINER PLATFORM] -t [target]

# Or create and RUN the Dockerfile
fre make dockerfile -y [model yaml file] -p [CONTAINER PLATFORM] -t [target] --execute

To run all of fre make subtools:

# Run all of fremake: creates checkout script, makefile, dockerfile, container
# creation script, and model container
fre make all  -y [model yaml file] -p [CONTAINER PLATFORM] -t [target] --execute

Quickstart

The quickstart instructions can be used with the null model example located in the fre-cli repository: https://github.com/NOAA-GFDL/fre-cli/tree/main/fre/make/tests/null_example

  1. Bare-metal Build:

# Create and run checkout script: checkout script will check out source code as
#                                 defined in the compile.yaml
fre make checkout-script -y null_model.yaml -p ncrc5.intel23 -t prod --execute

# Create Makefile
fre make makefile -y null_model.yaml -p ncrc5.intel23 -t prod

# Create and run the compile script to generate a model executable
fre make compile-script -y null_model.yaml -p ncrc5.intel23 -t prod --execute
  2. Bare-metal Build Multi-target:

# Create and run checkout script: checkout script will check out source code as
#                                 defined in the compile.yaml
fre make checkout-script -y null_model.yaml -p ncrc5.intel23 -t prod -t debug --execute

# Create Makefile
fre make makefile -y null_model.yaml -p ncrc5.intel23 -t prod -t debug

# Create and run a compile script for each target specified; generates model executables
fre make compile-script -y null_model.yaml -p ncrc5.intel23 -t prod -t debug --execute
  3. Container Build:

In order for the container to build successfully, the parallel checkout feature is disabled.

# Create checkout script
fre make checkout-script -y null_model.yaml -p hpcme.2023 -t prod

# Create Makefile
fre make makefile -y null_model.yaml -p hpcme.2023 -t prod

# Create the Dockerfile and container build script: the container build script (createContainer.sh)
#                                                   uses the Dockerfile to build a model container
fre make dockerfile -y null_model.yaml -p hpcme.2023 -t prod --execute
  4. Run all of fremake:

fre make all kicks off the compilation automatically.

# Bare-metal: create and run checkout script, create makefile, create and RUN compile script to
#             generate a model executable
fre make all -y null_model.yaml -p ncrc5.intel23 -t prod --execute

# Container: create checkout script, makefile, create dockerfile, and create and RUN the container
#            build script to generate a model container
fre make all -y null_model.yaml -p hpcme.2023 -t prod --execute

Run FMS model

Check back in the latter half of 2025 or so.

Postprocess FMS history output

fre pp regrids FMS history files and generates timeseries, climatologies, and static postprocessed files, with instructions specified in YAML.

Bronx plug-in refineDiag and analysis scripts can also be used, and a reimagined analysis script ecosystem is being developed and is available now (for adventurous users). The new analysis script framework is independent of and compatible with FRE (https://github.com/NOAA-GFDL/analysis-scripts). The goal is to combine the ease-of-use of legacy FRE analysis scripts with the standardization of model output data catalogs and python virtual environments.

In the future, output NetCDF files will be rewritten by CMOR by default, ready for publication to community archives (e.g. ESGF). Presently, standalone CMOR tooling is available as fre cmor.

By default, an intake-esm-compatible data catalog is generated and updated, containing a programmatic, metadata-enriched, searchable interface to the postprocessed output. The catalog tooling can be independently accessed as fre catalog.

FMS history files

FRE experiments are run in segments of simulated time. The FMS diagnostic manager, as configured in the experiment configuration files (diag yamls), saves a set of diagnostic output files, or “history files.” The history files are organized by label and can contain one or more temporal or static diagnostics. FRE (Bronx frerun) renames and combines the raw model output (which is usually on a distributed grid), and saves the history files in one tarfile per segment, date-stamped with the date of the beginning of the segment. The FMS diagnostic manager requires that variables within one history file share the same temporal frequency (e.g. daily, monthly, annual), but statics are allowed in any history file. Usually, variables in a history file share a horizontal and vertical grid.

Each history tarfile, again, is date-stamped with the date of the beginning of the segment, in YYYYMMDD format. For example, for a 5-year experiment with 6-month segments, there will be 10 history tarfiles containing the raw model output. Each history tarfile contains a segment’s worth of time (in this case 6 months).

19790101.nc.tar  19800101.nc.tar  19810101.nc.tar  19820101.nc.tar  19830101.nc.tar
19790701.nc.tar  19800701.nc.tar  19810701.nc.tar  19820701.nc.tar  19830701.nc.tar
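The tarfile count above follows directly from the run length divided by the segment length:

```python
# 5 years of simulated time saved in 6-month segments -> 10 history tarfiles
years = 5
segment_months = 6
n_tarfiles = years * 12 // segment_months
print(n_tarfiles)  # 10
```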

Each history file within the history tarfiles is also similarly date-stamped. Atmosphere and land history files are on the native cubed-sphere grid, which has 6 tiles that represent the global domain. Ocean, ice, and global scalar output have just one file that covers the global domain.

For example, if the diagnostic manager were configured to save atmospheric and ocean annual and monthly history files, the 19790101.nc.tar tarfile might contain:

tar -tf 19790101.nc.tar | sort

./19790101.atmos_month.tile1.nc
./19790101.atmos_month.tile2.nc
./19790101.atmos_month.tile3.nc
./19790101.atmos_month.tile4.nc
./19790101.atmos_month.tile5.nc
./19790101.atmos_month.tile6.nc
./19790101.atmos_annual.tile1.nc
./19790101.atmos_annual.tile2.nc
./19790101.atmos_annual.tile3.nc
./19790101.atmos_annual.tile4.nc
./19790101.atmos_annual.tile5.nc
./19790101.atmos_annual.tile6.nc
./19790101.ocean_month.nc
./19790101.ocean_annual.nc

The names of the history files, while often predictable, are arbitrary labels set in the Diagnostic Manager configuration (diag yamls). Each history file is a CF-standard NetCDF file that can be inspected with common NetCDF tools such as NCO, CDO, or ncdump.

Required configuration

  1. Set the history directory in your postprocessing yaml

directories:
  history_dir: /arch5/am5/am5/am5f7c1r0/c96L65_am5f7c1r0_amip/gfdl.ncrc5-deploy-prod-openmp/history
  2. Set the segment size as an ISO8601 duration (e.g. P1Y is “one year”)

postprocess:
  settings:
    history_segment: P1Y
  3. Set the date range to postprocess as ISO8601 dates

postprocess:
  settings:
    pp_start: 1979-01-01T0000Z
    pp_stop: 2020-01-01T0000Z

Postprocess components

The history-file namespace is a single layer, as shown above. By longtime tradition, FRE postprocessing namespaces are richer, with a distinction between timeseries, time-averaged, and static output datasets, and include frequency and chunk size in the directory structure.

Postprocessed files within a “component” share a horizontal grid, which can be the native grid or a regridded lat/lon grid.
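For illustration only (the exact layout depends on configuration), a monthly timeseries file from an atmos component with 5-year chunks might land under a path like:

```
pp/atmos/ts/monthly/5yr/atmos.197901-198312.tas.nc
```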

Required configuration

  1. Define the atmos and ocean postprocess components

postprocess:
  components:
    - type: atmos
      sources:
        - history_file: "atmos_month"
        - history_file: "atmos_annual"
    - type: ocean
      sources:
        - history_file: "ocean_month"
        - history_file: "ocean_annual"

XY-regridding

Commonly, native grid history files are regridded during postprocessing. To regrid to a lat/lon grid, configure your desired output grid, interpolation method, input grid type, and path to your FMS exchange grid definition.

Optional configuration (i.e. if xy-regridding is desired)

  1. Regrid the atmos and ocean components to a 1x1 degree grid

directories:
  pp_grid_spec: /archive/oar.gfdl.am5/model_gen5/inputs/c96_grid/c96_OM4_025_grid_No_mg_drag_v20160808.tar

postprocess:
  components:
    - type: atmos
      postprocess_on: True
      sources:
        - history_file: "atmos_month"
        - history_file: "atmos_annual"
      sourceGrid: cubedsphere
      inputRealm: atmos
      xyInterp: [180, 360]
      interpMethod: conserve_order2
    - type: ocean
      postprocess_on: True
      sources:
        - history_file: "ocean_month"
        - history_file: "ocean_annual"
      sourceGrid: tripolar
      inputRealm: ocean
      xyInterp: [180, 360]
      interpMethod: conserve_order1

Timeseries

Timeseries output is the most common type of postprocessed output.

Climatologies

Annual and monthly climatologies are generated. This is currently less fine-grained than Bronx: instead of a per-component switch, one switch (do_timeavgs) controls time-average generation for the entire postprocess.

Statics

Statics handling is currently underbaked (a known deficiency); static variables are taken from the “source” history files.

Analysis scripts

Surface masking for FMS pressure-level history

Legacy refineDiag scripts

Guide

  1. Using the main branch of the fre-workflows repository:

# Load cylc and FRE
module load cylc
module load fre/2025.04

# Clone fre-workflows repository into ~/cylc-src/[experiment name]__[platform name]__[target name]
fre pp checkout -e [experiment name] -p [platform] -t [target]

# Create/configure the combined yaml file, rose-suite.conf, and any necessary rose-app.conf files
fre pp configure-yaml -y [model yaml file] -e [experiment name] -p [platform] -t [target]

# Validate the rose experiment configuration files
fre pp validate -e [experiment name] -p [platform] -t [target]

# Install the experiment
fre pp install -e [experiment name] -p [platform] -t [target]

# Run the experiment
fre pp run -e [experiment name] -p [platform] -t [target]

Users can also run all fre pp subtools in one command:

# Load cylc and FRE
module load cylc
module load fre/2025.04

# Run all of fre pp
fre pp all -e [experiment name] -p [platform] -t [target] -y [model yaml file]
  2. Specifying a certain branch of the fre-workflows repository:

# Load cylc and FRE
module load cylc
module load fre/2025.04

# Clone fre-workflows repository into ~/cylc-src/[experiment name]__[platform name]__[target name]
fre pp checkout -e [experiment name] -p [platform] -t [target] -b [branch or tag name]

# Create/configure the combined yaml file, rose-suite.conf, and any necessary rose-app.conf files
fre pp configure-yaml -y [model yaml file] -e [experiment name] -p [platform] -t [target]

# Validate the rose experiment configuration files
fre pp validate -e [experiment name] -p [platform] -t [target]

# Install the experiment
fre pp install -e [experiment name] -p [platform] -t [target]

# Run the experiment
fre pp run -e [experiment name] -p [platform] -t [target]

To run all fre pp subtools in one command:

# Load cylc and FRE
module load cylc
module load fre/2025.04

# Run all of fre pp
fre pp all -e [experiment name] -p [platform] -t [target] -y [model yaml file] -b [branch or tag name]

CMORize Postprocessed Output

fre cmor is the FRE CLI command group for rewriting climate model output with CMIP-compliant metadata, a process known as “CMORization”. This set of tools leverages the external cmor python API within the fre ecosystem.

Background

cmor is an acronym for “climate model output rewriter”. The process of rewriting model-specific output files for model intercomparison projects (MIPs) using the cmor module is referred to as “CMORizing”.

The fre cmor tools are designed to work with any MIP project (CMIP6, CMIP7, etc.) by simply changing the table configuration files and controlled vocabulary as appropriate for the target MIP.

Getting Started

fre cmor provides several subcommands:

  • fre cmor run - Core engine for rewriting individual directories of netCDF files according to a MIP table

  • fre cmor yaml - Higher-level tool for processing multiple directories / MIP tables using YAML configuration

  • fre cmor find - Helper for exploring MIP table configurations for information on a specific variable

  • fre cmor varlist - Helper for generating variable lists from directories of netCDF files

  • fre cmor config - Generate a CMOR YAML configuration file from a post-processing directory tree

To see all available subcommands:

fre cmor --help

This cookbook provides practical examples and procedures for using fre cmor to CMORize climate model output. It demonstrates the relationship between the different subcommands and provides guidance on debugging CMORization processes.

Overview

The fre cmor process typically follows this pattern:

  1. Setup and Configuration - Identify your experiment parameters, create variable lists, and prepare experiment configuration

  2. CMORization - Use fre cmor run to process individual directories or fre cmor yaml for bulk processing

  3. Troubleshooting - Diagnose issues as needed (note: fre yamltools combine-yamls --use cmor can help debug YAML configurations)

Setup and Configuration

Before beginning CMORization, gather the following information:

  • Experiment name (-e) - The name of your experiment as defined in the model YAML

  • Platform (-p) - The platform configuration (e.g., gfdl.ncrc6-intel23, ncrc5.intel)

  • Target (-t) - The compilation target (e.g., prod-openmp, debug)

  • Post-processing directory - Location of your model’s post-processed output (e.g., /archive/user/experiment/pp/)

  • Output directory - Where CMORized output should be written

Identifying Parameters from FRE Output

If you have existing FRE output, you can extract the required parameters from the directory structure. The post-processing directory is typically located at:

/archive/username/experiment/platform-target/pp/

From this path, you can identify:

  • experiment = the experiment name (the path component after username)

  • platform-target = the combined platform and target string (e.g., ncrc5.intel-prod-openmp)

You will need to split the platform-target string appropriately to extract the individual platform and target values for use with fre cmor commands.
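As a hypothetical helper (not part of fre-cli), the split can be sketched like this, assuming the platform name itself contains no hyphen; platforms that do contain hyphens (e.g., gfdl.ncrc6-intel23) need manual splitting:

```python
# Split a FRE "platform-target" directory name at the FIRST hyphen.
# This is only valid when the platform name contains no hyphen itself.
def split_platform_target(s: str) -> tuple[str, str]:
    platform, _, target = s.partition("-")
    return platform, target

print(split_platform_target("ncrc5.intel-prod-openmp"))
# ('ncrc5.intel', 'prod-openmp')
```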

Creating Variable Lists

Variable lists map your local variable names to MIP table variable names. Generate a variable list from a directory of netCDF files:

fre cmor varlist \
    -d /path/to/component/output \
    -o generated_varlist.json

This tool examines filenames to extract variable names. It assumes FRE-style naming conventions (e.g., component.YYYYMMDD.variable.nc). Review the generated file and edit as needed to map local variable names to target MIP variable names.
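As an illustrative sketch of that idea only (not fre cmor varlist's actual implementation), extracting variable names from FRE-style filenames might look like:

```python
import json
from pathlib import Path

def build_varlist(filenames):
    # FRE-style names: component.YYYYMMDD.variable.nc -> the variable is the
    # second-to-last dot-separated field
    variables = {}
    for name in filenames:
        parts = Path(name).name.split(".")
        if len(parts) >= 4 and parts[-1] == "nc":
            local_name = parts[-2]
            # map each local name to itself; edit the values by hand afterwards
            # to point at the target MIP variable names
            variables[local_name] = local_name
    return variables

names = ["atmos.19790101.tas.nc", "atmos.19790101.pr.nc"]
print(json.dumps(build_varlist(names), indent=4))
```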

To verify variables exist in MIP tables, search for variable definitions:

fre -v cmor find \
    -r /path/to/cmip6-cmor-tables/Tables/ \
    -v variable_name

Or search for all variables in a varlist:

fre -v cmor find \
    -r /path/to/cmip6-cmor-tables/Tables/ \
    -l /path/to/varlist.json

This displays which MIP table contains the variable and its metadata requirements.

Preparing Experiment Configuration

The experiment configuration JSON file contains required metadata for CMORization (e.g., CMOR_input_example.json). This file should include:

  • Experiment metadata (experiment_id, activity_id, source_id, etc.)

  • Institution and contact information

  • Grid information (grid_label, nominal_resolution)

  • Variant labels (realization_index, initialization_index, etc.)

  • Parent experiment information (if applicable)

  • Calendar type

Refer to CMIP6 controlled vocabularies and your project’s requirements when filling in these fields.
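An illustrative fragment only, built from the field names listed above; every value is a placeholder to be replaced according to your project's controlled vocabularies:

```json
{
    "experiment_id": "piControl",
    "activity_id": "CMIP",
    "source_id": "PLACEHOLDER-MODEL-ID",
    "institution_id": "PLACEHOLDER-INSTITUTION",
    "grid_label": "gn",
    "nominal_resolution": "100 km",
    "realization_index": "1",
    "initialization_index": "1",
    "calendar": "noleap"
}
```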

Running Your CMORization

CMORizing One Table/Variable List in a Directory

The fre cmor run command is the fundamental building block for CMORization. It processes netCDF files from a single input directory according to a specified MIP table and variable list.

For processing individual directories or debugging specific issues, use fre cmor run directly:

fre -v -v cmor run \
    --indir /path/to/component/output \
    --varlist /path/to/varlist.json \
    --table_config /path/to/CMIP6_Table.json \
    --exp_config /path/to/experiment_config.json \
    --outdir /path/to/cmor/output \
    --grid_label gn \
    --grid_desc "native grid description" \
    --nom_res "100 km" \
    --run_one

Required arguments:

  • --indir: Directory containing netCDF files to CMORize

  • --varlist: JSON file mapping local variable names to target variable names

  • --table_config: MIP table JSON file (e.g., CMIP6_Omon.json)

  • --exp_config: Experiment configuration JSON with metadata

  • --outdir: Output directory root for CMORized files

Optional but recommended:

  • --grid_label: Grid label (gn for native, gr for regridded)

  • --grid_desc: Description of the grid

  • --nom_res: Nominal resolution (must match controlled vocabulary)

  • --opt_var_name: Process only files matching this variable name

  • --run_one: Process only one file (for testing)

  • --start: Start year (YYYY format)

  • --stop: Stop year (YYYY format)

  • --calendar: Calendar type (e.g., julian, noleap, 360_day)

Bulk CMORization Over Many Tables and Directories

The fre cmor yaml command provides a higher-level interface for CMORizing multiple components and MIP tables. It works by first calling fre yamltools combine-yamls to parse the YAML configuration, then generates and executes a set of fre cmor run commands based on that configuration.

This is the recommended approach for CMORizing multiple components and MIP tables in a systematic way.

Step 1: Test with Dry Run

Test the process without actually CMORizing files:

fre -v -v cmor yaml \
    -y /path/to/model.yaml \
    -e EXPERIMENT_NAME \
    -p PLATFORM \
    -t TARGET \
    --dry_run \
    --run_one

This prints the fre cmor run commands that would be executed, allowing you to verify:

  • Input directories are correct

  • Output paths are as expected

  • Variable lists are found

  • MIP tables are accessible

Step 2: Process One File for Testing

Run with --run_one to process a single file and confirm the configuration works end-to-end:

fre -v -v cmor yaml \
    -y /path/to/model.yaml \
    -e EXPERIMENT_NAME \
    -p PLATFORM \
    -t TARGET \
    --run_one

Step 3: Full CMORization

Once validated, remove --run_one for full processing:

fre -v cmor yaml \
    -y /path/to/model.yaml \
    -e EXPERIMENT_NAME \
    -p PLATFORM \
    -t TARGET

Common Issues and Solutions

fre cmor yaml Fails at YAML Combination Step

fre cmor yaml fails with key errors or anchor errors during the YAML combination step.

To debug this issue, manually run the YAML combination step:

fre -v yamltools combine-yamls \
    -y /path/to/model.yaml \
    -e EXPERIMENT_NAME \
    -p PLATFORM \
    -t TARGET \
    --use cmor \
    --output combined_cmor.yaml

Then verify:

  • All referenced YAML files exist and are readable

  • Anchors referenced in CMOR YAML are defined in the model YAML

  • The cmor: section exists in the experiment definition

  • The CMOR YAML path is relative to the model YAML location
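A quick scan of the combined output file can confirm the merge succeeded before moving on. The path below is the --output value passed to combine-yamls above; adjust it to match your own invocation.

```shell
# Sanity-check the combined YAML produced by combine-yamls.
# combined_cmor.yaml is the --output path used above; adjust as needed.
f=combined_cmor.yaml
if [ -f "$f" ] && grep -q '^cmor:' "$f"; then
    echo "cmor section present in $f"
else
    echo "problem: $f missing or has no top-level cmor: section"
fi
```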

No Files Found in Input Directory

fre cmor run reports no files matching the variable list.

Solutions:

  • Verify --indir points to the correct directory

  • Check that files follow expected naming conventions

  • Use fre cmor varlist to generate a list from actual filenames

  • Use --opt_var_name to target a specific variable for testing

Grid Metadata Issues

CMORization fails with errors about missing or invalid grid labels or nominal resolutions.

Solutions:

  • Ensure --grid_label matches controlled vocabulary (typically gn or gr)

  • Verify --nom_res is in the controlled vocabulary for your MIP

  • Check that grid descriptions are provided if overriding experiment config

  • Review the experiment configuration JSON for grid-related fields

Calendar or Date Range Issues

Files are skipped, or calendar-related errors occur.

Solutions:

  • Specify --calendar if the automatic detection fails

  • Use --start and --stop to limit the date range processed

  • Verify that datetime strings in filenames match the expected ISO 8601 format

  • Check that the calendar type in your data matches the MIP requirements
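As an illustration, post-processed time-series filenames typically embed a datetime range such as 195001-195912. The naming convention sketched below (component.YYYYMM-YYYYMM.variable.nc) is an assumption for this example, not prescribed by fre, so adjust the pattern to your own output.

```shell
# Check that a filename carries a recognizable YYYYMM-YYYYMM datetime range.
# Both the filename and the naming convention here are assumed examples.
fname="ocean_monthly.195001-195912.tos.nc"
if [[ "$fname" =~ \.[0-9]{6}-[0-9]{6}\. ]]; then
    echo "datetime range looks OK: $fname"
else
    echo "unexpected datetime format: $fname"
fi
```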

Example: Ocean Monthly Data CMORization

This example walks through CMORizing ocean monthly output, from YAML configuration to full processing:

Prepare the model YAML (excerpt from experiments section):

experiments:
  - name: "my_ocean_experiment"
    pp:
      - "pp_yamls/settings.yaml"
      - "pp_yamls/ocean_monthly.yaml"
    cmor:
      - "cmor_yamls/ocean_cmor.yaml"
    grid_yaml:
      - "grid_yamls/ocean_grids.yaml"

Prepare the CMOR YAML (cmor_yamls/ocean_cmor.yaml):

cmor:
  start: "1950"
  stop: "2000"
  mip_era: "CMIP6"
  exp_json: "/path/to/experiment_config.json"

  directories:
    pp_dir: "/path/to/pp"
    table_dir: "/path/to/cmip6-cmor-tables/Tables"
    outdir: "/path/to/cmor/output"

  table_targets:
    - table_name: "Omon"
      freq: "monthly"
      gridding:
        grid_label: "gn"
        grid_desc: "native tripolar ocean grid"
        nom_res: "100 km"

      target_components:
        - component_name: "ocean_monthly"
          variable_list: "/path/to/ocean_varlist.json"
          data_series_type: "ts"
          chunk: "P1Y"

Validate configuration:

fre -v yamltools combine-yamls \
    -y model.yaml \
    -e my_ocean_experiment \
    -p ncrc5.intel \
    -t prod-openmp \
    --use cmor \
    --output test_ocean.yaml

Test with dry run:

fre -v -v cmor yaml \
    -y model.yaml \
    -e my_ocean_experiment \
    -p ncrc5.intel \
    -t prod-openmp \
    --dry_run

Process one file:

fre -v -v cmor yaml \
    -y model.yaml \
    -e my_ocean_experiment \
    -p ncrc5.intel \
    -t prod-openmp \
    --run_one

Full processing:

fre cmor yaml \
    -y model.yaml \
    -e my_ocean_experiment \
    -p ncrc5.intel \
    -t prod-openmp

Tips

  • Use fre yamltools combine-yamls before attempting CMORization to surface YAML problems early

  • Use --dry_run with fre cmor yaml to preview the equivalent fre cmor run calls before execution

  • Use --no-print_cli_call with --dry_run to see the Python cmor_run_subtool(...) call instead of the CLI invocation — useful for debugging

  • Use --run_one with fre cmor run for testing to only process a single file and catch issues early

  • Use --run_one with fre cmor yaml to process a single file per fre cmor run call for quicker debugging

  • Use fre cmor config to auto-generate a CMOR YAML configuration from a post-processing directory tree — it scans components, cross-references against MIP tables, and writes both variable lists and the YAML that fre cmor yaml expects

  • Increase verbosity when debugging - Use -v to see INFO logging, and -vv (or -v -v) for DEBUG logging

  • Version control your YAML files - Track changes to your CMORization configuration and commit them to git!

  • Check controlled vocabulary - Verify grid labels and nominal resolutions are CV-compliant

  • Review experiment config - Ensure all required metadata fields are populated

Additional Resources

Generate data catalogs

For more complete information on the catalogbuilder tool, please see its official documentation.

build

Generate a catalog.

  • Builds json and csv format catalogs from user input directory path

  • Minimal Syntax: fre catalog build -i [input path] -o [output path]

  • Module(s) needed: n/a

  • Example: fre catalog build -i /archive/am5/am5/am5f3b1r0/c96L65_am5f3b1r0_pdclim1850F/gfdl.ncrc5-deploy-prod-openmp/pp -o ~/output --overwrite

validate

Validate a catalog.

  • Runs the comprehensive validator tool (validates vocabulary and ensures catalogs were generated properly)

  • Minimal Syntax: fre catalog validate [json_path] --vocab OR

    fre catalog validate [json_path] --proper_generation

  • Module(s) needed: n/a

  • Example: fre catalog validate ~/example_catalog.json --vocab