10. Ocean State Estimation Packages
This chapter describes packages that have been introduced for ocean state estimation purposes and in relation to automatic differentiation (see Automatic Differentiation). Various examples in this chapter rely on two model configurations that can be set up as explained in Test Cases For Estimation Package Capabilities.
10.1. ECCO: model-data comparisons using gridded data sets
Author: Gael Forget
The functionalities implemented in pkg/ecco are: (1) output time-averaged model fields to compare with gridded data sets; (2) compute normalized model-data distances (i.e., cost functions); (3) compute averages and transports (i.e., integrals). The first is achieved as the model runs forward in time, whereas the other two occur after time integration has completed. Following [FCH+15] the total cost function is formulated generically as

\[J(\vec{u}) = \sum_i \alpha_i\, \vec{d}_i^T R_i^{-1} \vec{d}_i + \sum_j \beta_j\, \vec{u}_j^T \vec{u}_j \tag{10.1}\]
\[\vec{d}_i = \mathcal{P}(\vec{m}_i - \vec{o}_i) \tag{10.2}\]
\[\vec{m}_i = \mathcal{S}\,\mathcal{D}\,\mathcal{M}(\vec{v}) \tag{10.3}\]
\[\vec{v} = \mathcal{Q}(\vec{u}) \tag{10.4}\]
\[\vec{u} = \mathcal{R}\,\vec{u}' \tag{10.5}\]

using symbols defined in Table 10.1. Per Equation (10.3), model counterparts (\(\vec{m}_i\)) to observational data (\(\vec{o}_i\)) derive from adjustable model parameters (\(\vec{v}\)) through model dynamics integration (\(\mathcal{M}\)), diagnostic calculations (\(\mathcal{D}\)), and averaging in space and time (\(\mathcal{S}\)). Alternatively \(\mathcal{S}\) stands for subsampling in space and time in the context of Section 10.2 (PROFILES: model-data comparisons at observed locations). Plain model-data misfits (\(\vec{m}_i-\vec{o}_i\)) can be penalized directly in Eq. (10.1), but penalized misfits (\(\vec{d}_i\)) more generally derive from \(\vec{m}_i-\vec{o}_i\) through the generic \(\mathcal{P}\) post-processor (Eq. (10.2)). Eqs. (10.4)-(10.5) pertain to the model control parameter adjustment capabilities described in Section 10.3 (CTRL: Model Parameter Adjustment Capability).
symbol | definition
------ | ----------
\(\vec{u}\) | vector of non-dimensional control variables
\(\vec{v}\) | vector of dimensional control variables
\(\alpha_i, \beta_j\) | misfit and control cost function multipliers (1 by default)
\(R_i\) | data error covariance matrix (\(R_i^{-1}\) are weights)
\(\vec{d}_i\) | a set of model-data differences
\(\vec{o}_i\) | observational data vector
\(\vec{m}_i\) | model counterpart to \(\vec{o}_i\)
\(\mathcal{P}\) | post-processing operator (e.g., a smoother)
\(\mathcal{M}\) | forward model dynamics operator
\(\mathcal{D}\) | diagnostic computation operator
\(\mathcal{S}\) | averaging/subsampling operator
\(\mathcal{Q}\) | preprocessing operator
\(\mathcal{R}\) | preconditioning operator
10.1.1. Generic Cost Function
The parameters available for configuring generic cost function terms in
data.ecco
are given in Table 10.2 and
examples of possible specifications are available in:
MITgcm_contrib/verification_other/global_oce_cs32/input/data.ecco
MITgcm_contrib/verification_other/global_oce_cs32/input_ad.sens/data.ecco
MITgcm_contrib/gael/verification/global_oce_llc90/input.ecco_v4/data.ecco
The gridded observation file name is specified by gencost_datafile. Observational time series may be provided as one big file or split into yearly files ending in '_1992', '_1993', etc. The corresponding \(\vec{m}_i\) physical variable is specified via the gencost_barfile root (see Table 10.3). A file named as specified by gencost_barfile gets created, where averaged fields are written progressively as the model steps forward in time. After the final time step this file is re-read by cost_generic.F to compute the corresponding cost function term. If gencost_outputlevel = 1 and gencost_name = 'foo' then cost_generic.F outputs model-data misfit fields (i.e., \(\vec{d}_i\)) to a file named 'misfit_foo.data' for offline analysis and visualization.
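As an illustration, a cost term of this kind might be declared in data.ecco roughly as follows. This is a sketch: the namelist group name, the term index, the 'm_sst' barfile root, and the data/uncertainty file names are assumptions to be checked against the example files listed above.

```
# data.ecco (sketch; term index and file names are hypothetical)
 &ECCO_GENCOST_NML
 gencost_name(1)        = 'foo',
 gencost_barfile(1)     = 'm_sst',
 gencost_datafile(1)    = 'sst_product',
 gencost_errfile(1)     = 'sst_sigma',
 gencost_avgperiod(1)   = 'month',
 gencost_outputlevel(1) = 1,
 mult_gencost(1)        = 1.,
 /
```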
In the current implementation, model-data error covariance matrices \(R_i\) omit off-diagonal terms. Specifying \(R_i\) thus boils down to providing uncertainty fields (\(\sigma_i\) such that \(R_i=\sigma_i^2\)) in a file specified via gencost_errfile. By default \(\sigma_i\) is assumed to be time-invariant, but a \(\sigma_i\) time series of the same length as the \(\vec{o}_i\) time series can be provided using the variaweight option (Table 10.4). By default cost functions are quadratic, but \(\vec{d}_i^T R_i^{-1} \vec{d}_i\) can be replaced with \(R_i^{-1/2} \vec{d}_i\) using the nosumsq option (Table 10.4).
In principle, any averaging frequency should be possible, but only 'day', 'month', 'step', and 'const' are implemented for gencost_avgperiod. If two different averaging frequencies are needed for a variable used in multiple cost function terms (e.g., daily and monthly), then an extension starting with '_' should be added to gencost_barfile (such as '_day' and '_mon'). If two cost function terms use the same variable and frequency, however, then using a common gencost_barfile saves disk space.
Climatologies of \(\vec{m}_i\) can be formed from the time series of model averages in order to compare with climatologies of \(\vec{o}_i\) by activating the 'clim' option via gencost_preproc and setting the corresponding gencost_preproc_i integer parameter to the number of records (i.e., a number of months, days, or time steps) per climatological cycle. The generic post-processor (\(\mathcal{P}\) in Eq. (10.2)) also allows model-data misfits to be, for example, smoothed in space by setting gencost_posproc to 'smooth' and specifying the smoother parameters via gencost_posproc_c and gencost_posproc_i (see Table 10.4). Other options associated with the computation of Eq. (10.1) are summarized in Table 10.4 and further discussed below. Multiple gencost_preproc / gencost_posproc options may be specified per cost term.
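As a sketch, requesting a comparison of monthly climatologies (12 records per climatological cycle) for a hypothetical first cost term could look like the fragment below; the exact index layout of the gencost_preproc arrays should be checked against ecco_readparms.F.

```
 gencost_avgperiod(1)   = 'month',
 gencost_preproc(1,1)   = 'clim',
 gencost_preproc_i(1,1) = 12,
```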
In general the specification of gencost_name is optional, has no impact on the end result, and only serves to distinguish between cost function terms in the model output (STDOUT.0000, STDERR.0000, costfunction000, misfit*.data). Exceptions listed in Table 10.6, however, activate alternative cost function codes (in place of cost_generic.F) described in Section 10.1.3. In this section and in Table 10.3 (unlike in other parts of the manual) 'zonal' / 'meridional' are to be taken literally, and these components are centered (i.e., not at the staggered model velocity points). Preparing gridded velocity data sets for use in cost functions thus boils down to interpolating them to XC / YC.
The gencost_kLev_select option allows the user to select the vertical level of a 3D model field, thereby creating a 2D field out of that slice which is used for the cost computation. For example, drifter velocities correspond to the second depth level of the grid used in ECCO v4, so model velocities are selected from this depth level to compare to the drifter observations. The user can specify this in data.ecco with gencost_kLev_select(i) = 2, where i is replaced with the index of that cost function term.
parameter | type | function
--------- | ---- | --------
gencost_name | character(*) | Name of cost term
gencost_barfile | character(*) | File to receive model counterpart \(\vec{m}_i\) (see Table 10.3)
gencost_datafile | character(*) | File containing observational data \(\vec{o}_i\)
gencost_avgperiod | character(5) | Averaging period for \(\vec{o}_i\) and \(\vec{m}_i\) (see text)
gencost_outputlevel | integer | Greater than 0 will output misfit fields
gencost_errfile | character(*) | Uncertainty field name (not used in Section 10.1.2)
gencost_mask | character(*) | Mask file name root (used only in Section 10.1.2)
mult_gencost | real | Multiplier \(\alpha_i\) (default: 1)
gencost_preproc | character(*) | Preprocessor names
gencost_preproc_c | character(*) | Preprocessor character arguments
gencost_preproc_i | integer(*) | Preprocessor integer arguments
gencost_preproc_r | real(*) | Preprocessor real arguments
gencost_posproc | character(*) | Postprocessor names
gencost_posproc_c | character(*) | Postprocessor character arguments
gencost_posproc_i | integer(*) | Postprocessor integer arguments
gencost_posproc_r | real(*) | Postprocessor real arguments
… | real | Data less than this value will be omitted
… | real | Data greater than this value will be omitted
… | real | Data points equal to this value will be omitted
gencost_startdate1 | integer | Start date of observations (YYYYMMDD)
gencost_startdate2 | integer | Start date of observations (HHMMSS)
gencost_is3d | logical | Needs to be true for 3D fields
… | integer | Not fully implemented (used only in Section 10.1.3)
… | integer | Not fully implemented (used only in Section 10.1.3)
gencost_kLev_select | integer | Vertical level of a 3D field to create a 2D field for cost computation
gencost_useDensityMask | logical | Needs to be true if density-following feature is used
gencost_sigmaLow | real | Used to define minimum density surface chosen
gencost_sigmaHigh | real | Used to define maximum density surface chosen
gencost_refPressure | real | Defines reference pressure used in density-following feature
… | real | Used in defining density levels in density-following feature
variable name | description | remarks
------------- | ----------- | -------
… | sea surface height | free surface + ice + global steric correction
… | sea surface temperature | first level potential temperature
… | sea surface salinity | first level salinity
… | bottom pressure | phiHydLow
… | seaice area | from pkg/seaice
… | seaice effective thickness | from pkg/seaice
… | snow effective thickness | from pkg/seaice
… | potential temperature | three-dimensional
… | salinity | three-dimensional
… | zonal velocity | three-dimensional
… | meridional velocity | three-dimensional
… | zonal wind stress | from pkg/exf
… | meridional wind stress | from pkg/exf
… | zonal wind | from pkg/exf
… | meridional wind | from pkg/exf
… | atmospheric temperature | from pkg/exf
… | atmospheric specific humidity | from pkg/exf
… | precipitation | from pkg/exf
… | downward shortwave | from pkg/exf
… | downward longwave | from pkg/exf
… | wind speed | from pkg/exf
… | vertical/diapycnal diffusivity | three-dimensional, constant
… | GM diffusivity | three-dimensional, constant
… | isopycnal diffusivity | three-dimensional, constant
… | geothermal heat flux | constant
… | bottom drag | constant
name | description | arguments
---- | ----------- | ---------
clim | Use climatological misfits | integer: no. of records per climatological cycle
… | Use time mean of misfits | —
… | Use anomalies from time mean | —
variaweight | Use time-varying weight \(W_i\) | —
nosumsq | Use linear misfits | —
… | Multiply \(\vec{m}_i\) by a scaling factor | real: the scaling factor
… | Subtract mean misfit | —
… | Mask (ignore) misfit above minimum depth | real: minimum water depth (\(< 0\))
smooth | Smooth misfits | character: smoothing scale file; integer: smoother no. of time steps
10.1.2. Generic Integral Function
The functionality described in this section is operated by cost_gencost_boxmean.F. It is primarily aimed at obtaining a mechanistic understanding of a chosen physical variable via adjoint sensitivity computations (see Automatic Differentiation) as done for example in [FWL+15, HWP+11, MGZ+99]. Thus the quadratic term in Eq. (10.1) (\(\vec{d}_i^T R_i^{-1} \vec{d}_i\)) is by default replaced with a \(d_i\) scalar that derives from model fields through a generic integral formula (Eq. (10.3)). The specification of gencost_barfile again selects the physical variable type. Current valid options to use cost_gencost_boxmean.F are reported in Table 10.5. A suffix starting with '_' can again be appended to gencost_barfile.
The integral formula is defined by masks provided via binary files whose names are specified via gencost_mask. There are two cases: (1) if gencost_mask = 'foo_mask' and gencost_barfile is of the 'm_boxmean*' type, then the model will search for horizontal, vertical, and temporal mask files named foo_maskC, foo_maskK, and foo_maskT; (2) if instead gencost_barfile is of the 'm_horflux_' type, then the model will search for foo_maskW, foo_maskS, foo_maskK, and foo_maskT.
The 'C' mask or the 'W' / 'S' masks are expected to be two-dimensional fields. The 'K' and 'T' masks (both optional; all 1 by default) are expected to be one-dimensional vectors. The 'K' vector length should match Nr. The 'T' vector length should match the number of records that the specification of gencost_avgperiod implies, but there is no restriction on its values. In case #1 ('m_boxmean*') the 'C' and 'K' masks should consist of +1 and 0 values, and a volume average will be computed accordingly. In case #2 ('m_horflux*') the 'W', 'S', and 'K' masks should consist of +1, -1, and 0 values, and an integrated horizontal transport (or overturn) will be computed accordingly.
In order to define a control volume using both a depth range and a density range, use a 'K' mask and also set gencost_useDensityMask = .TRUE. When the density range feature is active, the control volume is defined at each time step by the bounds set in the 'K' mask and also by the density range specified by the parameters gencost_sigmaLow (the minimum density to be included in the control volume) and gencost_sigmaHigh (the maximum density to be included in the control volume). By default gencost_refPressure should be set to 0, but other values can be used (e.g., 1000 dbar, 2000 dbar).
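A density-constrained box might then be requested with settings along these lines; the term index and the numerical values are hypothetical placeholders.

```
 gencost_useDensityMask(1) = .TRUE.,
 gencost_sigmaLow(1)       = 36.6,
 gencost_sigmaHigh(1)      = 37.0,
 gencost_refPressure(1)    = 2000.,
```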
variable name | description | remarks
------------- | ----------- | -------
… | mean of theta over box | specify box
… | mean of salt over box | specify box
… | mean of SSH over box | specify box
… | total shelfice freshwater flux over box | specify box
… | total shelfice heat flux over box | specify box
m_horflux_vol | volume transport through section | specify transect
10.1.3. Custom Cost Functions
This section (very much a work in progress…) pertains to the special cases of cost_gencost_bpv4.F, cost_gencost_seaicev4.F, cost_gencost_sshv4.F, cost_gencost_sstv4.F, cost_gencost_transp.F, and cost_gencost_moc.F. The cost_gencost_transp.F function can be used to compute a transport of volume, heat, or salt through a specified section (non-quadratic cost function). To this end one sets gencost_name = 'transp*', where * is an optional suffix starting with '_', and sets gencost_barfile to one of m_trVol, m_trHeat, and m_trSalt.
The cost_gencost_moc.F function is similar to the transport function, but is intended to compute the meridional overturning streamfunction maximum based on the volumetric transport integrated from the floor to the surface, as in Smith and Heimbach (2019) [SH19]. Therefore, this function is intended to work with gencost_barfile = m_trVol, and note that the first 3 characters of gencost_name must be moc, as depicted in Table 10.6. Users can specify a latitude band to compute the MOC with appropriately defined West ('W') and South ('S') masks as described in Section 10.1.2. As an example see parameter group (3) in this data.ecco file.
Note: the functionality in cost_gencost_transp.F is not regularly tested. Users interested in computing volumetric transports through a section are recommended to use the m_horflux_vol capabilities described above, as these are regularly tested. Users interested in computing heat and salt transport should note the following about m_trHeat and m_trSalt:
The advection scheme associated with the transports may be inconsistent with the model unless ENUM_CENTERED_2ND is implemented
Bolus velocities are not included
Diffusion components are not included
name | description | remarks
---- | ----------- | -------
… | sea surface height | mean dynamic topography (SSH - geoid)
… | sea surface height | Along-Track Topex/Jason SLA (level 3)
… | sea surface height | Along-Track ERS/Envisat SLA (level 3)
… | sea surface height | Along-Track GFO-class SLA (level 3)
… | sea surface height | Large-Scale SLA (from the above)
… | sea surface height | Global-Mean SLA (from the above)
… | bottom pressure | GRACE maps (level 4)
… | sea surface temperature | Along-Swath SST (level 3)
… | sea surface temperature | Large-Scale SST (from the above)
… | sea ice concentration | needs seaice adjoint (level 4)
… | model sea ice deficiency | proxy penalty (from the above)
… | model sea ice excess | proxy penalty (from the above)
transp* | volume transport | specify masks (Section 10.1.2)
transp* | heat transport | specify masks (Section 10.1.2)
transp* | salt transport | specify masks (Section 10.1.2)
moc* | meridional overturning streamfunction maximum | specify masks (Section 10.1.2)
10.1.4. Key Routines
TBA … ecco_readparms.F, ecco_check.F, ecco_summary.F, cost_generic.F, cost_gencost_boxmean.F, ecco_toolbox.F, ecco_phys.F, cost_gencost_customize.F, cost_averagesfields.F, …
10.1.5. Compile Options
TBA … ALLOW_GENCOST_CONTRIBUTION, ALLOW_GENCOST3D, ALLOW_PSBAR_STERIC, ALLOW_SHALLOW_ALTIMETRY, ALLOW_HIGHLAT_ALTIMETRY, ALLOW_PROFILES_CONTRIBUTION, ALLOW_ECCO_OLD_FC_PRINT, …
packages required for some functionalities: smooth, profiles, ctrl
10.2. PROFILES: model-data comparisons at observed locations
Author: Gael Forget
The purpose of pkg/profiles is to allow sampling of MITgcm runs according to a chosen pathway (after a ship or a drifter, along altimeter tracks, etc.), typically leading to easy model-data comparisons. Given input files that contain positions and dates, pkg/profiles will interpolate the model trajectory at the observed locations. In particular, pkg/profiles can be used to do model-data comparison online and formulate a least-squares problem (ECCO application).
The pkg/profiles namelist is called data.profiles. In the example below, it includes two input netcdf file names (ARGOifremer_r8.nc and XBT_v5.nc) that should be linked to the run directory, and cost function multipliers that only matter in the context of automatic differentiation (see Automatic Differentiation). The first index is a file number and the second index (in mult* only) is a variable number. By convention, the variable number is an integer ranging from 1 to 6: temperature, salinity, zonal velocity, meridional velocity, sea surface height anomaly, and passive tracer.
The netcdf input file structure is illustrated in the case of XBT_v5.nc. To create such files, one can use the MITprof matlab toolbox obtained from https://github.com/gaelforget/MITprof . At run time, each file is scanned to determine which variables are included; these will be interpolated. The (final) output file structure is similar but with interpolated model values in prof_T etc., and it contains model mask variables (e.g., prof_Tmask). The raw model output consists of one binary (or netcdf) file per processor. The final netcdf output is to be built from those using netcdf_ecco_recompose.m (offline).
When the k2 option is used (e.g., for cubed sphere runs), the input file is to be completed with interpolation grid points and coefficients computed offline using netcdf_ecco_GenericgridMain.m. Typically, you would first provide the standard namelist and files. After detecting that interpolation information is missing, the model will generate special grid files (profilesXCincl1PointOverlap* etc.) and then stop. You then want to run netcdf_ecco_GenericgridMain.m using the special grid files. This operation could eventually be inlined.
Example: data.profiles
#
# ********************
# PROFILES cost function
# ********************
 &PROFILES_NML
#
 profilesfiles(1)= 'ARGOifremer_r8',
 mult_profiles(1,1) = 1.,
 mult_profiles(1,2) = 1.,
 profilesfiles(2)= 'XBT_v5',
 mult_profiles(2,1) = 1.,
#
 /
Example: XBT_v5.nc
netcdf XBT_v5 {
dimensions:
iPROF = 278026 ;
iDEPTH = 55 ;
lTXT = 30 ;
variables:
double depth(iDEPTH) ;
depth:units = "meters" ;
double prof_YYYYMMDD(iPROF) ;
prof_YYYYMMDD:missing_value = 9999. ;
prof_YYYYMMDD:long_name = "year (4 digits), month (2 digits), day (2 digits)" ;
double prof_HHMMSS(iPROF) ;
prof_HHMMSS:missing_value = 9999. ;
prof_HHMMSS:long_name = "hour (2 digits), minute (2 digits), second (2 digits)" ;
double prof_lon(iPROF) ;
prof_lon:units = "(degree E)" ;
prof_lon:missing_value = 9999. ;
double prof_lat(iPROF) ;
prof_lat:units = "(degree N)" ;
prof_lat:missing_value = 9999. ;
char prof_descr(iPROF, lTXT) ;
prof_descr:long_name = "profile description" ;
double prof_T(iPROF, iDEPTH) ;
prof_T:long_name = "potential temperature" ;
prof_T:units = "degree Celsius" ;
prof_T:missing_value = 9999. ;
double prof_Tweight(iPROF, iDEPTH) ;
prof_Tweight:long_name = "weights" ;
prof_Tweight:units = "(degree Celsius)^-2" ;
prof_Tweight:missing_value = 9999. ;
}
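Conceptually, the cost contribution of one profile variable is the weighted sum of squared model-data differences over all profiles and depths. The sketch below reproduces that arithmetic with synthetic arrays; it is an illustration of the diagonal-weight least-squares term in Eq. (10.1), not the pkg/profiles code itself, and the array sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
iprof, idepth = 4, 6                       # tiny synthetic collection
prof_T = rng.normal(15.0, 2.0, (iprof, idepth))           # observations
model_T = prof_T + rng.normal(0.0, 0.5, (iprof, idepth))  # sampled model
prof_Tweight = np.full((iprof, idepth), 1.0 / 0.5**2)     # weights = sigma^-2
prof_Tweight[0, 3:] = 0.0                  # zero weight marks missing data

misfit = model_T - prof_T
cost = float(np.sum(prof_Tweight * misfit**2))  # least-squares cost term
ndata = int(np.count_nonzero(prof_Tweight))     # contributing data points
```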
10.3. CTRL: Model Parameter Adjustment Capability
Author: Gael Forget
Package ctrl provides an interface for defining the control variables of an optimization. After defining the CPP flags ALLOW_GENTIM2D_CONTROL, ALLOW_GENARR2D_CONTROL, and ALLOW_GENARR3D_CONTROL in CTRL_OPTIONS.h, the parameters available for configuring generic cost terms in data.ctrl are given in Table 10.7. The control variables are stored as fields on the model grid in files $ctrlvar.$iternumber.data/meta, and corresponding gradients in ad$ctrlvar.$iternumber.data/meta, where $ctrlvar is defined in data.ctrl (see Table 10.8 for possible options) and $iternumber is the 10-digit iteration number of the optimization. Further, ctrl maps the gradient fields to a vector that can be handed over to an optimization routine (see Section 10.5) and maps the resulting new control vector to the model grid, unless the CPP flag EXCLUDE_CTRL_PACK is defined in CTRL_OPTIONS.h.
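For orientation, a data.ctrl declaring one 3D initial-condition control and one time-varying surface control might look as follows. The namelist group name, indices, control prefixes, and weight file names are assumptions; the verification experiments referenced earlier are the authoritative examples.

```
# data.ctrl (sketch; indices and file names are hypothetical)
 &CTRL_NML_GENARR
 xx_genarr3d_file(1)   = 'xx_theta',
 xx_genarr3d_weight(1) = 'sigma_theta.data',
 xx_gentim2d_file(1)   = 'xx_tauu',
 xx_gentim2d_weight(1) = 'sigma_tauu.data',
 xx_gentim2d_period(1) = 1209600.,
 mult_gentim2d(1)      = 1.,
 /
```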
parameter | type | function
--------- | ---- | --------
xx_gen*_file | character(*) | Control name: prefix from Table 10.8 + suffix
xx_gen*_weight | character(*) | Weights in the form of \(\sigma_{\vec{u}_j}^{-2}\)
xx_gen*_bounds | real(5) | Apply bounds
xx_gen*_preproc | character(*) | Control preprocessor(s) (see Table 10.9)
xx_gen*_preproc_c | character(*) | Preprocessor character arguments (see Table 10.10)
xx_gen*_preproc_i | integer(*) | Preprocessor integer arguments
xx_gen*_preproc_r | real(*) | Preprocessor real arguments
gen*Precond | real | Preconditioning factor (\(=1\) by default)
mult_gen* | real | Cost function multiplier \(\beta_j\) (\(= 1\) by default)
xx_gentim2d_period | real | Frequency of adjustments (in seconds)
xx_gentim2d_startdate1 | integer | Adjustment start date
xx_gentim2d_startdate2 | integer | Default: model start date
xx_gentim2d_cumsum | logical | Accumulate control adjustments
xx_gentim2d_glosum | logical | Global sum of adjustment (output is still 2D)
name | description
---- | -----------
2D, time-invariant controls |
xx_etan | initial sea surface height
xx_bottomdrag | bottom drag
xx_geothermal | geothermal heat flux
xx_shicoefft | shelfice thermal transfer coefficient (see Section 10.3.1)
xx_shicoeffs | shelfice salinity transfer coefficient (see Section 10.3.1)
xx_shicdrag | shelfice drag coefficient (see Section 10.3.1)
xx_depth | bottom topography (requires defining ALLOW_DEPTH_CONTROL)
3D, time-invariant controls |
xx_theta | initial potential temperature
xx_salt | initial salinity
xx_uvel | initial zonal velocity
xx_vvel | initial meridional velocity
xx_kapgm | GM coefficient
xx_kapredi | isopycnal diffusivity
xx_diffkr | diapycnal diffusivity
2D, time-varying controls |
xx_atemp | atmospheric temperature
xx_aqh | atmospheric specific humidity
xx_swdown | downward shortwave
xx_lwdown | downward longwave
xx_precip | precipitation
xx_runoff | river runoff
xx_uwind | zonal wind
xx_vwind | meridional wind
xx_tauu | zonal wind stress
xx_tauv | meridional wind stress
… | globally averaged precipitation?
xx_hflux | net heat flux
xx_sflux | net salt (EmPmR) flux
xx_shifwflx | shelfice melt rate
name | description | arguments
---- | ----------- | ---------
WC01 | Correlation modeling | integer: operator type (default: 1)
smooth | Smoothing without normalization | integer: operator type (default: 1)
… | Average period replication | integer: cycle length
… | Alias for … | …
… | Periodic average subtraction | integer: cycle length
variaweight | Use time-varying weight | —
noscaling | Do not scale with \(\sigma_{\vec{u}_j}\) | —
… | Sets … | —
… | Sets … | —
name | description | arguments
---- | ----------- | ---------
log10ctrl | Control adjustments apply to the base-10 logarithm of a 2D or 3D array (not available for time-varying controls) | See Section 10.3.2
The control problem is non-dimensional by default, as reflected in the omission of weights in the control penalties (\(\vec{u}_j^T\vec{u}_j\) in (10.1)). Non-dimensional controls (\(\vec{u}_j\)) are scaled to physical units (\(\vec{v}_j\)) through multiplication by the respective uncertainty fields (\(\sigma_{\vec{u}_j}\)), as part of the generic preprocessor \(\mathcal{Q}\) in (10.4). Besides the scaling of \(\vec{u}_j\) to physical units, the preprocessor \(\mathcal{Q}\) can include, for example, spatial correlation modeling (using an implementation of Weaver and Courtier, 2001) by setting xx_gen*_preproc = 'WC01'. Alternatively, setting xx_gen*_preproc = 'smooth' activates the smoothing part of WC01, but omits the normalization. Additionally, bounds for the controls can be specified by setting xx_gen*_bounds. In forward mode, adjustments to the \(i^\text{th}\) control are clipped so that they remain between xx_gen*_bounds(i,1) and xx_gen*_bounds(i,4). If xx_gen*_bounds(i,1) \(<\) xx_gen*_bounds(i+1,1) for \(i = 1, 2, 3\), then the bounds will "emulate a local minimum;" otherwise, the bounds have no effect in adjoint mode.
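A minimal numerical sketch of the forward-mode clipping is given below. It is an illustrative helper, not the pkg/ctrl implementation; lo and hi play the roles of the outermost bounds xx_gen*_bounds(i,1) and xx_gen*_bounds(i,4).

```python
import numpy as np

def clip_adjustment(xx, lo, hi):
    """Keep control adjustments within [lo, hi], as the outermost
    bounds do for a control field in forward mode."""
    return np.clip(xx, lo, hi)

adj = np.array([-2.0, -0.5, 0.0, 0.7, 3.0])   # raw adjustments
clipped = clip_adjustment(adj, lo=-1.0, hi=1.0)
```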
For the case of time-varying controls, the frequency is specified by xx_gentim2d_period. The generic control package interprets special values of xx_gentim2d_period in the same way as the exf package: a value of \(-12\) implies cycling monthly fields, while a value of \(0\) means that the field is steady. Time-varying weights can be provided by specifying the preprocessor variaweight, in which case the xx_gentim2d_weight file must contain as many records as the control parameter time series itself (approximately the run length divided by xx_gentim2d_period).
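For example, a daily-varying wind-stress control with time-varying weights might be declared as follows; the index layout and file names are hypothetical.

```
 xx_gentim2d_file(1)      = 'xx_tauu',
 xx_gentim2d_period(1)    = 86400.,
 xx_gentim2d_preproc(1,1) = 'variaweight',
 xx_gentim2d_weight(1)    = 'sigma_tauu_daily.data',
```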
The parameter mult_gen* sets the multiplier for the corresponding cost function penalty (\(\beta_j\) in (10.1); \(\beta_j = 1\) by default). The preconditioner, \(\cal{R}\), does not directly appear in the estimation problem, but only serves to push the optimization process in a certain direction in control space; this operator is specified by gen*Precond (\(=1\) by default).
Note that control parameters exist for each individual near-surface atmospheric state variable, as well as the net heat and salt (EmPmR) fluxes. The user must be mindful of control parameter combinations that make sense according to their specific setup, e.g., with the EXF package.
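The scaling step of \(\mathcal{Q}\) amounts to pointwise multiplication by the uncertainty field, while the control penalty stays non-dimensional. A minimal sketch, assuming no correlation modeling or other preprocessing:

```python
import numpy as np

u = np.array([0.5, -1.0, 2.0])      # non-dimensional control vector
sigma = np.array([0.1, 0.2, 0.05])  # uncertainty field (physical units)

v = sigma * u                       # dimensional adjustment: v_j = sigma_j * u_j
penalty = float(u @ u)              # u^T u, no weights needed
```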
10.3.1. Shelfice Control Parameters
The available ice-shelf control parameters depend on the form of transfer coefficient used in the simulation.
The adjustments xx_shicoefft and xx_shicoeffs are available when the velocity-independent form of transfer coefficients is used, by setting #undef SHI_ALLOW_GAMMAFRICT in SHELFICE_OPTIONS.h at compile time (see Table 8.22) and SHELFICEuseGammaFrict = .FALSE. in data.shelfice (see Table 8.23). These parameters provide adjustments to \(\gamma_T\) and/or \(\gamma_S\) directly. If only one of the two is used, the value of the other is set based on the control adjustments used together with SHELFICEsaltToHeatRatio, which can be set in data.shelfice (see Runtime parameters and default values; all parameters are in namelist group SHELFICE_PARM01).
The adjustment xx_shicdrag is available with the velocity-dependent form of the ice-ocean transfer coefficients, which is specified by #define SHI_ALLOW_GAMMAFRICT and SHELFICEuseGammaFrict = .TRUE. at compile time and run time, respectively. This parameter provides adjustments to the drag coefficient at the ice-ocean boundary, but by default only adjusts the drag coefficient used to compute the thermal and freshwater fluxes, neglecting the momentum contributions. To allow the contribution directly to momentum fluxes, specify xx_genarr2d_preproc_c(*,iarr) = 'mom' in data.ctrl.
10.3.2. Logarithmic Control Parameters
As indicated in Table 10.10, the base-10 logarithm of a control field can be adjusted by specifying the character option xx_genarr*d_preproc_c(k2,iarr) = 'log10ctrl', with k2 and iarr as appropriate, and *d denoting that 2d or 3d are available. As a concrete example, if the control parameter is updating fld2d, then the field will be set as follows:
fld2d(i,j,bi,bj) = 10**( log10InitVal + xx_genarr2d(i,j,bi,bj,iarr) )
where log10InitVal is a scalar with a default value of 0, which can be changed by setting xx_genarr*d_preproc_r(k2,iarr). This is useful in the case where doInitXX = .TRUE.
Concretely, if we had an initial guess of fld2d = 10^4, then one could set the following in data.ctrl:
xx_genarr2d_file(1) = 'xx_fld2d'
xx_genarr2d_weight(1) = 'nonzero_weights.data'
xx_genarr2d_preproc_c(1,1) = 'log10ctrl'
xx_genarr2d_preproc_r(1,1) = 4. ,
Note that the log10ctrl option can only be used when a weight file is provided, and that this log-option cannot be used with xx_gen*_preproc(k2,iarr) = 'noscaling'.
10.4. SMOOTH: Smoothing And Covariance Model
Author: Gael Forget
TO BE CONTINUED…
10.5. The line search optimisation algorithm
Author: Patrick Heimbach
10.5.1. General features
The line search algorithm is based on a quasi-Newton variable storage method implemented by [GLemarechal89].
TO BE CONTINUED…
10.5.2. The online vs. offline version
Online version: Every call to simul refers to an execution of the forward and adjoint model. Several iterations of optimization may thus be performed within a single run of the main program (lsopt_top). The following cases may occur:
cold start only (no optimization)
cold start, followed by one or several iterations of optimization
warm start from previous cold start with one or several iterations
warm start from previous warm start with one or several iterations
Offline version: Every call to simul refers to a read procedure which reads the result of a forward and adjoint run. Therefore, only one call to simul is allowed: itmax = 0 for a cold start and itmax = 1 for a warm start. Also, at the end, x(i+1) needs to be computed and saved to be available for the offline model and adjoint run.
In order to achieve minimum difference between the online and offline code, xdiff(i+1) is stored to file at the end of an (offline) iteration, but recomputed identically at the beginning of the next iteration.
10.5.3. Number of iterations vs. number of simulations
10.5.3.1. Summary
10.5.3.2. Description
\[\tt xdiff(i,1) = xx(i-1) + tact(i-1,1)*dd(i-1)\]serves as input for a forward and adjoint model run yielding a new gg(i,1). In general, the new solution passes the 1st and 2nd Wolfe tests so xdiff(i,1) represents the solution sought:
\[{\tt xx(i) = xdiff(i,1)}\]If one of the two tests fails, an inter- or extrapolation is invoked to determine a new step size tact(i-1,2). If more than one function call is permitted, the new step size is used together with the "old" descent direction dd(i-1) (i.e., dd is not updated using the new gg(i)), to compute a new
\[{\tt xdiff(i,2) = xx(i-1) + tact(i-1,2)*dd(i-1)}\]that serves as input in a new forward and adjoint run, yielding gg(i,2). If now both Wolfe tests are successful, the updated solution is given by
\[\tt xx(i) = xdiff(i,2) = xx(i-1) + tact(i-1,2)*dd(i-1)\]
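The two tests can be sketched as follows for a simple quadratic cost with a steepest-descent direction. This is an illustration of the sufficient-decrease and curvature conditions, not lsline.F; the names xpara1 and xpara2 follow the outline in Section 10.5.3.5, and the default values here are common textbook choices, not necessarily those of the package.

```python
import numpy as np

def wolfe_tests(f, grad, x_old, dd, tact, xpara1=1e-4, xpara2=0.9):
    """Return (test1, test2): sufficient-decrease and curvature
    conditions for step size tact along descent direction dd."""
    g_old_dd = float(np.dot(grad(x_old), dd))
    x_new = x_old + tact * dd
    test1 = f(x_new) <= f(x_old) + tact * xpara1 * g_old_dd
    test2 = float(np.dot(grad(x_new), dd)) >= xpara2 * g_old_dd
    return test1, test2

# quadratic cost f(x) = 0.5*|x|^2 with steepest-descent direction:
f = lambda x: 0.5 * float(np.dot(x, x))
grad = lambda x: np.asarray(x, dtype=float)
x0 = np.array([2.0])
ok1, ok2 = wolfe_tests(f, grad, x0, dd=-grad(x0), tact=1.0)  # both pass
bad1, _ = wolfe_tests(f, grad, x0, dd=-grad(x0), tact=2.0)   # overshoots
```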
In order to save memory both the fields dd and xdiff have a double usage.
xdiff: in lsopt_top, used as x(i) - x(i-1) for the Hessian update; in lsline, intermediate result for the control update x = x + tact*dd
dd: in lsopt_top and lsline, descent vector, dd = -gg and hessupd; in dgscale, intermediate result to compute the new preconditioner
10.5.3.3. The parameter file lsopt.par
NUPDATE: max. no. of update pairs (gg(i)-gg(i-1), xx(i)-xx(i-1)) to be stored in OPWARMD to estimate the Hessian [the pair of the current iter. is stored in entries (2*jmax+2, 2*jmax+3); jmax must be > 0 to access these entries]. Presently NUPDATE must be > 0 (i.e., iteration without reference to previous iterations through OPWARMD has not been tested).
EPSX: relative precision on xx below which xx should not be improved
EPSG: relative precision on gg below which optimization is considered successful
IPRINT: controls verbose (>=1) or non-verbose output
NUMITER: max. number of iterations of optimisation; NUMITER = 0: cold start only, no optimization
ITER_NUM: index of new restart file to be created (not necessarily = NUMITER!)
NFUNC: max. no. of simulations per iteration (must be > 0); used if the step size tact is inter-/extrapolated; in this case, if NFUNC > 1, a new simulation is performed with the same gradient but "improved" step size
FMIN: first-guess cost function value (only used as long as the first iteration is not completed, i.e., for jmax <= 0)
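A hypothetical lsopt.par with the parameters above might read as follows; the values are placeholders and the exact file layout should be checked against the optim/lsopt sources.

```
NUPDATE  5
EPSX     1.e-6
EPSG     1.e-4
IPRINT   1
NUMITER  1
ITER_NUM 1
NFUNC    1
FMIN     1.e+3
```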
10.5.3.4. OPWARMI, OPWARMD files
Two files retain values of previous iterations which are used in the latest iteration to update the Hessian:
OPWARMI: contains index settings and scalar variables
n = nn: no. of control variables
fc = ff: cost value of last iteration
isize: no. of bytes per record in OPWARMD
m = nupdate: max. no. of updates for Hessian
jmin, jmax: pointer indices for the OPWARMD file (cf. below)
gnorm0: norm of first (cold start) gradient gg
iabsiter: total number of iterations with respect to cold start
OPWARMD: contains vectors (control and gradient)
entry | name | description
----- | ---- | -----------
1 | xx(i) | control vector of latest iteration
2 | gg(i) | gradient of latest iteration
3 | xdiff(i), diag | preconditioning vector; (1,…,1) for cold start
2*jmax+2 | gold = g(i)-g(i-1) | for last update (jmax)
2*jmax+3 | xdiff = tact*d = xx(i)-xx(i-1) | for last update (jmax)
Example 1: jmin = 1, jmax = 3, mupd = 5

   1  2  3  |  4  5  6  7  8  9  empty  empty
  |________|______|______|______|
      0        1      2      3

Example 2: jmin = 3, jmax = 7, mupd = 5  ->  jmax = 2

   1  2  3  |
  |________|______|______|______|______|______|
      0        6      7      3      4      5
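The wrap-around in the two examples above can be stated as a small index formula: the pair of update j occupies records 2*slot+2 and 2*slot+3, where slot cycles through 1..mupd. This Python sketch (the helper name opwarmd_records is hypothetical) encodes that reading of the examples:

```python
def opwarmd_records(j, mupd):
    """Records of OPWARMD holding update pair j (counted since cold
    start): j wraps around modulo mupd, so only the last mupd pairs are
    kept; records 1-3 are fixed (xx, gg, preconditioner diag)."""
    slot = (j - 1) % mupd + 1          # 1..mupd, reused cyclically
    return 2 * slot + 2, 2 * slot + 3  # (gold, xdiff) record numbers

# Example 1: jmax = 3, mupd = 5 -> pair 3 sits in records (8, 9)
print(opwarmd_records(3, 5))
# Example 2: jmax = 7, mupd = 5 -> wraps to slot 2, records (6, 7)
print(opwarmd_records(7, 5))
```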
10.5.3.5. Error handling¶
lsopt_top
|
|--- check arguments
|--- CALL INSTORE
|    |
|    |--- determine whether OPWARMI available:
|    |      * if no:  cold start: create OPWARMI
|    |      * if yes: warm start: read from OPWARMI
|    |--- create or open OPWARMD
|
|--- check consistency between OPWARMI and model parameters
|
|--- >>> if COLD start: <<<
|    |   first simulation with f.g. xx_0; output: first ff_0, gg_0
|    |   set first preconditioner value xdiff_0 to 1
|    |   store xx(0), gg(0), xdiff(0) to OPWARMD (first 3 entries)
|    |
|    >>> else: WARM start: <<<
|        read xx(i), gg(i) from OPWARMD (first 2 entries)
|        (for first warm start after cold start, i=0)
|
|--- /// if ITMAX > 0: perform optimization (increment loop index i)
|    (
|    )  save current values of gg(i-1) -> gold(i-1), ff -> fold(i-1)
|    (  CALL LSUPDXX
|    )  |
|    (  |--- >>> if jmax = 0 <<<
|    )  |     first optimization after cold start:
|    (  |     preconditioner estimated via ff_0 - ff_(first guess)
|    )  |     dd(i-1) = -gg(i-1)*preco
|    (  |
|    )  |--- >>> if jmax > 0 <<<
|    (  |     dd(i-1) = -gg(i-1)
|    )  |     CALL HESSUPD
|    (  |     |
|    )  |     |--- dd(i-1) modified via Hessian approx.
|    (  |
|    )  |--- >>> if <dd,gg> >= 0 <<<
|    (  |     ifail = 4
|    )  |
|    (  |--- compute step size: tact(i-1)
|    )  |--- compute update: xdiff(i) = xx(i-1) + tact(i-1)*dd(i-1)
|    (
|    )  >>> if ifail = 4 <<<
|    (      goto 1000
|    )
|    (  CALL OPTLINE / LSLINE
|    )  |
|    (  |--- /// loop over simulations
|    )  |    (
|    (  |    )  CALL SIMUL
|    )  |    (  |
|    (  |    )  |--- input:  xdiff(i)
|    )  |    (  |--- output: ff(i), gg(i)
|    (  |    )  |--- >>> if ONLINE <<<
|    )  |    (  |        runs model and adjoint
|    (  |    )  |--- >>> if OFFLINE <<<
|    )  |    (  |        reads those values from file
|    (  |    )
|    )  |    (  1st Wolfe test:
|    (  |    )  ff(i) <= tact*xpara1*<gg(i-1),dd(i-1)>
|    )  |    (
|    (  |    )  2nd Wolfe test:
|    )  |    (  <gg(i),dd(i-1)> >= xpara2*<gg(i-1),dd(i-1)>
|    (  |    )
|    )  |    (  >>> if 1st and 2nd Wolfe tests ok <<<
|    (  |    )      320: update xx: xx(i) = xdiff(i)
|    )  |    (
|    (  |    )  >>> else if 1st Wolfe test not ok <<<
|    )  |    (      500: INTERpolate new tact:
|    (  |    )      barr*tact < tact < (1-barr)*tact
|    )  |    (      CALL CUBIC
|    (  |    )
|    )  |    (  >>> else if 2nd Wolfe test not ok <<<
|    (  |    )      350: EXTRApolate new tact:
|    )  |    (      (1+barmin)*tact < tact < 10*tact
|    (  |    )      CALL CUBIC
|    )  |    (
|    (  |    )  >>> if new tact > tmax <<<
|    )  |    (      ifail = 7
|    (  |    )
|    )  |    (  >>> if new tact < tmin OR tact*dd < machine precision <<<
|    (  |    )      ifail = 8
|    )  |    (
|    (  |    )  >>> else <<<
|    )  |    (      update xdiff for new simulation
|    (  |    )
|    )  |    \\\ if nfunc > 1: use inter-/extrapolated tact and xdiff
|    (  |        for new simulation
|    )  |        N.B.: new xx is thus not based on new gg, but
|    (  |        rather on new step size tact
|    )  |
|    (  store new values xx(i), gg(i) to OPWARMD (first 2 entries)
|    )  >>> if ifail = 7,8,9 <<<
|    (      goto 1000
|    )
|    (  compute new pointers jmin, jmax to include latest values
|    )  gg(i)-gg(i-1), xx(i)-xx(i-1) in Hessian matrix estimate
|    (  store gg(i)-gg(i-1), xx(i)-xx(i-1) to OPWARMD
|    )  (entries 2*jmax+2, 2*jmax+3)
|    (
|    )  CALL DGSCALE
|    (  |
|    )  |--- call dostore
|    (  |    |
|    )  |    |--- read preconditioner of previous iteration diag(i-1)
|    (  |         from OPWARMD (3rd entry)
|    )  |
|    (  |--- compute new preconditioner diag(i), based upon diag(i-1),
|    )  |    gg(i)-gg(i-1), xx(i)-xx(i-1)
|    (  |
|    )  |--- call dostore
|    (       |
|    )       |--- write new preconditioner diag(i) to OPWARMD (3rd entry)
|    (
|    \\\ end of optimization iteration loop
|
|--- CALL OUTSTORE
|    |
|    |--- store gnorm0, ff(i), current pointers jmin, jmax, iterabs to OPWARMI
|
|--- >>> if OFFLINE version <<<
|    xx(i+1) needs to be computed as input for offline optimization
|    |
|    |--- CALL LSUPDXX
|    |    |
|    |    |--- compute dd(i), tact(i) -> xdiff(i+1) = x(i) + tact(i)*dd(i)
|    |
|    |--- CALL WRITE_CONTROL
|    |    |
|    |    |--- write xdiff(i+1) to special file for offline optim.
|
|--- print final information
|
O
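The step-size inter-/extrapolation loop of OPTLINE/LSLINE can be sketched in a few lines of Python. This is a simplified stand-in, not the lsopt implementation: bisection replaces CALL CUBIC, the sufficient-decrease test is written in its standard form with the previous cost added on the right-hand side, and the names (wolfe_line_search, tact, xpara1, xpara2) mirror the flowchart but the function itself is hypothetical:

```python
import numpy as np

def wolfe_line_search(f, fprime, x, d, tact=1.0,
                      xpara1=1e-4, xpara2=0.9,
                      tmin=1e-12, tmax=1e12, maxsim=20):
    """Sketch of the OPTLINE/LSLINE loop: shrink tact when the 1st
    (sufficient decrease) Wolfe test fails, grow it when the 2nd
    (curvature) test fails; bisection stands in for CALL CUBIC."""
    f0, g0d = f(x), fprime(x) @ d          # d must be a descent direction
    lo, hi = 0.0, None
    for _ in range(maxsim):                # loop over simulations
        xt = x + tact * d                  # CALL SIMUL
        ft, gtd = f(xt), fprime(xt) @ d
        if ft > f0 + tact * xpara1 * g0d:  # 1st Wolfe test not ok
            hi = tact                      # 500: INTERpolate new tact
            tact = 0.5 * (lo + hi)
        elif gtd < xpara2 * g0d:           # 2nd Wolfe test not ok
            lo = tact                      # 350: EXTRApolate new tact
            tact = 2.0 * tact if hi is None else 0.5 * (lo + hi)
        else:                              # both tests ok (320)
            return xt, tact
        if not (tmin < tact < tmax):
            raise RuntimeError("ifail = 7/8: step size out of range")
    return x + tact * d, tact

# usage on a toy quadratic cost
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
g = lambda x: A @ x
x0 = np.array([1.0, 1.0])
xt, tact = wolfe_line_search(f, g, x0, -g(x0))
print(tact, f(xt))
```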
10.5.4. Alternative code to optim and lsopt¶
The non-MITgcm package optim_m1qn3 is based on the same quasi-Newton variable storage method (BFGS) [GLemarechal89] as the package in subdirectory lsopt, but it uses a reverse communication version of the latest (and probably last) release of the subroutine m1qn3. This avoids having to define a dummy subroutine simul and also simplifies the code structure. As a consequence this package is simple(r) to compile and use, because m1qn3.f contains all necessary subroutines and only one extra routine (ddot, copied from BLAS) is required.
The principle of reverse communication is outlined in this example:
external simul_rc
...
reverse = .true.
do while (.true.)
   call m1qn3 (simul_rc, ..., x, f, g, ..., reverse, indic, ...)
   if (reverse) break
   call simul (indic, n, x, f, g)
end while
simul_rc is an empty "model simulator", and simul generates a new state based on the value of indic.
The original m1qn3 has been modified to work "offline", i.e. the simulator and the driver of m1qn3_offline are separate programs that are called alternately from a (shell) script. This requires that the "state" of m1qn3 is saved before this program terminates. This state is saved in a single file OPWARM.optXXX per simulation, where XXX is the simulation number. Communication with the routine, and writing and restoring the state of m1qn3, is achieved via three new common blocks that are contained in three header files. simul is replaced by reading and storing the model state and gradient vectors. Schematically, the driver routine optim_sub does the following:
external simul_rc
...
call optim_readdata( nn, ctrlname, ..., xx )   ! read control vector
call optim_readdata( nn, costname, ..., adxx ) ! read gradient vector
call optim_store_m1qn3( ..., .false. )         ! read state of m1qn3
reverse = .true.
call m1qn3 (simul_rc, ..., xx, objf, adxx, ..., reverse, indic, ...)
call optim_store_m1qn3( ..., .true. )          ! write state of m1qn3
call optim_writedata( nn, ctrlname, ..., xx )  ! write control vector
The optimization loop is executed outside of this program within a script.
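This driver/simulator alternation can be mimicked in a few lines of Python. The sketch is purely illustrative: plain gradient descent stands in for m1qn3, JSON files stand in for the OPWARM.optXXX state files, and all names (simulate, optimize, workdir) are hypothetical:

```python
import json, pathlib, tempfile

# Stand-ins: "simulate" plays the MITgcm run producing cost and gradient;
# "optimize" plays optim_sub, which restores the optimizer state, updates
# the control vector, and saves the state for the next call.
workdir = pathlib.Path(tempfile.mkdtemp())

def simulate(xx):
    """Toy cost ff = sum(x^2) with gradient 2x (hypothetical stand-in)."""
    return sum(x * x for x in xx), [2.0 * x for x in xx]

def optimize(xx, ff, gg, itn):
    """Restore state from OPWARM.optXXX, step, save state for itn+1."""
    sfile = workdir / f"OPWARM.opt{itn:03d}"
    state = json.loads(sfile.read_text()) if sfile.exists() else {"step": 0.25}
    xx_new = [x - state["step"] * g for x, g in zip(xx, gg)]
    (workdir / f"OPWARM.opt{itn + 1:03d}").write_text(json.dumps(state))
    return xx_new

xx = [1.0, -2.0]
for itn in range(10):               # the (shell) script's outer loop
    ff, gg = simulate(xx)           # "simulator" program
    xx = optimize(xx, ff, gg, itn)  # "optimizer" program
print(ff)
```

Because each program run is stateless apart from the files it reads and writes, the outer loop can live entirely in a shell script, as described above.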
The code can be obtained at https://github.com/mjlosch/optim_m1qn3. The README contains short instructions on how to build and use the code in combination with the verification/tutorial_global_oce_optim experiment. Its usage is very similar to that of the optim package.
10.6. Test Cases For Estimation Package Capabilities¶
First, if you have not done so already, download the model as explained in Getting Started with MITgcm via the MITgcm git repository:
% git clone https://github.com/MITgcm/MITgcm.git
Then, download the setup from the MITgcm verification_other git repository:
% cd MITgcm
% git clone https://github.com/MITgcm/verification_other
and follow the additional directions provided for global_oce_cs32 or for global_oce_llc90. These model configurations are used for daily regression tests to ensure continued availability of the tested estimation package features discussed in Ocean State Estimation Packages. Daily results of these tests, which currently run on the glacier cluster, are reported on https://mitgcm.org/testing-summary. To this end, one sets up a crontab job that typically executes the script reported below. The various commands can also be used to run these examples outside of crontab, directly at the command line via the testreport capability.
Note
Users are advised against running global_oce_llc90 tests with fewer than 12 cores (96 for adjoint tests) to avoid potential memory overloads. global_oce_llc90 (595M) uses the same LLC90 grid as the production ECCO version 4 setup [FCH+15]. The much coarser resolution global_oce_cs32 (614M) uses the CS32 grid and can run on any modern laptop.
#!/bin/csh -f
setenv PATH ~/bin:$PATH
setenv MODULESHOME /usr/share/Modules
source /usr/share/Modules/init/csh
module use /usr/share/Modules
module load openmpi-x86_64
setenv MPI_INC_DIR $MPI_INCLUDE

cd ~/MITgcm
#mkdir gitpull.log
set D=`date +%Y%m%d`
git pull -v > gitpull.log/gitpull.$D.log

cd verification

#ieee case:
./testreport -clean -t 'global_oce_*'
./testreport -of=../tools/build_options/linux_amd64_gfortran -MPI 24 -t 'global_oce_*' -addr username@something.whatever
../tools/do_tst_2+2 -t 'global_oce_*' -mpi -exe 'mpirun -np 24 ./mitgcmuv' -a username@something.whatever

#devel case:
./testreport -clean -t 'global_oce_*'
./testreport -of=../tools/build_options/linux_amd64_gfortran -MPI 24 -devel -t 'global_oce_*' -addr username@something.whatever
../tools/do_tst_2+2 -t 'global_oce_*' -mpi -exe 'mpirun -np 24 ./mitgcmuv' -a username@something.whatever

#fast case:
./testreport -clean -t 'global_oce_*'
./testreport -of=../tools/build_options/linux_amd64_gfortran -MPI 24 -t 'global_oce_*' -fast -addr username@something.whatever
../tools/do_tst_2+2 -t 'global_oce_*' -mpi -exe 'mpirun -np 24 ./mitgcmuv' -a username@something.whatever

#adjoint case:
./testreport -clean -t 'global_oce_*'
./testreport -of=../tools/build_options/linux_amd64_gfortran -MPI 24 -ad -t 'global_oce_*' -addr username@something.whatever