5. Calibration
The IH-SET tool leverages advanced optimization algorithms to calibrate equilibrium-based shoreline evolution models effectively. To balance accuracy and computational efficiency, the software supports both single-objective and multi-objective calibration tasks. Single-objective tasks find the optimal solution to a problem using a single performance error metric, while multi-objective tasks use two or more metrics to find a set of optimal trade-offs. Below is an overview of the key algorithms and metrics used in IH-SET and their roles in the calibration process.
5.1. Shuffled Complex Evolution - University of Arizona (SCE-UA)
Background
This algorithm performs single-objective global optimization in complex and high-dimensional parameter spaces by searching for the global optimum through systematic competitive evolution. This process takes clusters of samples, called complexes, from the parameter space and applies deterministic search strategies, random elements, and a clustering strategy to move the complexes toward a global solution. The SCE-UA method (Duan et al., 1992) was originally developed for use in rainfall-runoff models, though its use has expanded into other scientific and engineering fields.
How It Works
Divides the parameter space into multiple complexes.
Each complex evolves through selection, recombination, and mutation.
Periodic global shuffling enhances the ability to search globally and avoid local minima.
Use Case
Primarily used in single-objective calibration scenarios.
Example: Minimizing error metrics between observed and modeled shoreline configurations.
Configuration
Calibrate Initial Position?
switch_Yini: Whether to calibrate the initial observation.
Initial and Final Dates for Calibration
start_date: Example: 1979-01-01.
end_date: Example: 2016-01-01.
Metrics to be Minimized
metrics: Metrics list. Example: ['mss']. For more details, please check Metrics.
Population Size and Generations
population_size: Range: 50–1000. Example: 200.
num_generations: Range: 5–1000. Example: 50.
Crossover Probability
cross_prob: Example: 0.95.
Mutation and Regeneration Rates
mutation_rate: Example: 0.5.
regeneration_rate: Example: 0.1.
Mutation Parameter
eta_mut: Polynomial mutation intensity. Example: 3.
Number of Complexes
num_complexes: Partitions of the population. Example: 5.
Stopping Criteria
kstop: Maximum stagnation before restarting. Example: 100.
pcento: Percentage improvement in last iterations. Example: 0.1.
peps: Minimum value for standard deviation. Example: 0.000001.
Bounds for Parameters
lb: Lower bounds. Example: [0.1, -365, 1e-10, 1e-10].
ub: Upper bounds. Example: [100, 365, 1e-5, 1e-5].
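The settings above could be collected into a plain Python mapping before being handed to the calibrator. The sketch below is illustrative only: the variable name `sce_ua_config` and the dict-based format are assumptions, as the exact calling convention depends on the IH-SET release; the key names and example values mirror the parameter list above.

```python
# Hypothetical SCE-UA calibration settings (illustrative; the real
# configuration format depends on the IH-SET release).
sce_ua_config = {
    "switch_Yini": True,             # also calibrate the initial shoreline position
    "start_date": "1979-01-01",
    "end_date": "2016-01-01",
    "metrics": ["mss"],              # single-objective: one metric
    "population_size": 200,          # range 50-1000
    "num_generations": 50,           # range 5-1000
    "cross_prob": 0.95,
    "mutation_rate": 0.5,
    "regeneration_rate": 0.1,
    "eta_mut": 3,                    # polynomial mutation intensity
    "num_complexes": 5,              # partitions of the population
    "kstop": 100,                    # stagnation limit before restart
    "pcento": 0.1,                   # required % improvement
    "peps": 1e-6,                    # minimum parameter std. dev.
    "lb": [0.1, -365, 1e-10, 1e-10],
    "ub": [100, 365, 1e-5, 1e-5],
}
```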
5.2. Non-dominated Sorting Genetic Algorithm II (NSGA-II)
Background
As a multi-objective evolutionary algorithm (MOEA), the NSGA-II handles multi-objective optimization, finding the best trade-offs between conflicting objectives. This method was proposed to improve upon previously developed MOEAs that use non-dominated sorting by creating a parameterless, elitist algorithm with reduced computational complexity. In addition to creating a fast non-dominated sorting approach, this algorithm achieves these goals by introducing a selection operator that mixes the parent and offspring populations and selects the best solutions from the resulting mating pool.
How It Works
Organizes candidate solutions using non-dominated sorting to identify Pareto-optimal sets.
Maintains diversity using a crowding-distance metric.
Balances exploration (searching new areas) and exploitation (refining known good solutions).
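The two ranking ingredients described above can be sketched in a few lines of Python. This is a minimal illustration of fast non-dominated sorting and the crowding-distance measure (for minimization), not IH-SET's internal implementation:

```python
# Illustrative sketch of NSGA-II's ranking ingredients (minimization).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Group solutions (lists of objective values) into successive Pareto fronts."""
    fronts, remaining = [], list(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def crowding_distance(objs, front):
    """Larger distance = less crowded; boundary solutions get infinity."""
    dist = {i: 0.0 for i in front}
    for m in range(len(objs[front[0]])):
        ordered = sorted(front, key=lambda i: objs[i][m])
        dist[ordered[0]] = dist[ordered[-1]] = float("inf")
        span = objs[ordered[-1]][m] - objs[ordered[0]][m] or 1.0
        for k in range(1, len(ordered) - 1):
            dist[ordered[k]] += (objs[ordered[k + 1]][m] - objs[ordered[k - 1]][m]) / span
    return dist
```

Within a front no solution dominates another, so the crowding distance is used to break ties in selection, preserving diversity along the Pareto frontier.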
Use Case
Useful in multi-objective calibration tasks, such as balancing accuracy and computational cost or fitting multiple shoreline datasets simultaneously.
Configuration
Calibrate Initial Position?
switch_Yini: Whether to calibrate the initial observation.
Initial and Final Dates for Calibration
start_date: Starting date for calibration (e.g., 1979-01-01).
end_date: Ending date for calibration (e.g., 2010-01-01).
Metrics to be Minimized
metrics: List of metrics to optimize. Example: ['kge', 'mss', 'rsr']. For more details, please check Metrics.
Population Size and Generations
population_size: Number of individuals in the population. Example: 400.
num_generations: Number of iterations for the algorithm. Example: 50.
Crossover Probability
cross_prob: Probability of recombining genes. Range: 0.6–1.0. Example: 0.9.
Mutation and Regeneration Rates
mutation_rate: Probability of randomly altering genes. Range: 0.01–0.8. Example: 0.5.
regeneration_rate: Fraction of the population to regenerate in each generation. Range: 0.1–0.5. Example: 0.3.
Pressure
pressure: Tournament selection pressure. Range: 2–5. Example: 2.
Stopping Criteria
kstop: Maximum stagnation iterations before restarting. Range: 5–100. Example: 5.
pcento: Percentage improvement in past kstop iterations to continue. Range: 0.01–0.2. Example: 0.02.
peps: Convergence threshold for geometric range of parameters. Range: 1e-6–1e-3. Example: 0.0001.
Bounds for Parameters
lb: Lower bounds. Example: [0.1, -365, 1e-10, 1e-10].
ub: Upper bounds. Example: [100, 365, 1e-5, 1e-5].
5.3. Strength Pareto Evolutionary Algorithm 2 (SPEA2)
Background
The SPEA2, similar to NSGA-II, is designed for multi-objective optimization with an emphasis on maintaining solution diversity. This method improves on the previously developed SPEA (Zitzler and Thiele, 1998) by incorporating a fine-grained fitness assignment strategy, a density estimation technique, and an enhanced archive truncation method. Each of these additions respectively tracks dominance by individuals, guides the search process, and preserves boundary solutions during the evolution process.
How It Works
SPEA2 uses an external archive to store and track Pareto-optimal solutions.
The fitness of each individual combines a raw value derived from the strengths of the solutions that dominate it with a density estimate based on distances to neighbouring solutions.
The algorithm performs optimization by maintaining solution diversity through selection, crossover, and mutation.
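The fitness assignment described above can be sketched as follows. This is an illustrative toy version (minimization, k-th nearest-neighbour density as in Zitzler et al.'s SPEA2 paper), not IH-SET's internal code:

```python
import math

# Illustrative sketch of SPEA2 fitness assignment (minimization).
# strength  = number of solutions an individual dominates
# raw       = sum of the strengths of the individuals that dominate it
# density   = 1 / (distance to k-th nearest neighbour + 2)
# fitness   = raw + density  (values below 1 indicate non-dominated solutions)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea2_fitness(objs, k=1):
    n = len(objs)
    strength = [sum(dominates(objs[i], objs[j]) for j in range(n)) for i in range(n)]
    raw = [sum(strength[j] for j in range(n) if dominates(objs[j], objs[i]))
           for i in range(n)]
    fitness = []
    for i in range(n):
        dists = sorted(math.dist(objs[i], objs[j]) for j in range(n) if j != i)
        density = 1.0 / (dists[k - 1] + 2.0)
        fitness.append(raw[i] + density)
    return fitness
```

Because density is always below 0.5, a fitness under 1 reliably marks a non-dominated individual, which is what the archive retains between generations.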
Use Case
Suitable for scenarios requiring a diverse set of Pareto solutions, often complementing NSGA-II.
Configuration
Calibrate Initial Position?
switch_Yini: Whether to calibrate the initial observation.
Initial and Final Dates for Calibration
start_date: Starting date (e.g., 1979-01-01).
end_date: Ending date (e.g., 2016-01-01).
Metrics to be Minimized
metrics: Metrics list. Example: ['kge', 'mss', 'rmse']. For more details, please check Metrics.
Population Size and Generations
population_size: Number of individuals. Range: 50–1000. Example: 200.
num_generations: Iteration count. Range: 5–1000. Example: 50.
Crossover Probability
cross_prob: Range: 0.6–1.0. Example: 0.9.
Mutation and Regeneration Rates
mutation_rate: Range: 0.01–0.8. Example: 0.5.
regeneration_rate: Range: 0.1–0.5. Example: 0.3.
Pressure and Selection Parameter
pressure: Tournament pressure. Range: 2–5. Example: 2.
m: Environmental selection parameter. Range: 1–5. Example: 3.
Additional Mutation Parameter
eta_mut: Intensity of polynomial mutation. Range: 5–20. Example: 7.
Stopping Criteria
kstop: Maximum stagnation iterations. Range: 5–100. Example: 5.
pcento: Percentage improvement in past iterations. Example: 0.01.
peps: Convergence threshold for geometric range. Example: 0.0001.
Bounds for Parameters
lb: Lower bounds. Example: [0.1, -365, 1e-10, 1e-10].
ub: Upper bounds. Example: [100, 365, 1e-5, 1e-5].
5.4. Simulated Annealing (SA)
Background
The SA algorithm is a single-objective, probabilistic optimization technique for escaping local minima and approaching the global optimum. It replicates the physical process of a solid cooling into a minimum-energy configuration, using a Markov chain together with a nonincreasing function of temperature called a cooling schedule. Once the Markov chain approaches equilibrium at the current temperature, the temperature is lowered according to the cooling schedule; while the temperature remains high, the occasional acceptance of worse solutions helps expel the search from local minima. Many of the initial studies utilizing SA applied it to image processing, and it is often applied when a problem’s structure is not well understood.
How It Works
Begins with an initial solution and explores the parameter space by probabilistically accepting worse solutions.
Gradually reduces the probability of accepting worse solutions as the “temperature” decreases.
Ensures convergence to an optimal or near-optimal solution.
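The loop described above can be sketched in a few lines. This toy version minimizes a one-dimensional cost function with a geometric cooling schedule; the function name, step proposal, and bound handling are illustrative assumptions, not IH-SET's implementation:

```python
import math
import random

# Toy simulated-annealing loop (illustrative). Parameter names follow the
# configuration listed below: initial_temperature, cooling_rate, max_iterations.

def simulated_annealing(cost, x0, lb, ub,
                        initial_temperature=1000.0,
                        cooling_rate=0.95,
                        max_iterations=15000,
                        seed=42):
    rng = random.Random(seed)
    x, best = x0, x0
    temp = initial_temperature
    for _ in range(max_iterations):
        # Propose a random step, clipped to the parameter bounds.
        cand = min(ub, max(lb, x + rng.uniform(-1.0, 1.0)))
        delta = cost(cand) - cost(x)
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / temp), which shrinks as the temperature decays.
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            x = cand
        if cost(x) < cost(best):
            best = x
        temp *= cooling_rate  # geometric cooling schedule
    return best
```

Early on, the high temperature makes the walk nearly random (exploration); as the temperature decays, acceptance of uphill moves vanishes and the search becomes a greedy local refinement (exploitation).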
Use Case
Effective in problems with large, complex parameter spaces containing multiple local optima.
Configuration
Calibrate Initial Position?
switch_Yini: Whether to calibrate the initial observation.
Initial and Final Dates for Calibration
start_date: Example: 1979-01-02.
end_date: Example: 2016-01-01.
Metrics to be Minimized
metrics: Example: ['mss']. For more details, please check Metrics.
Maximum Iterations
max_iterations: Total iterations. Range: 100–100000. Example: 15000.
Initial Temperature
initial_temperature: Starting value. Range: 100–1000. Example: 1000.
Cooling Rate
cooling_rate: Rate of temperature decay. Range: 0.9–0.99. Example: 0.95.
Bounds for Parameters
lb: Lower bounds. Example: [0.1, -365, 1e-10, 1e-10].
ub: Upper bounds. Example: [100, 365, 1e-5, 1e-5].
5.5. Metrics
The four algorithms described above are used to calibrate the models included in IH-SET by minimizing error. Within these algorithms, several different metrics can be utilized to quantify error and evaluate the strength of the model and the calibrated parameters. Of these metrics, any one can be used in SA and SCE-UA, while at least two must be selected for NSGA-II and SPEA2. The following section briefly describes each of the metrics available for use within IH-SET.
5.5.1. Mielke Skill Score (MSS)
Definition: A statistical metric used to evaluate the predictive ability of models, particularly in environmental and spatial statistics. It measures the match between predicted and observed values.
Formula:
\[ MSS = 1 - \frac{\frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2}{\frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})^2 + \frac{1}{n}\sum_{i=1}^{n}(y_i - \bar{y})^2 + (\bar{x} - \bar{y})^2} \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(\bar{x}\), \(\bar{y}\): Mean of \(x\) and \(y\), respectively
\(n\): Number of observations
5.5.2. Nash-Sutcliffe Efficiency (NSE)
Definition: Evaluates how well a model predicts observed data compared to a mean-based model.
Formula:
\[ NSE = 1 - \frac{\sum_{i=1}^{n}(y_i - x_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(\bar{y}\): Mean of \(y\)
\(n\): Number of observations
5.5.3. Pearson Correlation (\(\rho\))
Definition: Measures linear correlation between two variables. Values range from -1 to 1.
Formula:
\[ \rho = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}} \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(\bar{x}\), \(\bar{y}\): Mean of \(x\) and \(y\), respectively
\(n\): Number of observations
5.5.4. Spearman Correlation (\(S_{\rho}\))
Definition: A rank-based correlation measure to capture non-linear relationships.
Formula:
\[ S_{\rho} = 1 - \frac{6\sum_{i=1}^{n} d_i^2}{n(n^2 - 1)} \]
Where:
\(d_i\): Difference between ranks of \(x_i\) and \(y_i\)
\(n\): Number of observations
5.5.5. Agreement Index (AI)
Definition: A metric for quantifying the agreement between observed and predicted values.
Formula:
\[ AI = 1 - \frac{\sum_{i=1}^{n}(x_i - y_i)^2}{\sum_{i=1}^{n}\left(|x_i - \bar{y}| + |y_i - \bar{y}|\right)^2} \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(\bar{y}\): Mean of \(y\)
\(n\): Number of observations
5.5.6. Kling-Gupta Efficiency (KGE)
Definition: An extension of NSE focusing on bias, variability, and correlation.
Formula:
\[ KGE = 1 - \sqrt{(r - 1)^2 + (\beta - 1)^2 + (\gamma - 1)^2} \]
Where:
\(r\): Pearson correlation coefficient between \(x\) and \(y\)
\(\beta = \frac{\bar{x}}{\bar{y}}\): Bias ratio
\(\gamma = \frac{\sigma_x / \bar{x}}{\sigma_y / \bar{y}}\): Variability ratio
\(\bar{x}\), \(\bar{y}\): Mean of \(x\) and \(y\)
\(\sigma_x\), \(\sigma_y\): Standard deviation of \(x\) and \(y\)
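The three KGE components can be computed directly from the definitions above. The sketch below uses population (1/n) statistics and a hypothetical function name `kge`; it illustrates the formula rather than IH-SET's internal metric code:

```python
import math

# Worked KGE example following the definitions above:
# r = Pearson correlation, beta = mean ratio, gamma = ratio of
# coefficients of variation (population statistics throughout).

def kge(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    r = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)
    beta = mx / my                    # bias ratio
    gamma = (sx / mx) / (sy / my)     # variability (CV) ratio
    return 1 - math.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
```

A perfect prediction gives r = beta = gamma = 1 and hence KGE = 1; a prediction that doubles every observation keeps r and gamma at 1 but sets beta = 2, giving KGE = 0.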
5.5.7. Non-Parametric KGE (npKGE)
Definition: A non-parametric version of KGE that avoids assumptions about data distribution.
Formula:
The same general structure as KGE, but uses rank-based or non-parametric statistics for \(r\), \(\beta\), and \(\gamma\).
5.5.8. Logarithmic Probability Distribution (LPD)
Definition: A metric evaluating log-scaled probabilities in predictive models, based on the pointwise predictive density.
Formula:
\[ LPD = \sum_{i=1}^{n} \log p(y_i) \]
Where:
\(p(y_i)\): Probability of the observed value \(y_i\) under the predicted distribution
5.5.9. Bias (BIAS)
Definition: The difference between observed and predicted averages.
Formula:
\[ BIAS = \bar{y} - \bar{x} \]
Where:
\(\bar{x}\): Mean of predicted values
\(\bar{y}\): Mean of observed values
5.5.10. Percent Bias (PBIAS)
Definition: Measures relative deviation between observed and predicted data as a percentage.
Formula:
\[ PBIAS = 100 \times \frac{\sum_{i=1}^{n}(y_i - x_i)}{\sum_{i=1}^{n} y_i} \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
5.5.11. Mean Squared Error (MSE)
Definition: Represents the average squared difference between observed and predicted values.
Formula:
\[ MSE = \frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2 \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(n\): Number of observations
5.5.12. Root Mean Squared Error (RMSE)
Definition: The square root of MSE, providing error in original units.
Formula:
\[ RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2} \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(n\): Number of observations
5.5.13. Mean Absolute Error (MAE)
Definition: The average absolute difference between observed and predicted values.
Formula:
\[ MAE = \frac{1}{n}\sum_{i=1}^{n}|x_i - y_i| \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(n\): Number of observations
5.5.14. Relative RMSE (RRMSE)
Definition: RMSE normalized by the mean of observed data to provide a relative measure of error.
Formula:
\[ RRMSE = \frac{RMSE}{\bar{y}} \]
Where:
\(\bar{y}\): Mean of observed values
5.5.15. RMSE-Observations Standard Deviation Ratio (RSR)
Definition: Normalizes RMSE by the standard deviation of observations.
Formula:
\[ RSR = \frac{RMSE}{\sigma_y} \]
Where:
\(\sigma_y\): Standard deviation of observed values
5.5.16. Decomposed MSE (DMSE)
Definition: Breaks MSE into components for better diagnostic understanding.
Formula:
\[ MSE = (\bar{y} - \bar{x})^2 + (\sigma_y - \sigma_x)^2 + 2(1 - \rho)\sigma_y\sigma_x \]
Where:
\((\bar{y} - \bar{x})^2\) : Bias component
\((\sigma_y - \sigma_x)^2\) : Variance component
\(2(1-\rho)\sigma_y \sigma_x\) : Covariance component
\(\sigma_x\), \(\sigma_y\) : Standard deviation of \(x\) and \(y\)
\(\rho\) : Pearson correlation coefficient
5.5.17. Covariance
Definition: Quantifies the co-variability between predicted and observed values.
Formula:
\[ \mathrm{cov}(x, y) = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y}) \]
Where:
\(x_i\): Predicted values (model output)
\(y_i\): Observed values
\(\bar{x}\), \(\bar{y}\): Mean of \(x\) and \(y\)
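Several of the error metrics above are related, and the DMSE decomposition in particular should reproduce the plain MSE exactly. The sketch below (hypothetical function name `error_metrics`, population statistics throughout) illustrates these definitions and can serve as a numerical sanity check; it is not IH-SET's internal metric code:

```python
import math

# Illustrative computation of MSE, RMSE, RSR, and the DMSE decomposition
# (bias + variance + covariance components) using population statistics.

def error_metrics(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    mse = sum((a - b) ** 2 for a, b in zip(x, y)) / n
    rmse = math.sqrt(mse)
    sx = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sy = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    rho = cov / (sx * sy)
    # DMSE: bias, variance, and covariance components sum back to MSE.
    dmse = (my - mx) ** 2 + (sy - sx) ** 2 + 2 * (1 - rho) * sx * sy
    return {"mse": mse, "rmse": rmse, "rsr": rmse / sy, "dmse": dmse}
```

The identity holds because the squared error splits into the squared mean difference plus the variance of the residuals, and that variance expands into the variance and covariance components listed above.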
