API Reference
Main Scripts
- create_user_model.create_user_model(session_dir, input_reader)[source]
Generate a user model skeleton from XML input configuration.
This function creates a Python code skeleton (user_model.py) that the user can populate with their specific ODE system definitions. It uses an LLM-based agent to analyze the XML configuration and generate appropriate Python code templates that follow the framework’s requirements.
- Parameters:
session_dir (Path) – Path to the session directory where the user model will be generated
input_reader (XMLReader) – XMLReader instance containing configuration information about directory names and file paths
The function performs the following operations:
1. Creates necessary directories if they don’t exist
2. Reads the user_input.xml configuration file
3. Uses an LLM agent to generate a Python code skeleton based on the XML
4. Writes the generated user_model.py to the session’s generated directory
5. Provides instructions for the user to populate the generated functions
- Returns:
The user model skeleton is written to the generated directory
- Return type:
None
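A minimal usage sketch follows. The session path is a placeholder, the XMLReader is assumed to come from lib.utils.helper_functions.get_input_reader (documented below), and importing the main script as a module is an assumption about the installation layout.
```python
# Hedged sketch: paths and the session layout are illustrative placeholders.
from pathlib import Path

from create_user_model import create_user_model
from lib.utils.helper_functions import get_input_reader

session_dir = Path("sessions/my_session")                        # hypothetical session folder
input_reader = get_input_reader(session_dir / "user_input.xml")  # assumed XML location

# Writes the user_model.py skeleton into the session's generated directory.
create_user_model(session_dir, input_reader)
```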
- check_user_input.check_input(session_dir, input_reader)[source]
Check user input with session-specific paths and validate the setup.
This function performs a comprehensive validation of the user’s input configuration by checking directory structures, XML files, and user-defined Python code. It uses an LLM-based agent to analyze the inputs and generate a detailed report of any errors or warnings that need to be addressed before proceeding to parameter fitting.
- Parameters:
session_dir (Path) – Path to the session directory containing the user’s input files and generated code
input_reader (XMLReader) – XMLReader instance that contains configuration information about directory names and file paths
The function performs the following operations:
1. Creates necessary directories if they don’t exist
2. Validates the user_input.xml file against the user_model.py code
3. Generates a detailed report of critical errors and warnings
4. Refines the validation report for better readability
5. Displays the final validation results to the terminal
- Returns:
Results are written to output files and displayed in terminal
- Return type:
None
- fit_parameters.run_driver(session_dir, input_reader)[source]
Execute the complete parameter fitting workflow for the user’s ODE system.
This function orchestrates the entire parameter estimation process by generating the necessary optimization scripts and executing the fitting algorithm. It uses an LLM-based agent to create optimized code for the specific ODE system defined in the user’s configuration, then runs the parameter fitting using a two-layer optimization strategy.
- Parameters:
session_dir (Path) – Path to the session directory containing the user’s input files, generated code, and where outputs will be stored
input_reader (XMLReader) – XMLReader instance containing configuration information about directory names and file paths
The function performs the following operations:
1. Cleans and recreates the output directory for fresh results
2. Uses an LLM agent to generate optimized fitting scripts based on the user’s model
3. Executes the parameter fitting process using the generated scripts
4. Applies a two-layer optimization strategy (population-based followed by gradient-based)
5. Stores the fitted parameters and optimization results in the output directory
- Returns:
Results are written to the output directory and displayed in terminal
- Return type:
None
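Taken together, the three main scripts form a linear create → check → fit workflow. A hedged sketch with placeholder paths and assumed module-style imports:
```python
# Hedged sketch of the check and fit steps; module import paths and the
# session layout are assumptions, not documented defaults.
from pathlib import Path

from check_user_input import check_input
from fit_parameters import run_driver
from lib.utils.helper_functions import get_input_reader

session_dir = Path("sessions/my_session")                        # hypothetical session folder
input_reader = get_input_reader(session_dir / "user_input.xml")

# Validate the populated user_model.py against user_input.xml.
check_input(session_dir, input_reader)

# Run the two-layer fit: population-based (PSO) followed by gradient-based (NODE).
run_driver(session_dir, input_reader)
```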
Core Classes
- class lib.utils.classes.ProblemObjectBase[source]
Bases:
object
Abstract base class for defining optimization problems in the OSParamFitting framework.
This class provides a common interface for all problem types, including ODE systems, parameter estimation problems, and other optimization tasks. It defines the basic structure and methods that concrete problem implementations must provide.
The base class handles common attributes like parameter names, initial conditions, time evaluation points, and dataset information. Concrete implementations should override the abstract methods to provide specific problem-solving logic.
- params_to_fit_names
Dictionary mapping parameter names to their indices
- Type:
dict
- y0
Initial conditions for unsteady problems
- Type:
numpy.ndarray, optional
- t_eval
Time points for solution evaluation
- Type:
numpy.ndarray, optional
- dataset
Experimental or reference data to fit against
- Type:
numpy.ndarray, optional
- num_columns_to_fit
Number of data columns to fit
- Type:
int, optional
- fixed_params_names
Names of parameters that are held constant
- Type:
list, optional
- fixed_param_values
Values of the fixed parameters
- Type:
list, optional
- __init__()[source]
Initialize the ProblemObjectBase with default attribute values.
Sets up the basic structure for problem objects with empty or None values that should be populated by concrete implementations or external configuration.
- compute_all_losses(population_points)[source]
Compute loss values for an entire population of parameter sets.
This method evaluates the loss function for multiple parameter combinations, which is useful for population-based optimization algorithms like PSO.
- Parameters:
population_points (numpy.ndarray) – Array of parameter sets with shape (n_individuals, n_parameters)
- Returns:
Array of loss values for each parameter set
- Return type:
numpy.ndarray
Note
Concrete implementations must override _compute_all_losses to provide efficient batch loss computation logic.
- compute_loss(*args)[source]
Compute the loss/objective function value for given parameters.
This is a public interface method that delegates to the concrete implementation’s _compute_loss method. The loss function quantifies how well the model fits the experimental data.
- Parameters:
*args – Variable arguments (typically parameter values) passed to the loss computation
- Returns:
The computed loss value (lower values indicate better fits)
- Return type:
float
- integrate_system(*args)[source]
Integrate the system for given parameters and return the solution.
This is a public interface method that delegates to the concrete implementation’s _integrate_system method.
- Parameters:
*args – Variable arguments passed to the concrete integration method
- Returns:
The result of the system integration (typically solution trajectories)
Note
Concrete implementations must override _integrate_system to provide actual integration logic.
- plot_result(design_point, label='default')[source]
Plot the result for a given parameter set.
This method provides visualization capabilities for the problem results, allowing users to inspect the quality of fits and parameter estimates.
- Parameters:
design_point (numpy.ndarray) – Parameter set to visualize
label (str, optional) – Label for the plot. Defaults to “default”
- Returns:
The result of the plotting operation (typically matplotlib figure or None)
Note
Concrete implementations must override _plot_result to provide actual plotting logic. This method is useful for debugging and result analysis.
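The sketch below shows what a concrete subclass could look like. The underscored hook names (_integrate_system, _compute_loss, _compute_all_losses) follow the notes above, but the exact signatures expected by the framework are an assumption, and the exponential-decay model is purely illustrative.
```python
import numpy as np

from lib.utils.classes import ProblemObjectBase


class ExponentialDecayProblem(ProblemObjectBase):
    """Illustrative problem: fit y' = -k*y against a reference dataset."""

    def __init__(self, dataset, t_eval, y0):
        super().__init__()
        self.dataset = dataset
        self.t_eval = t_eval
        self.y0 = y0
        self.params_to_fit_names = {"k": 0}

    def _integrate_system(self, design_pt):
        # Closed-form solution of the toy ODE for the candidate decay rate.
        k = design_pt[0]
        return self.y0 * np.exp(-k * self.t_eval)

    def _compute_loss(self, design_pt):
        # Mean squared error between model prediction and the dataset.
        prediction = self._integrate_system(design_pt)
        return float(np.mean((prediction - self.dataset) ** 2))

    def _compute_all_losses(self, population_points):
        # Naive batch evaluation; real implementations may vectorize this.
        return np.array([self._compute_loss(p) for p in population_points])
```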
- class lib.utils.helper_functions.CreatedClass(dataset, t_eval, y0, input_reader, compute_loss_problem, write_problem_result)[source]
Bases:
ProblemObjectBase
- Parameters:
dataset (ndarray)
t_eval (ndarray)
y0 (Array)
input_reader (XMLReader)
- __init__(dataset, t_eval, y0, input_reader, compute_loss_problem, write_problem_result)[source]
Initialize the CreatedClass instance with problem configuration.
- Parameters:
dataset (numpy.ndarray) – Experimental data array with shape (time_steps, variables)
t_eval (numpy.ndarray) – Time points for solution evaluation
y0 (jax.numpy.ndarray) – Initial conditions for the ODE system
input_reader (XMLReader) – Configuration reader containing all problem parameters
compute_loss_problem (callable) – Function to compute loss for given parameters
write_problem_result (callable) – Function to write problem results
The constructor sets up the complete problem environment, including:
- Parameter management (trainable vs fixed)
- Integration settings (tolerance, step size, max steps)
- Time domain configuration
- Loss computation and result writing functions
- _compute_all_losses(population)[source]
Compute losses for an entire population of parameter sets.
- Parameters:
population (numpy.ndarray) – Array of parameter sets, shape (n_individuals, n_parameters)
- Returns:
Array of loss values for each parameter set, with NaN/Inf values replaced by 1e10 to prevent optimization issues
- Return type:
numpy.ndarray
This method iterates through each parameter set in the population and computes the corresponding loss using the JIT-compiled _compute_loss method.
- _compute_loss(design_pt)[source]
Compute loss for a single parameter set using JIT compilation.
- Parameters:
design_pt (jax.numpy.ndarray) – Single parameter set to evaluate
- Returns:
Computed loss value for the given parameters
- Return type:
float
This method is JIT-compiled for performance and calls the user-defined loss computation function with the problem constants and parameters.
- set_is_logscale(is_logscale)[source]
Set log-scale flag for parameter axes.
- Parameters:
is_logscale (numpy.ndarray) – Boolean array indicating which parameters should use log-scale transformation
- Return type:
None
This affects how the optimization algorithms handle parameter scaling and search space exploration.
- set_max_limit(max_lim)[source]
Set maximum bounds for parameter search space.
- Parameters:
max_lim (numpy.ndarray) – Array of maximum values for each parameter
- Return type:
None
The limits are stored in the constants dictionary and used by the optimization algorithms to constrain the search space.
- set_min_limit(min_lim)[source]
Set minimum bounds for parameter search space.
- Parameters:
min_lim (numpy.ndarray) – Array of minimum values for each parameter
- Return type:
None
The limits are stored in the constants dictionary and used by the optimization algorithms to constrain the search space.
- write_problem_result(design_point, input_reader, label='default')[source]
Write problem solution results to CSV files.
- Parameters:
design_point (numpy.ndarray) – Parameter set that produced the solution
input_reader (XMLReader) – Configuration reader containing output directory info
label (str, optional) – Label for the output file. Defaults to “default”
- Return type:
None
The method calls the user-defined result writing function and saves the output to a CSV file in the format “{label}_solution.csv”.
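A hedged sketch of configuring the search space on an existing CreatedClass instance (called problem_obj here); the two-parameter layout and the bound values are illustrative only.
```python
import numpy as np

# Lower and upper bounds for each trainable parameter (illustrative values).
problem_obj.set_min_limit(np.array([1e-4, 1e-2]))
problem_obj.set_max_limit(np.array([1e2, 1e3]))

# Search the first parameter on a log scale, the second on a linear scale.
problem_obj.set_is_logscale(np.array([True, False]))
```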
- lib.utils.helper_functions.fit_equation_system(input_reader, y0, t_eval, dataset, problem_obj)[source]
Fit a system of equations using a two-phase optimization approach.
This function performs parameter fitting for a system of equations using:
1. Population-based optimization (PSO) for global search
2. Gradient-based optimization (NODE) for local refinement
The process includes:
1. Running PSO to find initial parameter estimates
2. Using PSO results as initial guess for NODE
3. Running NODE to refine the parameters
4. Writing results to output directory
- Parameters:
input_reader (XMLReader) – Reader object containing optimization parameters from XML
y0 (jax.numpy.ndarray) – Initial conditions for the system of equations
t_eval (numpy.ndarray) – Time points at which to evaluate the solution
dataset (numpy.ndarray) – Experimental data to fit against
problem_obj (CreatedClass) – Problem object containing loss computation and result writing methods
- Returns:
Best parameter set found during optimization
- Return type:
numpy.ndarray
Notes
PSO is used first to explore the parameter space globally
NODE uses the best PSO result as its initial guess
Results are written to the output directory specified in input_reader
Progress is logged to the output directory
Final parameters are saved to final_design_point.csv
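A minimal call sketch, assuming input_reader, y0, t_eval, dataset, and problem_obj have already been prepared as described above:
```python
from lib.utils.helper_functions import fit_equation_system

# Runs PSO followed by NODE refinement; results are also written to the
# output directory configured in input_reader.
best_params = fit_equation_system(input_reader, y0, t_eval, dataset, problem_obj)
print("Best parameter set:", best_params)
```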
- lib.utils.helper_functions.fit_generic_system(path_to_input, path_to_output_dir, generated_dir, session_path)[source]
Fit a generic system using a two-phase optimization approach.
This function performs parameter fitting using a combination of:
1. Population-based optimization (PSO) for global search
2. Gradient-based optimization (NODE) for local refinement
The process includes:
1. Reading input parameters from XML
2. Running PSO to find initial parameter estimates
3. Using PSO results as initial guess for NODE
4. Running NODE to refine the parameters
5. Writing results to output directory
- Parameters:
path_to_input (str or Path) – Path to the input XML file containing optimization parameters
path_to_output_dir (str or Path) – Directory where output files will be written
generated_dir (str or Path) – Directory containing generated files (user_model.py, etc.)
session_path (Path) – Path to the session directory
Notes
PSO is used first to explore the parameter space globally
NODE uses the best PSO result as its initial guess
- Results are written to the output directory:
final_design_point.csv : Best parameters found
result_solution.csv : Solution trajectory
fitting_error.txt : Error messages if any
Progress is logged to the output directory
- Returns:
Best parameter set found during optimization
- Return type:
numpy.ndarray
See Also
FitParamsPSO : Class for PSO optimization parameters
FitParamsNODE : Class for NODE optimization parameters
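A hedged end-to-end sketch; every path below is a placeholder for a real session layout.
```python
from pathlib import Path

from lib.utils.helper_functions import fit_generic_system

session = Path("sessions/my_session")                  # hypothetical session folder
best_params = fit_generic_system(
    path_to_input=session / "user_input.xml",          # assumed XML location
    path_to_output_dir=session / "output",
    generated_dir=session / "generated",               # contains user_model.py
    session_path=session,
)
```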
- lib.utils.helper_functions.get_input_reader(path_to_input)[source]
Parse XML input file and create an XMLReader instance.
- Parameters:
path_to_input (Path) – Path to the XML configuration file
- Returns:
Configured reader instance containing all problem parameters
- Return type:
XMLReader
This function parses the XML file using ElementTree and initializes an XMLReader object with the parsed configuration data.
- lib.utils.helper_functions.optimize_function(fit_obj, input_reader, file_obj)[source]
Execute PSO optimization iterations with logging and error handling.
This function runs the PSO optimization algorithm for the specified number of iterations, logging progress and handling any errors that occur during the optimization process.
- Parameters:
fit_obj (FitParamsPSO) – PSO optimization object configured with problem parameters
input_reader (XMLReader) – Configuration reader containing iteration count and output directory
file_obj (Path) – Path to the log file for writing optimization progress
- Returns:
(best_position, best_cost) - Best parameter set found and its corresponding cost
- Return type:
tuple
- Raises:
Exception – If optimization fails, with error details written to fitting_error.txt
Notes
The function checks for a stop_fitting.flag file to allow early termination
Progress is logged to the specified log file
Errors are captured and written to fitting_error.txt in the output directory
If optimization fails, the current best position is returned if available
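A sketch of invoking the PSO loop directly, assuming fit_obj is an already-configured FitParamsPSO instance (its constructor is not documented here) and input_reader is the session’s XMLReader:
```python
from pathlib import Path

from lib.utils.helper_functions import optimize_function

# `fit_obj` (FitParamsPSO) and `input_reader` (XMLReader) are assumed to exist.
log_file = Path("sessions/my_session/output/pso_progress.log")  # placeholder log path
best_position, best_cost = optimize_function(fit_obj, input_reader, log_file)
```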
LLM Integration
- class lib.LLM.classes.LLMBase[source]
Bases:
object
Base class for Large Language Model (LLM) interfacing in the OSParamFitting framework.
This abstract base class provides a common interface for different LLM vendors and models. It defines the basic structure for API key management, client initialization, and core LLM operations like code generation, system checking, and user model creation.
The class serves as a foundation for vendor-specific implementations (e.g., OpenAI) and ensures consistent behavior across different LLM providers.
- name
Problem name identifier
- Type:
str, optional
- vendor
Model vendor/provider name
- Type:
str, optional
- model
Specific model identifier
- Type:
str, optional
- API_key
API key for authentication
- Type:
str, optional
- client
LLM client instance for making API calls
- check_generated_files(output_file_path)[source]
- Parameters:
output_file_path (Path)
- Return type:
None
- check_model_output_inputcheck(output_file_path, num_iterations=1)[source]
- Parameters:
output_file_path (Path)
num_iterations (int)
- Return type:
None
- generate_system(system_path, output_file_path=None)[source]
- Parameters:
system_path (Path)
output_file_path (Path | None)
- Return type:
None
- class lib.LLM.classes.OpenAI_model(input_file_path, reference_file_paths, api_key_string, dev_instr_filename, role)[source]
Bases:
LLMBase
Concrete implementation of LLMBase for OpenAI’s GPT models.
This class provides OpenAI-specific functionality for the OSParamFitting framework, including GPT-4 integration for code generation, error checking, and user model creation. It handles API authentication, client setup, and OpenAI-specific API calls.
The class is designed to work with OpenAI’s GPT-4.1 model and provides methods for generating Python code, checking user inputs, and creating model skeletons based on XML configuration files.
- Parameters:
input_file_path (Path)
reference_file_paths (list[Path])
api_key_string (str)
dev_instr_filename (Path)
role (str)
- input_file_path
Path to the input XML configuration file
- Type:
Path
- reference_file_paths
List of reference files for context
- Type:
list[Path]
- api_key_string
Environment variable name containing the API key
- Type:
str
- dev_instr_filename
Path to developer instructions file
- Type:
Path
- role
Role identifier for the LLM model
- Type:
str
- vendor
Set to “openai” for this implementation
- Type:
str
- model
Set to “gpt-4.1” for this implementation
- Type:
str
- vector_store_ids
Vector store identifiers (currently unused)
- Type:
list
- dev_instructions_filename
Developer instructions filename
- Type:
str, optional
- dev_instr
Content of developer instructions
- Type:
str
- client
OpenAI client instance
- Type:
OpenAI
- __init__(input_file_path, reference_file_paths, api_key_string, dev_instr_filename, role)[source]
Initialize the OpenAI_model instance with configuration and API setup.
- Parameters:
input_file_path (Path) – Path to the input XML configuration file
reference_file_paths (list[Path]) – List of reference files for context
api_key_string (str) – Environment variable name containing the OpenAI API key
dev_instr_filename (Path) – Path to developer instructions file
role (str) – Role identifier for the LLM model
- Raises:
ValueError – If the API key is not found in the specified environment variable
The constructor sets up the OpenAI client, validates the API key, and configures the model for GPT-4.1 usage. It also initializes the developer instructions.
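An illustrative construction; the file names, reference files, and role string are placeholders rather than documented defaults, and "OPENAI_API_KEY" is simply the name of the environment variable holding the key.
```python
from pathlib import Path

from lib.LLM.classes import OpenAI_model

agent = OpenAI_model(
    input_file_path=Path("sessions/my_session/user_input.xml"),
    reference_file_paths=[Path("samples/user_model_sample.py")],     # hypothetical reference file
    api_key_string="OPENAI_API_KEY",                                 # env var holding the key
    dev_instr_filename=Path("instructions/input_check_instructions.txt"),
    role="input_checker",                                            # placeholder role identifier
)

# Write a validation report for the generated files (path is a placeholder).
agent.check_generated_files(Path("sessions/my_session/output/input_check_report.txt"))
```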
- _check_generated_files(output_file_path)[source]
Check generated files for errors using OpenAI’s GPT-4.1 model.
- Parameters:
output_file_path (Path) – Path where the validation report will be written
- Return type:
None
This method uses GPT-4.1 to analyze user-uploaded Python code against the XML configuration context to identify errors and provide a comprehensive validation report.
The process involves:
1. Loading developer instructions
2. Reading the XML configuration and user code
3. Sending an analysis prompt to GPT-4.1
4. Writing the validation report to the output file
- _check_model_output_inputcheck(output_file_path, num_iterations=1)[source]
Check and refine model output through iterative validation using GPT-4.1.
- Parameters:
output_file_path (Path) – Path where the refined output will be written
num_iterations (int, optional) – Number of refinement iterations. Defaults to 1.
- Return type:
None
This method performs iterative refinement of model outputs using GPT-4.1. It can run multiple iterations to improve the quality and clarity of the generated reports or outputs.
The process involves:
1. Loading developer instructions
2. Reading the current model output
3. Running multiple refinement iterations (if specified)
4. Writing the final refined output to the output file
- _generate_system(system_path, output_file_path)[source]
Generate system code using OpenAI’s GPT-4.1 model.
- Parameters:
system_path (Path) – Path to the system definition file
output_file_path (Path) – Path where the generated system code will be written
- Return type:
None
This method uses GPT-4.1 to analyze the system definition, XML configuration, and output sample to generate appropriate system code. The generated code is written to the specified output file.
The process involves:
1. Loading developer instructions
2. Reading system, XML, and sample files
3. Sending a structured prompt to GPT-4.1
4. Writing the generated response to the output file
- _generate_user_model(input_file_path, input_sample_path, user_model_sample_path, user_model_path)[source]
Generate a user model skeleton using OpenAI’s GPT-4.1 model.
- Parameters:
input_file_path (Path) – Path to the input XML configuration file
input_sample_path (Path) – Path to a sample input file for reference
user_model_sample_path (Path) – Path to a sample user model for reference
user_model_path (Path) – Path where the generated user model will be written
- Return type:
None
This method uses GPT-4.1 to generate a user model skeleton based on the XML configuration and sample files. It analyzes the context and creates appropriate Python code templates that users can populate with their specific problem definitions.
The process involves:
1. Loading developer instructions
2. Reading XML configuration and sample files
3. Sending a generation prompt to GPT-4.1
4. Overwriting any existing user model file
5. Writing the generated skeleton to the output file
Note
If a user model already exists at the target path, it will be overwritten.