Parameter Sensitivity and Estimation

Discussion on Parameter Estimation and Sensitivity

Parameter Estimation from Time-Variant Output

Commonly used parameter estimation algorithms fall into two main categories: local minimization algorithms and genetic algorithms. Local minimization algorithms work by minimizing the objective function (i.e., the cost function of the model fit to the data) in parameter space around a user-supplied initial parameter guess. Starting from different initial guesses can lead to different minima, so to gain confidence that a global minimum has been reached, the algorithm must be run repeatedly from initial guesses that span the allowed parameter space. Such algorithms include Levenberg-Marquardt gradient descent and the downhill simplex method.

Genetic algorithms are stochastic rather than deterministic. Multiple sets of parameter values are maintained at all times during the optimization. These sets are modified by so-called 'genetic operators' such as random mutation and crossover; the resulting parameter sets are then scored against the objective function, and the best sets are carried through to the next iteration of mutation and crossover. Genetic algorithms are not guaranteed to cover the entire parameter space. Hybrid algorithms also exist, in which parameter sets are iteratively modified by the genetic operators and then subjected to local minimization. These share the advantage of genetic algorithms in that they automatically move throughout parameter space, while the local minimization step speeds convergence to optimal solutions. In practice, parameter estimation proceeds the same way whether the system's inputs are static or time-variant.
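As a concrete sketch of the multistart local-minimization strategy, the following Python snippet fits a simple first-order production/decay model to synthetic noisy data with SciPy's least_squares, restarting from random initial guesses that span the allowed parameter ranges. The model, the parameter names k_syn and k_deg, and the synthetic data are illustrative assumptions, not part of the original discussion.

    # Multistart local minimization: a minimal sketch.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    def model(x, t, k_syn, k_deg):
        # dx/dt for a first-order production/decay reaction (illustrative)
        return k_syn - k_deg * x

    def residuals(params, t, data):
        k_syn, k_deg = params
        x = odeint(model, 0.0, t, args=(k_syn, k_deg)).ravel()
        return x - data

    # Synthetic "measured" time course (assumed data, for demonstration only)
    t = np.linspace(0.0, 10.0, 50)
    true_params = (2.0, 0.5)
    rng = np.random.default_rng(0)
    data = odeint(model, 0.0, t, args=true_params).ravel()
    data += rng.normal(scale=0.05, size=t.size)

    # Restart the local fit from initial guesses spanning the allowed
    # parameter space and keep the lowest-cost minimum found.
    best = None
    for _ in range(20):
        x0 = rng.uniform([0.1, 0.01], [10.0, 5.0])
        fit = least_squares(residuals, x0, args=(t, data),
                            bounds=([0.0, 0.0], [np.inf, np.inf]))
        if best is None or fit.cost < best.cost:
            best = fit

    print("estimated (k_syn, k_deg):", best.x)

A population-based stochastic optimizer (e.g., SciPy's differential_evolution) could replace the restart loop here to play the role of the genetic algorithm described above.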

Parameter Sensitivity to Guide Experiment

The behavior of a model depends critically on some parameters and only weakly on others. This dependence can be quantified by the normalized parameter sensitivity, defined as the differential change in the quantitative behavior of the model (measured by some metric X) for a given fractional change in each parameter of interest. That is, the normalized sensitivity with respect to parameter k_i is ΔX / (Δk_i / k_i) = dX / d(ln k_i) = k_i · dX/dk_i. In general, parameter estimation will yield more information about more sensitive parameters. If we can design experiments that increase the dependence of the model's behavior on a given parameter (i.e., experiments that increase the parameter sensitivity), then we can potentially gain more information about that parameter from the experimental results. Note, however, that higher parameter sensitivity does not necessarily lead to better parameter estimates, because sensitivity does not take into account correlations between parameters, which can confound correct parameter estimation.
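A minimal sketch of how one might compute these sensitivities numerically, using a forward finite difference; the function names and the perturbation size are assumptions. Since ln(k_i·(1+h)) − ln(k_i) = ln(1+h) ≈ h for small h, perturbing k_i by a small fraction h approximates dX/d(ln k_i):

    # Forward-difference estimate of the normalized sensitivity dX/d(ln k_i).
    import numpy as np

    def normalized_sensitivities(metric, k, h=1e-4):
        """metric: callable mapping a parameter vector to a scalar X;
        k: array of parameter values. Returns one sensitivity per parameter."""
        base = metric(np.asarray(k, dtype=float))
        sens = np.empty(len(k))
        for i in range(len(k)):
            kp = np.array(k, dtype=float)
            kp[i] *= 1.0 + h                    # fractional perturbation of k_i
            sens[i] = (metric(kp) - base) / h   # ≈ k_i * dX/dk_i
        return sens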


Parameter Constraint From Estimates

Sensitivity analysis can be used to place error bounds on the parameter estimates. Essentially, each parameter value can be independently varied in order to determine the range over which the fit to the data remains within specified bounds. This can be done both for parameters with high sensitivity (which we would thus expect to have narrow error bounds) and for parameters with very low sensitivity (which we would expect to have very large bounds).
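A minimal sketch of this scan, under the assumption that a fitted parameter vector k_hat and a cost function (e.g., the sum of squared residuals) are already in hand; the grid of multipliers and the 10% cost threshold are illustrative choices, not prescribed values.

    # Scan one parameter around its estimate and report the range over
    # which the cost stays within a chosen bound of its minimum.
    import numpy as np

    def parameter_range(cost, k_hat, i, rel_bound=1.1, multipliers=None):
        """cost: callable mapping a parameter vector to a scalar;
        k_hat: fitted parameter vector; i: index of parameter to scan.
        Returns (low, high) bounds on k_hat[i]."""
        if multipliers is None:
            multipliers = np.logspace(-1, 1, 101)  # 0.1x to 10x the estimate
        c0 = cost(np.asarray(k_hat, dtype=float))
        accepted = []
        for m in multipliers:
            k = np.array(k_hat, dtype=float)
            k[i] *= m
            if cost(k) <= rel_bound * c0:
                accepted.append(m)
        if not accepted:
            return None, None
        return min(accepted) * k_hat[i], max(accepted) * k_hat[i]

As expected, a highly sensitive parameter will admit only a narrow band of multipliers before the cost exceeds the bound, while an insensitive parameter may stay within bounds across the entire grid.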

Reaction Timescales Can Limit Estimation Ability

Rate-limiting steps are obviously those for which we can obtain the best estimates. These rate constants would also be identified by parameter sensitivity analysis, so there is no need to treat them separately.


Roger's specifications

What I am hoping for from you, perhaps with Xiuxia Du, is a crisp 3-5 paragraph discussion of the means you can see us using to extract reaction rate constants from time-variant output in response to time-variant input. Then, back off that goal. If we cannot extract rates, can we constrain rates? If the output we measure is the result of a number of different upstream reactions, can anything in the time-variant output give any information about any of the reactions that contributed to it, for example, the slowest one? Or can such analysis tell us anything about which reaction rates we guessed wrong, or suggest to us which reaction rates might be most sensitive to bad guesses? And in each case, what steps would we follow to divine this information from the output?