Parameter Sensitivity and Estimation


Paragraph 1: Means to extract (i.e. estimate) parameters from time-variant output
Commonly used parameter estimation algorithms fall into two main categories: local minimization algorithms and genetic algorithms. Local minimization algorithms work by minimizing the objective function (also known as the cost function) in parameter space around a user-supplied initial parameter guess. Starting from different initial guesses can lead to different minima, so to convince oneself that one has found the global minimum, one would have to run the algorithm from initial guesses that span the allowed parameter space and compare the corresponding objective function values. Such algorithms include the Levenberg-Marquardt and downhill simplex (Nelder-Mead) algorithms. Genetic algorithms are stochastic rather than deterministic. Several sets of parameter values are maintained at all times during the optimization. These sets are modified by so-called 'genetic operators' such as random mutation and crossover. The resultant parameter sets are then evaluated by the corresponding value of the objective function, and the top sets are carried through to the next iteration of mutation and crossover. Genetic algorithms are not guaranteed to cover the allowable parameter space. Hybrid algorithms also exist, whereby sets of parameters are iteratively modified by the genetic operators and subjected to local minimization.
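
As an illustration, here is a minimal Python sketch of both strategies: SciPy's least_squares for local minimization from multiple random starting guesses, and differential_evolution as a related population-based evolutionary optimizer. The first-order decay model, its parameter values, and the synthetic data are invented for the example.

<pre>
import numpy as np
from scipy.optimize import least_squares, differential_evolution

# Hypothetical toy model: first-order decay A -> B, y(t) = y0 * exp(-k*t).
def model(params, t):
    y0, k = params
    return y0 * np.exp(-k * t)

# Synthetic "measured" time course, generated from known parameters plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
data = model((2.0, 0.7), t) + rng.normal(0, 0.05, t.size)

# Residuals for least_squares; their sum of squares is the objective function.
def residuals(params):
    return model(params, t) - data

bounds = [(0.1, 10.0), (0.01, 5.0)]  # allowed parameter space for (y0, k)

# Local minimization from several initial guesses spanning parameter space;
# comparing the resulting costs guards against settling in a local minimum.
best = None
for _ in range(20):
    guess = [rng.uniform(lo, hi) for lo, hi in bounds]
    fit = least_squares(residuals, guess,
                        bounds=([lo for lo, hi in bounds],
                                [hi for lo, hi in bounds]))
    if best is None or fit.cost < best.cost:
        best = fit
print("multi-start local fit:", best.x)

# Population-based (evolutionary) global search over the same bounds.
ga = differential_evolution(lambda p: np.sum(residuals(p) ** 2), bounds, seed=0)
print("differential evolution fit:", ga.x)
</pre>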


Paragraph 2: How parameter sensitivity can be used to help design experiments
The behavior of a model depends critically on some parameters and only weakly on others. This dependence can be quantified by the normalized parameter sensitivity, defined as the differential change in the quantitative behavior of the model (measured by some metric X) for a given fractional change in each parameter of interest. That is, the normalized parameter sensitivity is S_i = ΔX / (Δk_i / k_i) = dX / d(ln k_i). If we can design experiments that increase the dependence of the model's behavior on a given parameter (i.e. that increase its parameter sensitivity), then we can potentially gain more information about that parameter from the experimental results. Note, however, that higher parameter sensitivity does not necessarily lead to better parameter estimates, since parameter sensitivity does not take into account correlations between parameters, which can confound correct parameter estimation.
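
As a sketch, the normalized sensitivities of the toy decay model above can be estimated by a finite difference on a fractional perturbation of each parameter. The choice of output metric X (here, the model output at a fixed time point) is an arbitrary assumption for the example.

<pre>
import numpy as np

# Same hypothetical model as above: y(t) = y0 * exp(-k*t).
def model(params, t):
    y0, k = params
    return y0 * np.exp(-k * t)

# Output metric X for the sensitivity analysis (an arbitrary choice here):
# the model output at t = 2.
def metric(params):
    return model(params, 2.0)

def normalized_sensitivity(params, i, h=1e-4):
    """Estimate S_i = dX/d(ln k_i) = dX / (dk_i/k_i) by a central
    difference on a fractional perturbation of parameter i."""
    up, down = list(params), list(params)
    up[i] *= (1 + h)
    down[i] *= (1 - h)
    return (metric(up) - metric(down)) / (2 * h)

params = (2.0, 0.7)
for i, name in enumerate(["y0", "k"]):
    print(name, normalized_sensitivity(params, i))
</pre>

Parameters whose sensitivities come out near zero under a given experimental design contribute little information to the fit, which suggests redesigning the experiment (e.g. the input or the sampled time points) to raise their sensitivities before attempting estimation.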


Paragraph 3: Constraining rates given estimates (using sensitivity to give bounds)


Paragraph 4: How the timescales of the reactions might tie in with our ability to estimate parameters


Roger's specifications

What I am hoping for from you, perhaps with Xiuxia Du, is a crisp 3-5 paragraph discussion of the means you can see us using to extract reaction rate constants from time-variant output in response to time-variant input. Then, back off that goal. If we cannot extract rates, can we constrain rates? If the output we measure is the result of a number of different upstream reactions, can anything in the time-variant output give any information about any of the reactions that contributed to it, for example, the slowest one? Or can such analysis tell us anything about which reaction rates we guessed wrong, or suggest to us which reaction rates might be most sensitive to bad guesses? And in each case, what steps would we follow to divine this information from the output?