Kapernicus provides consultancy support for evaluating uncertainty of modeling:
- Our approach is applicable to virtually all kinds of computations and simulations of technical, scientific or economic character.
- Our methods utilize novel deterministic sampling techniques, which originate from the idea of propagating covariance in the unscented Kalman filter (see the sketch after this list).
- Our offer allows for complex models with high demands on numerical efficiency, emphasizes consistency of assumptions, applies a genuine multivariate perspective, and respects lack of information by addressing the ambiguity of the evaluated uncertainty.
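As a minimal sketch of the underlying idea (not Kapernicus's proprietary method), the following Python example propagates a mean and variance through an invented nonlinear model using the three standard sigma points of the one-dimensional unscented transform, and compares the result with a large random-sampling reference; the function name, model and all numbers are ours, for illustration only.

```python
import numpy as np

def unscented_transform_1d(f, mean, var, kappa=2.0):
    """Propagate a 1-D mean/variance through f using the three
    standard sigma points of the unscented transform (n=1)."""
    n = 1
    spread = np.sqrt((n + kappa) * var)
    sigma = np.array([mean, mean + spread, mean - spread])
    weights = np.array([kappa / (n + kappa),
                        1.0 / (2 * (n + kappa)),
                        1.0 / (2 * (n + kappa))])
    y = f(sigma)
    y_mean = weights @ y
    y_var = weights @ (y - y_mean) ** 2
    return y_mean, y_var

# Invented nonlinear model: y = x**1.5 (think: pressure drop vs. viscosity)
model = lambda x: x ** 1.5

# Deterministic sampling: 3 model evaluations
m, v = unscented_transform_1d(model, mean=2.0, var=0.04)

# Random-sampling (Monte Carlo) reference: 10**6 model evaluations
rng = np.random.default_rng(0)
y_mc = model(rng.normal(2.0, 0.2, size=1_000_000))

print(f"deterministic: mean={m:.4f}, var={v:.4f}")
print(f"random ref.:   mean={y_mc.mean():.4f}, var={y_mc.var():.4f}")
```

Three deterministic model evaluations reproduce the mean and variance that the random reference needs a million evaluations to estimate, which is what makes deterministic sampling attractive for computationally expensive models.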
Uncertainty of Modeling
Most calculations rely upon more or less uncertain hypotheses about model parameters and initial and boundary values, which typically describe geometrical dimensions as well as physical properties such as viscosity. The analysis that propagates these uncertainties to the modeling result often originates from the perspectives and methods of mathematical statistics. The scientific field of Uncertainty Quantification (UQ) addresses this type of modeling uncertainty. It is related to, but not equivalent to, statistics: the latter gathers and evaluates statistical information mainly by finite sampling, while the former propagates statistical information mathematically. Hence, a statistical analysis of repeated measurements often precedes UQ.
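To make that division of labour concrete, here is a small hypothetical sketch in the same spirit as the one above: classical statistics first summarizes repeated (invented) viscosity measurements as a mean and variance, and UQ then propagates that summary through an invented model using the same three sigma points.

```python
import numpy as np

# Statistics: summarize repeated (invented) viscosity measurements
meas = np.array([1.02, 0.98, 1.05, 0.99, 1.01, 0.97])   # e.g. mPa*s
mu, var = meas.mean(), meas.var(ddof=1)

# UQ: propagate mean and variance through a hypothetical model
# (flow rate ~ 1/viscosity) with the three sigma points used above
f = lambda visc: 1.0 / visc
s = np.array([mu, mu + np.sqrt(3 * var), mu - np.sqrt(3 * var)])
w = np.array([2 / 3, 1 / 6, 1 / 6])
y = f(s)
y_mean = w @ y
y_std = np.sqrt(w @ (y - y_mean) ** 2)
print(f"modelled flow rate: {y_mean:.3f} +/- {y_std:.3f}")
```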
Motivation
The perceived quality, or uncertainty, of modeling is of paramount importance for how we use the result. That is seldom explicitly stated, even though it is always the case. Without trust or confidence in the result, it is literally speaking useless. The perceived quality might however differ substantially from the true quality – that is why it should be evaluated with credible UQ methods rather than vaguely and subjectively guessed from experience. Modeling uncertainty may be utilized for decision making and risk assessment. For instance, evaluation of nuclear safety margins, road bridge design and forecasting of critical weather conditions all rely upon our ability to correctly assess modeling uncertainty. Experimental verification may be feasible for a bridge, e.g. by pulling and releasing an attached wire or by loading it with many heavy trucks. A similar test is not advisable for safety-critical nuclear applications, and is not even possible for a particular weather forecast.

Less critical but nevertheless important applications are found in the development of products such as passenger cars and heavy vehicles. Typically, a feasibility study suggests three possible versions, and the task is to find 'the best' in order to establish a competitive edge. Simply relying upon precise numbers obtained from modeling, like fuel consumption, will single out one version only. Nevertheless, the performance of two versions may be indistinguishable once the modeling uncertainty is taken into account. If so, the UQ result suggests an experimental test instead of rejection based on modeling results alone. Otherwise, we might accidentally choose the second-best version and lose against competitors who apply UQ more wisely.
Our approach
The currently prevailing practice is to evaluate modeling uncertainty with methods based on random sampling (Monte Carlo). It is also common to ignore the ambiguity implied by the ubiquitous incompleteness of our knowledge. This gives rise to a whole range of difficulties which substantially deteriorate the quality of the evaluated uncertainty. Our methodology is different and to a large extent based on our own research. It is contrasted with history and current practice in the example 'The Right Rope', constructed to be easily understood by anyone.
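As a toy illustration of that ambiguity (our own construction, far simpler than 'The Right Rope'): two input distributions with identical mean and variance, which limited information may be unable to distinguish, yield noticeably different uncertainty intervals for the very same model.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sd = 2.0, 0.2
n = 1_000_000

# Two inputs with identical mean and variance but different shape,
# i.e. indistinguishable if only those two moments are known
x_normal = rng.normal(mu, sd, n)
x_uniform = rng.uniform(mu - sd * np.sqrt(3), mu + sd * np.sqrt(3), n)

model = lambda x: x ** 1.5   # same invented model as above

for name, x in [("normal", x_normal), ("uniform", x_uniform)]:
    y = model(x)
    lo, hi = np.percentile(y, [2.5, 97.5])
    print(f"{name:7s}: std={y.std():.4f}, 95% interval=[{lo:.3f}, {hi:.3f}]")
```

The output standard deviations nearly coincide, but the 95 % intervals do not: the evaluated uncertainty is ambiguous unless the distributional assumption itself can be justified.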