The process of determining if a model is an accurate representation of the real system is called

A validation model has been proposed for the standardization of DNA methodologies based on the use of FINS as the validation methodology; a free database including reference DNA sequences has been constructed; and specific plasmidic standards have been developed as reference materials.

From: Improving Seafood Products for the Consumer, 2008

Simulation Modeling as a Tool for Synthesis of Stock Identification Information

Lisa A. Kerr, Daniel R. Goethel, in Stock Identification Methods (Second Edition), 2014

21.2.5 Model Validation

Model validation is the process of determining whether the model accurately represents the behavior of the system (Aumann, 2007). Model validity should be evaluated both operationally (i.e., by determining if model output agrees with observed data) and conceptually (i.e., by determining whether the theory and assumptions underlying the model are justifiable; Sargent, 1984; Rykiel, 1996). Models can be validated by comparing output to independent field or experimental data sets that align with the simulated scenario. However, it is important to consider the quality of the data (e.g., the level of measurement error), whether they truly represent the system, and whether they provide the best test of the model (Rykiel, 1996; Aumann, 2007). Operational validation of the model using independent data may not be possible when the simulated scenario extends outside the realm of observed conditions (e.g., predicting responses to future climate change) or when using probabilistic forecasts (i.e., those that include uncertainty in system processes). In the latter case, the choice between a deterministic and a probabilistic framework involves a trade-off between accuracy and precision: in general, deterministic models demonstrate higher precision but are less accurate than those that incorporate uncertainty (de Young et al., 2004). However, regardless of the type of simulation, conceptual validation is always feasible.

Performing sensitivity analyses is another crucial part of the model validation process. The purpose of running a sensitivity analysis is to determine the relative influence of parameters, initial conditions, and alternative assumptions on model output. The process is iterative, providing feedback that can improve the model. A sensitivity analysis compares response variables from multiple model runs. In each of the comparison runs, all parameters are held constant except for the parameter being examined. When a model parameter is observed to exert undue influence on the output of the simulation in a way that does not reflect reality, the characterization of the model must be reevaluated. Conducting extensive sensitivity analyses to understand how each parameter influences the model's behavior is an essential part of the simulation process (Peck, 2004).
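
As a simple illustration of this one-at-a-time approach, the following Python sketch perturbs each parameter of a hypothetical population model by ±10% and reports the relative change in a single response variable; the model, parameter names, and values are invented for demonstration only.

```python
import numpy as np

def run_model(params):
    """Hypothetical stock-dynamics model: returns a single response variable
    (e.g., simulated abundance) for a given parameter set."""
    r, K, m = params["r"], params["K"], params["m"]
    n = 50.0
    for _ in range(100):                      # iterate simple logistic dynamics with mortality
        n += r * n * (1.0 - n / K) - m * n
    return n

baseline = {"r": 0.4, "K": 1000.0, "m": 0.1}

# One-at-a-time sensitivity: perturb each parameter by +/-10% while holding
# the others at baseline, and record the relative change in model output.
ref_output = run_model(baseline)
for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        change = (run_model(perturbed) - ref_output) / ref_output
        print(f"{name} x{factor:.1f}: output change {change:+.1%}")
```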

Ultimately, model validation strengthens support for the model and the reliability of its outputs (Jackson et al., 2000). Building a useful simulation requires the construction of a model that is a reasonably accurate representation of the biological phenomenon under consideration (Peck, 2004; Aumann, 2007). Although no model can be “proven correct,” validation tests the reliability and plausibility of model performance (Araujo et al., 2005). Because natural systems are dynamic, model validation should be an ongoing process, especially as new data become available from the physical system.

URL: https://www.sciencedirect.com/science/article/pii/B9780123970039000217

Chemical imaging in food authentication

Mohammed Kamruzzaman, in Food Authentication and Traceability, 2021

5.3.5 Multivariate model validation and evaluation

Model validation is an essential aspect of all multivariate calibration methods. Prior to applying the multivariate calibration model for external predictions, it is mandatory to check the performance of the model in predicting unknown samples. Indeed, model validation has become a standard part of multivariate spectral analysis. The overall objective of validation is to guarantee that the model will work in the future for similar unknown data. Many different validation methods are available. Cross-validation (CV), whether leave-one-out or k-fold, is the most frequently used method for model validation. If validation with an independent dataset is not possible due to a small sample size, CV is very economical. However, CV has been shown to yield an overoptimistic estimate of prediction ability. To accurately ascertain model accuracy and robustness, validation must be performed using a separate dataset, which should comprise samples from different batches taken at different times (Boulesteix and Strimmer, 2006). If a separate set is not available, the best approach is to divide the spectral data and the corresponding reference data into training and testing sets. The most commonly used data partition methods are random and rational selection. Random selection is the simplest and most popular method for dividing data into training and testing sets, but dividing the overall dataset randomly does not take the spectral properties or the corresponding reference parameters into account. Rational partition methods (e.g., Kennard-Stone (KS), onion, and joint XY), on the other hand, split the data into training and testing sets in an intelligent, logical, and systematic way. In practice, comparison and testing of different data partition techniques are necessary to select the best one for a particular application.
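
The sketch below illustrates one rational partition approach, a minimal Kennard-Stone (KS) selection, applied to synthetic data. It is a generic illustration rather than code from the chapter, and a real application would compare it against random and other rational splits.

```python
import numpy as np

def kennard_stone(X, n_train):
    """Minimal Kennard-Stone selection: choose calibration samples that
    span the spectral space as uniformly as possible."""
    X = np.asarray(X, dtype=float)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # Start with the two most distant samples.
    selected = list(np.unravel_index(np.argmax(dist), dist.shape))
    while len(selected) < n_train:
        remaining = [i for i in range(len(X)) if i not in selected]
        # For each remaining sample, distance to its nearest selected sample.
        d_min = dist[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(d_min))])
    test = [i for i in range(len(X)) if i not in selected]
    return selected, test

# Usage on synthetic "spectra": 100 samples x 50 wavelengths.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
train_idx, test_idx = kennard_stone(X, n_train=70)
```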

Finally, the quality of multivariate calibration models is evaluated with the help of calibration and prediction statistics; generally, the prediction statistics are more important. Commonly used prediction statistics are the root mean square error of prediction (RMSEP) or standard error of prediction (SEP) and the coefficient of determination (R2). Generally, the accuracy (i.e., how close the measured and predicted values are) of a multivariate quantitative regression model is considered excellent when R2 ≥ 0.90 (Kamruzzaman et al., 2012; Sone et al., 2012). The ratio of prediction to deviation (RPD), defined as SD/RMSEP or SD/SEP, can also be used to evaluate the overall prediction capacity of a model. RPD measures the relative predictive performance of a model more directly than either R2 or RMSEP (or SEP) used separately (Kamruzzaman et al., 2012; Liu et al., 2010). Higher RPD values indicate better multivariate models: RPD values greater than 3 are useful for screening, values greater than 5 can be used for quality control, and values greater than 8 are suitable for any application (Manley, 2014). On the other hand, the performance of multivariate qualitative calibration models is typically evaluated on the basis of their sensitivity, specificity, and overall accuracy.
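
For illustration, the following snippet computes RMSEP, R2, and RPD (SD/RMSEP) from a set of hypothetical reference and predicted values; the numbers are invented and serve only to show how the statistics are obtained.

```python
import numpy as np

def prediction_statistics(y_ref, y_pred):
    """Common prediction statistics for a quantitative calibration model."""
    y_ref, y_pred = np.asarray(y_ref, float), np.asarray(y_pred, float)
    residuals = y_ref - y_pred
    rmsep = np.sqrt(np.mean(residuals ** 2))        # root mean square error of prediction
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_ref - y_ref.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                      # coefficient of determination
    rpd = y_ref.std(ddof=1) / rmsep                 # ratio of prediction to deviation (SD/RMSEP)
    return {"RMSEP": rmsep, "R2": r2, "RPD": rpd}

# Hypothetical reference vs. predicted values from an external test set.
print(prediction_statistics([2.1, 3.4, 4.0, 5.2, 6.1],
                            [2.0, 3.6, 3.9, 5.0, 6.3]))
```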

URL: https://www.sciencedirect.com/science/article/pii/B9780128211045000076

Toxicokinetics in Veterinary Toxicology

Deon van der Merwe, ... Jennifer L. Buur, in Veterinary Toxicology (Third Edition), 2018

Model Validation

Model validation refers to the process of confirming that the model actually achieves its intended purpose. In most situations, this involves confirming that the model is predictive under the conditions of its intended use. This type of validation is performed by comparing model simulations to an independent experimental data set; data used in the estimation of model parameter values cannot be included in this external data set. Simulated data derived from the model are compared to observed data points. The sets of data may be plotted side by side using simulation plots, or output values at specific times can be compared using correlation plots and residual plots. Results are then subjected to qualitative and quantitative analysis for goodness of fit. Unlike traditional compartmental pharmacokinetic modeling approaches, there is currently no standardized method to evaluate the goodness of fit for PBPK models. Often, a combination of visual examination of residual plots and simulation plots, along with quantification of regression correlation values (R2 values), is used. In general, residual plots should have normal distributions around zero without any time bias. Correlation plots should have regression lines with R2 values close to 1 and intercepts close to the starting value (in most cases, zero). Simulation plots are also used to detect time and concentration bias.
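
As a rough illustration of these checks, the sketch below compares hypothetical observed and simulated concentration-time points, examines the residuals for bias, and computes the slope, intercept, and R2 of the correlation plot; the data and units are invented.

```python
import numpy as np

# Hypothetical observed concentrations and model-simulated values at the
# same sampling times (values and units are illustrative only).
t_obs = np.array([0.5, 1, 2, 4, 8, 12, 24])             # h
c_obs = np.array([8.2, 7.1, 5.6, 3.9, 1.8, 0.9, 0.20])  # mg/L
c_sim = np.array([8.0, 7.3, 5.4, 3.7, 1.9, 1.0, 0.25])  # mg/L

# Residuals should scatter around zero without a trend over time.
residuals = c_obs - c_sim
print("mean residual:", residuals.mean())

# Correlation-plot statistics: regression of simulated on observed values.
slope, intercept = np.polyfit(c_obs, c_sim, 1)
r2 = np.corrcoef(c_obs, c_sim)[0, 1] ** 2
print(f"slope={slope:.2f}, intercept={intercept:.2f}, R2={r2:.3f}")
```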

If a complex model was created by the incorporation of population distributions, then model validation typically becomes more qualitative in nature. In these cases, sampling methods such as Monte Carlo or bootstrapping can be used to generate specific values for the parameters in question. This parameter value assignment is repeated a large number of times, and the output becomes a set of simulations that can be plotted alongside each other. This gives a visual representation of what a population may look like (Sweeney et al., 2001). Fig. 8.8 shows a Monte Carlo analysis using the SMZ model to simulate multiple oral dosing (Buur et al., 2006). The oral absorption rate, rate of gastric emptying, protein binding, and both renal and hepatic clearances were varied. Validation of these data is performed by plotting the multiple simulations alongside independent experimental data points. However, confidence in the distributions, and in the model, is determined by visual inspection alone rather than by correlation coefficients or residual plots. Generally, the more data points that fall within the spread of the output, the higher the confidence in the predictive ability of the model.

Figure 8.8. A Monte Carlo analysis using a physiologically based pharmacokinetic model, used in the prediction of sulfamethazine tissue residues in swine, to simulate multiple oral dosing.
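
The following is a generic Monte Carlo sketch in the spirit of the approach described above. It uses a simple one-compartment oral model with assumed parameter distributions, not the actual SMZ PBPK model of Buur et al. (2006), and is intended only to show how repeated parameter draws yield a band of simulations that can be plotted alongside observed data.

```python
import numpy as np

def one_compartment_oral(t, dose, ka, ke, v):
    """Plasma concentration after a single oral dose in a one-compartment model
    (assumes ka != ke)."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(42)
t = np.linspace(0, 24, 100)          # hours
n_iter = 1000                        # number of Monte Carlo draws

curves = np.empty((n_iter, t.size))
for i in range(n_iter):
    # Draw each varied parameter from an assumed population distribution.
    ka = rng.lognormal(mean=np.log(1.0), sigma=0.3)   # absorption rate (1/h)
    ke = rng.lognormal(mean=np.log(0.2), sigma=0.3)   # elimination rate (1/h)
    v = rng.normal(loc=5.0, scale=0.5)                # volume of distribution (L/kg)
    curves[i] = one_compartment_oral(t, dose=20.0, ka=ka, ke=ke, v=v)

# The 5th-95th percentile band can be plotted alongside observed data points;
# confidence is judged by how many observations fall within the spread.
lower, upper = np.percentile(curves, [5, 95], axis=0)
```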

URL: https://www.sciencedirect.com/science/article/pii/B9780128114100000088

Structures of Large RNA Molecules and Their Complexes

Swati Jain, ... Jane S. Richardson, in Methods in Enzymology, 2015

2.3 Model validation

Model validation covers covalent geometry and steric interactions, both applicable to any molecule, and conformational parameters, which are quite distinct for protein and RNA and are therefore covered in detail in Section 3.

Geometry criteria include covalent bond lengths and angles, planarity, and chirality. Target values, including their estimated standard deviations or weights, are derived from the databases of small-molecule crystal structures (Allen, 2002; Grazulis et al., 2009), or perhaps from quantum calculations, especially for unusual bound ligands (e.g., Moriarty, Grosse-Kunstleve, & Adams, 2009). With some exceptions discussed below, geometry validation primarily serves as a sanity check for whether sensible restraints were used in the refinement.

Steric interactions include both favorable hydrogen bonds and van der Waals interactions and also unfavorable or even impossible atomic overlaps, or “clashes.” Not every donor or acceptor is H-bonded and not every atom grouping is tightly packed, but that should nearly always be true in the molecule interior and especially in regular secondary structure. If there are two possible conformations consistent with the electron density and one of them has more of the good interactions, then it is much more likely to be correct. Bad steric clashes have been flagged as problems in just about every relevant analysis, but only for non-H atoms until our lab's development of all-atom contacts (Word, Lovell, LaBean, et al., 1999; Word, Lovell, Richardson, et al., 1999), which is the most distinctive contribution of MolProbity validation. Our Reduce program adds all H atoms, now by default in the electron cloud-center positions (Deis et al., 2013) that are most appropriate both for crystallography, where it is the electrons that diffract X-rays, and also for all-atom contact analysis, where van der Waals interactions are between the electron clouds, not between the nuclei. Most H atoms lie in directions determined quite closely by the planar or tetrahedral geometry of their parent heavy atoms, and even methyl groups spend almost all their time very close to a staggered orientation. Reduce then optimizes the rotatable OH, SH, and NH3 positions within entire H-bond networks, including any needed correction of the 180° “flip” orientation for side-chain amides and histidine rings (Word, Lovell, Richardson, et al., 1999). The Probe program analyzes all atom–atom contacts within 0.5 Å of touching van der Waals surfaces, assigning numerical scores and producing visualizations as paired patches of dot surface like those seen in Figs. 2 and 3. A cluster of hotpink clash spikes gives the most telling signal of a serious local problem in the model. Barring a misunderstood atom nomenclature, a flagged steric overlap > 0.4 Å usually, and > 0.5 Å nearly always, means that at least one of the clashing atoms must move away. The MolProbity “clashscore” is normalized as the number of clashes per 1000 atoms in the structure and is reported as percentile scores by the wwPDB (Fig. 4, second slider bar). As shown by the example in Fig. 2, all-atom clashes are a valuable diagnostic for RNA backbone conformation.
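
The toy Python sketch below shows how a clashes-per-1000-atoms score could be computed from van der Waals overlaps. It ignores covalent bonding, hydrogen bonding, and the other refinements of the actual Reduce/Probe all-atom contact analysis, and the coordinates and radii are made up.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def toy_clashscore(coords, radii, cutoff=0.4):
    """Toy clash count: flag overlaps where two van der Waals spheres
    interpenetrate by more than `cutoff` angstroms, then normalize to clashes
    per 1000 atoms (the MolProbity convention). Real all-atom contact analysis
    also excludes covalently bonded pairs and recognizes hydrogen bonds;
    those refinements are omitted here."""
    coords = np.asarray(coords, dtype=float)
    radii = np.asarray(radii, dtype=float)
    dists = squareform(pdist(coords))                    # pairwise distances (angstroms)
    overlap = radii[:, None] + radii[None, :] - dists
    np.fill_diagonal(overlap, -np.inf)                   # ignore self-pairs
    n_clashes = np.count_nonzero(overlap > cutoff) // 2  # symmetric matrix: count each pair once
    return 1000.0 * n_clashes / len(coords)

# Made-up coordinates (angstroms) and radii for three atoms; the first pair overlaps badly.
coords = [[0.0, 0.0, 0.0], [2.9, 0.0, 0.0], [10.0, 0.0, 0.0]]
radii = [1.7, 1.7, 1.7]
print(toy_clashscore(coords, radii))                     # 1 clash among 3 atoms -> ~333 per 1000
```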

URL: https://www.sciencedirect.com/science/article/pii/S0076687915000208

Chemometrics Applied to Food Control

Evandro Bona, ... Patrícia Valderrama, in Food Control and Biosecurity, 2018

5 Model Validation

Model validation is intended to compare the model predictions with a real-world, unknown dataset in order to assess model accuracy and predictive capability (Cheng and Sun, 2015). This can be done by calculating quality parameters of the multivariate model known as figures of merit, which can be summarized as accuracy, linearity, adjustment, sensitivity, analytical sensitivity, and the limits of detection and quantification. The root mean square errors of calibration (RMSEC) and prediction (RMSEP) represent accuracy. The RMSECV is also considered, since an ideal multivariate calibration model shows similar values for RMSEP, RMSEC, and RMSECV; this occurs because of the random errors fit by the model. The RMSEC always decreases as the number of latent variables (LVs) increases, because as more LVs are included in the calibration model, the model begins to fit the random errors embedded in the spectra and concentrations. On the other hand, the RMSECV and RMSEP can increase when more LVs are included. This occurs because new samples, which were not included in the calibration set, may present a different realization of random errors, and the calibration model cannot fit these errors to the same degree as the errors in the calibration set. In practice, obtaining the same value for these parameters (RMSEC, RMSECV, and RMSEP) is the ideal situation, but it is not an easy task. It is therefore better that RMSEC presents slightly higher values than RMSEP, because otherwise it would indicate that the model is overfitted and that fewer LVs would be necessary for that model (dos Santos et al., 2013).
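
A minimal sketch of this diagnostic, using scikit-learn on synthetic data, is shown below; it tracks RMSEC, RMSECV, and RMSEP as the number of LVs grows, which is where the divergence described above becomes visible. The data, split, and settings are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, train_test_split

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((np.ravel(y_true) - np.ravel(y_pred)) ** 2))

# Synthetic "spectra" (X) and reference values (y) stand in for real data.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 200))
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=120)

X_cal, X_test, y_cal, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

for n_lv in range(1, 11):
    pls = PLSRegression(n_components=n_lv).fit(X_cal, y_cal)
    rmsec = rmse(y_cal, pls.predict(X_cal))                        # calibration error
    rmsecv = rmse(y_cal, cross_val_predict(PLSRegression(n_components=n_lv),
                                           X_cal, y_cal, cv=10))  # cross-validation error
    rmsep = rmse(y_test, pls.predict(X_test))                      # external prediction error
    print(f"LVs={n_lv:2d}  RMSEC={rmsec:.3f}  RMSECV={rmsecv:.3f}  RMSEP={rmsep:.3f}")
```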

The RMSEC, RMSEP, and RMSECV are global parameters that incorporate both random and bias errors. Accuracy can therefore also be represented by the fit of the reference values against the predicted ones: the slope, the intercept, the correlation coefficient (adjustment), and the elliptical joint confidence regions (Valderrama et al., 2009), as exemplified in Fig. 4.10.

Figure 4.10. Elliptical Joint Confidence Regions at 95% for the Slope and Intercept of the Regression of Predicted Concentrations Versus Reference Values Using Ordinary Least Squares.

For this generic example, the ellipse contains the ideal point (1, 0) for slope and intercept, respectively, showing that the reference value and the PLS model are not significantly different at the 95% confidence level. It is also possible to conclude, on the basis of the 95% confidence interval, that no constant or proportional systematic errors are present in the model, since the interval contains the expected values of 1 and 0 for the slope and the intercept, respectively (Valderrama et al., 2010).

Because of the preprocessing used in PLS model development, the analytical sensitivity is more suitable for evaluating the sensitivity of a multivariate calibration method. The sensitivity is estimated as the inverse of the norm of the regression coefficient vector. The analytical sensitivity is calculated as the ratio between the sensitivity and an estimate of the noise level in the data, which can be obtained from replicate measurements of a blank sample (Valderrama et al., 2009). Assuming a perfect model fit and that the spectral noise represents the largest source of error, the inverse of the analytical sensitivity (analytical sensitivity−1) establishes the minimum concentration difference, in the absence of errors in the property of interest, that is discernible by the analytical method over the range of concentrations where it was applied (dos Santos et al., 2016; Valderrama et al., 2009). On this basis, in an example where the analytical sensitivity−1 is 0.1 mg/mL, it is possible to distinguish samples with a concentration difference of 0.1 mg/mL. However, this value is an optimistic estimate, since it considers the spectral noise to be the largest source of error and does not take into account the lack of fit of the model (Valderrama et al., 2010).

The linearity evaluation is problematic in multivariate calibration using PLS because the variables are previously decomposed by PCA. Plots of the residuals (Martens and Naes, 1989; Valderrama et al., 2009) and of the scores (Martens and Naes, 1989) against the predicted values give a qualitative estimate of the linearity of the model; they must present random and linear behaviors, respectively. However, the scores plot can only be used when the PLS model requires few LVs to describe the data set (Martens and Naes, 1989; Valderrama et al., 2007).

The limits of detection and quantification represent the minimum concentrations of the property of interest that can be detected and quantified, respectively. In this case, these figures of merit can be estimated as in univariate calibration (Valderrama et al., 2009).
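
The snippet below sketches how the sensitivity, analytical sensitivity, and detection and quantification limits described in the preceding paragraphs might be estimated from a PLS regression vector and replicate blank spectra. The pooled noise estimate and the 3.3 and 10 factors are common conventions assumed here, not prescriptions from the text.

```python
import numpy as np

def figures_of_merit(b, blank_spectra):
    """Rough sketch of sensitivity-related figures of merit from a PLS
    regression vector `b` and replicate spectra of a blank sample."""
    b = np.ravel(np.asarray(b, dtype=float))
    noise = np.asarray(blank_spectra, dtype=float).std(axis=0, ddof=1).mean()  # pooled noise estimate
    sen = 1.0 / np.linalg.norm(b)            # sensitivity: inverse norm of the regression vector
    gamma = sen / noise                      # analytical sensitivity
    lod = 3.3 * noise * np.linalg.norm(b)    # limit of detection (concentration units)
    loq = 10.0 * noise * np.linalg.norm(b)   # limit of quantification
    return {"SEN": sen, "gamma": gamma, "gamma_inv": 1.0 / gamma, "LOD": lod, "LOQ": loq}

# Usage with stand-in data: 10 replicate blank spectra and a made-up regression vector.
rng = np.random.default_rng(0)
blanks = 0.01 * rng.normal(size=(10, 200))
b = rng.normal(size=200) / 50.0
print(figures_of_merit(b, blanks))
```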

For SVR, almost all of the figures of merit already cited can be employed, including RMSEC, RMSEP, RMSECV, correlation coefficients, bias, the residual prediction deviation (RPD), the relative standard deviation (RSD), and the elliptical joint confidence regions (Botelho et al., 2014; Valderrama et al., 2009).

As in PLS-DA, SVC also allows estimating class probabilities that provide information about the uncertainty of belonging to one class or the other (Luts et al., 2010). The receiver operating characteristic curve complements the SVC performance analysis with parameters, such as accuracy, sensitivity, and specificity (Bona et al., 2017).

URL: https://www.sciencedirect.com/science/article/pii/B9780128114452000040

Philosophy of Econometrics

Aris Spanos, in Philosophy of Economics, 2012

7.1 Statistical model specification vs. model selection

As argued in section 4.1, from the error-statistical perspective the problem of specification, as originally envisaged by Fisher [1922], is one of choosing a statistical model Mθ(x) so as to render the particular data x0 a truly typical realization of the stochastic process {Xk, k∈N} parameterized by Mθ(x). This problem is addressed by evaluating Mθ(x) in terms of whether it is statistically adequate, that is, whether it accounts for the regularities in the data and its probabilistic assumptions are valid for data x0. In cases where the original model is found wanting, one should respecify and assess model adequacy until a validated model is found; see [Spanos, 2006b].

The model validation problem is generally acknowledged in statistics:

“The current statistical methodology is mostly model-based, without any specific rules for model selection or validating a specified model.”[Rao, 2004, p. 2]

Over the last 25 years or so, Fisher's specification problem has been recast in the form of model selection, which breaks the problem up into two stages: first, a broad family of models {Mθi(x), i = 1, 2, …, m} is selected, and then a particular model within that family, say Mθk(x), is chosen using certain norm-based (goodness-of-fit) criteria; see [Rao and Wu, 2001]. The quintessential example of such a model selection procedure is the Akaike Information Criterion (AIC), where one compares different models within a prespecified family using:

(39)   AIC(i) = −2 ln fi(x; θ̂i) + 2Ki,   i = 1, 2, …, m,

where Ki denotes the number of unknown parameters for model i. There are numerous variations/extensions of the AIC; see [Burnham and Anderson, 2002]. Such norm-based model selection encompasses several procedures motivated by mathematical approximation, such as curve-fitting by least-squares, structural estimation using GMM as well as nonparametric procedures; see [Pagan and Ullah, 1999].
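
As a small numerical illustration of Eq. (39), the snippet below computes AIC for a few hypothetical candidate models from their maximized log-likelihoods and parameter counts; note that, per the argument that follows, the resulting ranking says nothing about the statistical adequacy of the selected model.

```python
import numpy as np

def aic(log_likelihood, k):
    """Akaike Information Criterion: AIC = -2 ln f(x; theta_hat) + 2K."""
    return -2.0 * log_likelihood + 2 * k

# Hypothetical maximized log-likelihoods and parameter counts K for m candidate
# models within a prespecified family; the lowest AIC would be selected.
candidates = {"M1": (-152.3, 3), "M2": (-148.9, 5), "M3": (-148.1, 9)}
scores = {name: aic(ll, k) for name, (ll, k) in candidates.items()}
print(min(scores, key=scores.get), scores)
```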

Spanos [2010b] argued that Akaike-type model selection procedures invariably give rise to unreliable inferences because:

(i) they ignore the preliminary step of validating the prespecified family of models, and

(ii) their selection amounts to testing comparisons among the models within the prespecified family but without ‘controlling’ the relevant error probabilities.

The end result is that the selected model Mθk(x) is invariably statistically inadequate. This is illustrated in [Spanos, 2007a], where the Kepler and Ptolemy models for the motion of the planets are compared in terms of goodness-of-fit vs. statistical adequacy. It is shown that, despite the excellent fit of the Ptolemaic model, it does not ‘account for the regularities in the data’, contrary to conventional wisdom; see [Laudan, 1977]. In contrast, the statistical adequacy of the Kepler model renders it a statistical model with a life of its own, regardless of its substantive adequacy, which stems from Newton's law of universal gravitation.

One can argue that securing statistical adequacy addresses both objectives associated with the model selection procedures: selecting a prespecified family of models, and determining the ‘best’ model within this family, rendering these procedures superfluous and potentially misleading; see [Spanos, 2010b].

URL: https://www.sciencedirect.com/science/article/pii/B9780444516763500130

Physiologically Based Pharmacokinetic Models in the Risk Assessment of Developmental Neurotoxicants

Kannan Krishnan, Melvin Andersen, in Handbook of Developmental Neurotoxicology, 1998

D Model Validation

The model validation process ensures the adequacy of the model's representation of the system under study. The approaches commonly applied for testing the adequacy of computer simulation models can be classified into four categories: 1. the inspection approach, 2. subjective assessment, 3. discrepancy measures, and 4. statistical tests. The testing of the degree of concordance between PBPK model simulations and experimental data has generally been conducted by "eye-balling," or the visual inspection approach. This approach involves visual comparison of the plots of simulated data (usually continuous and represented by solid lines) with experimental values (usually discrete and represented by symbols) against a common independent variable (usually time). The rationale behind this approach is that the greater the commonality between the simulated and experimental data, the greater our confidence in the model. The visual inspection approach to model validation continues to be used pending the validation of statistical tests and discrepancy measures appropriate for application to PBPK models. While the visual inspection approach lacks mathematical rigor, it has the advantage of requiring the human modeler to become better informed about the behavior of the model. Haddad et al. (1995) screened various statistical procedures (correlation, regression, the confidence interval approach, the lack-of-fit F test, univariate analysis of variance, and multivariate analysis of variance) for their potential usefulness in testing the degree of agreement between PBPK model simulations and experimental data obtained in intact animals. The lack-of-fit F test has been suggested as a useful and practical way of evaluating the adequacy of simulation models; in particular, this simple procedure permits the consideration of multiple datasets (e.g., data for several endpoints collected at various time intervals) in an evaluation of model validity. The multivariate analysis of variance probably represents the most appropriate test, the variance of the simulation data permitting.

The application of an appropriate statistical test provides a means of evaluating whether model simulations are significantly different from experimental values. Regardless of the outcome of such statistical analysis, it is often necessary and useful to represent, in a quantitative manner, the extent to which the model simulations differ from the experimental data. In this context, Krishnan et al. (1995) developed a simple index to represent the degree of closeness or discrepancy between a priori model predictions and the experimental data used during the model validation phase. The approach involves calculating the root mean square of the error (the difference between the individual simulated and experimental values at each sampling point in a time-course curve) and dividing it by the root mean square of the experimental values. The resulting numeric values of the discrepancy measure for the several datasets (each corresponding to an endpoint) obtained in a single experimental study are then combined using weights proportional to the number of data points in each dataset. Such consolidated discrepancy indices obtained from several experiments (e.g., exposure scenarios, doses, routes, species) are averaged to give an overall discrepancy index referred to as the PBPK index. The application of this kind of "quantitative" method should help remove ambiguity in communicating the degree of concordance or discrepancy between PBPK model simulations and experimental data.
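
A minimal sketch of this discrepancy index, following the description above (the exact weighting and averaging conventions are assumptions), is:

```python
import numpy as np

def discrepancy_index(simulated, observed):
    """Discrepancy for one dataset: RMS of (simulated - observed) divided by
    the RMS of the observed values, in the spirit of Krishnan et al. (1995)."""
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    return np.sqrt(np.mean((simulated - observed) ** 2)) / np.sqrt(np.mean(observed ** 2))

def combined_index(datasets):
    """Combine per-dataset indices weighted by the number of data points."""
    weights = np.array([len(obs) for _, obs in datasets], dtype=float)
    indices = np.array([discrepancy_index(sim, obs) for sim, obs in datasets])
    return np.sum(weights * indices) / weights.sum()

# Hypothetical (simulated, observed) time-course datasets from one study.
study = [
    ([1.0, 0.8, 0.5, 0.2], [1.1, 0.7, 0.5, 0.25]),  # e.g., blood concentrations
    ([4.0, 2.5, 1.0],      [3.6, 2.8, 1.1]),        # e.g., tissue concentrations
]
print(combined_index(study))
```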

The use of a discrepancy measure or statistical test to show that a priori predictions of a particular endpoint are in agreement with experimental data is, by itself, insufficient. These approaches provide a quantitative measure of differences, but no information on either the robustness of the model or the reliability of the model structure. In this context, it is important to verify the influence of the variability, uncertainty, and sensitivity associated with model parameters. A number of approaches, most often using Monte Carlo analysis, and examples of their applications are available in the literature (Farrar et al., 1989; Bois and Tozer, 1990; Hattis et al., 1990; Hetrick et al., 1991; Krewski et al., 1995; Varkonyi et al., 1995).

Most PBPK modeling efforts have judged model adequacy and validity by comparing a priori predictions to experimental data on blood, plasma, or exhaled air concentrations. The fact that model predictions of one endpoint, that is, a measurement endpoint (e.g., the plasma concentration of the parent chemical), are adequate does not mean that all other endpoints of toxicologic and risk assessment relevance, that is, assessment endpoints (e.g., the concentration of a metabolite in the brain), would necessarily be predicted with the same level of accuracy. When data on measurement endpoints rather than assessment endpoints are used to validate the model, it is essential to choose measurement endpoints that have the same kind of sensitivity and response pattern as the assessment endpoint.

URL: https://www.sciencedirect.com/science/article/pii/B9780126488609500510

Methods for the Characterization, Authentication, and Adulteration of Essential Oils

Tzi Bun Ng, ... Jack Ho Wong, in Essential Oils in Food Preservation, Flavor and Safety, 2016

NIR Spectroscopy for Various Essential Oils

Cross-validation models can be used to predict with accuracy virtually all of the constituents of essential oils. In various cinnamon (Cinnamomum zeylanicum) and clove (Syzygium aromaticum) essential oils, which demonstrated analogous compositions, 23 components (accounting for the bulk of the oil) were correctly predicted. Likewise, 20 components in Cinnamomum camphora, 32 components in Ravensara aromatica, and 26 components in Lippia multiflora that made up the bulk of the oils, were also correctly predicted. For almost all of the components, the modeled and reference values obtained by GC–FID exhibited a high correlation and a variance below 5%. The model was used to disclose erroneous commercial labeling of C. camphora oil as R. aromatica oil (Juliani et al., 2006).

URL: https://www.sciencedirect.com/science/article/pii/B978012416641700002X

Computational modelling of muscle, tendon, and ligaments biomechanics

Tobias Siebert, ... Christian Rode, in Computational Modelling of Biomechanics and Biotribology in the Musculoskeletal System (Second Edition), 2021

8.5.3 Experimental requirements for three-dimensional model validation

Three-dimensional model validation faces two major challenges: first, the data must be three-dimensional; second, the data should be extensive, i.e., they should include active and passive force characteristics, the three-dimensional muscle shape during contraction, the separation of muscle tissue and tendon-aponeurosis complex, and the muscle fiber orientation. Fascicle lengths and pennation angles are major constituents of the muscle architecture, and they largely determine the function and the shape of the muscle. As shown for the rabbit calf muscles (Fig. 8.12), a large variability exists in the architecture of different muscles. Determination of the realistic architecture and its inclusion in 3D models is a prerequisite to understand the active muscle deformation, as well as the interaction of muscle with surrounding tissue and external forces (Yucesoy et al., 2003; Siebert et al., 2014a).

Fig. 8.12. Muscle architecture of the rabbit (Oryctolagus cuniculus) calf muscles of one animal (m = 3040 g). Fascicle traces were determined with a manual MicroScribe digitizer (Wick et al., 2018). Muscles exhibit variability in fascicle length (FL) and pennation angle (α is the mean fascicle angle determined as the mean of the angles between a line through muscle origin and insertion and a second line through fascicle segments). M. plantaris FL = 10.8 ± 2.0 mm, α = 15.9 ± 6.1°; M. soleus FL = 14.0 ± 2.2 mm, α = 15.8 ± 5.4°; M. gastrocnemius lateralis FL = 14.6 ± 2.2 mm, α = 15.0 ± 6.2°; M. gastrocnemius medialis FL = 13.5 ± 2.2 mm, α = 18.2 ± 8.5°. *, Only the free tendon was digitized, hence gaps appear between tendon and fascicles.

Data were determined by Dr. Carolin Wick, Friedrich-Schiller-University, Jena, Germany.

Several 3D muscle models use three-dimensional muscle shapes in the relaxed state to perform simulations. Publications using geometries of activated muscles for model validation are scarce. The first contribution, by Tang et al. (2007), uses two-dimensional silhouettes of the frog gastrocnemius muscle that were measured during tetanic contraction. However, for extensive model validation, further information such as fiber orientations or the spatial arrangement of additional tissues (e.g., the tendon-aponeurosis complex) needs to be generated.

So far, arguably the most comprehensive data set for skeletal muscle model validation was published by Böl et al. (2013). In that work, the acquired data included three-dimensional shapes of the rabbit M. soleus muscles divided into the muscle tissue, the aponeurosis, and the tendons; the muscle fiber orientations at optimal length; and the active as well as passive force-time relationships during isometric, isotonic, and isokinetic contractions. Based on this detailed, three-dimensional muscle geometry, as well as passive (Böl et al., 2012) and active muscle material properties (Siebert et al., 2015), a three-dimensional finite element model was developed to simulate different types of contractions as well as to analyze history effects (Seydewitz et al., 2019). Besides showing good agreement with the experimental data, the constitutive model revealed further information about the inhomogeneous stretch distributions within the muscle tissues.

URL: https://www.sciencedirect.com/science/article/pii/B9780128195314000080

Modeling the growth, survival and death of microbial pathogens in foods

J.D. Legan, ... M.B. Cole, in Foodborne Pathogens (Second Edition), 2009

3.6.4 Model validation

As with other models, validation is performed by comparing the outcome of independent tests against model predictions. Practically, we should test at times a little longer and a little shorter than the predicted time to inactivation. We want to see no survivors at a time a little longer than the predicted time to inactivation, and we expect to see survivors at a time a little shorter than the predicted time to inactivation (though, if we see no survivors here, that indicates that the model is conservative).

If, instead, we test exactly at the predicted time to inactivation, then we would expect a good model to be wrong close to 100% of the time, over-predicting and under-predicting slightly in roughly equal proportions. However, since we cannot know the size of the over- and under-predictions, we have no way to tell a good model from a poor one by this means.

URL: https://www.sciencedirect.com/science/article/pii/B9781845693626500030

What is the process of determining if a model is an accurate representation of the real system?

Validation is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.

What is the difference between the verification of a computer simulation and the validation of a computer simulation?

Briefly, verification is the assessment of the accuracy of the solution to a computational model by comparison with known solutions. Validation is the assessment of the accuracy of a computational simulation by comparison with experimental data.

Which of the following was accomplished in 1997 by the IBM supercomputer Deep Blue?

IBM Research hired the two scientists and gave them the resources to build Deep Blue, a dedicated chess-playing supercomputer. In 1997, in a historic match, Deep Blue became the first computer to defeat a reigning world chess champion.

What is a program with a benign capability that conceals a sinister purpose?

A program with a benign capability that conceals another, sinister purpose is called a trojan horse.