0:25

Or one can have plausible models, a term that I have made up, or one can have identifiable models.

Toy models often use arbitrary parameters that may or may not, and most often do not, have any relationship with the parameters in the real systems.

They can be used to make theoretical points with regard to the model.

Plausible models typically use experimentally measured or estimated values.

The kinetic parameters, such as rate constants or the levels of reactants, are generally not cell-type specific.

They are canonical, and hence may not be fully representative of every cell type, or of the cell type of interest for which the models are being created.

So this is kind of a general model, if you want to call it that.

It's likely to be real, but it may not be exactly real for the cell type, and this is why I'm calling these plausible models.

These are the most common type of dynamical models in systems biology, and oftentimes the use of canonical parameters does not really affect the system's behavior.

This can be experimentally tested when one looks at model predictions and tries to validate, or test, these model predictions by experiments.

Another class of models which is most common in pharmacology,

2:09

are these types of ODE models that are built to explain experimental data.

These are called identifiable models.

These are system-specific, and the models are directly fitted to experimental data.

These are most commonly used for modeling drug action. The model parameters are fitted to experimental data, and the fitting has a level of accuracy that can be statistically characterized.

The model parameters may often not be connected to molecular details, so one might not get a mechanistic understanding of behavior, but one gets a precise quantitative description of behavior in identifiable models.

So when one goes from mathematical representations to numerical

simulations, what one needs to do is to get the reactions parameterized.

So the initial concentrations and the reaction rates are needed, and these are not always easy to obtain, as the vast majority of biochemical and cell biological experiments done over the last 50 or so years were not really geared towards getting rate measurements or absolute values of components within cells.

As we get more and more into these kinds of modeling systems, people are starting to collect these types of data, but there is still very little of this data that is readily available.

Often, one needs to read experimental papers and make some assumptions to extract these kinds of data.

Sometimes these parameters need to be guesstimated based on known values for similar parameters. One can think of this as a little bit like homology modeling of structures: if you know one protein's structure, then an isoform is assumed to have a similar structure, and you can computationally calculate its structure based on the known one. Similarly, if one has kinetic parameter values for, say, Gq, these parameters may be applicable to another G protein in this category, such as Gs. This is what one means by guesstimated parameters.

And sometimes parameters need to be estimated from indirect measurements, such as a time course.

And these can be quite accurate, although you might not get

a parameter that is directly associated with one component or another.

For instance, if there is a time course of Ras activation, one can accurately estimate the relative activities of the GEF and the GAP, but one may not be able to precisely estimate the kinetic parameters associated with that GEF or GAP alone.

There are curve-fitting programs, such as COPASI, that allow one to estimate these types of parameters.
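As a rough sketch of what such curve-fitting tools do, here is a minimal Python example using SciPy (not COPASI itself) to estimate parameters of a hypothetical first-order activation model from a synthetic time course. The model form, the rate values, and the noise level are all assumptions made for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical first-order activation model: the active fraction rises as
# A(t) = a_max * (1 - exp(-k * t)).  k and a_max are the parameters we
# want to estimate from a measured time course.
def activation(t, k, a_max):
    return a_max * (1.0 - np.exp(-k * t))

# Synthetic "experimental" time course (standing in for real data), with noise.
rng = np.random.default_rng(0)
t_obs = np.linspace(0, 60, 13)  # seconds
y_obs = activation(t_obs, 0.1, 0.8) + rng.normal(0, 0.02, t_obs.size)

# Fit; p0 is an initial guess, since iterative fitting needs a starting point.
popt, pcov = curve_fit(activation, t_obs, y_obs, p0=[0.05, 1.0])
perr = np.sqrt(np.diag(pcov))  # 1-sigma uncertainties on the fitted parameters
print(f"k     = {popt[0]:.3f} +/- {perr[0]:.3f} per second")
print(f"a_max = {popt[1]:.3f} +/- {perr[1]:.3f}")
```

Note that, as the lecture says, the fitted k here is a lumped, system-level rate: it quantifies the time course accurately without identifying which molecular step it belongs to.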

So your models are only as real as your kinetic parameters, and this is a very fundamental concept that one needs to keep in mind.

So, there are some rules we need to follow when building models. These are rules that we follow in my own lab all the time, and I would encourage other people who build these models to follow them, so that your models are likely to be realistic, or reasonably realistic, representations of the systems that you wish to study.

So the first rule is do not oversimplify the model.

And as I told you previously, when the system involves different isoforms of the receptors, identify the isoforms, use them as distinct entities, and compute your model incorporating these levels of detail.

6:24

One needs to build models with enough detail to provide the non-intuitive hypotheses that arise from simulations. So dealing with details is very important.

And so this is a cautionary tale for students who come from, say, applied math, computer science, or engineering backgrounds and say, oh, I know how to build a model, so I can do it without knowing any biology. In reality, you cannot.

So you need to understand some level of detail of the biological system, and make choices of how much detail the model needs in order to address the questions that you might be interested in.
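As a small illustration of why this level of detail can matter, here is a sketch (with hypothetical species names, concentrations, and rate constants, none of them from the lecture) of representing two receptor isoforms as distinct entities rather than one lumped species:

```python
# Hypothetical receptor isoforms kept as distinct species, each with its
# own level and ligand-binding rate constant (all values assumed).
species = {
    "R1": {"conc_nM": 50.0, "k_on": 1e-3},  # abundant but slow-binding isoform
    "R2": {"conc_nM": 5.0,  "k_on": 1e-1},  # rare but fast-binding isoform
}
ligand_nM = 100.0  # assumed ligand concentration

# Initial binding flux for each isoform: rate = k_on * [R] * [L].
for name, p in species.items():
    flux = p["k_on"] * p["conc_nM"] * ligand_nM
    print(f"{name}: initial binding flux = {flux:.1f} nM/s")
```

With these assumed numbers the rare isoform R2 dominates the early binding flux, a detail that would disappear entirely if both isoforms were merged into one averaged species.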

Use available experimental data to obtain realistic and reasonable parameters.

And so it is very important that your parameters are realistic, that they are in the real range. If a protein is present at, say, 10 nanomolar concentration in a cell, and you assume the protein to have a concentration of one micromolar or ten micromolar, you will be so far off in your reaction concentrations that the model will compute things that are unlikely to represent anything real happening within the cell.

So one needs to get reasonably realistic parameters to compute these models.
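To see how much the concentration scale can matter, here is a small illustrative calculation (the Kd and concentrations are assumed values, not from the lecture): for a simple binding reaction A + B <-> AB at equilibrium, the fraction of protein bound is very different at 10 nanomolar total than at 1 micromolar total, even though the binding constant is identical.

```python
import numpy as np

# Equilibrium fraction bound for A + B <-> AB with equal totals of A and B.
# From Kd = [A][B]/[AB] = (total*(1-f))**2 / (total*f), f satisfies the
# quadratic total*f**2 - (2*total + kd)*f + total = 0.
def fraction_bound(total, kd):
    a, b, c = total, -(2.0 * total + kd), total
    return (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

kd = 100.0  # nM, assumed dissociation constant for illustration
for total in (10.0, 1000.0):  # 10 nM versus 1 uM
    print(f"total = {total:7.1f} nM -> fraction bound = {fraction_bound(total, kd):.2f}")
```

With these numbers, roughly 8% of the protein is bound at 10 nM but about 73% at 1 uM, so a hundred-fold error in the assumed concentration pushes the model into a qualitatively different regime.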

8:17

Personally, I think the worst way to do a model is to say, I would like to observe a certain behavior, and then change the parameters until you see the behavior.

When one does this, one loses any anchoring to the realistic constraints the system has, and so the behavior may be observed in the model but may never be observable in real-life experiments.

And this ability of the model to do anything is what is typically called a spherical cow. Here I have a picture of a spherical cow taken from [LAUGH] an Italian website, and you can see that depending on how many parameters you put in, you can make the model do anything.

The pejorative way of stating this is: give me nine parameters and I will build you a spherical cow, and give me a tenth and I will make the cow wag its tail. So you really don't want your models to be the proverbial spherical cow.

9:27

So if you want to know whether your system is capable of displaying a behavior or not, start with the parameters that you estimated in an unbiased fashion from the experiment, and then conduct systematic parameter variations. As you vary these parameters, you can see whether you are in a realistic range or not.

So for instance, if you start at a certain set of parameters that you estimated, a two-fold to ten-fold variation may be realistic in most cases, and so if the behavior appears within that range, you might presume that you would see these behaviors not very often, but at least sometimes.

But if you need to vary the parameters by a factor of 100 to 1,000, this is unlikely to happen in real life, and so behaviors that require extreme changes in parameters are not likely to be observable in experimental systems.
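A systematic parameter scan of this kind can be sketched very simply. The model here is hypothetical (a single species with first-order production and degradation, with assumed rate values), chosen only to show the scanning pattern:

```python
# Sketch of a systematic parameter scan on a hypothetical model:
# the steady state of dx/dt = k_on*S - k_off*x is x_ss = k_on*S/k_off.
def steady_state(k_on, s=1.0, k_off=0.1):
    return k_on * s / k_off

k_on_est = 0.05  # assumed baseline estimate from experiment
for fold in [0.1, 0.5, 1, 2, 10, 100]:
    x_ss = steady_state(k_on_est * fold)
    # Roughly two- to ten-fold changes around the estimate are plausible;
    # hundred-fold changes are flagged as extreme.
    tag = "realistic" if 0.5 <= fold <= 10 else "extreme"
    print(f"{fold:6.1f}x k_on -> x_ss = {x_ss:6.3f}  ({tag})")
```

Behaviors that only appear in the rows flagged as extreme would, by the argument above, be unlikely to be seen experimentally.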

So, once one settles on these parameters, and stays in a reasonably realistic range, the issue is how one solves the systems of ordinary differential equations

10:41

to obtain the results one needs to see.

The common, or classical, way of solving ODEs is to use what is called forward Euler integration.

I don't have time to go into the details of how this is done; one needs to take a differential equations class for that.

And it may not be necessary for most biologists, because there are programs such as Matlab and Octave, and others, where you don't need to know how a differential equation is integrated. You just have to know which method is useful for your kind of model.

So the forward Euler method steps the solution forward in small time increments, using the current value of the derivative at each step.
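As a minimal sketch, forward Euler advances a solution as y(t + dt) ≈ y(t) + dt · f(t, y); here it is applied to first-order decay, dy/dt = -k·y, whose exact solution exp(-k·t) lets us check the result (the rate value is an arbitrary choice for illustration):

```python
import numpy as np

# Minimal forward Euler integrator for dy/dt = f(t, y).
def forward_euler(f, y0, t0, t1, dt):
    t, y = t0, y0
    ts, ys = [t], [y]
    while t < t1 - 1e-12:
        y = y + dt * f(t, y)  # the Euler update: step along the current slope
        t = t + dt
        ts.append(t)
        ys.append(y)
    return np.array(ts), np.array(ys)

k = 0.5  # assumed decay rate constant
ts, ys = forward_euler(lambda t, y: -k * y, y0=1.0, t0=0.0, t1=10.0, dt=0.01)
print(f"numeric y(10) = {ys[-1]:.4f}, exact = {np.exp(-k * 10):.4f}")
```

The step size dt controls the trade-off: smaller steps track the exact solution more closely but cost more computation, which is one reason practical solvers in Matlab and Octave use more sophisticated adaptive methods.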

12:56

In the finite element method, the continuous domain is divided into smaller parts called elements by enforcing a mesh onto it. This allows us to get a system of equations that governs the flow of entities, such as proteins, between these discrete elements, and then we numerically solve the system of equations that governs both the reactions and the flow of entities between these elements.

In the finite volume method, we also use a mesh of a defined size.

The volume refers to the volume surrounding a point on the mesh.

Here, typically, the surface integrals are converted into volume integrals and solved numerically.

This method is most suitable for cell biological models with diffusion

and is the one which is used in Virtual Cell.
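The flavor of a finite-volume scheme can be sketched in a few lines. This is an illustrative one-dimensional reaction-diffusion toy, not Virtual Cell's actual scheme: the domain is split into cells of equal volume, diffusive fluxes are exchanged between neighbors, and a first-order decay reaction runs inside each cell (all parameter values assumed):

```python
import numpy as np

# Rough 1D finite-volume sketch: diffusion between adjacent cells plus a
# first-order decay reaction inside each cell, stepped with forward Euler.
def simulate(n_cells=50, length=10.0, d=1.0, k_decay=0.05, dt=0.001, steps=2000):
    dx = length / n_cells
    c = np.zeros(n_cells)
    c[0] = 1.0                        # all material starts in the first cell
    for _ in range(steps):
        flux = -d * np.diff(c) / dx   # Fick's law between adjacent cells
        dcdt = np.zeros(n_cells)
        dcdt[:-1] -= flux / dx        # flux leaving each cell ...
        dcdt[1:] += flux / dx         # ... enters its neighbor
        dcdt -= k_decay * c           # reaction inside each volume element
        c = c + dt * dcdt             # forward Euler time step
    return c

c = simulate()
print(f"total remaining = {c.sum():.3f}  (decays from 1.0)")
```

Because the fluxes are pairwise between neighbors, diffusion conserves total material exactly; only the reaction term changes the total, which is one of the properties that makes finite-volume schemes attractive for cell biological models.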

There are a number of software suites that are available for numerical computations.

Among the most widely used is Matlab, which is used for a variety of numerical computations, including ODE and PDE models.

We teach all our courses using Matlab, and since the company is now going to provide Matlab for free for the period of the Coursera courses, the dynamical modeling course taught by [INAUDIBLE] will also use Matlab.

Another widely used program is Mathematica, which is also commercial software for a variety of numerical computations.

There are some free software packages available as well.

GNU Octave is a free software suite which is largely compatible with Matlab, and is often used by people who do not want to purchase a license for Matlab.

Virtual Cell is another free modeling and analysis software package that has unique capabilities for partial differential equation models in conjunction with imaging experiments, and we often use Virtual Cell for our spatial models in my lab.
