I just want to add a cautionary note about being very careful when you're jumping to
conclusions from high-dimensional data.
This is an example of a fish in an fMRI, and the fish is being put
through the same experiments that we put humans through when we put them in fMRIs.
So it is shown pictures to stimulate the visual cortex.
And then the same algorithms are used to normalize the data and
to test whether any regions are being stimulated,
compared to the background.
And as you can see, in this fish that's being shown pictures
of whatever a fish is interested in, the visual cortex is clearly being stimulated.
However, the joke in this is that the fish was actually dead.
And this was a way of demonstrating that a lot of the algorithms that were used to
normalize this very, very high-dimensional data
were extremely prone to artifacts and
could give you the illusion of significant results when none were actually there.
Unless you think that the fish had some sort of after-death ability to think about
things.
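To make that concrete, here is a minimal sketch (an illustration added here, not the original study's pipeline): if you run an uncorrected per-voxel test on pure noise, hundreds of "active" voxels appear purely by chance. All sizes and thresholds are hypothetical.

```python
# A minimal demo that uncorrected tests on high-dimensional noise
# produce apparently "significant" results. Hypothetical sizes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 10_000   # hypothetical number of voxels in a scan
n_scans = 20        # hypothetical number of repeated measurements

# Pure Gaussian noise: there is no real signal anywhere.
data = rng.normal(size=(n_voxels, n_scans))

# One-sample t-test per voxel against a mean of zero.
t_stats, p_values = stats.ttest_1samp(data, popmean=0.0, axis=1)

n_hits = np.sum(p_values < 0.05)
print(f"{n_hits} of {n_voxels} voxels look 'active' at p < 0.05")
# Roughly 5% (~500) false positives are expected purely by chance.
```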
And if this is true for fMRI, it is most certainly true for microarrays.
And so it's important to be very,
very cautious when you're interpreting any one microarray experiment.
And that is why it is always a good idea to check whether the interactions and
pathways it predicts have some basis in the published literature.
So what about the future directions?
Well, we've mostly been talking about transcriptomics, but it's
important to remember that transcriptomics only captures a very,
very small amount of what's going on in the cell.
If you think about it, looking at the genome or looking at the set of mRNAs in
a cell is really only giving you a very, very small picture of what is happening.
On the other hand, if you're just looking at the metabolome,
that's not going to tell you very much about the underlying regulation.
So if we really want to do systems biology,
we have to learn to look at all of these processes at the same time,
all the way from the genome to the transcriptome to the metabolome,
and understand each part and how it contributes to
the parts that are closer to or farther from the phenotype.
And this is important because few chemicals act to perturb only a single gene or
a single metabolite.
So you have to understand the chemical's effects at all levels if you really
want to be able to describe the pathway of toxicity.
But combining information from different omics technologies is very difficult,
though probably very important for the future.
And there are a couple of stumbling blocks for this.
It requires very careful experimental design,
because the effects might not be happening at the same time.
Your metabolome might change a lot faster, or
a lot slower, than your transcriptome.
And the proteome might change more subtly or more drastically than either of those.
So a sample size that works for transcriptomics may be too small or
too big for the other levels of the experiment.
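As a toy illustration of that timing problem, here is a minimal sketch (not from the lecture; all sampling times and values are made up) of one simple way to put two layers measured on different time scales onto a common time grid before comparing them.

```python
# Align two omics time series, sampled at different rates, onto a
# shared time grid by linear interpolation. All data are made up.
import numpy as np

metab_t = np.array([0, 5, 10, 15, 30, 60])          # minutes (hypothetical)
metab_y = np.array([1.0, 2.1, 3.0, 2.5, 1.8, 1.2])  # abundance (hypothetical)

trans_t = np.array([0, 30, 60, 120])                # minutes (hypothetical)
trans_y = np.array([1.0, 1.4, 2.2, 2.0])            # expression (hypothetical)

common_t = np.arange(0, 61, 5)                      # shared 0-60 min grid
metab_on_grid = np.interp(common_t, metab_t, metab_y)
trans_on_grid = np.interp(common_t, trans_t, trans_y)

for t, m, tr in zip(common_t, metab_on_grid, trans_on_grid):
    print(f"t={t:3d} min  metabolite={m:.2f}  transcript={tr:.2f}")
```

Interpolation is only a crude fix, of course; it cannot recover dynamics faster than the slower layer's sampling rate, which is exactly why the experimental design matters.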
And then, of course, you also have the problem that the more data you have,
the more you have to correct for multiple hypothesis testing.
So it is not always the case that more data is better.
It's better to think very carefully about whether the data you're going to add is
going to increase your understanding or not.
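One standard way to handle that correction, shown here as a sketch with simulated p-values (not data from any real experiment), is the Benjamini-Hochberg false discovery rate procedure:

```python
# Compare uncorrected hits with Benjamini-Hochberg FDR-corrected hits
# on simulated p-values: mostly null, plus a few genuine effects.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
null_p = rng.uniform(size=9_950)            # 9,950 null "genes"
real_p = rng.uniform(0, 1e-4, size=50)      # 50 genuine effects
p_values = np.concatenate([null_p, real_p])

rejected, p_adjusted, _, _ = multipletests(p_values, alpha=0.05,
                                           method="fdr_bh")
print(f"uncorrected hits at p < 0.05: {np.sum(p_values < 0.05)}")
print(f"FDR-corrected hits:           {np.sum(rejected)}")
# The uncorrected count is inflated by roughly 500 false positives.
```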
And a lot of the technologies, for example proteomics and metabolomics,
have a much lower signal-to-noise ratio compared to microarrays.
So think carefully, before you just add on or integrate more data,
about whether you're really going to be able to improve your understanding.
That wraps up my discussion of pathways of toxicity.
Thank you for joining me.