My Senior ERC grant, called TRAVERSE, started on 1st April 2009. TRAVERSE stands for 'Transcending Reality – Activating Virtual Environment Responses through Sensory Enrichment'.
What is it about?
I use the term 'transcending reality' (TR) in two ways: as a noun phrase and as a verb phrase. A 'transcending reality' is one that replaces physical reality with a virtual reality, such that you respond to the virtual reality as if it were real. However, to 'transcend reality' is to go beyond the boundaries of physical constraints - when the virtual reality gives you the strong illusion that you've gone beyond the limits of physical reality. In these 'non-realistic' applications of virtual reality I nevertheless expect people to respond to them as a TR. The overriding background objective of this research is:
to maximise the probability that participants will act as if the immersive virtual reality were real (TR).

The technical research includes the main components of virtual reality - mainly computer graphics and haptics. Haptics hasn't been a strong field for me in the past, but I'm realising its profound importance - and learning more about it (see 'Haptic Rendering', edited by Ming Lin and Miguel Otaduy). Above all, though, the research is interdisciplinary, spanning computer science and neuroscience.
There's another important aspect: there will be several experimental studies, and of course statistical analyses of the results. For a long time I've known that the classical approach to statistics - significance levels, type I and II errors, power, the Neyman-Pearson Lemma, and so on - is fine, but ... it doesn't make sense. What is a 'significance level'? It is the probability of rejecting your null hypothesis given that the null hypothesis is true: P(reject H0 | H0). Who cares?
What we're really interested in is the probability of the hypothesis given the observed data: P(H | O), where O stands for the observations. This isn't allowed in conventional statistics, because there making probability statements about a hypothesis doesn't make sense: the probability of an event is interpreted as the ratio of the number of occurrences of the event to the number of times it could have occurred, in a long-run series of independent and identical trials. Clearly the truth of a hypothesis cannot be an outcome of an experimental trial - from this point of view either P(H) = 1 (it is true) or P(H) = 0 (it is false), but we don't know which of these holds.
This way of thinking leads to Bayesian statistics, where probability is interpreted as subjective degree of belief - so a statement such as P(H) = 0.75 is valid: it expresses your degree of belief that H is true. From Bayes' Theorem we get P(H | O) ~ P(O | H)P(H) (I'm using ~ for 'is proportional to'). P(O | H) is often something that can be computed from probability theory, and P(H) is your 'prior probability' for H ('prior' because it is assigned before you get the data). Bayes' Theorem then allows you to update your probability for H as more and more data are accumulated. In the end, two different people who might have started with quite different priors will end up with essentially the same final probabilities P(H | O), given sufficient data O. So I came to prefer Bayesian statistics, although it does require choosing prior probability distributions.
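Just to make that convergence concrete, here's a tiny sketch of my own (purely illustrative numbers, nothing to do with TRAVERSE data): two observers start from very different Beta priors over an unknown proportion, update on the same idealised data, and their posterior means converge as the data accumulate.

```python
# Two observers with very different Beta priors over an unknown
# probability p. Conjugate updating: Beta(a, b) prior plus k successes
# in n trials gives a Beta(a + k, b + n - k) posterior.

def posterior_mean(a, b, k, n):
    """Posterior mean of p under a Beta(a, b) prior after k successes in n trials."""
    return (a + k) / (a + b + n)

# Hypothetical data: a 70% success rate, observed in ever larger samples.
true_rate = 0.7
priors = {"sceptic Beta(1, 9)": (1, 9), "optimist Beta(9, 1)": (9, 1)}

for n in (10, 100, 1000, 10000):
    k = round(true_rate * n)  # idealised data, for illustration only
    means = {name: round(posterior_mean(a, b, k, n), 3)
             for name, (a, b) in priors.items()}
    print(n, means)

# With little data the two posteriors disagree noticeably;
# by n = 10000 both posterior means are about 0.70: the data swamp the priors.
```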
It is interesting that while the field of statistics has undergone a sort of revolution in the past two decades, with Bayesian statistics becoming completely acceptable and now considered part of mainstream statistics, the fields in which statistics is probably used most heavily (psychology and the social sciences) stick rigidly and ideologically to the sacredness of the 5% significance test.
Let's consider an example of how problematic this is. Suppose this week I do an experiment, and I report results at the 5% significance level. OK. Then next week I do another experiment and report results at the 5% significance level. And so on for the next 100 weeks. Each of these different experiments I write up in a different paper. They are all accepted (well, of course, this is virtual reality!). So no problem with that. Now in a parallel universe, also one where psychology is dominated by classical statistics, I am very energetic: I do all 100 experiments in a single week, I write all the results in one paper, and I get exactly the same results (i.e., the same things are 'significant') as in this universe. I then submit the paper for publication, and it is rejected for being statistically unsound! Why? Because ... if you do n tests, all at the 5% significance level, and all the null hypotheses happen to be true, then 'by chance alone' on average 0.05*n of them are going to come out 'significant' (think back to the meaning of 'significance level'). Note that the only difference between the two universes is that in one I spread the results out over 100 weeks and put them in 100 different papers, while in the other I did them all in a short time period and submitted them in one paper.
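As a sanity check on that 'by chance alone' arithmetic - and on the definition of a significance level itself - here is a small simulation of my own (not from any real paper): 100 two-sample t-tests on pure noise, so the null hypothesis is true in every single one.

```python
# 100 independent two-sample t-tests where the null hypothesis is true
# (both groups drawn from the same normal distribution). At the 5% level
# we expect about 0.05 * 100 = 5 spuriously 'significant' results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_tests, n_per_group, alpha = 100, 30, 0.05

p_values = []
for _ in range(n_tests):
    a = rng.normal(0, 1, n_per_group)
    b = rng.normal(0, 1, n_per_group)  # same distribution: H0 is true
    p_values.append(stats.ttest_ind(a, b).pvalue)

print(sum(p < alpha for p in p_values))  # typically around 5 'significant' tests
```

Run it a few times and the count hovers around 5 - exactly the 0.05*n that the fictional reviewers are worried about.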
How, in the second universe, can we get out of this problem? Well, the reviewers of the fictional paper say that I should have applied something called the 'Bonferroni Correction'. What this means, at the simplest level, is that if you do n tests, then you should use a significance level of 0.05/n, so that the chance of getting even one spurious 'significant' result across the whole family of tests stays at no more than about 5%.
But this is unfair, no? If I spread the tests out over many weeks and put each in a different paper, then - no problem. But if I'm especially energetic, do all the tests at once, and write them all up in the same paper, my significance level has to be 0.05/100 = 0.0005. Unfortunately, now nothing is significant!
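To see the practical sting of this, extend the toy simulation above (again, invented numbers chosen only for illustration): give a handful of the 100 experiments a modest real effect and compare the two thresholds.

```python
# Continuing the illustration above: a few tests now have a genuine,
# modest effect. Compare the uncorrected and Bonferroni thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_per_group, alpha, n_tests = 30, 0.05, 100
bonferroni_alpha = alpha / n_tests  # 0.05 / 100 = 0.0005

p_real = []
for _ in range(10):                      # 10 experiments with a real effect
    a = rng.normal(0.0, 1, n_per_group)
    b = rng.normal(0.6, 1, n_per_group)  # modest true difference in means
    p_real.append(stats.ttest_ind(a, b).pvalue)

print("detected at 0.05:    ", sum(p < alpha for p in p_real))
print("detected at 0.05/100:", sum(p < bonferroni_alpha for p in p_real))
# Typically several of the real effects pass at 0.05,
# but far fewer survive the corrected threshold of 0.0005.
```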
Let's take this argument a bit further. Why pick on me? Why not throw your tests into the pot too - in fact, all the tests ever done in this universe? Then n is effectively infinite, nothing is ever significant, and all those fantastic results - "it was significant at the 5% level" - that we've ever seen are ... not supported, statistically invalid, since according to the 'Bonferroni Correction' the significance level is 0, and we can't get smaller than that (at least in this universe).
More recently there has been another 'new wave' in statistics, based on information theory. I've been reading and learning about it, and it is ... cool. You don't need prior distributions. Instead you ask: what 'information' do the data contain about the possible models under consideration? I really like the information-theoretic approach to statistical inference, since it gets to the heart of what the real problem is about, without any mumbo jumbo, weird concepts, strange tricks, or sleights of hand.
If you're interested, have a look at http://www.mdl-research.org/. Unfortunately this approach has not yet reached the mass of practitioners, perhaps because there is a lot that is new to learn, with some not-so-trivial mathematics in the way. However, there is also a really nice practical book within this approach: Burnham, K. P., and D. R. Anderson. 2002. Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach. 2nd Ed. Springer. Although very practical, it also explains the underlying concepts well. Analysing a recent experiment using these ideas, I felt for the first time that I was doing something really appropriate in statistical analysis. However, the psychologists will probably not agree.
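To give a flavour of what this looks like in practice, here is a minimal sketch in the spirit of Burnham & Anderson (AIC and Akaike weights) on invented data - it isn't the full MDL machinery, and nothing in it comes from our actual experiments.

```python
# A toy model-selection example in the spirit of Burnham & Anderson:
# compare candidate models by AIC and Akaike weights rather than by
# null-hypothesis significance tests. Data and models are invented
# purely for illustration.
import numpy as np

rng = np.random.default_rng(3)
n = 40
x = np.linspace(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n)  # data actually generated by a line

def aic_ls(y, y_hat, k_coeffs):
    """AIC for a least-squares fit: n*ln(RSS/n) + 2K, K counting sigma^2 too."""
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * (k_coeffs + 1)

models = {}
for name, degree in [("constant", 0), ("linear", 1), ("quadratic", 2), ("cubic", 3)]:
    coeffs = np.polyfit(x, y, degree)
    y_hat = np.polyval(coeffs, x)
    models[name] = aic_ls(y, y_hat, degree + 1)

# Akaike weights: the relative weight of evidence for each candidate model.
aics = np.array(list(models.values()))
delta = aics - aics.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()

for (name, a), w in zip(models.items(), weights):
    print(f"{name:10s}  AIC = {a:7.2f}  weight = {w:.2f}")
# The linear model should typically carry most of the weight here.
```

The question answered is not 'can I reject a null hypothesis?' but 'how much evidence do the data give to each candidate model?', which is much closer to what we actually want to know.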
Now to get back to the point: for TRAVERSE I'm looking for researchers to fill a number of new research posts at both the post-doc and PhD student levels. I expect that applicants will come from computer science, or from cognitive neuroscience combined with computer science. Knowledge of computer graphics / virtual reality will probably be essential -
- except for one position: I really want to have a statistician in my group. I would really like a statistician who is not orthodox (but who knows the orthodoxy), who is interested in furthering the information approach to statistics as a research topic, and who will also analyse the data from our experiments. I also have a strong intuition that the information approach to statistics may turn out to be an interesting model for the underlying fundamental research questions that we will tackle.