Mel Slater's Presence Blog

Thoughts about the concept of presence in virtual reality - a place to write freely without the constraints of academic publishing, and have some fun.

I still find immersive virtual reality as thrilling now as when I first tried it 20 years ago.

15 February, 2014

The Presence of Your Distant Virtual Body

One way to think about a body ownership illusion is that it arises when the brain attempts to solve a contradiction between different sensory modes, and chooses as a resolution the simplest hypothesis that appears to explain what's happening. For example, in the rubber hand illusion your real out-of-sight hand feels tactile stimulation, while a visible rubber hand on the table in front of you, and in a plausible position and orientation, is seen to be touched. The felt touch and seen touch are synchronous. So there is a contradiction between the felt touch in one place and the seen touch in another. The contradiction is resolved by going along with the hypothesis that the rubber hand is indeed your own hand.

There is a more severe contradiction discussed in the literature: you see your body some metres in front of you being tapped on the back, but you feel the tapping (of course) on your back, synchronously with the seen tapping. Here the contradiction produces a strange effect - somehow your body is over 'there' (the sensation of touch can shift to the body in front) but of course your visual ego-centre is 'here' (where you are really located). There can be a reported sensation of body ownership over that distant virtual body. But is this 'body ownership' in the sense of somehow feeling your body to be at the distant location? Or is it just that you recognise the body as your own, and so would give a high score to a questionnaire item about body ownership, even though qualitatively the experience is not the same as when your virtual body is spatially coincident with your real body and seen from a first person perspective?

We examined this issue with immersive virtual reality, and partially reproduced the experiment described above. The conclusion we reached is that the way the brain attempts to resolve the contradiction between you being 'here' but feeling stimulation on the body in front over 'there' is to produce illusions of drift. One way to resolve the contradiction between the two locations is to make them coincide. So if the virtual body in front is illusorily perceived as drifting backwards towards your position, or if you have the illusion of drifting towards the virtual body in front, then the two bodies will become united into one.

In our experimental setup the virtual body in front was subjected to a threat (a spinning fan was lowered towards its head). So if you drift towards the body in front this would put you in danger, but if the body in front drifts back towards you that would put it out of danger. We found a strong positive correlation between the illusion of drifting forward and heart rate deceleration (increasing danger), but a negative correlation between heart rate deceleration and the illusion of the virtual body drifting backwards (decreasing danger).

The paper can be read in Frontiers in Human Neuroscience.

15 July, 2013

Presence Through the Eyes of a Child


Using a head-mounted display and a body tracking suit to enter virtual reality, you can experience yourself as a child of about 4 years old. You look into a mirror, or directly down towards your own body, but you see the child body instead. The brain appears to be remarkably flexible in quickly accepting the proposition that your body is different - especially when, as you move, the virtual body is seen to move just as you feel yourself to be moving. The virtual body has substituted your real body.
Alternatively you can be embodied in a virtual body of the same size as the child one, except that this is a shrunken down adult body. Otherwise everything is the same. In both conditions people tended to have a strong illusion that the virtual body was their body.
The question we set out to answer with this arrangement is whether embodiment in the two different types of body would lead to differences in perception and also in attitudes. You remember that as a child things seemed enormous, yet if you see them today they don't look that way. Is it just a question of your size, or is something more at work? It has been shown that size illusions operate when you make people apparently small or big - if you were the size of a Barbie doll, how would you see the world? You would see it as bigger. What we found, though, goes beyond that. In the two conditions (child or shrunk-down adult) both groups overestimated the sizes of objects, as expected. However, the child condition led to much greater size overestimation. It must therefore be not just the size but the form of the body that is having this effect.

We also gave people an implicit association test. This requires people to quickly categorise themselves according to child or adult attributes. Their adult attributes (like their age, what they do, etc.) were obtained from a questionnaire a while before the experiment. Those in the child condition were nevertheless found to identify themselves more with child-like attributes than those in the adult condition.

A critical aspect of the findings was that the difference between child and adult embodiment was due to the degree to which participants had the sensation that the virtual body was their body (their degree of 'body ownership' over the virtual body). We had another condition where everything in the setup was the same, except that the virtual body moved independently of the person's real body movements. In this condition the illusion of body ownership was very much reduced compared to the condition where the virtual body moved synchronously with the real body movements. In this asynchronous condition the difference between the child and adult conditions vanished. Both still overestimated sizes, but there was no difference between them, and the overestimation was about the same as that in the synchronous adult condition.

The body has a kind of semantics: meaning is attributed to a body type. In this case it was a child's body, something we have all experienced. Perhaps embodying people in such a child-like body automatically leads the brain to bring to the fore the kinds of mental processing that go along with being a child. We have only shown this with respect to size perception and implicit associations, but maybe there is more to it. Also we do not know how long the effects last - much work remains to be done.

Domna Banakou, Raphaela Groten, and Mel Slater (2013) Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes, PNAS doi: 10.1073/pnas.1306779110

31 May, 2013

Racial Bias and Virtual Embodiment

In virtual reality you look at your body directly and also see it in a mirror. When you move, the body moves. The strange thing is that although you don't recognise it as your own body, it nevertheless feels as if it is. This type of virtual body ownership illusion has been demonstrated several times. What we are interested in, though, is the consequences of this body ownership illusion. Specifically, does the type of body that you feel is your body influence your behaviours, perceptual judgements, even your cognition?
Body Semantics
A body type has an intrinsic meaning. For example, through stereotyping when you see a very old person you might automatically associate that person with certain deficits in cognition and strength - even though that might be totally untrue for that particular person. I call this 'body semantics' - the type of body carries with it certain predisposed attitudes, behaviours, psychology, physical abilities.  That's when you see someone else's body. What about your own?
The beauty of virtual reality is that we can change your body. In a recent paper we showed that if you take one group of people and put them in a casually dressed body, suggestive of being 'cool', progressive and relaxed, they play the drums with greater body movement than another group of people put in a suited, formal-looking body. Here we followed a similar idea, and placed light skinned people in a dark skinned virtual body, using a wide field-of-view, head-tracked head-mounted display and a motion capture suit to track the person's real body movements. The participants saw the virtual body from the first person perspective of that body, so that when they looked down towards themselves they saw the virtual body instead of their real one. When they moved, the body moved the same way (through the motion capture suit). When they looked in a virtual mirror they saw this other body instead.

Implicit Association Test
A few days before they entered the virtual reality we applied a test called an Implicit Association Test (IAT) for racial bias. What this does is force you to make rapid associations between concepts and representations of Black or White people. The idea is that if your reaction times when pairing White faces with positive words and Black faces with negative words are faster than when pairing White faces with negative words and Black faces with positive words, then this indicates a racial bias. It does not mean that the person is racist - far from it - but perhaps reflects an implicit and automatic bias caused through socialisation via the media (perhaps 'anti-socialisation' would be more appropriate).
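For readers who like to see how such a score is typically quantified, here is a minimal sketch of a simplified IAT-style effect measure: the difference between mean response latencies in the two pairing blocks, divided by the pooled standard deviation (in the spirit of the commonly used D measure). The function and the reaction times below are purely illustrative - this is not our actual scoring pipeline.

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts):
    """Simplified IAT effect: (mean incompatible - mean compatible) latency,
    divided by the standard deviation of all latencies pooled together.
    Positive values indicate faster responses in the 'compatible' block,
    i.e. an implicit association in that direction."""
    pooled = list(compatible_rts) + list(incompatible_rts)
    sd = statistics.stdev(pooled)
    return (statistics.mean(incompatible_rts) - statistics.mean(compatible_rts)) / sd

# Illustrative reaction times in milliseconds (made up for the example)
compatible = [620, 580, 640, 605, 590]     # e.g. White+positive / Black+negative block
incompatible = [710, 690, 745, 700, 720]   # e.g. White+negative / Black+positive block
print(round(iat_d_score(compatible, incompatible), 2))
```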
The experimental study had four different conditions (experienced by 4 different groups of 15 people): being embodied in a light skinned body, a dark skinned body, a purple skinned body, or no actual body but a dark skinned body in a mirror that did not move the same as the participant. The purpose of the purple skinned body was to check whether the effects were caused by mere 'difference' or 'strangeness' rather than by race. The purpose of the 'no body' condition was to check that the results were caused by the illusion of body ownership over the virtual body, and not simply by seeing the different body.
During the virtual reality experience nothing much happened - some virtual characters walked past the participant - half of these dark and the other half light skinned. After the conclusion of the experience the participants again completed an IAT test.

Outgroup
What happened is that the IAT score declined only for those who had been in the dark skinned body. Somehow, becoming, however briefly, a member of the 'out-group' in a very obvious way was enough to signal to the brain that this was no longer an 'out-group' but your in-group. The fact that this operates so fast is remarkable, but in these body illusions I find everything remarkable - the fact that a few seconds of stimulation can make a rubber arm feel like your own, or a few seconds of apparently being in another body can make it feel like your own. By the way, the group of Manos Tsakiris at Royal Holloway, University of London, very recently demonstrated a similar effect using a black rubber hand in the rubber hand illusion.

Tabitha C. Peck, Sofia Seinfeld, Salvatore M. Aglioti & Mel Slater (2013) Putting yourself in the skin of a black avatar reduces implicit racial bias. Consciousness and Cognition, 22(3), 779-787.

This work is funded under the FP7 Project VERE and the ERC Project TRAVERSE.

31 January, 2013

In the Presence of Violence


The Scene

You’re in a rather empty bar minding your own business, enjoying a quiet drink, and eventually a stranger walks in and starts to talk to you. He is pleasant and you find that you have some things in common, in particular you both support the same football team. The friendly chat lasts a while, and suddenly someone you hadn’t noticed, who’d been sitting by the bar, a third person, strides over and starts to accuse your new found acquaintance of “staring” at him. Of course he denies this, saying that he’d just been talking to you. This third person becomes increasingly aggressive in tone and uses threatening body language towards your acquaintance. Whatever the latter says is immediately turned around to make it sound like an escalation of the argument, and soon it becomes clear to you that the only goal of the aggressor is to cause a fight. Occasionally the victim, the one you’d been having the friendly chat with, looks towards you, but he doesn’t actually ask you for help. Obviously this is going to end in some pretty bad violence, in spite of the fact that your acquaintance is trying everything he can to de-escalate.

The question of interest here is – what do you do? Do you intervene? Do you quietly leave the bar? Do you stand there trying to think what to do but not actually do anything? Do you freeze? What should you do? What is the right thing to do? Would it make things worse or better if you intervened?

Now replay the above scene. The man walks into the bar, and he starts chatting with you. You find that you and he really have nothing much in common. Although he seems to be interested in football, it is apparent that he does not support the same team as you. Otherwise, everything is the same as in the first version. The aggressor clearly wants to start a fight and this is going to end in violence.

What do you do in these slightly different circumstances?

Bystander Intervention

This is an example of the so-called ‘bystander’ problem – how people respond when they come face to face with violent emergencies. I wrote about this in an earlier blog entry, ‘The Illusion of Violence’. As pointed out there, it is very difficult to study this type of situation experimentally – to investigate the factors that might lead to someone intervening or not. There have been many studies of this type of bystander situation, but the experiments do not actually include violence, or if they do, it is not a face-to-face violent scene but something shown on a video.

Our interest was to find out what people actually do when they are face-to-face with this type of violent confrontation. Yet we cannot carry out experiments in ‘real life’. However, research over the past 25 years has shown that people immersed in a virtual reality tend to behave realistically – carry out actions, have emotional responses, even have thoughts that would be appropriate for a situation occurring in reality. By being ‘immersed’ I mean that they are in a computer generated surrounding environment, that they see in 3D stereo, where everything is fully life-size, and where they can perceive using their body in a natural way – for example, turning their head to look to the side, bending down to see underneath something, and so on. This is especially important when interacting with virtual humans – these are life-sized, they talk with you, they look you in the eye, it would seem as if they could touch you – they seem to respond to you, they have a life-like presence in the same space as you. Now under these circumstances, when you are talking with a (virtual) acquaintance and the third (virtual) man suddenly appears and violently threatens your acquaintance, how do you respond?

The Bystander in Virtual Reality

We recently had a paper published that addresses this issue. The particular factor of interest that we looked at was the extent to which your responses are modulated by the sense of group identity between yourself and the victim. In order to be able to manipulate group identity in a natural way we used football (soccer) club affiliation. We recruited 40 male supporters of the Arsenal football team. They went through the experience described in the opening paragraphs. However, for 20 of them the eventual victim was clearly himself an Arsenal supporter, and for the other 20 he clearly was not an Arsenal supporter. So 20 of them, with respect to this situation, were ‘in-group’ with respect to the victim and the other 20 ‘out-group’ (or at least definitely not ‘in-group’). The aggressor was clearly ‘out-group’ since he made it very clear many times that he ‘hated’ Arsenal and thought that they were an extremely useless soccer team (I’ve put it more politely here than the way that he expressed it).

We wanted to see whether group affiliation could predict helping behaviour. We measured the latter by the number of physical or verbal interventions that the participants made once the argument had started. A physical intervention might be something like trying to step between the two characters, or gesticulating towards them.  Also for half of the participants we programmed the scenario so that the victim would sometimes look toward the participant, and for the other half not. So finally this experiment had 4 conditions: in-group, looking; in-group, not looking; out-group looking; out group, not looking – with 10 participants arbitrarily assigned to each group (in fact data for 2 of the participants were not usable, so we ended up with 38 not 40).

What we found was interesting – that the ‘in-group’ people intervened on the average more than the ‘out-group’ people. In a questionnaire after the experience we had asked how much the participants thought that the victim had been looking towards them for help. For those in the ‘in-group’ the stronger their belief that the victim was looking towards them for help the greater the number of interventions. However, for those in the ‘out-group’ condition there was no such relationship.

Now the finding is on the face of it fairly obvious. If a fight breaks out between two people in your presence you might be inclined to be more likely to help the victim if he had some affiliation with you (in this case supporting the same football team), other things being equal. However, the interesting aspect is that this appears to occur also in virtual reality, even though everyone knows that nothing ‘real’ is actually happening.

Another aspect of the results that is hard to convey in a journal paper is the actual reactions of the participants – that they were disturbed by the situation, reporting things like a racing heart, and also irrational worries like if they had intervened then the aggressor might have turned on them, and so on. As I’ve said before, some part of the brain does not understand about virtual reality – and simply takes what is happening at face value – and responds. Of course you ‘know’ that nothing real is happening, and therefore a slower cognitive response might then act to dampen down your responses compared to those that might occur if the events were actually taking place in reality. In virtual reality studies in fact typically we are trying to capture that first automatic response, the one that happens before you have ‘time to think’. This is the genuine response, and the one most likely to be similar in virtual and physical reality.

Another interesting result of this experiment is that it illustrates that the quality of the computer graphics is not vital. If you look at the video there will be something striking – when the virtual characters talk their mouths do not move! There is no lip sync. After the experiment we asked participants what things they thought took them out of the experience – how could the scenario have been improved? Very few people actually mentioned the lack of lip sync. I think that they became so involved in the situation that they somehow didn’t notice it.

One thing that they did tell us though was that ‘a fight like that would never happen in a bar like this’ – in other words the décor of the bar was wrong, it was not a bar that would be frequented by this type of football supporter. This aspect of plausibility is extremely important, and requires research on the domain to be simulated for any kind of experiment that is supposed to be depicting events that could happen in reality. (See ‘Illusion is Part of the Definition’).

Statistical Diatribe and Symbolic Regression

One other aspect of this paper is quite new. Research in this field follows the conventions of psychology (in this case social psychology) in terms of statistical analysis and reporting. In psychology there is the convention of the 5% significance level, enforcement of the frequentist interpretation of statistics rather than the Bayesian, and the tyranny of linearity. When you carry out a standard analysis such as regression or analysis of variance, even using a generalised linear model, somewhere in this there is a very strong assumption of a linear (in fact affine) relationship between the response variable and the independent and explanatory variables (even if the variables themselves might be transformed, e.g. to a log scale). But why should everything be linear? In fact it is safe to say that conventional statistics makes the linearity assumption mainly because the mathematics and computation are easier – and computation, at least, is hardly a constraint today, compared with nearly a century ago when many of these techniques were invented.

In this paper we had both the observed intervention data (number of physical and verbal interventions) and questionnaire data. I wanted to see what the relationship was between the number of interventions and the responses to the questionnaires. I used a method called ‘symbolic regression’ (which is a specific aspect of Genetic Programming). In particular I used a system called Formulize (or Eureqa) in order to analyse the relationship between the numbers of interventions and the subjective questionnaire responses. See the paper by Michael Schmidt and Hod Lipson in Science.  Supporting Text S3 of our paper briefly explains how this works. The important thing is that this discovered something that I don’t think would be possible with conventional statistics. For the number of physical interventions (N), the resulting equation was of the following form:

N = group*exp(LookAt + VictimLooked) + f(… other questionnaire variables…)

The f() represents some function which isn’t important in this particular discussion. The variable group = 0 means ‘out-group’ and group = 1 means ‘in-group’. LookAt = 1 for the case when the victim occasionally looked towards the participant during the argument and LookAt = 0 for the group where this did not happen. VictimLooked is the response to the statement: “After the argument started, the victim looked at me wanting help.” This (as all questions) was scored on a 1-7 scale where 1 meant least and 7 most agreement with the statement.

Now if we look at this we see that the whole first term on the right hand side of the equation vanishes for the ‘out-group’. Hence only for the ‘in-group’ were the ‘look at’ factor and the strength of the belief that the victim was looking for help important. For the ‘out-group’ these factors seemed to have no effect.
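To make the structure of the model concrete, here is a tiny sketch that encodes just the first term of the equation, exactly as quoted above. The f(...) part over the other questionnaire variables is omitted, and the published equation may include fitted constants not reproduced in this post, so the absolute numbers mean nothing here – the point is only that the term switches off entirely for the ‘out-group’.

```python
import math

def first_term(group, look_at, victim_looked):
    """First term of the fitted model for the number of physical interventions (N).

    group:         1 for 'in-group', 0 for 'out-group'
    look_at:       1 if the victim occasionally looked towards the participant, else 0
    victim_looked: 1-7 agreement with "the victim looked at me wanting help"

    The f(... other questionnaire variables ...) part is omitted.
    """
    return group * math.exp(look_at + victim_looked)

print(first_term(0, 1, 7))  # out-group: 0.0, whatever the other variables are
print(first_term(1, 1, 7))  # in-group: grows with LookAt and VictimLooked
```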

This equation captures 85% of the variation in the original data. The point is that it is much easier to look at this equation and try to understand what it signifies than to look at the original data that it (very well) represents. Reviewers in the psychological and social sciences have to accept that the world has changed since the 1920s when methods such as ANOVA were invented, and that this type of data exploration is a valid way to understand data. There is a world beyond formal 'hypothesis testing'. For example, now that we have this type of equation, what would be wrong with an experimental replication that ran the symbolic regression on the new data and then compared the form of the resulting equation with the original one? Or, having found this equation, it tells us that the more we foster the idea that the victim is looking to the participant for help, the greater the number of interventions should be. We could set up an experiment to test that specific hypothesis. This is not a challenge to conventional methods, but a statement that there is more - and different techniques should not be met with suspicion. As another example, when we first analysed the 'number of interventions' data we did not use standard ANOVA, which is based on the assumption of a continuous response variable and a normally distributed error structure. This is because 'count data' (the number of times something happens) is better modelled with a generalised linear model with a Poisson error structure (log-linear regression). This was treated as something 'suspect' in a first round of reviews, even though generalised linear models with Poisson errors have been around for at least 60 years!
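For anyone who wants to try the count-data approach just mentioned, here is a minimal sketch of a log-linear (Poisson) regression using the statsmodels package in Python. The data frame and column names are invented for illustration; they are not our data.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented example data: number of interventions per participant,
# with group membership and the 'victim looks at participant' factor.
df = pd.DataFrame({
    "interventions": [0, 2, 5, 1, 7, 3, 0, 4],
    "group":  ["out", "in", "in", "out", "in", "in", "out", "out"],
    "lookat": [0, 1, 1, 0, 1, 0, 1, 0],
})

# Generalised linear model with Poisson error and log link:
# log(E[interventions]) = b0 + b1*group + b2*lookat + b3*group:lookat
model = smf.glm("interventions ~ C(group) * lookat",
                data=df,
                family=sm.families.Poisson()).fit()
print(model.summary())
```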

So What?

What are the conclusions from this type of experiment? First, using VR in this way allows us to carry out lab based experiments where participants are confronted with a situation that has high ‘ecological validity’ – it is almost like real life. Such experiments don’t come out of nowhere; they are guided by theory. Here it was the idea suggested by Mark Levine, amongst others, that group affiliation plays a major role in bystander behaviour – the apparent relationship between the bystander, the victim and the perpetrator. Having carried out the experiment we obtain data, so that we can now look again at the theory with this additional information, and perhaps formulate a revised theory, leading to another experiment. On the latter point, we have since carried out an experiment where we changed the number of bystanders (not just one, the participant) and examined the effect of that. The results of the new study are in preparation.

On the practical side we could suggest, for example, that if you are a victim, then yes do explicitly ask people around you for help. This might not be effective if those around do not share some group affiliation with you, but it should help if they do. Even if they do not share affiliation with you, this is something that itself is open to reframing. For example, fans of two rival football clubs might be bitter enemies, but if the situation is redefined so that they are both ‘football enthusiasts’ (compared to say rugby enthusiasts) then at that level they do have a joint affiliation. This was explored in http://psp.sagepub.com/content/31/4/443.short where bystanders to a (non-violent) emergency behaved differently depending on whether they had been primed to think of themselves as fans of a specific football club, or general football supporters. So for a victim it might always be possible to appeal to higher level affiliations if it is possible to seek help from bystanders.

For the bystander him- or herself this experiment cannot say what they ‘should’ do. This depends entirely on circumstances and on the moral choice made by the bystander, taking into account many factors – including, most importantly, their own safety and that of others around. But for authorities this type of experiment might be very useful for the formation of policy. Even from this simple experiment, authorities could give advice that victims should explicitly ask for help if there are any potential bystanders around, even perhaps with advice about how to ‘reframe’ group affiliation (e.g., “Think about how my kids will feel” might appeal to bystanders around who happen to have children!). Or another example: it is widely believed in the social psychology community that there is a ‘bystander effect’ such that the greater the number of bystanders the less the chance that anyone will intervene – because there is a diffusion of responsibility (“Why should I be the one to stop this?”). If that were the situation, what should the victim do to break it? Should, for example, he or she choose someone at random in the crowd and appeal specifically to that person? Or should the victim somehow try to raise group consciousness towards prosocial behaviour (“You are all party to this attack by not helping me!”)? We don’t know the answer, but with an experimental study we could gain some insight into this.

(WARNING - this video includes bad language and depicts a violent confrontation).

Bystander Responses to a Violent Incident in an Immersive Virtual Environment
Mel Slater, Aitor Rovira, Richard Southern, David Swapp, Jian J. Zhang, Claire Campbell and Mark Levine


01 November, 2012

In the Presence of a Rat



BEAMING

The TV series Star Trek originally popularised the idea of instantaneous transportation of people to distant places. This involved decomposition of human bodies and objects into their atomic constituents, and then reconstruction at the remote location. Clearly such technology is perhaps centuries away, even if it could ever be realised. Today there is an alternative technology that aims to realise the same idea through quite different means. This is the idea of ‘beaming’: digitally transporting a representation of yourself to a distant place, where you can interact with the people there as if you were there. It is achieved through a combination of virtual reality and teleoperator systems. The visitor to the remote place (the destination) is ideally represented there by a physical robot. Such rapid transportation to distant locations, where you have the strong feeling of being there, and where the local people at the destination experience you as being there, has many economic and practical advantages. It is a step beyond approaches such as video conferencing, which do not give participants the physical sensation of being in the same shared space, and certainly not the physical capability to actually carry out actions in that space.

The European BEAMING project has achieved early examples of this goal, in relation to humans beaming to distant places and interacting with people there. In one example, Dr Mavi Sanchez-Vives of our event Lab in Barcelona digitally beamed to London to be interviewed by Rory Cellan-Jones, BBC Technology Correspondent. In another, Josep Corbella of La Vanguardia in Catalunya beamed to London to carry out an interview with me about the BEAMING project. Moreover, within the framework of this project a system for remote medical assistance is being developed, in which patients at home can be “transported” to the hospital.

Alien Worlds

In our new paper  published in PLoS ONE the beaming idea is extended by showing how it is possible to beam to even what might be considered as ‘alien worlds’. We show how a person can be ‘beamed’ into a rat open arena, where the person interacts with the rat as if it were another person, and the rat interacts with a rat-sized robotic representation of the distant person. This not only shows the range of this technology, but also provides a new tool for scientists, explorers or others to visit distant and alien places without themselves being placed in any kind of danger, and importantly, to be able to see animal behaviour in a totally new way - as if it were the behaviour of humans.

A combination of several different technologies was used. (1) Virtual reality places people in an alternate computer-generated world where they can look and move around and carry out tasks - all at life-size and perceived in stereo 3D. People generally have the illusion of presence - that is, of being in the computer-generated space as if it were a real space. The human participants in the system were in a virtual reality lab at the event Lab, which is at the Mundet campus of the University of Barcelona. The rat was located about 12 km away in an animal care facility in Bellvitge. (2) Tracking technology was used to track the movements of the rat in its arena, and the tracking data was transmitted over the internet to the computers running the virtual reality simulation in Mundet. This tracking information was used to control a virtual human character (an avatar) that represented the rat. Hence whenever the rat moved, its avatar moved too, in a representation of the rat arena scaled up to human size. So the human participant shared the virtual arena (which looked like a room with some pictures on the walls) with a humanoid avatar. (3) Teleoperation technology in conjunction with tracking was also used. The movements of the human in the virtual reality were also tracked, and this tracking data was sent to computers in Bellvitge which controlled a small robot located in the rat arena. So whenever the human moved in the virtual space, the robot moved in the rat space.
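As a very rough illustration of the plumbing in (2) and (3), here is a sketch of how tracked poses might be exchanged between the two sites and rescaled between the rat arena and the human-sized virtual room. The scale factor, addresses and message format are invented; the real system used dedicated tracking and teleoperation software rather than this toy UDP loop.

```python
import json
import socket

ARENA_SCALE = 12.0                 # invented: metres of virtual room per metre of rat arena
REMOTE_SITE = ("127.0.0.1", 5005)  # placeholder; the real sites were ~12 km apart

def send_rat_pose(sock, x, y, heading):
    """Send one tracked rat pose (arena coordinates, metres) to the VR site,
    scaled up so that the rat's humanoid avatar moves in a human-sized room."""
    msg = {"x": x * ARENA_SCALE, "y": y * ARENA_SCALE, "heading": heading}
    sock.sendto(json.dumps(msg).encode("utf-8"), REMOTE_SITE)

def human_pose_to_robot(msg):
    """Scale a tracked human pose (virtual-room coordinates) down to
    rat-arena coordinates, to drive the small robot."""
    return msg["x"] / ARENA_SCALE, msg["y"] / ARENA_SCALE, msg["heading"]

if __name__ == "__main__":
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_rat_pose(sock, 0.3, 0.5, 90.0)  # rat at (0.3 m, 0.5 m), facing 90 degrees
    print(human_pose_to_robot({"x": 6.0, "y": 3.6, "heading": 180.0}))
```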

Interaction Between Human and Rat

So putting all this together - the rat interacted with a rat sized robot that represented the remotely located human, and the human interacted with a human sized avatar that represented the remotely located rat.

In order to make the rat interested in the robot, a small tray attached to the robot body had some jelly on it, which the rats had previously eaten in some training sessions. The humans in a small study interacted with the rat. They had to learn to entice the rat-avatar to go near some specific pictures in the virtual reality. The humans didn’t know about the jelly on the robot, but they typically learned that the rats would follow them, so they had to draw the rats out of their normal behaviours (staying in the corners and staying close to the edges) to get them to move around so that both could stand by the same picture.

Overall the study showed that the system performed well technically, and that there could be an interesting interaction between the animal and the human - two remotely located beings at very different scales.

At Different Scales

A very important part of this is that this system shows that it is possible for people to interact with creatures at very different scales from the human. As discussed in the paper it would be possible to set up a similar system for humans to visit worlds of vastly different scales - insects, birds, and so on. Seeing these animals in a transformed way (for example as human avatars as we did in this paper) might give insights that would otherwise not be possible. However, our fundamental reason for doing this work is to show how it is technologically possible. It remains to be explored what insights might be obtained from this type of human-'alien' interaction.

This paper has been a long time in the making. It was started as part of the PRESENCCIA project, where the idea was formed and some initial work was carried out. It was then fully realised during the BEAMING project, with contributions of technology that had been developed under other projects mentioned in the paper - in particular the avatar work developed in the ERC TRAVERSE project. 

video
The research was carried out by the following groups:

event Lab, University of Barcelona, Barcelona, Spain

22 July, 2012

A Very Long Arm Illusion

Kilteni K, Normand J-M, Sanchez-Vives MV, Slater M (2012) Extending Body Space in Immersive Virtual Reality: A Very Long Arm Illusion. PLoS ONE 7(7): e40867. doi:10.1371/journal.pone.0040867


Asymmetry


We believe that our bodies are fixed and unchangeable except through the slow process of growing and ageing. Over recent years there have been research results that defy this common sense view - it seems that the human brain will quickly accept gross changes in the body - incorporating external objects such as a rubber arm into the body representation, and even whole bodies seen in virtual reality.



In this paper we add another dimension to this illusion of body ownership. Using virtual reality we show that a virtual body with one very long arm can be incorporated into the body representation. An arm up to three or possibly even four times the length of a person’s real arm can be felt as if it were the person’s own arm. This is notwithstanding the fact that having one such long arm introduces a gross asymmetry in the body. An extended body space (a body with longer limbs occupies more volume than a normal body) also affects the special space surrounding our body called ‘peripersonal space’ - a space that, when violated by objects or other people, can be experienced as a threat or as intimacy, depending on the context.


Virtual Body



In our experiment 50 people experienced virtual reality where they had a ‘virtual body’. They put on a head-mounted display so that all around themselves they saw a virtual world. When they looked down towards where their body should be, they saw a virtual body instead of their real one. They had their dominant hand resting on a table with a special textured material that they could feel with their real hand (Figure 1A, B), but also see their virtual hand touching it (Figure 1C,D).  So as they moved their real hand over the surface of this table they would see the virtual hand doing the same.


In fact, for 10 of the participants, although their real hand touched the table top, their virtual hand did not (Figure 1E, F) - we did this to create an inconsistency between what they felt and what they saw. This group always saw their virtual arm at the same length as their real arm. For another group who also saw the virtual arm at the same length as their real one, there was no inconsistency (Figure 1C, D) - the real hand touched the surface of the table, and the virtual hand was seen to do the same. This same consistency was kept for three other groups of 10 people each: one where the table moved away to double the length of the real arm and the virtual arm stretched to double its length (Figure 2B), another where the virtual arm stretched to three times the true length (Figure 2C), and another where it stretched to four times the true length (Figure 2D).

We took three measurements:
(a) a questionnaire to assess the subjective illusion that the virtual arm was part of the person’s body.
(b) a pointing task, where the arm that did not grow in length was required to point towards where the other hand was felt to be (with eyes shut) (Figure 3).
(c) response to a threat - a saw fell down towards the virtual hand (Figure 2E, F) - and we measured whether people would move their real hand in an attempt to avoid the attack.

Malleable

What we found was that people did have the illusion that the extended arm was their own - based on all three measures. Even when the virtual arm was 4 times the length of the corresponding real arm, 40-50% of participants still showed signs of incorporation of the virtual arm into their body representation. We also found that vision alone is a very powerful inducer of the illusion of virtual arm ownership - those who experienced the inconsistent condition, where the virtual hand did not touch the table even though the real hand felt the table top, still had a strong illusion of ownership over the virtual arm.

These results show how malleable our body representation is, even incorporating strong asymmetries in body shape that do not correspond at all to the average human shape. This type of research will help neuroscientists to understand how the brain represents the body, and ultimately may help people overcome illnesses that are based on body image distortions.

Analysis

For the first time we used the method of path analysis to analyse the results. The most frequently used tools of analysis (ANOVA, regression) are special cases of the general linear model. This requires a single response variable, and postulates a linear model that relates this response to a number of other variables (for example, representing factors in an experiment, or covariates). It assumes an additive, normally distributed error term. However, when there are several variables contributing to a phenomenon, there is likely to be a complex relationship between them that cannot be expressed in a single equation: Y may depend on X1 and X2, while X2 may itself depend on X1 and X3, and so on. Path analysis allows for the possibility of unravelling multiple associations between variables. For example, consider the situation where X affects Y1 and X also affects Y2. Then any apparent correlation between Y1 and Y2 could be an example of so-called spurious correlation, since they both depend on a common variable (X). With path analysis it is possible to test whether the apparent correlation between Y1 and Y2 is preserved even after allowing for the influence of X. This situation occurs in our study, since there are multiple assessments of what might be the same underlying phenomenon - the sensation of 'ownership' over the virtual arm, or even more basic than that, the experimental manipulation itself. These assessments were the questionnaire, movement in response to a threat to the arm, and blind pointing direction towards the virtual hand (do you point to the hand at the end of your elongated virtual arm, or towards your real hand?). Path analysis allowed us to separate out the various influences, and to present a simple diagram that summarises the findings.
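The spurious-correlation point can be illustrated in a few lines of Python with simulated data: X drives both Y1 and Y2, so Y1 and Y2 correlate, but the correlation essentially disappears once the influence of X is partialled out - which is, in essence, what the corresponding paths in a path model test. This is a generic illustration, not our analysis or our data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# X causes both Y1 and Y2; Y1 and Y2 have no direct link.
x = rng.normal(size=n)
y1 = 0.8 * x + rng.normal(scale=0.5, size=n)
y2 = 0.6 * x + rng.normal(scale=0.5, size=n)

# Raw correlation between Y1 and Y2 looks substantial...
print("r(Y1, Y2) =", round(np.corrcoef(y1, y2)[0, 1], 2))

# ...but after removing the part of each explained by X (regression residuals),
# the correlation essentially vanishes - it was due to the common influence X.
def residuals(y, x):
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_partial = np.corrcoef(residuals(y1, x), residuals(y2, x))[0, 1]
print("r(Y1, Y2 | X) =", round(r_partial, 2))
```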

In psychology especially it is often very difficult to introduce new analysis techniques - the preference is for statistical techniques that are approximately a century old. Things move on though - a large number of statisticians now prefer the Bayesian interpretation (which renders the idea of 'significance testing' meaningless), and path analysis and its more general counterpart, Structural Equation Modelling, are widely used. The freedom to explore more modern approaches is really needed.

27 May, 2012

Inverse Presence


People use virtual reality for a purpose - like any kind of media, the purpose of experiencing it is to effect some change. This change might be simply at the level of enjoying something (entertainment), or viewing something (for the sake of understanding, design or development), or to learn something new, or to have some new kind of experience that is not available or difficult to achieve in everyday reality.

In some sense there is always a task to be realised - whether it is "enjoy yourself" or something more concrete than that, such as realising a specific set of actions.

Now for many years there has been research on factors that contribute to 'presence'. Here by 'presence' I mean that aspect where people tend to respond to situations and events in the virtual world as if they were real: an avatar smiles at you and you smile back, or an avatar comes close to you and you feel uncomfortable and possibly step backwards, since this breaks the norms of proxemics. 

Even if only implicitly most applications of VR rely on presence occurring. For example, using VR for psychotherapy would be useless unless the patients, to some extent, respond realistically to what is depicted in the VR. So VR therapy, for example, for fear of heights would not be useful if patients did not feel some anxiety around their experience of precipices in VR.

Lydia Reeves Timmins and Matthew Lombard used the term 'inverse presence' to describe situations where something happens in reality that is perceived as if it were not real - real events (especially horrific ones) may be experienced as 'simulated' - i.e., we have all experienced moments in which we think "this is not really happening". Here I want to give a different meaning to the term "inverse presence" - to mean that we assume that presence in a VR will happen, and therefore we exploit this to get participants to achieve some particular tasks that they had never explicitly been told to do. For example, suppose the task is "get this person to smile" - then elements of the virtual environment (such as avatars) must learn to carry out actions that evoke this response. Probably here it would be quite easy - since from 'presence theory' we know that if an avatar smiles at the participant they are very likely to smile back - and so introduction of smiling avatars would probably do the trick.

Most experimental studies of presence vary factors that are thought to contribute to presence, and then see when and whether presence occurs. 'Inverse presence' means: we know that presence will occur, and that certain behaviours are likely to follow from it, so let's utilise this to get people to carry out certain specified actions.

In a recently published paper (PDF) by Jason Kastanis and myself in ACM TAP we described a quite simple example of this approach. When you interact with a virtual human character in immersive VR you tend to respond realistically. In particular, several other works have shown that the rules of 'proxemics' operate - that is, if the avatar approaches too closely, you step backwards (the implicit rules of social, personal and intimate space seem to apply in your interactions with avatars too). Our goal was for an avatar to learn how to get the participant to go to a particular place within the virtual environment - a place some metres behind where they were initially standing. The avatar was programmed with a number of actions it could take - move forward or back, do nothing, or wave to the participant saying "come here". At first the avatar chose these actions at random, but over time it converged on the right behaviour - get the person close to the avatar and then move forward towards them so that they backed away. The avatar was controlled by a Reinforcement Learning agent, which received a reward when the person moved towards the target and a loss when they did not. The RL algorithm is designed to maximise long term reward. What we found is that when the avatar was allowed to move to an intimate distance from the person, it learned how to drive them to the pre-specified place within 7 minutes. It took much longer if it could only move to personal distance, and it didn't work at all if it just selected random actions (move forward or back or wave).
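To give a flavour of the learning mechanism, here is a minimal tabular Q-learning sketch of a 1D version of the problem: the agent (the avatar) chooses among a few actions, a crude simulated participant backs away when the avatar intrudes on their personal space, and the agent is rewarded when the participant moves towards the target position behind them. The paper used a Reinforcement Learning formulation tailored to the real system and real participants; everything below (state coding, reward values, the simulated participant) is invented purely for illustration.

```python
import random

TARGET = -5                      # goal position: a few steps behind where the participant starts
ACTIONS = ["avatar_forward", "avatar_back", "wave_come_here", "do_nothing"]

def simulate_step(participant, avatar, action):
    """Apply the avatar's action plus a crude presence-inspired participant model:
    the participant steps back when the avatar intrudes on personal space, and
    steps towards the avatar when waved at from a comfortable distance."""
    if action == "avatar_forward":
        avatar -= 1
    elif action == "avatar_back":
        avatar += 1
    elif action == "wave_come_here" and avatar - participant > 1:
        participant += 1
    if avatar - participant <= 1:          # too close: proxemics push the participant back
        participant -= 1
    return participant, avatar

def q_learning(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = {}                                 # (participant, avatar) -> {action: value}
    for _ in range(episodes):
        participant, avatar = 0, 4
        for _ in range(30):
            state = (participant, avatar)
            q.setdefault(state, {a: 0.0 for a in ACTIONS})
            action = (random.choice(ACTIONS) if random.random() < epsilon
                      else max(q[state], key=q[state].get))
            new_p, new_a = simulate_step(participant, avatar, action)
            # reward movement towards the target, penalise movement away or standing still
            reward = 1.0 if new_p < participant else (-1.0 if new_p > participant else -0.1)
            next_state = (new_p, new_a)
            q.setdefault(next_state, {a: 0.0 for a in ACTIONS})
            q[state][action] += alpha * (reward + gamma * max(q[next_state].values())
                                         - q[state][action])
            participant, avatar = new_p, new_a
            if participant <= TARGET:      # participant driven to the target place
                break
    return q

if __name__ == "__main__":
    q_table = q_learning()
    print("states visited during learning:", len(q_table))
```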

The purpose of VR is to get people to 'do' things (doing includes experiencing). Here we let the VR system learn how to get the person to do things by relying on their likely responses to events, as predicted by presence theory. The RL worked efficiently, but of course this was a very simple 1D problem. Nevertheless I think that the paradigm is worth pursuing with more complex scenarios.