Most experimental studies of presence vary factors thought to contribute to presence and then observe whether and when presence occurs. 'Inverse presence' turns this around: we know that presence will occur, and that certain behaviours are likely to follow from it, so we exploit those behaviours to get people to carry out specified actions.
In a recently published paper (PDF) by Jason Kastanis and myself in ACM TAP we described a quite simple example of this approach. When you interact with a virtual human character in immersive VR you tend to respond realistically. In particular, several studies have shown that the rules of 'proxemics' operate: if an avatar approaches you too closely, you step backwards (the implicit rules of social, personal and intimate space seem to apply in your interactions with avatars too).

Our goal was for an avatar to learn how to get the participant to move to a particular place in the virtual environment, some metres behind where they were initially standing. The avatar was programmed with a small set of actions it could take: move forward or back, do nothing, or wave to the participant saying "come here". At first the avatar chose these actions at random, but over time it converged on the right behaviour: draw the person close and then move towards them so that they backed away. The avatar was controlled by a Reinforcement Learning agent that received a reward when the person moved towards the target and a penalty when they did not; the RL algorithm is designed to maximise long-term reward. We found that when the avatar was allowed to approach to intimate distance, it learned how to drive the person to the pre-specified place within 7 minutes. It took much longer when the avatar could only approach to personal distance, and it did not work at all when the avatar simply selected random actions (move forward or back, or wave).
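To make the learning loop concrete, here is a minimal toy sketch of this kind of setup. Everything in it is invented for illustration: the positions, rewards, action set and the hand-coded "person" (who backs away when the avatar intrudes on intimate space, and approaches when waved at) are crude stand-ins for the real participant and the more elaborate formulation in the actual experiment. It uses plain tabular Q-learning, one standard RL algorithm, not necessarily the one used in the paper.

```python
import random

ACTIONS = ["forward", "back", "wave", "nothing"]  # avatar's repertoire
INTIMATE = 1        # gap at which the simulated person backs away
START_P, START_A = 5, 8   # person and avatar positions on a 1-D line
TARGET = 0                # goal: drive the person back to position 0

def step(p, a, action):
    """One interaction step: the avatar acts, then the simulated person
    reacts according to crude proxemics rules (all hypothetical)."""
    if action == "forward":
        a = max(a - 1, 0)          # avatar advances towards the person
    elif action == "back":
        a = min(a + 1, 10)         # avatar retreats
    if a - p <= INTIMATE:          # too close: person backs away
        new_p = p - 1
    elif action == "wave":         # "come here": person approaches
        new_p = min(p + 1, 9)
    else:
        new_p = p
    reward = 1 if new_p < p else -1  # reward movement towards the target
    return new_p, a, reward

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning with epsilon-greedy exploration."""
    Q = {}
    q = lambda s, act: Q.get((s, act), 0.0)
    for _ in range(episodes):
        p, a = START_P, START_A
        for _ in range(30):        # cap episode length
            s = (p, a)
            if random.random() < eps:
                act = random.choice(ACTIONS)     # explore
            else:
                act = max(ACTIONS, key=lambda x: q(s, x))  # exploit
            p, a, r = step(p, a, act)
            best_next = max(q((p, a), x) for x in ACTIONS)
            Q[(s, act)] = q(s, act) + alpha * (r + gamma * best_next - q(s, act))
            if p <= TARGET:
                break
    return Q

def rollout(Q):
    """Follow the learned greedy policy from the start state."""
    p, a, path = START_P, START_A, []
    for _ in range(30):
        act = max(ACTIONS, key=lambda x: Q.get(((p, a), x), 0.0))
        p, a, _ = step(p, a, act)
        path.append((act, p, a))
        if p <= TARGET:
            break
    return p, path
```

In this toy world the policy the agent converges on mirrors the one described above: advance to intimate distance and keep advancing, so that the person backs step by step towards the target.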
The purpose of VR is to get people to 'do' things (where 'doing' includes experiencing). Here we let the VR system learn how to get a person to do things by relying on their likely responses to events, as predicted by presence theory. The RL worked efficiently, but of course this was a very simple 1D problem. Nevertheless, I think the paradigm is worth pursuing with more complex scenarios.