Musing today about ray-tracing in simulated 3D environments, I realized that these spaces have a singularity of view, even though they are intended to mimic a “real” space. Consider a reflective surface: it looks different depending on one’s angle, yet in the simulated 3D space, even for two viewers looking at one monitor, the reflection is exactly the same. Similarly, shapes out of view or obscured by another object are not rendered, and in effect do not exist until they come into view.
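That view-dependence is easy to make concrete. In a ray tracer, the mirror reflection at a surface point is R = V − 2(V·N)N, where V is the direction from the viewer to the point and N is the surface normal, so the reflected ray depends directly on where the viewer stands. A minimal sketch (all positions and names here are illustrative, not from any particular renderer):

```python
import numpy as np

def reflect(view_dir, normal):
    """Mirror-reflect an incoming view direction about a surface normal."""
    n = normal / np.linalg.norm(normal)
    return view_dir - 2.0 * np.dot(view_dir, n) * n

surface_point = np.array([0.0, 0.0, 0.0])
normal = np.array([0.0, 1.0, 0.0])  # surface faces straight up

# Two viewers at different positions, both looking at the same point.
viewer_a = np.array([-1.0, 1.0, 0.0])
viewer_b = np.array([2.0, 1.0, 0.0])

ray_a = reflect(surface_point - viewer_a, normal)  # -> [1, 1, 0]
ray_b = reflect(surface_point - viewer_b, normal)  # -> [-2, 1, 0]
```

The two reflected rays differ, so each viewer would see a different part of the environment mirrored in the surface; a rendered frame bakes in exactly one such viewpoint, which is the singularity of view described above.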
A step closer would be an online environment like an MMORPG, where multiple users might be looking at the same object from different views. The caveat is that the users are unlikely to share a physical space (i.e., to be in a room together), and since their computers are fixed, their physical arrangement does not match their configuration in the game. It is unlikely that a friend could call out from behind an object while their physical position matched their view of, and relationship to, the virtual object.
The closest technology I can see is Augmented Reality (AR), in which a tool like a smartphone or tablet is the viewing device and its camera serves to locate the viewer simultaneously in a physical and a virtual space. A cube marked with patterns for recognition by the camera could appear on the screen as a reflective sphere, reactive to the viewer’s position. Simultaneous viewing then becomes possible, in a social space where dialog and a responsive virtual environment exist together.
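The piece a marker supplies is the viewer’s position. Marker trackers typically report the pose of the camera relative to the marker as a rotation R and translation t; from those, the camera centre in the marker’s coordinate frame is C = −Rᵀt. A hedged sketch, assuming some tracking library has already produced R and t (the poses below are invented for illustration):

```python
import numpy as np

def viewer_position(R, t):
    """Camera centre in marker coordinates, given a marker-to-camera
    pose (rotation matrix R, translation vector t): C = -R^T t."""
    return -R.T @ t

# Hypothetical per-frame poses, as a tracker might report them while
# the viewer walks around the marked cube.
R = np.eye(3)  # camera axes aligned with the marker, for simplicity
frame1 = viewer_position(R, np.array([0.0, 0.0, -2.0]))  # -> [0, 0, 2]
frame2 = viewer_position(R, np.array([1.0, 0.0, -2.0]))  # -> [-1, 0, 2]
```

Feeding each frame’s C into a view-dependent shading step is what would make the virtual sphere’s reflection track the viewer, and since every tablet computes its own C, each person around the cube gets their own correct view of the shared object.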