The challenges in building an interactive immersive system vary with the scope of the display area and the number of sensors used as input. The artist has to create a model of the environment they wish to portray, map it to a perspective cubemap, and then project that portrayal from the perspectives of the projectors. For interactive immersive experiences, the creator must also map real-world sensor data onto that model and render the live experience at a high enough frame rate to avoid input lag. The sensors are the user input system, and they have to be mapped at the same relative distances as the virtual model.
Traditional planetarium spaces describe a hemisphere, which is easily modeled. For this reason, there are a variety of tools for building dome master-formatted live output. One specialized Unity3D camera for doing that, DomeMaster4K, is available on my GitHub account; it's the basis for the planetarium games and utilities I've built for Dome Lab. It's based on open source work by Paul Bourke, so I've kept my own implementation in the public domain, something I'd like all of you to consider doing with the tools you develop for immersive entertainment purposes.
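If you want to see the core idea behind a tool like that without digging through the repo, the basic loop is: capture the scene into a cubemap every frame, then warp that cubemap into the circular dome master image with a fisheye shader. Below is a minimal sketch of that loop, assuming a domeWarpMaterial whose shader does the fisheye warp; DomeMaster4K is the complete, tested version of this idea.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Minimal sketch: capture the scene into a cubemap every frame, then warp it
// into a dome master image. Attach this to the camera whose output feeds the
// dome; "domeWarpMaterial" is an assumed fisheye-warp shader.
public class SimpleDomeCapture : MonoBehaviour
{
    public Camera captureCamera;       // placed at the audience sweet spot, under the dome center
    public Material domeWarpMaterial;  // assumed: samples _Cubemap and outputs the fisheye circle
    public int faceSize = 2048;        // resolution of each cube face

    RenderTexture cubemap;

    void Start()
    {
        // A render texture flagged as a cube gets all six faces filled by RenderToCubemap.
        cubemap = new RenderTexture(faceSize, faceSize, 24);
        cubemap.dimension = TextureDimension.Cube;
    }

    void LateUpdate()
    {
        // Face mask 63 = all six faces.
        captureCamera.RenderToCubemap(cubemap, 63);
        domeWarpMaterial.SetTexture("_Cubemap", cubemap);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        // Run the warp as a full-screen pass on this camera's output.
        Graphics.Blit(src, dst, domeWarpMaterial);
    }
}
```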
For spaces that aren't planetariums, there are a couple of ways to model them effectively. One technique is photogrammetry: take a large set of pictures with a fixed focal length lens, then feed them into your choice of photogrammetry software to generate a model. The mesh produced this way is complex, so it's a good idea to carefully build a lower poly count version of the space based on the generated model. This can be done manually, or by applying a decimation modifier to the photogrammetry-based model.
Once the model of the space is created, it's necessary to map the cube faces of the virtual camera onto the surface of the model. This is done with perspective-based projection, and the technique varies according to the software used. In my favorite modeling program, Blender, the easiest way I've found is to create a 90-degree FOV camera, orient it to match the cubemap face, and then unwrap the model from that camera's perspective with Project From View. This is repeated for each cube face that will be projected.
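If you'd rather generate those per-face UVs programmatically instead of unwrapping by hand, the same idea can be expressed as a small Unity script: push each vertex through a square, 90-degree camera and keep the resulting viewport coordinates as UVs. This is only an illustrative sketch of the projection step, not a replacement for a proper unwrap.

```csharp
using UnityEngine;

// Illustrative sketch: project a mesh's vertices through a square, 90-degree
// camera and store the viewport coordinates as UVs, roughly what Blender's
// "Project From View" unwrap does for a single cube face.
public class ProjectUVsFromCamera : MonoBehaviour
{
    public Camera faceCamera;   // oriented along one cubemap axis
    public MeshFilter target;   // the low poly model of the physical space

    [ContextMenu("Project UVs From Face Camera")]
    void ProjectUVs()
    {
        // Force the cube-face geometry: square aspect, 90-degree field of view.
        faceCamera.fieldOfView = 90f;
        faceCamera.aspect = 1f;

        Mesh mesh = target.sharedMesh;   // note: writes UVs into the shared mesh asset
        Vector3[] vertices = mesh.vertices;
        Vector2[] uvs = new Vector2[vertices.Length];

        for (int i = 0; i < vertices.Length; i++)
        {
            // Mesh vertices are in local space; move them into world space first.
            Vector3 world = target.transform.TransformPoint(vertices[i]);

            // Viewport coordinates run 0..1 across the camera's view, which is
            // exactly the UV range wanted for this cube face.
            Vector3 vp = faceCamera.WorldToViewportPoint(world);
            uvs[i] = new Vector2(vp.x, vp.y);
        }

        mesh.uv = uvs;
    }
}
```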
It’s then possible to add cameras to the scene that reflect the positions and orientations of the physical projectors, relative to the model used to create the display. Projector overlap is impossible to avoid, so it’s a good idea to provide a way for the displays to mask and blend their content, which reduces the seam effect where two projected views join. I do that with a plane positioned just in front of each projector feed camera, using a shader that supports alpha transparency and a system that accepts a PNG file to act as the blending mask. Each blending mask goes on its own layer in Unity to keep it from interfering with the other cameras. Each projector should be directed to a different display number, and it’s a really good idea to come up with a system that orders those displays in advance. If you’re running Windows, you can use the “Display Settings” tool to identify the display numbers and show them on your projected surfaces.
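In Unity, the display routing and mask layering can be set up from a single startup script. The sketch below assumes one reserved layer per blending-mask plane, starting at a maskLayerBase index of your choosing, and that the projectorCameras array is already ordered to match whatever display numbering system you settled on.

```csharp
using UnityEngine;

// Sketch: activate all attached displays and route one projector camera to each.
// Assumes each camera's blending-mask plane lives on its own Unity layer
// (maskLayerBase, maskLayerBase + 1, ...) so only that camera renders its own mask.
public class ProjectorDisplaySetup : MonoBehaviour
{
    public Camera[] projectorCameras;  // ordered to match your physical display numbering
    public int maskLayerBase = 8;      // first Unity layer reserved for blending masks

    void Start()
    {
        // Displays other than the primary one stay inactive until activated.
        for (int i = 1; i < Display.displays.Length; i++)
            Display.displays[i].Activate();

        for (int i = 0; i < projectorCameras.Length; i++)
        {
            Camera cam = projectorCameras[i];
            cam.targetDisplay = i;  // "Display 1" in the editor UI is targetDisplay 0

            // Hide every mask layer from this camera except its own.
            for (int layer = 0; layer < projectorCameras.Length; layer++)
            {
                int bit = 1 << (maskLayerBase + layer);
                if (layer == i) cam.cullingMask |= bit;
                else cam.cullingMask &= ~bit;
            }
        }
    }
}
```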
With your projectors in place and outputting in the real world, your project is ready for sensor mapping. Sensor mapping is similar to the last part of projection mapping, but involves positioning the sensors instead of the projectors. If you have a way to show debug data for your sensors, it’s a really good idea to incorporate it into your display, so you can visually confirm that the values the sensors feed in map to the movements detected in the real world. Sensors can include cameras of various types, depth trackers, buttons, potentiometers, flexible resistors, PIR sensors, smartphones, or just about any other kind of electronics hardware.
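A simple way to get that debug view in Unity is an overlay that prints the raw reading and draws a line to wherever that reading lands in the virtual model. In the sketch below, GetSensorReading() is a placeholder for your actual sensor feed, and roomScale stands in for whatever scale factor relates your sensor data to the model.

```csharp
using UnityEngine;

// Sketch of a sensor debug overlay: print the raw reading on screen and draw a
// line to where that reading lands in the virtual space. GetSensorReading() is
// a placeholder for whatever your real sensor integration provides.
public class SensorDebugOverlay : MonoBehaviour
{
    public Transform sensorOrigin;   // placed to match the physical sensor's position
    public float roomScale = 1f;     // scale factor between sensor units and model units

    Vector3 latestReading;

    Vector3 GetSensorReading()
    {
        // Placeholder: substitute your real sensor input here.
        return Vector3.zero;
    }

    void Update()
    {
        latestReading = GetSensorReading();

        // Map the sensor-space reading into the virtual model and visualize it.
        Vector3 mapped = sensorOrigin.TransformPoint(latestReading * roomScale);
        Debug.DrawLine(sensorOrigin.position, mapped, Color.green);
    }

    void OnGUI()
    {
        // Raw values on screen so you can sanity-check them against real movement.
        GUI.Label(new Rect(10, 10, 400, 20), "Sensor: " + latestReading.ToString("F3"));
    }
}
```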
For a simple system that makes smartphone sensor data available to a Unity project, check out my open source software, slab-fondler.
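slab-fondler has its own interface, but the general shape of the problem is the same for any networked sensor source: the phone streams readings, and a component in Unity listens on a socket and parses them. The sketch below is only an illustration of that pattern, not slab-fondler's actual API; the port number and comma-separated message format are made up for the example.

```csharp
using System.Net;
using System.Net.Sockets;
using System.Text;
using UnityEngine;

// Illustrative sketch only (not slab-fondler's actual API): listen for phone
// sensor readings sent as UDP text like "0.01,0.98,-0.05" and keep the latest
// value where other scripts can read it. Port and message format are made up.
public class PhoneSensorReceiver : MonoBehaviour
{
    public int port = 9050;             // assumed port; match whatever the phone app sends to
    public Vector3 latestAcceleration;  // most recent parsed reading

    UdpClient client;

    void Start()
    {
        client = new UdpClient(port);
    }

    void Update()
    {
        // Drain any packets that arrived since the last frame without blocking.
        while (client.Available > 0)
        {
            IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
            string message = Encoding.UTF8.GetString(client.Receive(ref remote));

            string[] parts = message.Split(',');
            float x, y, z;
            if (parts.Length == 3 &&
                float.TryParse(parts[0], out x) &&
                float.TryParse(parts[1], out y) &&
                float.TryParse(parts[2], out z))
            {
                latestAcceleration = new Vector3(x, y, z);
            }
        }
    }

    void OnDestroy()
    {
        client.Close();
    }
}
```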