When a new technology paradigm is visible, but neither obvious nor enabled, the opportunity belongs to creative, technical, and business visionaries. Toward the end of this initial phase of the cycle, the opportunity becomes visible and often seems trivially easy to observers. At this point, opportunists tend to jump in, setting interns and family members to complete projects “well enough” to put a toe in the water and position for upcoming opportunities. After that phase, the paradigm becomes professionalized, with tasks allocated to experts in aligned industries, or to new sorts of experts formed specifically for the new paradigm.
Those are pretty broad statements, so examples are called for to clarify what this means, and why this sort of understanding may be useful for recognizing current and future opportunities.
The web really began to be of interest in 1993, when the Mosaic browser started to ship, with anxiously awaited regular updates adding functionality. There was a generation that immediately saw democratizing opportunity in a distributed, interactive technology for intellectual and creative sharing. Publications like Word.com and suck.com explored diverse possibilities. The leaders of this generation found representation with agents at William Morris and were featured in sexy ad shoots. Just as this initial phase peaked in the late '90s, though, friends-and-family production of websites became ubiquitous, and the professionalization of the field began to take shape. Design and IT talent pools merged with what had been a gritty, small-scale field, and the initial creative and communal vision became irrelevant to the new realities. The visionaries who had been in at the start either found a place in newly defined professionalized roles, rolled into new tech paradigms, or (perhaps most often) ditched commercial efforts for academia.
More recently, mixed reality (MR)/virtual production and immersive experience reached their initial points of interest around 2016 and 2014, respectively. Their paths since have been quite different, so I'll start with the virtual production paradigm.
By 2016, it was possible to hack together functionality for a VR scene in Epic's Unreal Engine that would track a human being passing through the space and interacting in real time with elements of that constructed world, using a greenscreen and camera integration. The limitation was that only crazy people would attempt to do so (mirroring the Word.com/suck.com era). This was the period of technical and creative visionaries. Epic was obviously quite interested in making this process both easier and more broadly available, and in subsequent years they did a lot to make it so. The Covid era added wings to such efforts, and ROE was fortunate to have just the right technology in their LED screens to exploit what Unreal could now do, far surpassing what could be done with chromakey. By late in the Covid era, MR was fully professionalized, with execution leaning on experienced special-effects professionals, and the engine's interface subsumed, for many such purposes, under the industry-standard Disguise interface. Due to the ever-evolving nature of the Unreal Engine, there remains more space for technical visionaries than one might otherwise expect, but the field is now definitely in the fully professionalized phase.
A bit earlier, in 2014, immersive experience was still in its fairly raw beginnings. There already existed several software tools for projecting linear video, anamorphically distorted, to create illusions of dimensionality when projected onto dimensional targets (projection mapping). This was often pretty, sometimes interesting, but not really significant. At that time, however, Epic revised its licensing strategy for the Unreal Engine to be effectively free for uses that were not conventional game sales, and made the software free to download. This meant it was possible (as with MR uses) for a small team to make amazing imagery that could be projected onto three-dimensional targets, creating a wide variety of striking illusions and interactions in physical space. Not only could conventional projectors be employed to display this spatially correct imagery, but all sorts of physical elements, such as robotics and deformable screens, could be controlled as well. Until the Covid lockdowns occurred, it really seemed like this was the aspect of Unreal Engine usage that would dominate, especially with the development of nDisplay in Unreal and its integration with Disguise, which is predominantly theatrical software.
So, where does the paradigm shift of immersive experience currently sit, relative to the timeline described at the outset? It actually seems bifurcated right now. On one hand, there is Unreal Engine as the theatrical tool, which is pretty much at the professionalized stage. It is executed by people who have theatrical experience, often on large stages, primarily using Disguise as their interface for control.
The other aspect of immersive experience, especially as enabled by Unreal, is probably more interesting. This is Unreal as the tool for a fully three-dimensional dataset, with controls for all aspects of experience: a world where we know what is happening in all dimensions and use a variety of tools to control and engage with it. It is the post-LCD-screen world. Not that we won't have such screens, but we will have a wide variety of interface technologies that take into consideration the inherent three-dimensionality of human experience. It is already here in various contexts, such as emerging automotive displays, but this rethinking will rapidly accelerate in the coming years. In a sense, this is the metaverse. But the value of ubiquitous dimensional experience goes far beyond simply having dimensional online interactions. Hence a landgrab, buying your space in some new metaverse concept, is like investing in a modest metaphor rather than engaging with the real upcoming change. One might do better buying physical real estate, once one understands that the new experiential world will not be confined to flatscreens (or VR) but will be at least as significant to what happens in the real world.
At this moment, this branch of immersive experience is absolutely the most relevant to creative and technical visionaries, and I suppose that's really the motivation for writing this: to highlight that we are at an exciting and significant point in this area of experience, and to clarify, for myself as much as for others, that the exciting moment won't last long.
One final note, as I look over what I've written. I recognize that it skews heavily toward the Unreal Engine as a power behind what's coming, and some may disagree with me on that. However, I have worked with a wide variety of tools, and a wide variety of experts in those tools, to the extent that I believe I have some pretty good arguments as to why Unreal Engine is uniquely significant. If there's interest, I will write another post explicitly describing why. For now, I'll simply note that I get nothing from Epic but the engine itself, and I have no stake in Unreal beyond what it can do.