Checking in on Metaverse & AI
The Invesco Metaverse and AI fund was launched to capitalise on the opportunities emerging from the next step in the evolution of the internet. As we approach the third anniversary of the fund’s launch, we find plenty of evidence that the theme is progressing nicely, as we transition towards a world where our digital and physical lives will converge seamlessly in an immersive experience. In this update, we revisit some of our original insights and highlight how the portfolio is positioned to capture the ongoing momentum across key themes.
Computing power is not holding us back
When we launched the fund, we said that four key enabling technologies would need to develop further to support fully immersive, 3D, real-time metaverse experiences: VR/AR headsets, computer hardware, artificial intelligence (AI) and wireless broadband connectivity.
Since then, the most significant development has been the rapid emergence and adoption of generative AI apps, like ChatGPT. What is often underappreciated is the scale and speed of improvement we are seeing in the computer hardware powering these large language models. Nvidia’s GB300, due for release later this year, is the company’s highest-performing, largest-scale AI system for enterprises. It is 1.5-2x faster than the GB200, but will itself be surpassed in 2027 by Rubin Ultra, which is expected to be 8-14x faster than the GB300, implying a 12-28x improvement over the GB200 from hardware alone¹.
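As a rough back-of-the-envelope check, the 12-28x range simply compounds the two quoted generation-over-generation speedups, assuming they multiply through; a minimal sketch:

```python
# Compounding Nvidia's quoted generation-over-generation speedups
# (GB200 -> GB300 -> Rubin Ultra), using only the ranges cited above.
gb300_vs_gb200 = (1.5, 2.0)    # GB300: 1.5-2x faster than GB200
rubin_vs_gb300 = (8.0, 14.0)   # Rubin Ultra: expected 8-14x faster than GB300

# Assumes the speedups compound multiplicatively across generations.
low = gb300_vs_gb200[0] * rubin_vs_gb300[0]    # 1.5 * 8  = 12
high = gb300_vs_gb200[1] * rubin_vs_gb300[1]   # 2.0 * 14 = 28
print(f"Rubin Ultra vs GB200: {low:.0f}-{high:.0f}x")  # -> 12-28x
```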
We are entering a new phase of exponential growth in AI capabilities across the technology stack. Training and inference capacity has grown sharply in recent months, attracting still more investment, while the models themselves keep improving, supported by parallel advances in software and protocols.
What comes next?
AI models are rapidly advancing from simple pattern-recognition systems to more sophisticated "reasoning" models capable of tackling complex, multi-step questions and problems. This evolution paves the way for AI agents: systems that can autonomously perform a wide variety of tasks, reducing the need for human intervention. While the long-term goal is artificial general intelligence (AGI), which would match human cognitive abilities across domains, the near future will likely see the rise of more specialised, task-focused AI systems.
One of the most successful specialist AI agents is GitHub Copilot, a code-completion and automatic programming tool, for which owner Microsoft charges enterprises c.$20 a month per user. Other specialist AI agents are building up their user bases before monetising their businesses. Heidi Health, for example, takes notes during doctor consultations, allowing the doctor to focus fully on the patient, while Hitachi is using gen-AI to help replicate the expertise of its experienced maintenance technicians amid concerns about training the next generation of workers.
As Microsoft’s Chief Technology Officer recently argued, the limiting factor for AI agents is not the power of the models but their ability to access the right data and talk to other apps. The models already have far greater capability than what they are being used for, and they are going to get “so much more powerful and so much cheaper over the course of the next 12 months”.
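To illustrate the point in the simplest possible terms (the tool names and routing below are entirely hypothetical, not any real product’s API): an agent’s usefulness is gated by which data sources and apps it is actually connected to, not by the model’s raw capability.

```python
from typing import Callable

# Hypothetical illustration: the "agent" logic is thin; the useful work
# depends entirely on which data sources and apps the model can reach.
TOOLS: dict[str, Callable[[str], str]] = {
    "crm_lookup": lambda q: f"(customer record for {q!r})",
    "send_email": lambda q: f"(email sent: {q!r})",
}

def agent_step(model_decision: tuple[str, str]) -> str:
    """Route a (tool_name, argument) decision, in practice produced by an
    LLM, to the connected tool. No tool connected, no task completed."""
    tool_name, arg = model_decision
    tool = TOOLS.get(tool_name)
    if tool is None:
        return f"Model is capable, but {tool_name!r} is not connected."
    return tool(arg)

print(agent_step(("crm_lookup", "ACME Ltd")))
print(agent_step(("update_ledger", "invoice 42")))  # not connected -> blocked
```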
Scope for productivity gains
Coding is foundational to advances in other areas of technology and has historically required significant amounts of human labour. Increasingly, the hyperscalers are using AI for coding, with the expectation that within a couple of years almost all coding will be done by AI. Broader corporate adoption of AI tends to lag, but it is not hard to imagine widespread use within the next year or two.
While we may have to wait for widespread corporate use, AI is already enabling groundbreaking research, overcoming challenges that have defied humans for a generation. Google DeepMind’s AlphaFold, for example, has predicted the structure of nearly all proteins known to science, cracking one of biology’s grand challenges and allowing researchers to turn to even bigger questions in biotech and materials science.
Latest from the industrial metaverse
Industrial metaverse use cases are, in our view, the most compelling, given the scale of the productivity gains on offer from harnessing the power of digital twins. A digital twin is a digital replica of a physical object, person, system or process, contextualised in a digital version of its environment. Digital twins can be used to simulate real situations and their outcomes². With enhanced AI capabilities, concepts are now being developed that suggest even greater efficiencies can be achieved.
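As a conceptual sketch only (the class, fields and toy wear model below are hypothetical, invented for illustration rather than drawn from any vendor), a digital twin boils down to a live software mirror of an asset that is updated from telemetry and can be run forward to simulate outcomes:

```python
from dataclasses import dataclass

@dataclass
class EngineTwin:
    """Hypothetical digital twin of a jet engine: mirrors live sensor
    state and can simulate wear under assumed operating conditions."""
    hours_flown: float
    turbine_temp_c: float
    wear_index: float  # 0.0 = new, 1.0 = needs overhaul

    def ingest(self, hours: float, temp_c: float) -> None:
        # Update the twin from (simulated) sensor telemetry.
        self.hours_flown += hours
        self.turbine_temp_c = temp_c
        # Toy wear model: hotter running accelerates degradation.
        self.wear_index += hours * 1e-4 * max(1.0, temp_c / 900.0)

    def simulate_outcome(self, future_hours: float) -> float:
        # Run the model forward without touching the physical asset.
        return self.wear_index + future_hours * 1e-4

twin = EngineTwin(hours_flown=12_000, turbine_temp_c=950.0, wear_index=0.6)
twin.ingest(hours=10, temp_c=960.0)
print(f"Projected wear after 2,000 more hours: {twin.simulate_outcome(2_000):.2f}")
```

Because the simulation runs on the replica rather than the asset itself, scenarios can be tested without risk to the physical equipment, which is where the productivity gains come from.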
Rolls-Royce has used digital twins in an effort to fully connect the worlds of product, service and digital. Its engines will start to understand how they are being used and respond to the environment around them without human intervention, while also being connected to the wider fleet and learning from a network of peers to adjust their behaviour and achieve the best performance. Digital twins have also been essential in developing the company’s UltraFan aero engine, which will be quieter and 25% more fuel-efficient than the original Trent 700.
VR/AR and smart glasses
To access and experience the metaverse, we identified the need for development in another of the key enabling technologies: VR/AR headsets. The launch of Apple’s Vision Pro showed the incredible potential of spatial computing, but the device proved to be a niche product, held back by its weight and prohibitive price.
A new generation of smart glasses with gen-AI capabilities appears less encumbered by issues of weight and price. Meta and EssilorLuxottica’s Ray-Ban glasses have sold over 2mn units since the second-generation launch in September 2023, supported by Meta’s leading-edge gen-AI model, Llama 2. Meanwhile, Google has unveiled its lightweight XR glasses, which are powered by Gemini AI.