As a web developer who dove into Unity development a few years ago, I found that working with UI in the game engine felt like trading a precision multi-tool for a hammer and chisel. Both have their merits: the hammer and chisel are timeless, and can achieve wonders in the right hands. But there's no denying the ease, flexibility, and nuanced capabilities a modern multi-tool brings.
Unity is undeniably powerful, yet when it comes to the developer experience of building user interfaces, it can feel pretty rudimentary, especially when juxtaposed against the comprehensive ecosystem of React, HTML, CSS, and web development frameworks in general. Would my perspective be different had I begun my journey building UIs in Unity? Maybe, but I guess we’ll never know.
Before we dive deeper, it's worth noting that some gaming giants have already embraced React in their games in one form or another. I mention this because I sometimes get comments claiming that the idea of using React or HTML/CSS in game engines is unrealistic.
Minecraft, Battlefield, and even Sony (the PS5’s UI was apparently built with React Native) have all tapped into React in some capacity.
Here's the deal: React brings with it a mature ecosystem. Companies and open-source contributors have invested a ton of resources into improving and building upon it, meaning there’s a vast trove of libraries, frameworks, and tools at your disposal. Plus, with a huge pool of React developers available, hiring becomes a breeze.
React might not always be the best fit. Think performance, for one. None of the React-based solutions I bring up later in this article will be as performant as using the native engine UI tools. Ultimately, you’ll be trading performance for developer experience (DX), which might be too high a cost for some.
For instance, if you’re building a first-person shooter (FPS), performance and super-fast response times are crucial, so you might not have the performance budget for this approach.
Similarly, mixing different languages between the engine and the UI could pose some hitches: your devs now need to switch contexts when working on UI, or you’ll need to hire a separate set of developers. In these cases, it might not be worth the effort.
And that is perfectly fine. This isn’t an ‘If You’re Not Using React for Your UIs, Shame on You’ article; it’s more of a ‘Consider Trying Out React for Game UIs, You Might End Up Liking It’ type of thing.
So if you’re just getting started, there are two main approaches these days.
The first approach is essentially putting a browser right inside your game. Ever heard of Vuplex? It's a Unity plugin that lets you display and interact with web pages within your Unity apps or games. According to the homepage, it’s been used by HTC and NASA.
On PC, it embeds Chromium, while on mobile it opts for native WebViews. It’s what I usually turn to on personal projects, and it makes building cross-platform apps a breeze because you use the same C# interfaces across PC and mobile and can communicate between the browser and Unity easily.
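To give you a feel for that communication layer, here’s roughly what the browser side looks like in TypeScript. The postMessage call and the vuplexready event come from Vuplex’s JavaScript API, but the message shapes and handler names below are just my own illustration, so double-check the plugin’s docs for your version:

```ts
// Minimal browser-side bridge for Vuplex (TypeScript sketch).
// window.vuplex is injected by the plugin when the page runs inside Unity;
// we declare its rough shape here so TypeScript is happy.
declare global {
  interface Window {
    vuplex?: {
      postMessage(message: unknown): void;
      addEventListener(type: 'message', listener: (event: { data: string }) => void): void;
    };
  }
}

function initBridge(vuplex: NonNullable<Window['vuplex']>) {
  // Messages from C# arrive as serialized strings.
  vuplex.addEventListener('message', (event) => {
    const message = JSON.parse(event.data);
    console.log('Message from Unity:', message);
  });

  // Objects passed to postMessage are serialized and handed to the C# side.
  vuplex.postMessage({ type: 'UI_READY' });
}

// window.vuplex may not exist yet when the page first loads.
if (window.vuplex) {
  initBridge(window.vuplex);
} else {
  window.addEventListener('vuplexready', () => initBridge(window.vuplex!));
}

export {};
```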
When working on smaller, mobile-only projects, I’ve also had a pretty good experience with a different plugin called UniWebView, which takes a similar approach.
If this isn’t your cup of tea, either because embedding Chromium is too heavy for your use case or because you don’t need all the features, Coherent Labs’ Gameface might be more your style. Think of it as a custom lightweight browser/framework tailored for game UIs. It’s what Minecraft uses.
Because it is tailored for game UIs, performance is one of its primary goals, which sometimes means sacrificing certain browser features. However, it does save you time in that it comes with certain features you would have to manually implement yourself with something like Vuplex. For example, Gameface supports data binding natively using a custom data-bind-value attribute, so you can do things like this in HTML:
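```html
<!-- Illustrative only: "PlayerModel" and its fields are placeholders for a
     data model you register from the engine side (see Gameface's data-binding docs). -->
<div class="hud">
  Health: <span data-bind-value="{{PlayerModel.health}}"></span>
  Ammo: <span data-bind-value="{{PlayerModel.ammo}}"></span>
</div>
```

When the bound model changes on the engine side, the binding layer updates these values for you, with no manual DOM bookkeeping. (PlayerModel is a made-up name here; the exact model registration API lives in Gameface’s documentation.)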
It also has wider platform support: you can use it in both Unity and Unreal Engine, and your UIs will work on a range of platforms, from Windows and Mac to PlayStation and Nintendo Switch. However, if you want to get started with Gameface, you’ll need to get in touch with them using their contact form.
The second approach skips browser rendering entirely and instead interprets JavaScript into native game engine UI code. The most fully-formed version of this I’ve seen so far is the OneJS implementation. Designed specifically for Unity, it integrates popular tools like TypeScript and Preact. Its benefits are threefold:
But (yes, there's always a 'but'), because OneJS is, at its core, still using Unity's UI Toolkit and deliberately minimizes browser and Node.js dependencies, you can’t pull in many third-party JS modules. Certain browser features, like Canvas, SVGs (for now), and complex CSS animations, are also off-limits due to this approach.
Now you might be wondering, what’s the workflow like when working across the two different contexts? As mentioned, I mostly use Vuplex because I’m not usually working on projects where users or games need to react (heh) or respond super quickly to input, so I haven’t had to worry about browser frame rate limitations (e.g., Vuplex has a maximum frame rate of 60 FPS). Still, this workflow should work with any approach.
I make heavy use of the ability to send messages between JS and C#, or the browser and Unity. Instead of trying to tightly couple interactions between both (e.g. checking in Unity when a specific button is clicked in the browser), I coordinate both environments with a state machine and send events to trigger reactions.
For example, if we’re in a Settings Menu in a game, Unity doesn’t need to know about any inputs, buttons, or elements on the page, it only needs to know that the user updated their settings. So when a button is clicked in the browser, a SETTINGS_UPDATED message is sent to Unity along with any new settings data, and the app can respond to the message.
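In Vuplex terms, the browser side of that flow can be as small as the sketch below. The SETTINGS_UPDATED name comes straight from the example above, while the settings fields and the handler are made up for illustration:

```ts
// Browser-side settings handler (TypeScript sketch).
// window.vuplex is injected by Vuplex when the page runs inside Unity.
declare global {
  interface Window {
    vuplex?: { postMessage(message: unknown): void };
  }
}

// Illustrative settings shape; use whatever your game actually needs.
interface GameSettings {
  musicVolume: number;
  subtitlesEnabled: boolean;
}

// Called from the Save button's click handler in the React UI.
export function saveSettings(newSettings: GameSettings) {
  // Unity only needs to know that the settings changed and what they are now,
  // not which button or input produced the change.
  window.vuplex?.postMessage({
    type: 'SETTINGS_UPDATED',
    payload: newSettings,
  });
}
```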
Where do state machines come in? I usually break down the entire app or game into different states. For instance, the game could be in any of the following states or even substates: PAUSED, ACTIVE, CUTSCENE. It is then up to the browser and the Unity app to individually determine what they display when the app is in a given state. The main benefit is that instead of imperatively coordinating a bunch of micro-interactions, most of the effort goes into deciding what the current state should be and, declaratively, what each side should look like in that state.
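Here’s a rough TypeScript sketch of the browser side of that pattern. The state names come from the example above, while the STATE_CHANGED message and the screen names are stand-ins I made up for illustration:

```ts
// State-driven UI on the web side (TypeScript sketch).
type GameState = 'PAUSED' | 'ACTIVE' | 'CUTSCENE';

// Hypothetical message Unity broadcasts whenever its state machine transitions.
interface StateChangedMessage {
  type: 'STATE_CHANGED';
  state: GameState;
}

// The web UI declaratively maps each state to the screen it should render,
// instead of Unity toggling individual elements imperatively.
function screenFor(state: GameState): 'PauseMenu' | 'Hud' | 'SubtitleOverlay' {
  switch (state) {
    case 'PAUSED':
      return 'PauseMenu';
    case 'ACTIVE':
      return 'Hud';
    case 'CUTSCENE':
      return 'SubtitleOverlay';
  }
}

// Wire this up to whichever bridge delivers engine messages (Vuplex, Gameface, etc.).
export function handleEngineMessage(raw: string, render: (screen: string) => void) {
  const message = JSON.parse(raw) as StateChangedMessage;
  if (message.type === 'STATE_CHANGED') {
    render(screenFor(message.state));
  }
}
```

The Unity side does the mirror image: it decides what the engine itself should do in each state (pause the simulation, run the cutscene, and so on) and broadcasts the new state whenever it changes.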
However, I should mention that I also still use Unity’s native UI system if I need to have Worldspace UI, or UI directly in 3D space, especially if the UI needs to be obscured by a game object. While I could try to do this somehow in the browser by leveraging WebGL, I don’t see the point in rendering two separate 3D contexts, especially since I feel that game engines have better 3D support than the web. Plus, doing that also results in a pretty significant performance hit. As a result, when I do need to use Worldspace UI, I try to limit it to simple graphics or sprites and avoid having to deal with fonts and text natively.
Merging the realms of web development and Unity has been a journey, to say the least. There’s a lot more to cover in this space, and if there’s interest, I’ll probably go more in-depth, with examples. If you're venturing into this niche, or have already set foot, I'd love to hear about what you’re building!