Introduction

In this book we will cover all the essential information regarding the usage of Oxygengine. That being said, we should first explain what Oxygengine is, what it aims to be, and what it's definitely not made for.

The goal

Oxygengine is an all-batteries-included kind of engine, focused on being Web-first but gradually expanding to more platforms such as desktops and consoles. The goal is for it to be a Rust version of Unreal Engine for 2D games, which made us take a very important decision from the beginning: it is a completely asset-driven game engine, with a game editor dedicated mostly to Game Designers and Artists. That means all architecture decisions are made to ease interacting with all parts of the engine from the game editor, with next to no code.

This also means that while Oxygengine is a general-purpose 2D game engine, it aims to provide specialized modules to ease work on specific game genres. At the moment we already have specialized game modules for the kinds of games listed further below.

Where we are now

At this point Oxygengine is near its stable version from the code architecture point of view, but it does not yet have a game editor ready to use. The editor is still a work in progress and might be released in 2024; no particular date has been decided yet - we have already moved from a web-based to a desktop-based editor.

Where could I use Oxygengine

Since this engine is Web-first (and desktop/console-second), 2D-focused, and aims to give you genre-specific solutions so you can just make a game instead of reinventing the wheel, you will most likely want to use it if you aim to make one of these kinds of games:

  • RPGs
  • Visual Novels

Genres we will cover soon in new specialized engine modules:

  • Shooters
  • Platformers
  • Puzzle

For all other genres, although such games are possible to make, we do not provide (or rather do not yet plan to provide) a specialized engine module.

Where I can't use Oxygengine

This engine is basically useless (yet) for any kind of 3D game (except maybe original Doom-like games, but these would require heavy hammering). Also, definitely do not use it (yet) to make gaming console titles (although there are plans for these platforms, and if we are lucky enough to sign a partnership with gaming console manufacturers, we might provide private dedicated engine modules and game templates).

Oxygengine Ignite CLI tool

Although Oxygengine can be used like any other crate, it's better to install the Ignite CLI tool, which governs the most vital operations in your game project development:

cargo install oxygengine-ignite

Additionally, it's encouraged to install the just CLI tool:

cargo install just

Each created game project contains a justfile with a set of handy shortcut commands for common day-to-day operations useful when developing a game. Once in the game project root directory, run just list to see the list of available commands.
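
The exact recipes differ per game template, but as a purely illustrative sketch (not the generated file's real contents) of what a justfile recipe with an optional platform parameter looks like:

# illustrative sketch only - the real justfile is generated per game template

# list all available recipes
list:
    just --list

# bake asset packs; the platform parameter defaults to desktop when omitted
bake platform="desktop":
    @echo "baking assets for {{platform}}"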

Creating new project

Oxygengine uses a concept called "Game Templates": instead of starting from scratch, you create your new project from one of a few existing game templates:

  • base - Contains a barebones setup for the simplest game example. Use it to create a "blank project" to later configure with more modules.
  • game - Contains a more complex setup with modules you would use in production-level games. Use it if you have decided to dive deep into developing your dream game.
  • prototype - Contains an ergonomic framework for quick and dirty game prototypes, with a more imperative than data-driven approach.

Create a new game project with the default (game) game template:

oxygengine-ignite new <project-name>

for example:

oxygengine-ignite new dream-game

Create a new game project with a specified game template:

oxygengine-ignite new <project-name> -p <game-template-name>

for example:

oxygengine-ignite new dream-game -p game

Create a new game project in a specified path:

oxygengine-ignite new <project-name> -d <path-to-parent-directory>

for example:

oxygengine-ignite new dream-game -d ~/game-projects/

Managing your project

Once you've created your project, there is a set of commands you can run.

NOTE: Each game template contains a setup for multiple platforms, although for day-to-day development you might want to use the desktop platform commands.

We encourage you to use the just commands with their preconfigured oxygengine-ignite calls for the best experience.

Each of these just commands can take an optional last parameter specifying the platform to run the command for (defaults to desktop when omitted).
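
For example, assuming the game template configures a platform named web, you could bake assets for it explicitly like this:

just bake web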

Day-to-day development

Bake project assets into asset packs for the development cycle to use:

just bake

Format project code:

just format

Compile project:

just dev build

Run the project (best run only for the desktop platform, because the web platform is not configured for this at all):

just dev run

Test the project (best run only for the desktop platform, because the web platform might not be configured properly for this):

just dev test

Live development with code recompilation and asset re-baking on change:

just live

Files will be served from: http://localhost:8080.

Production distribution

Build game binaries in debug mode:

just prod build debug

Build game binaries in release mode:

just prod build release

Package game distribution in debug mode:

just prod package debug

Package game distribution in release mode:

just prod package release

Update engine version used in your game project

  • reinstall oxygengine-ignite:
    cargo install oxygengine-ignite --force
    OXY_UPDATE_PRESETS=1 oxygengine-ignite --help
    
  • update the oxygengine dependency version in Cargo.toml to point to the latest engine version, as sketched below.
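
A minimal sketch of that dependency bump (the version number here is a placeholder - check crates.io for the actual latest release):

[dependencies]
# replace X.Y.Z with the latest published oxygengine version
oxygengine = "X.Y.Z"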

Speed up compilation times for new projects

This is most useful for game jams and quick feature prototypes.

  • install sccache, a tool for caching and sharing prebuilt dependencies between multiple game projects (https://github.com/mozilla/sccache):
    cargo install sccache
    
  • add these lines to the Cargo.toml:
    [package.metadata]
    # path to the sccache binary
    sccache_bin = "sccache.exe"
    # path to the sccache cache directory
    sccache_dir = "D:\\sccache"
    

From now on you will have to wait for the full, long engine build only once; for every other new game project you create, the first compilation will take a minute or less instead of many minutes.
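
Note that the snippet above uses Windows-style paths; on Linux or macOS the same metadata would typically point at the plain sccache binary and a local cache directory. The paths below are assumptions - adjust them to your setup:

[package.metadata]
# path to the sccache binary
sccache_bin = "sccache"
# path to the sccache cache directory
sccache_dir = "/home/user/.cache/sccache"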

Asset pipeline and its tools

Since Oxygengine is highly data-driven, we put most if not all data into assets - that's where the idea of the asset pipeline was born. The asset pipeline takes source files such as images, levels, and sounds - basically every file that contains static, read-only data your game wants to use - and converts them into a data format that best suits the engine's internal needs.

For example, you can have many different source image formats to be rendered, but all the engine cares about is the image data and not its source format, so we use asset pipeline tools to convert them into the engine's internal image format, or even to compress them. Actually, better examples can be found with fonts and levels.

In the HA (hardware-accelerated) renderer we use SDF-compatible font map images, so we obviously need to bake them either from BMFont-generated files or from the output of other font rasterization software. For game levels we use the free LDtk level editor, so we have an LDtk asset pipeline tool that takes LDtk project files and bakes images from tilesets, prefabs with entities from level layers, and additional data assets from the grid layer values used.

Asset pipeline tools are just CLI binaries that use oxygengine-build-tools crate types to read the input data passed to your tool; your tool's job is then to write new files to the given path. That means that if you need support for custom or additional asset sources, you can easily make an asset pipeline tool for them.
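
As a rough sketch of the shape such a tool takes (this is not the actual oxygengine-build-tools API - its exact input types are out of scope here, and the arguments below are hypothetical), a custom tool is just a small binary that reads its source data, transforms it, and writes baked files to the requested output directory:

use std::{env, fs, path::PathBuf};

// Hypothetical sketch of a custom asset pipeline tool.
// A real tool would deserialize its input description using
// oxygengine-build-tools types instead of raw CLI arguments.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut args = env::args().skip(1);
    let source = PathBuf::from(args.next().expect("missing source file path"));
    let destination = PathBuf::from(args.next().expect("missing destination directory"));

    // Read the source asset and "convert" it - here we only copy the bytes,
    // a real tool would transcode them into an engine-friendly format.
    let bytes = fs::read(&source)?;
    fs::create_dir_all(&destination)?;
    let output = destination.join(source.file_name().expect("source has no file name"));
    fs::write(output, bytes)?;
    Ok(())
}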

IMPORTANT: When using the HA renderer, you also have to install its asset pipeline tools for the asset pipeline to be able to bake assets. These are:

  • cargo install oxygengine-ha-renderer-tools

In general, if some engine module requires asset pipeline tools to work, there is a companion crate named <some-engine-module>-tools.

Hardware Accelerated rendering

Here we will take a deeper look at how Oxygengine's Hardware Accelerated rendering model works.

Overview

The HA renderer uses a completely data-driven, material-graph based rendering approach. The most important unit in the HA renderer is the Render Pipeline.

  • Render Pipelines are containers for Render Targets and Render Stages.
  • Render Targets are GPU storage - more precisely, dynamic textures that all rendered information gets stored into.
  • Render Stages are containers for Render Queues and are used by dedicated render systems, which define how entities (or any custom data, as a matter of fact) get rendered into the Render Target that the Render Stage points to.
  • Render systems work on cameras and record Render Commands into the Render Queue assigned to the given Render Stage.
  • A Render Stage also defines the Material Domain (which is basically a shader backend) that will be linked with the Material Graphs (shader frontends) of the entities that get rendered (more about how Materials work later on this page).

Material-graph based rendering

While most game engines expose raw shaders to users to make them tell exactly how anything should be rendered, Oxygengine took another path and does not expose any shader code; shaders are considered engine internals that the user should never be forced to write on their own. More than that, shaders get baked at runtime (or build-time - both material graphs and baked materials are assets) only when the engine finds that a certain pair of material domain (backend) and material graph (frontend) is going to be used in rendering.

Material domains and material graphs always work in tandem: the material domain's job is to preprocess vertex and uniform data and send it to the material graph (via a specific interface that the given domain defines), which postprocesses that data and sends it back for the material domain to store properly in the target outputs.

The reason they are separate units is that, when a user wants to achieve different visuals than the default ones provided by the engine, the HA renderer lets them focus only on the effect and not bother writing additional logic just to meet specialized vertex format and render target requirements. Another benefit of this approach is that the user can make their frontend material once, and the renderer will bake at runtime all the variants needed by all pairs of domain and graph materials.

This basically means that the user can get their material graph working, without any additional work, with any vertex format and target format that the renderer finds compatible at runtime. In even simpler words: imagine you have a material graph that adds outlines to an image; no matter whether you render your entity in, for example, a forward or a deferred renderer, it will work for both by default, as long as both use material domains that provide the domain nodes your outline material uses.

IMPORTANT: All shader variants of a given material are considered unique as long as they have different Material Signatures (a schematic sketch follows this list):

  • A Material Signature is defined by the Material Mesh Signature + Material Render Target Signature + Domain name + Vertex Middlewares used.
  • A Material Mesh Signature is defined by a unique Vertex Layout (vertex layouts are defined by meshes - to be more precise, by the vertex format the given mesh data uses).
  • A Material Render Target Signature is defined by the set of render target output names.
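
To make the composition above more tangible, here is a schematic sketch of what uniquely identifies a shader variant (illustrative only - these are not the HA renderer's actual types or field names):

// Schematic sketch only - field names are illustrative.
struct MaterialMeshSignature {
    vertex_layout: String, // derived from the mesh's vertex format
}

struct MaterialRenderTargetSignature {
    output_names: Vec<String>, // names of the render target outputs
}

struct MaterialSignature {
    mesh: MaterialMeshSignature,
    render_target: MaterialRenderTargetSignature,
    domain_name: String,
    vertex_middlewares: Vec<String>,
}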

One picture says more than a thousand words

Let's take a look at the simplest material domain (this is the code-side representation of the material domain/graph):

material_graph! {
    inputs {
        [fragment] inout BaseColor: vec4 = {vec4(1.0, 1.0, 1.0, 1.0)};

        [vertex] uniform model: mat4;
        [vertex] uniform view: mat4;
        [vertex] uniform projection: mat4;

        [vertex] in position: vec3 = vec3(0.0, 0.0, 0.0);
        [vertex] in color: vec4 = vec4(1.0, 1.0, 1.0, 1.0);
    }

    outputs {
        [vertex] inout TintColor: vec4;
        [vertex] inout ScreenPosition: vec4;

        [vertex] builtin gl_Position: vec4;
        [fragment] out finalColor: vec4;
    }

    [model_view_projection = (mul_mat4,
      a: projection,
      b: (mul_mat4,
        a: view,
        b: model
      )
    )]
    [pos = (append_vec4, a: position, b: {1.0})]
    [screen_position = (mul_mat4_vec4, a: model_view_projection, b: pos)]

    [color -> TintColor]
    [screen_position -> ScreenPosition]
    [screen_position -> gl_Position]
    [BaseColor -> finalColor]
}

In this snippet we can see that this particular material domain expects model + view + projection uniforms, as well as position + color vertex inputs, and that it writes data to the gl_Position vertex output and the finalColor target output. This basically means that this material domain will work with a stage that writes to the finalColor target output and with the position + color vertex format. It will also bake shader variants for any material graph that might read the TintColor and/or ScreenPosition domain inputs, and might write the BaseColor domain output.

Consider domain inputs/outputs to be a purely optional interface between the material domain and the material graph. You might ask: "why is the domain interface optional?" Well, this is where this approach shines: when a material domain gets combined with a material graph, the shader is baked only from the nodes that lead directly from target outputs back to vertex inputs, with all the required nodes along the way; every node not used on that path won't get compiled into the shader variant.

Now let's take a look at the simplest material graph:

material_graph! {
    inputs {
        [vertex] inout TintColor: vec4 = {vec4(1.0, 1.0, 1.0, 1.0)};
    }

    outputs {
        [fragment] inout BaseColor: vec4;
    }

    [[TintColor => vColor] -> BaseColor]
}

In this material graph we only use the domain interface: we move the input color from the vertex shader stage to the fragment shader stage and send it back to the material domain, which stores it properly in its render target outputs. When users make a material graph, they don't have to care about how to write to targets; they only care about how to process domain inputs into domain outputs, and the domain takes care of properly storing the data in the target outputs.

Now imagine the user wants to create a material graph that does not use TintColor at all, but rather converts ScreenPosition into BaseColor:

material_graph! {
    inputs {
        [vertex] inout ScreenPosition: vec4 = {vec4(0.0, 0.0, 0.0, 0.0)};
    }

    outputs {
        [fragment] inout BaseColor: vec4;
    }

    [[ScreenPosition => vColor] -> BaseColor]
}

This material graph, when combined with our previously defined material domain, will bake a shader whose nodes only use the screen position calculated in the domain graph, and won't include the color vertex data in the shader at all, since this shader variant does not use it. Now, do you also see the benefits of this over the usual #ifdef-ed raw shaders? You can focus on the effect you want to achieve without caring about the engine internals you work with, and without having to define or limit your effects around them.

Material middlewares

Another concept used with material graphs is material middlewares - an ergonomic way to "inject" other material graphs as material input preprocessors.

Consider you have a vertex format like this:

vertex_type! {
    #[derive(Debug, Default, Copy, Clone, Serialize, Deserialize)]
    @tags(SurfaceDomain, SurfaceTexturedDomain)
    pub struct SurfaceVertexPT {
        #[serde(default = "default_position")]
        pub position: vec3 = position(0, bounds),
        #[serde(default = "default_texture_coord")]
        pub texture_coord: vec3 = textureCoord(0),
    }
}

It is used to render, for example, regular entity sprites. Now you want to make these sprites animated with skinning. Normally you would need to duplicate all material domains that have to work with skinning, which makes future changes and general material maintenance prone to getting out of sync - we can avoid that problem entirely using material middlewares!

You start by defining a skinned vertex format with skinning data only:

vertex_type! {
    #[derive(Debug, Default, Copy, Clone, Serialize, Deserialize)]
    @middlewares(skinning)
    pub struct SurfaceSkinningFragment {
        #[serde(default = "default_bone_indices")]
        pub bone_indices: int = boneIndices(0),
        #[serde(default = "default_bone_weights")]
        pub bone_weights: vec4 = boneWeights(0),
    }
}

As you can see, we have marked this vertex format to use the skinning middleware. We also define a compound vertex format that mixes the sprite vertex format with the skinned vertex format:

compound_vertex_type! {
    #[derive(Debug, Default, Copy, Clone, Serialize, Deserialize)]
    @tags(SurfaceDomain, SurfaceSkinnedDomain, SurfaceTexturedDomain)
    pub struct SurfaceVertexSPT {
        #[serde(default)]
        pub vertex: SurfaceVertexPT,
        #[serde(default)]
        pub skinning: SurfaceSkinningFragment,
    }
}

Now we have to add the skinning middleware material graph to the Material Library resource:

library.add_function(graph_material_function! {
   fn skinning_fetch_bone_matrix(texture: sampler2D, index: int) -> mat4 {
       [return (make_mat4,
           a: (texelFetch2d, sampler: texture, coord: (make_ivec2, x: {0}, y: index), lod: {0}),
           b: (texelFetch2d, sampler: texture, coord: (make_ivec2, x: {1}, y: index), lod: {0}),
           c: (texelFetch2d, sampler: texture, coord: (make_ivec2, x: {2}, y: index), lod: {0}),
           d: (texelFetch2d, sampler: texture, coord: (make_ivec2, x: {3}, y: index), lod: {0})
       )]
   }
});
library.add_function(graph_material_function! {
   fn skinning_weight_position(bone_matrix: mat4, position: vec4, weight: float) -> vec4 {
       [return (mul_mat4_vec4,
           a: bone_matrix,
           b: (mul_vec4, a: position, b: (fill_vec4, v: weight))
       )]
   }
});
library.add_middleware(
    "skinning".to_owned(),
    material_graph! {
        inputs {
            [vertex] in position as in_position: vec3 = {vec3(0.0, 0.0, 0.0)};
            [vertex] in boneIndices: int = {0};
            [vertex] in boneWeights: vec4 = {vec4(0.0, 0.0, 0.0, 0.0)};

            [vertex] uniform boneMatrices: sampler2D;
        }

        outputs {
            [vertex] out position as out_position: vec3;
        }

       [pos = (append_vec4, a: in_position, b: {1.0})]
       [index_a = (bitwise_and, a: boneIndices, b: {0xFF})]
       [index_b = (bitwise_and, a: (bitwise_shift_right, v: boneIndices, bits: {8}), b: {0xFF})]
       [index_c = (bitwise_and, a: (bitwise_shift_right, v: boneIndices, bits: {16}), b: {0xFF})]
       [index_d = (bitwise_and, a: (bitwise_shift_right, v: boneIndices, bits: {24}), b: {0xFF})]
       [result = (skinning_weight_position,
           bone_matrix: (skinning_fetch_bone_matrix, texture: boneMatrices, index: index_a),
           position: pos,
           weight: (maskX_vec4, v: boneWeights)
       )]
       [weighted = (skinning_weight_position,
           bone_matrix: (skinning_fetch_bone_matrix, texture: boneMatrices, index: index_b),
           position: pos,
           weight: (maskY_vec4, v: boneWeights)
       )]
       [result := (add_vec4, a: result, b: weighted)]
       [weighted := (skinning_weight_position,
           bone_matrix: (skinning_fetch_bone_matrix, texture: boneMatrices, index: index_c),
           position: pos,
           weight: (maskZ_vec4, v: boneWeights)
       )]
       [result := (add_vec4, a: result, b: weighted)]
       [weighted := (skinning_weight_position,
           bone_matrix: (skinning_fetch_bone_matrix, texture: boneMatrices, index: index_d),
           position: pos,
           weight: (maskW_vec4, v: boneWeights)
       )]
       [result := (add_vec4, a: result, b: weighted)]
       [(truncate_vec4, v: result) -> out_position]
    },
);

What is important here is that material middlewares have to define the in and out pins that they inject themselves between, so in the case of skinning we essentially tell the material compiler that we want to inject skinning before the vertex position data gets passed to the actual material - we are making the skinning middleware a vertex input preprocessor:

inputs {
  [vertex] in position as in_position: vec3 = {vec3(0.0, 0.0, 0.0)};
}
outputs {
  [vertex] out position as out_position: vec3;
}

Now, whenever we want to render a mesh with the SurfaceVertexSPT vertex format (skinned position texcoord), for every material that has to use it there will be a compiled shader variant with skinning injected - no more duplicating materials for extra features, when we can just inject these features (middlewares) directly into the materials that use them!


As you can see, all of this moves the burden of carefully producing all the shader code away from the user and into the engine. With materials there is no more need for tedious, boilerplate-y, and unnecessary #ifdef-ed shader code - we have reduced the complexity of shader creation and management to the bare minimum.

Render Pipeline

Overview

The HA renderer requires the user to define render pipelines in the game app setup phase. A render pipeline describes how a camera that uses it should render world entities.

Let's take a look at a typical renderer setup first:

HaRenderer::new(WebPlatformInterface::with_canvas_id(
    "screen",
    WebContextOptions::default(),
)?)
.with_stage::<RenderForwardStage>("forward")
.with_stage::<RenderGizmoStage>("gizmos")
.with_stage::<RenderUiStage>("ui")
.with_pipeline(
    "default",
    PipelineDescriptor::default()
        .render_target("main", RenderTargetDescriptor::Main)
        .stage(
            StageDescriptor::new("forward")
                .render_target("main")
                .domain("@material/domain/surface/flat")
                .clear_settings(ClearSettings {
                    color: Some(Rgba::gray(0.2)),
                    depth: false,
                    stencil: false,
                }),
        )
        .debug_stage(
            StageDescriptor::new("gizmos")
                .render_target("main")
                .domain("@material/domain/gizmo"),
        )
        .stage(
            StageDescriptor::new("ui")
                .render_target("main")
                .domain("@material/domain/surface/flat"),
        ),
)

From that code snippet we can tell that a render pipeline contains:

  • a set of render targets used to render into.
  • a set of render stages that tell how to render geometry into a given render target.

What is important here is that for each render stage we are required to provide its render target name, so the stage knows where to store all the information it produces, as well as a domain graph name, which is the shader backend for all the shader frontends (material graphs) used by entities in the world.

How it works

Recording to Render Queue

At first, the renderer searches for new camera components; for each camera it creates its own instance of the render pipeline that the camera component points at.

Then every render stage system goes through all cameras that contain render stages of the stage type the given system provides. Here is a brief snippet example:

pub fn ha_render_gizmo_stage_system(universe: &mut Universe) {
   type V = GizmoVertex;

    let (
        world,
        mut renderer,
        lifecycle,
        mut gizmos,
        material_mapping,
        image_mapping,
        mut cache,
        ..,
    ) = universe.query_resources::<HaRenderGizmoStageSystemResources>();

   if gizmos.factory.is_empty() {
       return;
   }

   let layout = match V::vertex_layout() {
       Ok(layout) => layout,
       Err(_) => return,
   };

   let mesh_id = match cache.mesh {
       Some(mesh_id) => mesh_id,
       None => {
           let mut m = Mesh::new(layout.to_owned());
           m.set_regenerate_bounds(false);
           m.set_vertex_storage_all(BufferStorage::Dynamic);
           m.set_index_storage(BufferStorage::Dynamic);
           match renderer.add_mesh(m) {
               Ok(mesh_id) => {
                   cache.mesh = Some(mesh_id);
                   mesh_id
               }
               Err(_) => return,
           }
       }
   };
   match renderer.mesh_mut(mesh_id) {
       Some(mesh) => match gizmos.factory.factory() {
           Ok(factory) => {
               if factory.write_into(mesh).is_err() {
                   return;
               }
           }
           Err(_) => return,
       },
       None => return,
   }

   gizmos
       .material
       .update_references(&material_mapping, &image_mapping);
   let material_id = match gizmos.material.reference.id().copied() {
       Some(material_id) => material_id,
       None => return,
   };
   let time = vec4(
       lifecycle.time_seconds(),
       lifecycle.delta_time_seconds(),
       lifecycle.time_seconds().fract(),
       0.0,
   );

    for (_, (visibility, camera, transform)) in world
        .query::<(Option<&HaVisibility>, &HaCamera, &HaTransform)>()
        .iter()
    {
        if !visibility.map(|v| v.0).unwrap_or(true) {
            continue;
        }
        let iter = match camera.record_to_pipeline_stage::<RenderGizmoStage>(&renderer, transform) {
            Some(iter) => iter,
            None => continue,
        };
        for (info, render_queue) in iter {
            let mut render_queue = match render_queue.write() {
                Ok(render_queue) => render_queue,
                Err(_) => continue,
            };
            render_queue.clear();
            let mut recorder = render_queue.auto_recorder(None);

            let _ = recorder.record(RenderCommand::ActivateMesh(mesh_id));
           let signature = info.make_material_signature(&layout);
            let _ = recorder.record(RenderCommand::ActivateMaterial(
                material_id,
                signature.to_owned(),
            ));
           let _ = recorder.record(RenderCommand::OverrideUniform(
               MODEL_MATRIX_NAME.into(),
               Mat4::identity().into(),
           ));
           let _ = recorder.record(RenderCommand::OverrideUniform(
               VIEW_MATRIX_NAME.into(),
               info.view_matrix.into(),
           ));
            let _ = recorder.record(RenderCommand::OverrideUniform(
                PROJECTION_MATRIX_NAME.into(),
                info.projection_matrix.into(),
            ));
           let _ = recorder.record(RenderCommand::OverrideUniform(
               TIME_NAME.into(),
               time.into(),
           ));
           for (key, value) in &gizmos.material.values {
               let _ = recorder.record(RenderCommand::OverrideUniform(
                   key.to_owned().into(),
                   value.to_owned(),
               ));
           }
           if let Some(draw_options) = &gizmos.material.override_draw_options {
               let _ = recorder.record(RenderCommand::ApplyDrawOptions(draw_options.to_owned()));
           }
            let _ = recorder.record(RenderCommand::DrawMesh(MeshDrawRange::All));
           let _ = recorder.record(RenderCommand::ResetUniforms);
            let _ = recorder.record(RenderCommand::SortingBarrier);
        }
    }

   gizmos.factory.clear();
}


As you can see, when we get an iterator over the requested render stages for cameras, all we do next is get access to the render queue, create an auto recorder (to ease writing ordered render commands), and start recording commands. Although the Gizmo render system renders gizmo geometry already batched by other render systems, you get the idea: all the recording phase cares about is recording render commands into the render queue of the given camera's render pipeline. It doesn't really matter where we get the data from; what matters is what gets into the render queue. We could also just iterate over world entities and record their render commands to the queue - as a matter of fact, this is how the Render Forward Stage does it; we show the Render Gizmo Stage here for the sake of a simplified explanation.

Execution of render queues

After all render stage systems complete recording commands into the queues, the renderer is ready to go through all active render pipelines and execute their render queues, full of the previously made records.

You may remember that when we were talking about recording to Render Queues, we mentioned auto ordered recording of commands - but what does that mean? Well, sometimes your render stage system might require its commands to be ordered by, for example, some kind of depth value. For this case, so that the user isn't required to collect entities, sort them, and then record them in the proper order, we just encode the order information in the render command group index and enable optional render queue sorting in the stage descriptor. That way we do not break the unspecified order of entity iteration and just sort the render commands themselves. This obviously has its own cost, so it's an optional step, and you should definitely benchmark to decide whether render command sorting or manual entity sorting benefits your stage rendering more.

Another thing worth mentioning about render queues is that they are only data containers, so you can, for example, create your own render queues separately from the ones the render pipeline provides - for instance as a way of caching queues and reusing them with multiple pipelines, by flushing your custom render queue into the one provided by the render pipeline. Yet another use of render queues: instead of recording them in your application, you can send them via a network socket and render them in a client application that mirrors your camera setup - instead of recording its own world, it renders what the server sends. A similar use case could be having both the game and editor worlds embedded in one application host, with the game world sending its recorded queues to the editor world, which then renders the game view in its own rendering context.

More...

If you need an explanation of some Oxygengine-related topic, or want to share your thoughts, ideas, or games, consider creating a post on GitHub Discussions explaining what you are trying to achieve.

You can create new post here: https://github.com/PsichiX/Oxygengine/discussions