The Renderer
Understand how the renderer works in Telltale Editor.
First of all, find all headers here and source files here.
This renderer is a quasi-Telltale game engine renderer. It supports shader abstraction, shader parameter abstraction, resource management, default render meshes and more.
The Render Context
This is the main class you use when you want to render to a window. Constructing one creates a window on the client. The implementation uses the SDL3 GPU API throughout, but you should never need to use it directly. You can also create a context by claiming it from an existing SDL3 GPU device and window with the static function RenderContext::CreateShared .
Then periodically call RenderContext::FrameUpdate on your render context instance. This will sleep to match the target frame rate if required.
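The frame-rate sleep can be sketched as follows. This is a conceptual illustration only, not the engine's actual implementation; the function name and signature are invented for the example.

```cpp
#include <chrono>
#include <thread>

using Clock = std::chrono::steady_clock;

// Conceptual sketch of frame pacing: if the frame finished early,
// sleep for the remainder of the frame budget. Illustrative only.
inline void PaceFrame(Clock::time_point frameStart, double targetFPS)
{
    const auto budget = std::chrono::duration<double>(1.0 / targetFPS);
    const auto elapsed = Clock::now() - frameStart;
    if (elapsed < budget)
        std::this_thread::sleep_for(budget - elapsed);
}
```

Called at the end of each frame, this keeps the loop from running faster than the target rate while leaving slow frames untouched.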
Layers
Render layers are an abstraction commonly seen in game engines. Essentially, a layer is a set of input handlers and render command issuers. The RenderLayer class is also defined in the context header file; it provides the abstract functionality which internally updates the context (i.e. pushes render commands) and processes events.
Double Buffering and Sync
This class is implemented as double buffered. On frame X, the main thread submits rendering work and waits for the GPU to finish. At the same time, another worker thread (using the job scheduler) asynchronously issues the draw commands for frame X + 1. This is where all of the render layer functionality is executed.
You may think that rendering should happen on another thread and updates on the main thread. That would be ideal, but much of the time (macOS in particular) the main thread must own the render device. This is fine, as it isolates the render updates and allows for a more client-server approach.
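The double-buffered model can be sketched with two command buffers that swap roles each frame: the main thread drains one while a worker records into the other. All names here are illustrative, not the engine's real types.

```cpp
#include <array>
#include <string>
#include <thread>
#include <vector>

// Conceptual sketch of the double-buffered frame model: while the main
// thread executes the commands recorded for frame X, a worker thread
// records commands for frame X + 1 into the other buffer.
struct FrameCommands { std::vector<std::string> commands; };

struct DoubleBuffer
{
    std::array<FrameCommands, 2> frames;
    int executeIndex = 0; // buffer the main thread drains this frame

    void RunFrame(std::vector<std::string>& gpuLog)
    {
        int recordIndex = 1 - executeIndex;
        std::thread worker([&] { // stands in for the job scheduler
            frames[recordIndex].commands.push_back("draw");
        });
        for (const auto& cmd : frames[executeIndex].commands)
            gpuLog.push_back(cmd); // main thread 'submits' to the GPU
        worker.join();
        frames[executeIndex].commands.clear();
        executeIndex = recordIndex; // swap roles for the next frame
    }
};
```

Note how commands recorded on frame 0 are only executed on frame 1; this one-frame latency is the price of the parallelism.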
Scene Runtime and Messages
The scene runtime is the only really useful layer at the moment. It runs Telltale scenes: rendering, running scripts, playing audio and so on. Once pushed as a layer, you can hold a weak reference to it and send messages to it from any thread, for example to start a scene.
See the layer here.
Render Parameters
Parameters passed into shaders are all static and don't change in format. The ShaderParameterType enum has all of the parameters implemented at the moment and will continue to grow to support more parameters. Looking at the header here, you can see each shader parameter struct (which matches the layout in the shaders exactly) as well. Bit sets are used to hold the set of parameters a shader can accept.
Parameter Group and Stack
A shader parameter group is a distinct set of parameter types specified inside the parameters bitset: it can hold any number of shader parameters (types from the enum above), from none up to all of the types in the bitset. The idea is that the parameters a shader needs could be bound from anywhere in a stack of parameters. When binding, the required parameters are iterated over and, starting at the top of the stack and going down (see stacks below), each group is searched until all of the current shader's parameters have been satisfied. Each stack entry is a group.
This means, for example, you could push a global camera parameter at the beginning of the scene and push another camera at a later point. You can't pop, but you simply specify the top of the stack at any point when running a shader anyway. Shader groups and stacks never need to be freed, as they are allocated in the render frame's linear heap.
When pushing stacks onto each other, only the tops of the stacks are connected (singly linked). The bottom of a stack knows nothing about what is above it. This is intentional: only the topmost entries need to know what lies below them to search when a parameter is not found in the current group.
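The top-down resolution described above can be sketched with a singly linked list of groups and a bitset of required parameters. The types below are illustrative stand-ins, not the engine's actual classes.

```cpp
#include <bitset>
#include <map>

// Conceptual sketch of parameter stack resolution: each stack entry is
// a group mapping parameter types to values. Resolution walks from the
// top of the stack downwards, and the first (topmost) binding found for
// each required parameter wins.
constexpr int kNumParams = 8; // stand-in for the real parameter count

struct ParamGroup
{
    std::map<int, int> values;         // parameter type -> bound value
    const ParamGroup* below = nullptr; // singly linked towards the bottom
};

inline std::map<int, int> Resolve(const ParamGroup* top,
                                  std::bitset<kNumParams> needed)
{
    std::map<int, int> out;
    for (const ParamGroup* g = top; g && needed.any(); g = g->below)
        for (const auto& [type, value] : g->values)
            if (needed.test(type)) { out[type] = value; needed.reset(type); }
    return out;
}
```

Pushing a new group on top shadows any matching parameters below it, which is exactly the global-camera-then-local-camera example above.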
For creating parameter groups, stacks and pushing (i.e. chaining), all API is in the RenderContext class. Only use the parameters header file for looking up parameter information.
The RenderUtility namespace (found towards the end of the RenderAPI header here) provides helpers to set the data inside shader parameters quickly, along with lots of other utility functions.
Render Views and Passes
The core of the renderer and its API for rendering in the layer render function deals with render scene views, render view passes and render instances (draw calls). This is the higher level API, while there is also a lower level API which deals with GPU passes and more (read on). These classes are all found in the render context header.
Render Instances
A render instance, i.e. RenderInst , is the fundamental unit of a draw call. This class is defined in the render API header, unlike the others. Create one on the stack and call its member functions to set data such as the shader program, vertex state, draw call, etc. Instances are then pushed to a view pass.
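A minimal model of that flow might look like the following. The struct and method names are invented for illustration; the real RenderInst API differs.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Illustrative model of a draw-call instance and the pass it is pushed
// to: build the instance on the stack via setters, then push it.
struct DrawInst
{
    std::string shaderProgram;
    uint32_t vertexCount = 0;

    DrawInst& SetShaderProgram(std::string name)
    {
        shaderProgram = std::move(name);
        return *this;
    }
    DrawInst& SetDrawCall(uint32_t verts)
    {
        vertexCount = verts;
        return *this;
    }
};

struct ViewPass
{
    std::vector<DrawInst> draws;
    void PushDraw(const DrawInst& inst) { draws.push_back(inst); }
};
```

Because instances are copied into the pass on push, the stack-allocated instance can be reused or discarded immediately afterwards.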
Render View Passes
This is the fundamental concept of a render pass at the higher level. You can push draw calls to a render pass via its member functions. You also specify the set of render targets and the optional depth-stencil target here. It is considered good practice to set the name of the pass as well (all string memory is handled internally and freed at the end of the frame). You can also push pass-wide parameter groups, i.e. base parameters on top of the render scene view's base parameters.
Render Scene Views
A scene view is the fundamental concept of a view into the scene. For example, if you are rendering the scene from two viewpoints, everything is done twice and you push two scene views. Create one by pushing it into the RenderFrame class, which is given to you (thread local, so thread safe) in the render layer's render update function. Push view passes to it in whichever order suits best; passes are executed in the order they are pushed. Scene views are executed the same way, although when pushing a view you can optionally push it to the front.
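The ordering rules can be sketched like this, with illustrative stand-in types (not the real RenderFrame API): passes run in push order within a view, and views run in push order unless pushed to the front.

```cpp
#include <deque>
#include <string>
#include <vector>

// Conceptual sketch of view/pass execution ordering.
struct SceneView
{
    std::string name;
    std::vector<std::string> passes; // executed in push order
};

struct Frame
{
    std::deque<SceneView> views;

    void PushView(SceneView v, bool toFront = false)
    {
        if (toFront) views.push_front(std::move(v));
        else views.push_back(std::move(v));
    }

    // Flatten into the order everything would execute in.
    std::vector<std::string> Execute() const
    {
        std::vector<std::string> order;
        for (const auto& v : views)
            for (const auto& p : v.passes)
                order.push_back(v.name + "/" + p);
        return order;
    }
};
```

Pushing to the front is useful for views that must run before the main view, such as a shadow or depth pre-pass view.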
Render Effect Shader Variant System
The Render FX system manages shader loading, program linking and all compilation.
It is a variant system: programs are created with a set of macros (called render features here) which are switched on and off to produce different permutations, such that everything that will be used is compiled ahead of time into one big shader pack for release, or on the fly in debug.
Each shader program is referred to by an effect reference (i.e. effect ref). This is just a hash, but internally it links to a resolved shader program. You can get an effect reference from the render context (on the population thread, the thread from which you issue draw commands, as opposed to the renderer thread which executes them).
Effect references are composed of a render effect type and a set of render effect features (i.e. variants). All of these variants and effect types are found in a static descriptor set at the top of the render effects header file.
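Conceptually, an effect ref is a hash over the effect type plus the enabled feature bits, so every (type, feature set) permutation maps to a distinct program. The sketch below uses a simple FNV-1a style mix purely for illustration; the engine's real hash will differ.

```cpp
#include <bitset>
#include <cstdint>

constexpr int kNumFeatures = 16; // stand-in for the real feature count

// Illustrative effect-reference hash: combines the effect type with the
// feature variant bits so each permutation gets a distinct reference.
inline uint64_t MakeEffectRef(uint32_t effectType,
                              std::bitset<kNumFeatures> features)
{
    uint64_t h = 1469598103934665603ull; // FNV-1a offset basis
    auto mix = [&h](uint64_t v) { h ^= v; h *= 1099511628211ull; };
    mix(effectType);
    mix(features.to_ullong());
    return h;
}
```

Because the hash is deterministic, the same type and feature set always resolves to the same program, which is what lets a plain hash stand in for a program handle.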
Hot reloading of programs is also supported (enable it in the render context). Every so often, when the render FX system detects that any effect shader source files have changed, all currently loaded shader programs are reloaded with the new code.
Shader Source Files
Shader source files are not Microsoft effect files, but they are similar: they share the same file extension, .fx, and contain both the vertex and pixel shader entry functions. The macros TTE_VERTEX and TTE_PIXEL are defined depending on which shader stage is currently being compiled, so ensure stage-specific code is wrapped in them.
Each shader effect type is associated with one effect source file per backend, prefixed with 'MTL_', 'DX_' or 'VK_'. This is because automatic shader conversion is not currently supported, due to limited documentation and it being close to impossible to compile and link shaders cross platform. Simple AI bots are currently used to convert between them and do a pretty good job. DirectX code should use shader model 5.0, Vulkan code is GLSL, and Metal code is MSL.
Each render feature and render effect type has its own macro. One source file can hold more than one effect type (use the macros). Most of the time one render effect type maps to one distinct effect source file, but closely related effects sometimes share a file.
The pseudo-macros TTE_UNIFORM_BUFFER, TTE_GENERIC_BUFFER, TTE_TEXTURE and TTE_SAMPLER take the parameter name as their one argument (see the render parameter descriptor sets in the parameters header), not a string: the exact name of the buffer. They are replaced by the generated internal shader parameter index to bind to, so you never need to worry about shader binding slots. These are NOT proper macros! They are replaced by a simple string search and replace before compilation, so do not use them inside other nested macros or in any macro combination. These 'macros' resolve directly to an integer binding slot, so think about exactly what the shading language being compiled for expects (e.g. in DirectX you might write 'register(bTTE_UNIFORM_BUFFER(xx))', where the 'b' is needed because the register must be in the format 'b0', 'b4', etc.).
The same happens for vertex attributes (see the parameters header again for the descriptor sets and names of each). The pseudo-macro TTE_VERTEX_ATTRIB is replaced directly by the attribute binding index for that attribute. Although these bindings rarely change, always use the pseudo-macro in case they do, and for readability.
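The search-and-replace step can be sketched as follows. The function is illustrative, not the engine's actual preprocessor; it just shows why the pseudo-macros cannot participate in real macro expansion: they are gone, replaced by an integer, before the shader compiler ever runs.

```cpp
#include <string>

// Illustrative sketch of the pre-compile pass: every textual occurrence
// of NAME(arg) is replaced by the resolved binding slot integer.
inline std::string ReplacePseudoMacro(std::string src,
                                      const std::string& name,
                                      const std::string& arg,
                                      int slot)
{
    const std::string token = name + "(" + arg + ")";
    const std::string replacement = std::to_string(slot);
    for (size_t pos = src.find(token); pos != std::string::npos;
         pos = src.find(token, pos + replacement.size()))
        src.replace(pos, token.size(), replacement);
    return src;
}
```

Note how, in the DirectX example, the leading 'b' written in the source survives untouched and simply concatenates with the substituted digit to form the 'b0' register name.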
Render Targets
All render targets are abstracted away from you via target IDs. You specify the targets used and required by each render view pass. Render targets are either constant or dynamic: constant targets don't change and are always present, while dynamic targets are transient. The RenderTargetConstantID enumeration in the render API header (see the render targets section) specifies all constant targets. Like parameters, more can be added as long as they are added to the descriptor array.
The RenderTargetID is essentially a wrapper around that enum which also allows for dynamic render target IDs; use it to pass IDs into passes. The RenderTargetIDSurface struct is also useful, and is used to specify which mip and/or slice index to use when binding. Render target sets describe the set of bindable targets at any one point: a maximum of 8 colour targets and 1 optional depth-stencil target.
These are then resolved when the frame is executed to their actual GPU texture but you won't need to worry about that.
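The shape of a target set can be modelled like this. The types are invented stand-ins for the real RenderTargetID machinery, shown only to make the 8-colour-plus-optional-depth-stencil limit concrete.

```cpp
#include <cstdint>
#include <optional>
#include <vector>

constexpr size_t kMaxColourTargets = 8;

// Illustrative target ID: just an opaque handle that is resolved to a
// real GPU texture later, when the frame executes.
struct TargetID { uint32_t id = 0; };

// Illustrative target set: up to 8 colour targets plus one optional
// depth-stencil target.
struct TargetSet
{
    std::vector<TargetID> colour;
    std::optional<TargetID> depthStencil;

    bool PushColour(TargetID t)
    {
        if (colour.size() >= kMaxColourTargets) return false;
        colour.push_back(t);
        return true;
    }
};
```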
State Blobs
The RenderStateBlob class, defined in the API header, is very useful: it represents all possible render state information which can currently be bound. You can bind this state to each draw call (render instance). These objects are specifically designed to be very lightweight and hold bit-packed data. Each entry has a fixed size and is most of the time an SDL3 enum (SDL3 enums are the only part of SDL3 exposed). See the RenderStateType enum for information about all states and their allowed values. Note that all blend information is only applied to render target 0, which is normally the backbuffer (or the MSAA or HDR backbuffer).
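Bit packing of fixed-width state fields can be sketched like this. The layout and field names are invented for illustration; the real blob packs many more states.

```cpp
#include <cstdint>

// Illustrative bit-packed state blob: each state entry occupies a
// fixed-width field inside one integer, keeping the whole blob tiny
// and cheap to copy and compare.
struct StateBlob
{
    uint64_t bits = 0;

    void Set(unsigned offset, unsigned width, uint64_t value)
    {
        const uint64_t mask = ((1ull << width) - 1) << offset;
        bits = (bits & ~mask) | ((value << offset) & mask);
    }
    uint64_t Get(unsigned offset, unsigned width) const
    {
        return (bits >> offset) & ((1ull << width) - 1);
    }
};

// Example layout (hypothetical): blend op in bits [0,4), cull mode in [4,6).
constexpr unsigned kBlendOpOffset = 0, kBlendOpWidth = 4;
constexpr unsigned kCullOffset = 4, kCullWidth = 2;
```

Because each field has a fixed offset and width, setting one state never disturbs its neighbours, and comparing two blobs is a single integer compare.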
Render Samplers
The RenderSampler class, inside the render API header, simply describes a sampler and these are kept alive as long as they are referenced. When binding textures, you also specify the sampler too.
Render Pipeline States and Command Buffers
Like all modern rendering APIs, the backend works with pipeline states and command buffers. All of this is managed under the hood as needed: command buffers batch render commands for execution, but you should prefer the higher level render scene views and passes. These classes are also found in the API header.
Render Frame Update List
All uploads to the GPU (downloads are not yet supported) are done via each frame's update list. Here you specify buffer and texture data to upload to the GPU. All uploads are guaranteed to have completed before they are used. This class is in the API header.
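The upload-before-use guarantee can be modelled as a pending list flushed ahead of the frame's draws. The types below are illustrative, not the engine's actual update list.

```cpp
#include <string>
#include <vector>

// Conceptual sketch of a frame update list: uploads queued during
// population are flushed before any draw executes, modelling the
// guarantee that uploaded data is ready before it is used.
struct FrameUpdateList
{
    std::vector<std::string> pendingUploads;
    std::vector<std::string> executed; // ordered record of GPU work

    void QueueUpload(std::string resource)
    {
        pendingUploads.push_back(std::move(resource));
    }

    void ExecuteFrame(const std::vector<std::string>& draws)
    {
        for (auto& u : pendingUploads) executed.push_back("upload:" + u);
        pendingUploads.clear();
        for (const auto& d : draws) executed.push_back("draw:" + d);
    }
};
```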
Example
The following code renders all renderable objects in the Scene.