Fundamental Concepts in 3D Graphics (work in progress)
This article talks about graphics APIs, meshes and more.
(still in progress) also needs spell checking
Any feedback on this article is highly appreciated, you can ping me on Discord: ViceIntoVirtue
Overview
- Graphics APIs
- Meshes
- Skeletal meshes
- Shaders
- Shader models
Not written yet:
- Culling
- Anti-aliasing methods
- GPU pipeline
Graphics APIs
The first concept when it comes to computer graphics is the graphics API. Since all commands are issued from the CPU, the programmer needs a way to tell the GPU what code to run. You could talk to the GPU at a very low level, for example through a compute language like CUDA for NVIDIA or HIP for AMD, but for rendering it's far more practical to use a framework designed specifically for 3D programs. That's where graphics APIs like Direct3D, Vulkan and OpenGL come into play: they give the programmer a set of commands/functions to build their program with.
Since a graphics API just exposes a set of commands, the API in and of itself has no overhead; the performance impact depends on the functionality the programmer actually uses. It's generally not recommended to use older versions of an existing API, since newer versions offer more functionality for better visuals and performance, plus support for newer hardware features.
The only two reasons to use a past version of the same API are compatibility issues, or when functionality you relied on has been cut from newer versions, which mostly happens when legacy features are removed.
(This is excluding the reason to use a different API entirely.)
Overview of some APIs
-
Direct3D This API is Windows-exclusive and developed by Microsoft. You might hear the word DirectX used to refer to this graphics API, which is incorrect: DirectX is a larger set of APIs for all kinds of things like sound, video, multimedia etc., and Direct3D is only one specific API of that tool set (even though it's a lot easier to say DirectX).
-
Vulkan Vulkan is a newer API which, unlike Direct3D, is cross-platform and can be used on Windows, Linux and Android. When it came out it had a reputation for being low level, giving the programmer more tools for high performance. Direct3D has been catching up with its latest version, DirectX 12, but one platform it doesn't support is Android. Vulkan has a lot of capability for performance there; the problem is that only a limited number of phones seem to support it well.
-
OpenGL The OpenGL project is no longer in active development and is being replaced by Vulkan. Despite that, OpenGL is still widely used for Android game development. https://en.wikipedia.org/wiki/OpenGL
-
Metal This API is exclusive to Apple platforms.
Mesh
A mesh is a set of data used to describe a 2D or 3D object. It uses three types of data: vertices, edges and faces.
Vertices
A vertex is just a coordinate containing 2 or 3 numbers (depending on whether it's in 2D or 3D space) that describe its position in space, for example (10, 42, 67): the X coordinate is 10, Y = 42 and Z = 67. The plural of vertex is vertices.
In the context of a mesh, these numbers are relative to the origin of the mesh, and where that origin sits depends entirely on how the mesh was made in a 3D modeling tool like Blender or Maya.
Here you can see a cube made out of coordinates/vertices. You can also see the X and Y axes; the Z axis isn't visible. If I were to select all vertices on this cube and move them far to the left, the cube would still hold its shape, because the distances between the points relative to each other stay the same, even though their absolute values become completely different. But when I take this cube out of Blender and into Unreal, the cube will be shifted to the left of the transform widget that marks where I placed the object.
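The idea that moving every vertex by the same offset keeps the shape intact can be sketched in a few lines of Python (the cube corners here are made up for illustration):

```python
# Eight corners of a unit cube, as (x, y, z) vertices.
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

def translate(vertices, offset):
    """Move every vertex by the same offset (dx, dy, dz)."""
    dx, dy, dz = offset
    return [(x + dx, y + dy, z + dz) for x, y, z in vertices]

def dist(a, b):
    """Straight-line distance between two points."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

moved = translate(cube, (-100, 0, 0))  # shift far to the left

# The absolute coordinates changed completely...
print(moved[0])  # (-100, 0, 0) instead of (0, 0, 0)

# ...but the distance between any two vertices is unchanged,
# so the cube still holds its shape.
print(dist(cube[0], cube[7]) == dist(moved[0], moved[7]))  # True
```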
Vertex attributes
A vertex can contain extra data; for example, a vertex almost always has a direction. Since a lot of names and concepts in computer graphics come from mathematics, you will hear very weird names, and this is the case here: instead of calling it a direction, it's called a "normal". Doesn't seem normal to call it that, but I guess who cares.
Vertex normal
First let's explain what a normal is in mathematics: it's an imaginary line that sticks out of a surface at a 90-degree angle (pretty much pointing straight up relative to the surface). This line is defined by the direction the surface is facing; as the surface tilts to the right or left, the normal tilts to the right or left with it.
But the normal in computer graphics is pretty much arbitrary, which means you can make it whatever you want in the 3D modeling program you use to make the model. You can also modify the normals at runtime in the shader itself; just think of using normal maps in the material editor to give your object more depth. That shader/material is changing the normals of the mesh itself.
So the vertex normal just says which direction the vertex is facing. It's stored as 2 or 3 numbers (depending on 2D or 3D), but these are not rotation angles: they are the X, Y and Z components of a direction vector, usually scaled to a length of 1. For example, (0, 0, 1) is a normal pointing straight up along the Z axis. You can still think of it like rotating an object in Unreal with the red, green and blue circles until it faces the direction you want; the normal simply records the resulting direction as a vector.
The vertex normal is crucial for defining the direction of the faces created from the vertices, and the vertex normals define the way light bounces off a surface.
https://learn.foundry.com/modo/12.0/content/help/pages/uving/vertex_normals.html
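As a sketch of how a renderer actually uses normals for lighting: a common rule (Lambertian shading) takes the brightness of a surface to be the dot product between the unit-length normal and the direction toward the light. The vectors below are made up for illustration:

```python
import math

def normalize(v):
    """Scale a vector to length 1 so it only encodes a direction."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, light_dir):
    """Brightness in [0, 1]: full when the normal faces the light,
    zero when it faces away (negative dot products are clamped)."""
    dot = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, dot)

up = normalize((0, 0, 1))              # a normal pointing straight up
light_from_above = normalize((0, 0, 1))
light_from_side = normalize((1, 0, 0))

print(lambert(up, light_from_above))  # 1.0 -> fully lit
print(lambert(up, light_from_side))   # 0.0 -> grazing angle, no light
```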
Vertex color
Sometimes a vertex also stores a color, which nowadays, with shaders and textures, rarely gets used for the final surface color directly. https://gamedev.stackexchange.com/questions/139059/what-is-a-vertex-color
Vertex texture coordinate
When wrapping a texture around an object, the renderer needs to know how to map this 2D picture onto the 3D object. To do this, texture coordinates are assigned to every vertex.
https://blender.stackexchange.com/questions/23173/how-do-texture-coordinates-work
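A rough sketch of what texture coordinates do: each vertex stores a (u, v) pair in the range 0 to 1, and the renderer uses it to look up a color in the 2D image. The tiny 2x2 "texture" below is invented for illustration:

```python
# A 2x2 texture: texture[row][col], each entry an (r, g, b) color.
texture = [
    [(255, 0, 0), (0, 255, 0)],    # top row:    red,  green
    [(0, 0, 255), (255, 255, 0)],  # bottom row: blue, yellow
]

def sample(texture, u, v):
    """Nearest-neighbour lookup: map (u, v) in [0, 1] to one texel.
    Here u runs left->right and v runs top->bottom for simplicity."""
    height = len(texture)
    width = len(texture[0])
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return texture[row][col]

# A vertex with texture coordinate (0, 0) picks up the top-left texel:
print(sample(texture, 0.0, 0.0))  # (255, 0, 0)
print(sample(texture, 0.9, 0.9))  # (255, 255, 0)
```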
Edges
An edge is just a connection between 2 vertices, pretty much just a line. In the image above, the vertices are connected by edges. An edge also has a normal, defined by averaging the normals of the 2 vertices that make up the edge.
Face
A face is just what it sounds like. If your object has only coordinates and only edges connecting those coordinates, you pretty much have an invisible object, since a coordinate is, mathematically speaking, an infinitely small point in space, and a line is just a representation of the smallest distance between 2 coordinates. But if you add a face, you fill in the blank space between the edges and give the object a surface; now you can add a shader, a texture or a vertex color to give the object the look you want.
Face normal
One of the things game engines do to improve performance is culling, which is a topic all on its own, but just know that only one side of a face is rendered to save performance (in rare cases, like foliage, a face might be double-sided). This means that if you're looking at a face from the wrong direction it will be invisible; that's why, when you fly your camera inside an object, you are able to see everything outside: the faces are pointing to the outside.
Since the normal of a face is derived from its vertices (most tools determine it from the order, or winding, of the face's vertices), you have to make sure the normals are correctly defined, so you don't end up seeing through your object.
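To make the face normal concrete: for a triangle, the cross product of two of its edge vectors gives a vector perpendicular to the face, and swapping the order of the vertices flips which side the normal points out of. A minimal sketch with made-up points:

```python
def face_normal(a, b, c):
    """Normal of triangle (a, b, c) via the cross product of two edges.
    The winding order (which way you list the vertices) decides which
    side of the face the normal points out of."""
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    return (
        e1[1] * e2[2] - e1[2] * e2[1],
        e1[2] * e2[0] - e1[0] * e2[2],
        e1[0] * e2[1] - e1[1] * e2[0],
    )

# A triangle lying flat in the XY plane:
tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(face_normal(*tri))                    # (0, 0, 1)  -> faces up
print(face_normal(tri[0], tri[2], tri[1]))  # (0, 0, -1) -> flipped winding, faces down
```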
Skeletal Mesh
A skeletal mesh is a mesh specifically designed so that it can be animated, and it can have 2 properties:
Multiple meshes
A skeletal mesh can be made out of multiple underlying meshes; this is done to animate multiple objects at once. Most of the time, meshes that represent characters, animals, etc. are just made out of 1 mesh.
Hierarchical bone structure
To facilitate animations, a skeletal mesh has multiple bones which, when scaled/rotated/translated, affect different regions of the mesh. How strongly a bone affects a region of the mesh is defined by a weight map (often visualized as a heatmap), which ranges from 0 to 1: 0 means the bone has no effect, and 1 means the region follows the bone's transformation completely. The video below at time stamps 10:05 and 11:40 will give you an idea of how this works.
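The 0-to-1 weights can be sketched as the simplest form of linear blend skinning: each vertex moves by the weighted sum of what each bone would do to it. The version below uses bone translations only (real engines blend full bone matrices), and all numbers are made up for illustration:

```python
def skin_vertex(position, influences):
    """Move a vertex by a weighted blend of bone translations.
    influences: list of (weight, bone_offset) pairs; weights sum to 1.
    Real engines use full 4x4 bone matrices, not just offsets."""
    x, y, z = position
    for weight, (dx, dy, dz) in influences:
        x += weight * dx
        y += weight * dy
        z += weight * dz
    return (x, y, z)

# A vertex halfway between two bones: the upper-arm bone moves up
# by 2 units, the forearm bone doesn't move at all.
vertex = (0.0, 0.0, 0.0)
result = skin_vertex(vertex, [
    (0.5, (0.0, 0.0, 2.0)),  # weight 0.5 -> follows half of the motion
    (0.5, (0.0, 0.0, 0.0)),  # weight 0.5 -> contributes no motion
])
print(result)  # (0.0, 0.0, 1.0): the vertex moves halfway
```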
Animations on the mesh work by key framing, which means setting the position/scale/rotation of the bones at certain points in time.
For example, say you move the arm of a character to point upwards: the animation software (UE, Blender, etc.) will gradually move the arm from its current position to the position you key framed, so moving it upward if the arm was pointing at the floor. The more key frames, the more precisely the movement of the animation can be defined.
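Between two keyframes, the animation system interpolates; the simplest way is a linear blend (lerp). A sketch with made-up arm angles:

```python
def lerp(a, b, t):
    """Blend from a (at t=0) to b (at t=1)."""
    return a + (b - a) * t

# Keyframe 0: arm pointing at the floor (-90 degrees).
# Keyframe 1: arm pointing straight up (+90 degrees).
start, end = -90.0, 90.0

# Sampling the animation at a few points in time:
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, lerp(start, end, t))
# 0.0 -> -90.0, 0.25 -> -45.0, 0.5 -> 0.0, 1.0 -> 90.0
```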
If you look below at this ugly cat that nobody loves, you can see an ugly cat.
Animations can also be done in the shader/material, which is more performant. https://www.reddit.com/r/gamedev/comments/853ivl/vertex_shader_animations_vs_skeletal_animations/?rdt=34416
note to self, move this statement to a future article about performance
Shader
A shader is a piece of code that runs on the GPU. A shader can do a lot of things depending on the shader model being used, which is defined by the graphics API being used.
(The part below is copied from https://www.linkedin.com/advice/0/what-some-common-shader-types-techniques-game)
Two important examples are:
Vertex shaders
Vertex shaders are the first stage of the shader pipeline, and they operate on the individual vertices of the 3D models in your scene. They can manipulate the position, color, texture coordinates, and other attributes of the vertices, as well as perform calculations such as lighting, transformations, and animations. Vertex shaders can be used to create effects such as deformations, tessellation, wind, and water waves.
Pixel shaders
Pixel shaders are the second stage of the shader pipeline, and they operate on the individual pixels that are generated by the rasterization of the 3D models. They can determine the final color, brightness, transparency, and other properties of the pixels, as well as perform calculations such as shading, blending, fog, and reflections. Pixel shaders can be used to create effects such as textures, bump mapping, normal mapping, specular highlights, shadows, and post-processing.
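To make the two stages concrete, here is a toy software version in Python: a "vertex shader" that moves positions into world space, and a "pixel shader" that turns a normal and a light direction into a grey value. Real shaders are written in languages like HLSL or GLSL and run massively in parallel on the GPU; all numbers here are invented:

```python
def vertex_shader(position, model_offset):
    """Stage 1: runs once per vertex. Here it just translates the
    vertex into world space (real shaders multiply by matrices)."""
    return tuple(p + o for p, o in zip(position, model_offset))

def pixel_shader(normal, light_dir):
    """Stage 2: runs once per pixel after rasterization. Here it
    outputs a grey color from simple diffuse lighting."""
    brightness = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    grey = int(brightness * 255)
    return (grey, grey, grey)

print(vertex_shader((1, 2, 3), (10, 0, 0)))  # (11, 2, 3)
print(pixel_shader((0, 0, 1), (0, 0, 1)))    # (255, 255, 255): fully lit
```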
There are also:
Geometry shaders
Geometry shaders are an optional stage of the shader pipeline, and they operate on the primitive shapes (such as triangles and lines) that are formed by the vertices. They can create new primitives, modify existing ones, or discard them altogether, as well as output data to other stages of the pipeline. Geometry shaders can be used to create effects such as particle systems, fur, grass, decals, and instancing.
Compute shaders
Compute shaders are a special type of shaders that are not part of the shader pipeline, but run independently on the graphics card. They can perform arbitrary computations on large sets of data, such as images, buffers, and textures, and communicate with other shaders and the CPU. Compute shaders can be used to create effects such as physics simulations, fluid dynamics, ray tracing, and image processing.
Shader model
Shader model should not be confused with shading model; a shading model is something you pick within a shader model. For example, when editing a material in UE, you can pick shading models like default lit, subsurface scattering, translucent etc.
If a material/shader that you create in Unreal is your painting, then the shader model you use defines what tools you can use to draw that painting.
Shader models are specifications that define the features and capabilities that can be used in shaders. They are usually numbered and have different versions for different platforms and APIs (such as DirectX and OpenGL). For example, shader model 5.0 is the version used by DirectX 11, and supports features such as tessellation, geometry shaders, and compute shaders. Shader models are important to consider when developing games for different devices and systems, as they affect the performance and compatibility of your shaders.
They can also have a large impact on the visuals of your game. For example, shader model 3.1 in Unreal does not support Lumen, TSR or dynamic shadows (what you see are not shadows, since this is a closed space with no light to begin with), but shader models 5 and 6 do. (This example probably doesn't do the difference justice.)