
How to Build Unreal Engine 4 on Ubuntu with Blender Assets

The original Unreal game

Although the source code for Unreal Engine has been made freely available in more recent times, creating and editing levels with the engine is nothing new. In fact, ever since the first game to use the engine, Unreal, was published in 1998, the intrinsic link between gaming and games development has been evident within the Epic Games ecosystem.

If you can recall that far back, one of the most impressive elements of Unreal was UnrealEd, a level editor distributed with the game at no extra cost. However, let’s not forget the original game’s beautiful lighting, compelling gameplay and that incredible fly-by intro sequence that sets the tone for all the mystery and intrigue to follow.

An early screenshot of UnrealEd.

The Unreal Engine has progressed significantly since its inception with UnrealEd. In this post we will be setting up Unreal Engine 4, currently the latest publicly available version of Unreal and soon to be replaced by version 5 in 2021. Although you could easily head on over to Epic’s site and download a copy of the engine for easy installation, we will instead be building the engine from the source code. There are many benefits to building the engine from source, one of which is that we are not restricted to the operating systems supported on the official download page, such as Windows. We will be building Unreal Engine 4 on Ubuntu 20.04 LTS Focal Fossa.

Ubuntu 20.04 LTS Focal Fossa

Technically speaking, it’s not entirely necessary to use this version of Ubuntu, or Ubuntu at all. As previously noted, building from source removes such limitations, so you could use just about any modern Linux-based distro. However, for the sake of convenience and simplicity, we’ll be using Ubuntu’s most current LTS (Long Term Support) version.

In keeping with the Open Source ecosystem of this project we will not be relying on proprietary Autodesk products (with all due respect). Instead, we will be utilizing the Open Source Blender 3D content creation suite to develop and import assets into our Unreal project. One of the main benefits of this approach is that getting up and running will not cost you a dime in terms of software purchases, licences or the like. It is worth noting, however, that if you do eventually make a game that starts making money then, depending on how you distribute it, UE4 licensing could result in some costs.

Now that we have an idea of where we’re heading, let’s jump right in and get started with building UE4 on Ubuntu.

Build from source

When building a project from source code you will need to take into consideration the programming language(s) used to develop that code. Without this consideration, source code is nothing but ASCII text, similar to that in a word processing document. In order to make something useful of the source code it needs to be converted into something that a machine can understand; this conversion is the process of compiling, which forms part of the build process.

Building an application from source code will generally involve several steps. Simply put, in the context of what we are doing, we will:

  • Resolve dependencies
  • Create online accounts
  • Clone the codebase
  • Configure resources
  • Make and install the binaries


Resolving Dependencies

Before we get started with accessing and downloading the source code, there are a few additional tools we will need in order to ensure the build is successful. The process of installing the requirements needed to build an application from source is referred to as resolving the software’s dependencies.

  • In terms of UE4, the source code is written in C++ and, as a result, you will need the tools for compiling a C++ application.
  • As UE4 makes extensive use of hardware acceleration in order to render very sophisticated 3D graphics in realtime, you will need the latest drivers for your Nvidia or AMD graphics card. This is not so much a dependency as a software requirement, as it only becomes relevant after the application has been built.
  • Finally, neither a dependency nor a software requirement, but arguably an essential component of any developer’s toolkit, is a Version Control System (VCS). In this case, the recommendations would be Git and GitHub.

As previously mentioned, the operating system we will be using is Ubuntu 20.04 LTS. Although it might be possible to build UE4 on other versions of Ubuntu, the process is somewhat simplified on 20.04: difficulties such as graphics drivers failing after a successful build on 16.04, or builds failing outright on 18.04, prove to be less of an issue.

build-essential

Make sure build-essential is installed on your system. You can do this through Synaptic or by entering the following command in a Terminal:

sudo apt-get install build-essential

Build-essential is a meta-package, that is to say, it is not an application itself so much as a convenient way of installing a collection of other packages. In this case, that other software is primarily for the purpose of building applications developed in C++. Among the packages it installs are g++ (the GNU C++ compiler), various development libraries, and make, a utility that assists with the compilation process. Of course, you could simply install these packages individually, depending on your skill level and proficiency with C/C++ development. If you don’t wish to install build-essential, you could also rely on the setup process noted below, which resolves these dependencies within its own build toolchain. Nonetheless, build-essential is worth knowing about in the event you encounter difficulties or wish to expand your C++ development skills.

You might have also noticed from the above screenshot that Bumblebee is not installed on this system. After encountering issues with the Nvidia 660M discrete card not initializing on this system, removing Bumblebee resolved the issue. If you are utilizing an Nvidia card, it is recommended that you install the latest drivers supported by that card. The system used for this installation runs version 440 of the Nvidia drivers. It is also essential that you use drivers that support the Vulkan API; OpenGL and Direct3D alone will not suffice for UE4.

As you will need an Integrated Development Environment (IDE) for developing code for your game or for editing the UE4 source code, Visual Studio Code is an official recommendation. You can obtain it from Ubuntu’s Software Center, and it is also available as a Snap.

Nvidia 440 drivers

Git is essentially a source code management tool that you install locally on the system you will build UE4 on. Although you do not need git to build UE4, there are some key benefits to using it, which we will look at shortly. To install git, enter the following command in a Terminal:

sudo apt-get install git

This will install all the requirements for working with git through the Command Line Interface (CLI). Git forms one of the main components of working with Version Control Systems (VCS), but to have an effective solution for software development you will also need a remote hosting provider. There are several providers that can fulfil this requirement in various paid or unpaid capacities; GitHub is the service provider we will be using. You will need to visit github.com and sign up for an account if you do not already have one. You will not require more than a basic free account. Once you have git and GitHub set up you are ready to start retrieving the UE4 source code.

Finally, with regards to accounts, make sure you have signed up for an unrealengine.com account. This account is needed in order to access the UE4 source code.

Get the Source

Log into your Unreal Engine account; under your user profile you will find a link to your account settings.

Click the PERSONAL link. This section lets you customize characteristics of your account, as well as connect third-party software.

In the panel on the left is a button called CONNECTIONS.

Click this button to customize what third-party software your Unreal Engine account can access. In this section you will find an option to link your GitHub account. Click the CONNECT button and follow the prompts to connect your Unreal Engine and GitHub accounts. Ultimately you will need to become a collaborator on the Unreal Engine GitHub repository in order to obtain the source code. Don’t worry if you don’t have any experience with code; you will not be able to make changes to the main source code that easily. If your intent is to modify the source code, however, then you should probably consider Forking the Unreal Engine source code and editing that. We’ll discuss options for accessing the code shortly, but first you will need to finalize the connection between the accounts by visiting GitHub and clicking the Authorize EpicGames button.

Once you have authorized Epic Games’ access to your GitHub account, you should receive an email inviting you to join the @EpicGames organization. This email will be sent to the address associated with your GitHub account, which is worth noting if you are using different email addresses for your UE4 and GitHub accounts. Accept the invitation and you will finally have access to the Unreal Engine source code.

Now that you have access as a collaborator, when you visit the repository at https://github.com/EpicGames/UnrealEngine you will notice that the UnrealEngine repo is accessible.

Read through the Readme file, which has a friendly welcome message as well as some very useful links for learning how to use the engine.

You are now ready to Download, Clone or Fork the engine’s source code. Before proceeding there are a few suggestions that should be taken into consideration regarding these options.

  • Fork: This option is particularly useful if you wish to modify the source code. Choosing to create a Fork will replicate the repository within your personal GitHub account, and you are then able to modify and change the source to your requirements. Forking alone does not download the code onto your local workstation, and it is therefore not recommended as a standalone solution if your interests are primarily in creating games or realtime applications with UE4.
  • Download: This might initially seem like the best option, and there is no particular reason not to use it if your interest is primarily to get up and running with as little overhead as possible; however, you will be forfeiting the benefits of the VCS.
    To download the source you will first need to choose a branch. By default the release branch is selected, as this is a well maintained and tested branch with updates regularly being added and merged. If you are somewhat familiar with VCS you might be tempted to download the master branch; however, this branch is primarily the master for development and therefore not subject to as much testing as release. You can also choose to download older versions of the engine, and the developer teams have their own branches prefixed with dev-. These branches tend to be bleeding-edge and should be used for development rather than production.
  • Clone: Cloning the source perhaps provides the most versatility of the three options, as it downloads the source onto your local machine, allows you to modify and test your changes locally, and gives you the benefit of a fully-fledged VCS, enabling you to check out other branches on the same system, and to modify, stage, commit and push remotely if you initially forked.
git clone https://github.com/EpicGames/UnrealEngine.git

To clone the repository navigate to the desired directory through a CLI and run the command above.

As you can see from this screenshot, cloning required approximately 3.41GiB of disk space. Depending on the speed of your internet connection this could take a considerable amount of time to obtain. Although this might seem like a lot of data, be aware that it is only a small part of the requirements for building and running the engine: in order to use the engine you should have at least 100 Gigabytes of free space on your drive. That is not a typo, and it does not accommodate the additional space requirements of your personal projects.

Compile Binaries

At this point you will have obtained a copy of the source code and installed all the required dependencies, and you are now ready to start setting up the engine and installing it on your system.

To clarify, the source code you have downloaded is in a human-readable format, which makes it possible to edit and maintain. You will now need to convert this code into a machine-readable format through the process of compilation; this will result in executable binaries, finally allowing you to launch and run the engine.

If you have followed all the steps to this point, compiling the binaries will be quite straightforward and will only require minimal knowledge of the Command Line Interface.

Open a terminal window and navigate to the root directory of your UE4 source code:

cd UnrealEngine

Then run the Setup shell script. This step might take some time depending on your system’s configuration as it will download a native toolchain that will ensure the UE4 codebase compiles and links successfully on your system.

./Setup.sh

Once setup has successfully completed, you will then need to generate the Unreal Engine project files.

./GenerateProjectFiles.sh

Finally, you will need to run make to build the binaries.

make

This step will require substantial system resources and time, depending on the machine you are compiling on. You will need at least 8GB of RAM and a multi-core processor (8 or more cores) to complete this step in an hour or less. Once this step has completed and no errors have been returned, you are finally ready to launch the Unreal Engine 4 Editor. Congratulations!

Launching the Editor

Well done for getting this far; it’s now time to launch the Editor and start your journey towards learning how to make your first game. Navigate to the directory where the binaries were created:

cd Engine/Binaries/Linux

Now launch the editor.

./UE4Editor

The first time you launch the editor it will need to compile shaders, which will take some time. Fortunately, this only needs to happen the first time the editor is launched.

At the Create New Project screen, select Games then Next and you will be presented with the Select Template screen. Choose Blank Project, make sure Blueprint is selected, and choose No Starter Content, as we will be importing our own mesh from Blender.

Blueprint is the Unreal Engine visual scripting language. With it you can create gameplay through a node-based system using simple drag and drop techniques to create connections between nodes in the editor. This means that as a designer you are not necessarily limited to having to learn a programming language in order to develop your game.

Importing a Mesh into the Editor

Now that we have UE4 up and running on Ubuntu we’re going to use Blender to export a 3D asset that we will import into the editor. In the interest of keeping things simple for now, we are just going to discuss the basic concepts of importing a static mesh.

Within the Editor locate the Content Browser section in the bottom half of the screen and click the Add New button.

Click New Folder to create a new folder within the project that will be used to store the imported asset.

You will need an asset to import into your project. There are various options for importing 3D meshes into UE4, although the official file format for this is usually considered to be FBX. The FBX file format supports animation as well as many other features, but we are just using it to familiarize ourselves with the import/export workflow.

In order to prepare your mesh for exporting from Blender, make sure that all transforms have been applied. In other words, if your mesh has any rotation, translation or scale offsets, these will need to be baked into the mesh so that rotation and translation are zeroed and scale is 1. If your object moves while you try to Apply its transforms, you might need to enter the object’s Edit mode, select the object’s vertices and translate the object back into the correct position. Essentially, your object’s Center of Mass should be as close as possible to the origin without intersecting the ground plane.

Apply the object’s transforms by selecting the object in the 3D viewport, then clicking the viewport object menu and Applying all transforms from the Apply menu.

Next export the mesh by choosing FBX from the export options.

The default settings are ideal for our purposes; the only option you need to change is to ensure that the Selected Objects option is checked in the FBX export settings. Save your exported mesh and it’s time to import it into the Unreal Editor.
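
If you prefer to script this step, here is a minimal bpy sketch of the same export, assuming Blender 2.8x and a hypothetical file path; the operator defaults mirror the export dialog.

import bpy

obj = bpy.context.active_object
obj.select_set(True)  # export only this object

# Bake rotation, translation and scale into the mesh first
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

# Export just the selection as FBX (path is hypothetical)
bpy.ops.export_scene.fbx(filepath="/tmp/asset.fbx", use_selection=True)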

From the Content Browser in the Unreal Editor click the Import button to import the FBX file you just exported from Blender.

When importing this file, again accept the default settings. If all goes well your assets will now appear in the new project folder you previously created.

In completing this final step you will have successfully imported your 3D model into the Unreal Editor. From the Content Browser, click and drag your asset into the editor viewport; your model is now visible within the Unreal Engine 4 editor. Click the Play button to preview what your asset will look like in-game.

With the techniques you have learned here, try importing other models as well as other types of assets, including texture maps.

Visit the Asset Store for More Realtime Models made in Blender

The elephant model used in this post, as well as other realtime models, is available from the RabbitMacht Store to use in your own projects.

This series of models comes with 4K textures, non-overlapping UVs, normal maps and rigs, and is available in high-resolution as well as realtime versions.


Minotaur XIV : FK and Pose Rigs

Forward Kinematics (FK) Rigging

If you have been following the other posts on the Minotaur you might have noticed that this is not the first post that mentions rigging. In fact, the very first post for the Minotaur, “Minotaur”, mentions rigging, and the post on skinning, “Minotaur XI”, also mentions it. As you can imagine, rigging is not reserved exclusively for animating a character, although that is its primary purpose.

  • In the first post on the Minotaur, a rig was used to pose large portions of the model by deforming the mesh in such a way as to avoid geometry intersecting itself; ultimately this rig was used as a tool for modelling.
  • In the post on skinning, a rig was used to pose the final version of the modelled character for a wide action shot. A rig like this is not suitable for animation and is intentionally kept simple: once the character is in the desired pose, the deformation is baked into the geometry and the rig is discarded.
    Deformations that would usually be done with weight painting (and would be visible for that particular pose) can then be added with lattice deformers, sculpting and modelling tools.

In the following posts, we will discuss creating a Forward Kinematics (FK) rig and a controller rig, and skinning the Minotaur to the FK rig, all for the purpose of creating animations.


The above video demonstrates the Minotaur attached to an FK rig and posed for a turntable. The render is of the realtime model (<7k polys) as seen from the 3D viewport, and has incomplete textures.
One of the most important technical qualities of a character set up for animation is having multiple levels of detail, of which at least one mid-to-low level provides a close-to-final representation of the character’s deformed geometry while still maintaining realtime playback in the 3D viewport.
Relying on non-realtime rendered viewport previews (also known as Playblasts) can significantly hinder the process of creating animations; maintaining a responsive 3D environment in which to create your animations is crucial.

The above video is a demonstration of the Minotaur moving from one pose to another, driven by an FK rig, while scrubbing the playback head in Blender. Note the character’s realtime responsiveness to the timeline (at the bottom of the frame) as the mouse moves back and forth. This is true even in an older version of Blender.
When the Minotaur was bound to its armature, which consisted solely of an FK rig, almost every bone in the rig was enabled for deformation. The controller rig is then built on top of the FK rig once weight painting has reached a reasonable representation of what the finished product will look like.

As mentioned, the FK rig is then posed in various ways to test how the mesh deforms; it is in these poses that weight painting occurs. Typically, it should not be necessary to paint weights on a character in its default/bind pose.

Although the term weight painting implies a superficial task related to the surface of the mesh, I prefer to think of weight painting as an extension of the modelling process. It is true that weight painting is performed only on the surface or “skin” of the mesh, but the objective of the task is to modify the volume of the skin, muscle tissue, flesh and so on that is affected by the bones that rotate to create the deformation. As a result, we are simulating the deformation of a volume by means of a tool that addresses the surface of the model. This effectively results in displacing vertices by moving them towards or away from the area of deformation.

In Blender, we paint vertices red if we would like them to be more affected by a bone’s deformation; in Maya we would paint them white. Regardless of what software you use, the principle remains the same: we are effectively modelling what we would like the areas surrounding the armature’s/skeleton’s joints to look like when those bones are rotated into a position other than their rest positions. We do this to ensure that every time a bone is rotated into a particular position, the volume of geometry surrounding that bone and its joints will fold, wrinkle and deform the same way.
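
For the scripting-inclined, weights can also be assigned numerically rather than painted. Here is a minimal bpy sketch, with hypothetical object, group and index names; “red” in Blender’s weight-paint view corresponds to a weight near 1.0.

import bpy

obj = bpy.data.objects["Minotaur"]  # hypothetical mesh object

# Fetch or create a vertex group matching a deform bone's name
vg = obj.vertex_groups.get("Spine") or obj.vertex_groups.new(name="Spine")

# Assign a 0.75 weight to three (hypothetical) vertex indices
vg.add([12, 13, 14], 0.75, 'REPLACE')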

This character has two layers of FK bones: one used to deform the Minotaur and another used to deform the Minotaur’s armour.

A rig like this is far too complex and cumbersome to animate with only the simple rotations and single translatable parent that FK allows for, so in order to make the animation process more intuitive, a controller rig will be created on top of the FK rig.

The controller rig, as the name implies, is responsible for providing the controls used to drive the rotations of the FK rig. Sometimes the FK rig might be referred to as the base rig, as it is the most low-level rig and ultimately the tool that provides the link between the animation system and the rendered character.

One fundamental difference between the base rig and the controller rig is that the controller rig will primarily be used to create translation transforms, often by means of Inverse Kinematics (IK). This is in contrast to the base rig, which primarily generates rotation transforms. The combination of these transforms results in the keyframe data that eventually makes the character animated.

Another level of complexity making up this character’s rig will be that of a third rig for the purposes of dynamic secondary animation. However, this is something that will be addressed at a later stage.


Minotaur XIII : Texturing, Materials and UVs

Texturing

The Minotaur consists of several textures and materials that are composited together; this render depicts the current painting status of the Minotaur’s color texture channel.

The above image took approximately 6 minutes to render using the Blender Internal (BI) renderer. This includes 3 Sub-Surface Scattering (SSS) passes (each with its own material), a color map, a normal map, and 28K polys subdivided 3 times at render time. Although there is still a lot of work to be done, particularly regarding the specularity/reflection pass and the completion of texture painting for the color and normal maps, I find the render times and quality from BI to be very reasonable and certainly something I am pleased to work with.

Materials and SSS Considerations

The main reason for having multiple materials composited for the Minotaur is so that three layers of Sub-Surface Scattering (SSS) can be addressed independently. These layers represent the epidermal skin layer, the subdermal skin layer, and the backscattering layer.

  • The Epidermal skin layer is the outermost layer of skin and, as such, will tend to be the layer that most prominently shows the texture map currently being painted, as seen in the previous rendering.
  • The Subdermal layer is used to represent the fatty tissue that exists under the epidermal layer. Its texture map will differ most significantly in that it will also include the color of the Minotaur’s veins. The material’s primary function is to create the impression of the character having volume, as opposed to appearing like an empty shell.
  • The Back Scatter layer is the SSS layer that is most discernible, as it adds a reddish tinge simulating blood vessels within the Minotaur’s body. This is particularly noticeable in areas where the Minotaur’s volume is significantly smaller, making it easier for light to pass through, such as his ears.

The following two images demonstrate the three materials composited together, with SSS properties. The first image is a low-resolution render, followed by the same material and lighting setup on a high-resolution model.

Low Poly SSS composite
Hi-res SSS composite

As you can see, the SSS material properties affect the renderings significantly differently based on the mesh density. This is yet another benefit of using actual geometry displacement rather than relying on normal or bump textures for surface variation (as is the case in the first of the two renderings).
Fortunately, rendering high-density geometry that is subdivided at render time (see the Minotaur XI Skinning post for details on the setup) is a feasible option in Blender.

Multiple UV Layouts

As I mentioned in a previous post, this character will require multiple UV layouts so that more texture space can be allocated to certain areas of the model that will be featured in close-up shots.
One of the downsides of multiple UV layouts at this stage is having to re-bake the character’s normals with the new UV layout. Although this is not a problem for me, as I save my working files incrementally, it does mean having to revisit previously saved files, which some people might find problematic (depending on your file-saving habits). As the character’s UVs are adequately laid out, I will only need to add one additional UV layout.
The following image shows my current progress with the color texture channel. As you can see, I prefer to work on one side of the character as I lay down a base for the details that follow, then mirror and modify this base in an image editor before painting more detail and variation into the texture. I use composites of photographs laid down in an image editor (the GIMP in this case), then export the image as a PNG and paint over it with the clone and texture paint tools in Blender.
This texture is approximately 10% complete in this image. I hope to have more posts of this map as it develops.


Minotaur XII : Optimizing a Poly Count for Rendering

Optimization Testing

Early multi-angle, animation test with the Minotaur character.

Synopsis

The model and armor take about 4.5 min/frame to render. Using the rendering technique outlined in my previous post on skinning (“Minotaur XI”), the Minotaur in this animation is subdivided 3 times at render time, then targeted to a high-resolution sculpt which was baked at level 03 to displace the subdivided geometry (see “Minotaur XI: Proxy Model Setup” for details). The high-res sculpt is saved externally and linked into the current render file; this reduces the file size for this particular character from approximately 0.5GB to 100MB. Smaller file sizes help clear up unnecessary RAM usage, which has been reduced from 8GB (RAM) + 3.5GB (swap space) to a current usage of 4.1GB at render time and 2.1GB when loaded (Blender startup uses about 1.3GB of RAM for this setup). This reduction in RAM usage accounts for the reduced render time, previously 30 min/frame and now 4.5 min/frame. This makes a vast difference in pre-rendered animation when you consider that approximately 25 frames are required for just one second of animation: at 4.5 min/frame that is still roughly 112 minutes of rendering per second of animation, compared to over 12 hours at the old rate.

Testing Criteria

Only two separate passes, 1) character and 2) armor, were used in this render. No texture maps have been completed yet, as this render is mainly used to gather data on three main categories:

  • How geometry is being displaced at render-time over the entire mesh
  • How normal mapping affects the displaced geometry
  • And render timings on optimized models.

Armour Geometry Targeting Displacement and Normal Mapping

Several basic renders were also created testing the same criteria in the armor; the results follow.

The preceding image is of the Minotaur’s right shoulder guard. The lighting is particularly “unflattering” in these images, as certain areas of the geometry are highlighted for consideration. Any areas that indicate stretching of the normal map will need to be addressed with multiple UV layouts, but this will likely only happen at a much later stage, when the camera has been locked down for the final shots.

The following image is a shot of the right shoulder guard from the back of the character. It’s evident from this test that geometry displacement did not recess the polygons comprising the holes in the strap as deeply as in the sculpt data. Custom transparency maps will need to be used to compensate for this lack of displacement on the character’s armor straps.

The preceding image is of the lower body area, with the toga armour between the legs. The sculpt data on this geometry was exceptionally dense and is, as a result, a serious consideration with regards to detail loss during optimization. However, the geometry displaced considerably well when the Simple algorithm for calculating subdivisions was chosen (as opposed to the standard Catmull-Clark method). Subsequently, the toga armor straps only required a single level of subdivision (the lowest of all the character’s components). The toga is also planned to be a hairy surface in the final render, so a large amount of detail would have been wasted on more subdivisions.


Minotaur XI : Proxy Model Setup

Skinning is the process of attaching geometry to an underlying skeleton or armature. We then use the armature to pose the model and simplify the process of animating it. There were several issues to consider when skinning the Minotaur character, the main one being inconsistent Normals.
This post covers a method I discovered for correcting this issue without having to resort to a lattice/cage deformer, while still allowing for the (very important) proxy object method of rigging.

What’s so bad about inconsistent Normals?

As every polygon has two sides, the software you are using needs to know which side faces away from the mesh. The geometry’s Normals should be perpendicular to the surface of the polygon, pointing away from the outer surface of the mesh. This is important not only for skinning but also for sculpting.

📝 If you are feeling a bit lost at this point, you can brush up your knowledge of Normals in my free 3D course

1. Easy Fix

The best way to ensure consistent Normals in a 3D package is to Apply or Bake all transforms of the model, then select the model’s faces, normals or vertices (depending on your 3D software) and recalculate the direction of the selected components’ Normals.
In Blender this is really easy, as ctrl-n will automate this process for your selection. In other software you might need to turn on “show surface normals” to check that the object’s Normals are pointing in the correct direction and, if not, select the erroneous component and choose “flip normal”, which reverses the direction of the selected component’s Normal.
You might come across inconsistent Normals when sculpting with a tool that translates geometry away from or into the surface of the mesh. For example, a stroke along the surface of your mesh (with a sculpt tool such as “Draw”) could start out concave but end up convex. In this case the Normals are possibly inconsistent and need to be addressed.
If this problem is not addressed and the same model is used in skinning, that model is likely to suffer from poor deformations.
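
For reference, here is a minimal bpy sketch of this fix, assuming a hypothetical object name; in recent Blender versions the recalculation lives on Shift+N rather than ctrl-n, but the operator is the same.

import bpy

obj = bpy.data.objects["Minotaur"]  # hypothetical object name
bpy.context.view_layer.objects.active = obj
obj.select_set(True)

# Bake the object's transforms first
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

# Recalculate all Normals so they point outside the mesh
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')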

Problems Arising From The Modifier Stack

Although this problem might be trivial at times, and fixing it is simply a matter of performing the steps outlined above (see Easy Fix), sometimes that method will not be practical, as it does not respect an object’s modifier stack. If you are using one or more modifiers in your object’s stack, such as multi-resolution or another modifier that deforms the object at a component level, reversing the Normals of the base mesh will affect everything in the stack above it.


Secondly, if you are using a multi-resolution mesh you will probably know that working with a mesh at its highest level of subdivision is simply not practical. The problem is that in order to recalculate the object’s Normals you need to apply the modifiers in the object’s stack first, and recalculating the Normals of a high-resolution mesh baked down from a multi-resolution modifier is impractical and sometimes not even possible (for lack of system resources).

2. Normal and Displacement Maps Method

If you have come across this problem, one of the most common solutions seems to be to bake out a normal and a displacement map from the sculpt data. However, I found that this method produces results that are in some ways vastly different from renderings that include the highest level of sculpt data.
As you can see in the image below, the results are not completely unusable, but they warranted too much of a compromise on quality to be used as the sole solution.

The above image demonstrates the results of this method. A single subdivision of the sculpt data is baked into the mesh, meaning that the mesh being used for the final render is a realtime mesh. Since the multires has been applied/baked into the mesh, the Normals can be recalculated safely. The model then has a Normal map and a Displacement map (both previously baked from the highest level of sculpt data) applied to it. A Subdivision Surface modifier is then added to the model and its levels are increased for render time (the viewport model remains realtime).

Cons

As you can see the results are not too bad, but substantial detail is lost in the lower body and the outline of the model is exceptionally smooth in an undesirable way.

Pros

The main benefit of this method is that it is computationally optimal for realtime viewport interactivity and has a relatively short render time. If your character is not a focal point and nothing closer than a wide shot is required, you’ll probably be able to achieve useful results with this method.

3. Weight Transfer and Mesh Deform Method

One of the methods I am familiar with for dealing with this problem is to create two models: a realtime model and a high-resolution model generated from the sculpt data. The realtime model is skinned and animated; the high-resolution model is then bound to an identical (or the same) rig before render time, and the weights of the realtime model are transferred to the high-res model. As a result, the only process-intensive task performed on the high-resolution model is rendering. No manual production tasks need be performed on the high-resolution model, which would be impractical. This tool set has existed in Maya since version 6.5 (if I’m not mistaken).
I was expecting to use this method in Blender; however, it slowly became undeniably apparent that Blender does not (as of version 2.63) have a weight transfer option that matches the usability I’d previously been accustomed to.

The issue is being addressed by user 2d23d and you can read about it at this post on blenderartists.org
The addon looks very promising and I sincerely hope it continues to be developed, as at present it is unable to handle exceptionally high levels of geometry, which made it unusable as a solution in this particular situation.
Other methods are suggested in the above thread, such as the use of a “mesh deform” modifier, which I think was added to Blender during the Apricot project that resulted in the open source game Yo Frankie!
Unfortunately, the mesh deform modifier proved to be the most cumbersome and difficult method (particularly as weight transfer only takes a couple of minutes in Maya). Creating the deformation cage took a total of 10 hours, and the results were unfortunately still unusable. I would recommend that anybody attempting this method creates a cage from scratch rather than trying to convert a base mesh into a cage, especially if the model is not symmetrical or has a lot of sharp edges.

If I had been able to apply the mesh transfer method I would have ended up with a result similar to the one below.

The above image is a rendering of the actual sculpted model at its highest level of resolution. As you can imagine this is a mammoth-sized polygonal model; for example, the cracks in the skin are geometry, not a normal map. Looking at this rendering and the final render below, it’s difficult to tell them apart. The most notable difference is in the character’s tail, which you can see faintly behind the character’s left calf. The highest-level sculpt render (above) shows protrusions extending from the end of the tail; the same protrusions created from the realtime model (below) do not extend as far. This could be corrected by using a level 05 sculpt for the shrinkwrap target (method explained below) and increasing the Subdivision Surface levels at render time, but in this case it would not warrant the additional render time, as the end of the tail will mainly be covered in fur.

4. ShrinkWrap Method

The method that I finally settled on turned out to be exceptionally simple and takes only a few minutes to set up.

  1. Bake a Normal Map from the highest level of sculpt data (and a Displacement Map if desired).
  2. Create a Realtime model and a High Resolution Model to be used as a reference at render time.
    The high-res model does not need to be baked at the highest level of sculpt data. I chose level 3 of 5 because at level 3 all indentations in the mesh are visible, thereby breaking up the smooth-outline problem mentioned in the earlier render tests (see Normal and Displacement Maps Method).
  3. After ensuring that the Normals are facing the correct direction for both models (see Easy Fix), place the models on different layers and hide the high-res model so as to speed up viewport interactivity, then apply the Normal and Displacement maps to the realtime model as per usual.
  4. Select the realtime model and add a Subdivision Surface modifier (so as to increase the poly count at render time, as this will be the final model used for rendering), then add a Shrinkwrap modifier and target it to the high-res model. Order is important here, as the surface needs to be subdivided at render time before it can be shrinkwrapped to the high-res model.
  5. Bind/Parent the realtime model to an armature, with the Armature modifier completing the stack in the setup. Once again, order must be respected: this ensures that the armature deforms a representation of the high-res model (by means of the other two modifiers) at render time. A bpy sketch of this stack follows below.
    As you can see, using this method the high-res model remains hidden and out of the way, as it requires no manual, process-intensive work such as weight painting or binding (to cages or armatures), and no proxy mesh data transfer is required either. The longest part of this setup is baking the Normal and Displacement maps.
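
For reference, here is a rough bpy sketch of the modifier stack described above, assuming Blender 2.8x and hypothetical object names (“Realtime”, “HighRes”, “Rig”); the order in which the modifiers are appended mirrors the order required in the stack.

import bpy

rt = bpy.data.objects["Realtime"]  # hypothetical realtime model

# 1) Subdivide only at render time, keeping the viewport realtime
sub = rt.modifiers.new(name="Subsurf", type='SUBSURF')
sub.levels = 0
sub.render_levels = 3

# 2) Shrinkwrap the subdivided result onto the level-3 sculpt reference
sw = rt.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
sw.target = bpy.data.objects["HighRes"]

# 3) The Armature modifier completes the stack, deforming the result above
arm = rt.modifiers.new(name="Armature", type='ARMATURE')
arm.object = bpy.data.objects["Rig"]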

The above image is the final result. As you can see, the highest level of sculpt detail is retained; for example, the bumpy and folded knee area in this render can be compared with the initial rendering for the Normal and Displacement Maps Method, where the knees are extremely smooth. Also note that the model’s outline is no longer smooth, either.
Another benefit of using this method, as opposed to rendering high-resolution geometry from a viewport (such as in the weight transfer method), is reduced render times. This model takes approximately 5 minutes to render, compared to the image shown in the “Weight Transfer and Mesh Deform Method” section, which takes approximately 30 minutes to render (as my system has to resort to swap space in an effort to cache 8GB (RAM) + 3.6GB (swap space) of polygonal data).
PS. You might have noticed that the Minotaur suddenly has some color, though I’ve not mentioned anything about it until now. That’s because I’ve started texturing him, but as you can see this is not yet complete.
So you can expect a post on materials, textures and rigging soon!


Minotaur X : Sculpting Armour

This image is of the highest-level multi-res sculpt which, at level 5, is about 1.4 million polys for this model. Since this is just a test render, mainly to see what the character currently looks like when wearing the armour, the armour is rendered at a low resolution. The Minotaur’s lower-body draping clothing goes up to level 7; it is rendered here at level 4 in the front down to level 1 at the back. The depth-of-field effect that this creates is merely coincidental and no post production has been done on this image (as it is for testing purposes only).
The Minotaur sculpt data is about 95% complete in this picture, and attention still needs to be directed towards the hands and mouth area (where removal of symmetry has begun). The final stage of the Minotaur’s sculpt can only be completed when the armour sits flush against the character’s skin, which would subsequently cause indentations in the flesh and possible abrasions. The highest-level sculpt is not created with symmetry, but in order to emphasize the effect, lower-level sculpts often need to be readdressed so that the highest-level data does not look as if it has been painted onto the model, but appears solidly integrated into the character’s anatomy.


Minotaur IX : High Resolution Sculpting

These are some level 5 renders; this model won’t go beyond level 5. In these two renders the top half of the Minotaur is almost done and the bottom half is only sculpted to level 4.

It’s worth noting at this point that the technique used for sculpting this model is not typical, as sculpting is performed non-destructively. In other words, sculpting does not alter the model’s topology by means of Blender’s Dyntopo technology; instead, sculpting simply displaces existing vertices on the model.

This basically means sculpting on top of a Multiresolution Modifier.
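
In bpy terms, the setup amounts to something like the following minimal sketch, with a hypothetical object name; the operator subdivides the modifier’s sculpt levels one at a time.

import bpy

obj = bpy.data.objects["Minotaur"]  # hypothetical object name
bpy.context.view_layer.objects.active = obj

# Add a Multiresolution modifier and build up five sculpt levels
mr = obj.modifiers.new(name="Multires", type='MULTIRES')
for _ in range(5):
    bpy.ops.object.multires_subdivide(modifier=mr.name)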

Although this technique has, over the years, tended to fall out of favor next to the almighty Dyntopo, it still has its benefits. I would not say that either technique is better or worse than the other; they are simply different approaches to solving similar problems. As such, one will be beneficial in some areas where the other will be less so.

Dyntopo

One of the most alluring characteristics of Dyntopo is the natural approach it lends itself to. When using Dyntopo you don’t need to be concerned with technical matters such as topology, the propagation of high-level sculpt data to the base mesh and the implications of the inverse. You don’t need to concern yourself with the even distribution of subdivided polygons over your model and how topology can be used to influence the weighting of that distribution, thereby salvaging valuable system resources and forcing detail only into areas where it is necessary. Not at all, as Dyntopo will easily handle the matter of adding detail for you by subdividing polygons according to the abstraction of a “paintstroke”, which is, of course, exactly what you as an artist should expect.

Nonetheless, with all of these pros there must be some cons. Well, I wouldn’t go so far as to call them cons as much as caveats, of which the most obvious is the lack of consideration for continuous edge loops, quads and, in general, all matters relating to generating decent topology for an animatable character.

Ultimately, what this translates to is that if you wish to utilize your character for animation purposes, it would not be recommended that the model you have just spent all this time sculpting is itself physically included in that pipeline. Typically, what you would do in this case is retopologize the model and bake a normal map from the sculpt data. Sounds easy enough? Well, that depends on your skill level and on whether you have any fancy tools at your disposal.

Either way you look at it, topology certainly plays an important part in ensuring that characters intended for animation deform in a predictable manner. Whether you attain good topology through retopologizing (as previously discussed) or take the non-destructive approach outlined for the Minotaur character is a matter of what suits your needs at the end of the day.

Level 5 Sculpt data on a Multiresolution character

Non-destructive Sculpting

For the purposes of this character, I opted for the non-destructive approach because I wanted the benefit of not having to complete the sculpting process before being able to apply the first useful texture map to the character. I also like working on more than one area of interest at a time; this way I can experiment with sculpting and texture mapping simultaneously and immediately see how each aspect of character development influences another. These effects can easily be visualized through a software render, giving me an arguably more accurate representation of the final output. That, to me, can also be quite a natural approach to character development as a whole.

In the above image you can clearly see that this model has not been sculpted with Dyntopo, as even at the higher levels of sculpting the edge loops continue to follow the base model’s topology to a large degree. With this technique, the high-res sculpt data effectively forms part of the production pipeline.

LOD

Another benefit of this technique is that if you plan on exporting your character to a game engine, you will already have various Levels Of Detail (LOD) models, each with the same UV layout and vertex ordering. An LOD variant can simply be created by collapsing the model at a predetermined multiresolution level. For example, with this model I would effectively be able to create 5 LODs with a few simple clicks and retain complete control over each model’s topology, without having to rely on a plugin that automates retopologizing to save time. That can be quite a big plus in terms of game engines, particularly next-gen engines that can crunch through millions of polys per second and utilize what would typically have been film-quality assets.
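
As a rough sketch of that idea, deriving a single LOD might look something like the following; this is hedged, untested bpy for recent Blender versions, with hypothetical names, and the exact operators for collapsing levels can differ between versions.

import bpy

src = bpy.data.objects["Minotaur"]  # hypothetical source object
lod = src.copy()
lod.data = src.data.copy()          # duplicate the mesh so the source keeps all levels
lod.name = "Minotaur_LOD2"
bpy.context.collection.objects.link(lod)
bpy.context.view_layer.objects.active = lod

# Choose the multires level for this LOD, drop the levels above it,
# then apply the modifier to collapse the mesh at that level
mr = lod.modifiers["Multires"]
mr.levels = 2
mr.sculpt_levels = 2
bpy.ops.object.multires_higher_levels_delete(modifier=mr.name)
bpy.ops.object.modifier_apply(modifier=mr.name)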

As the vertex ordering is exactly the same on all of the LODs, you will also be able to use the same rig, weight maps and subsequently animations on all of them, but we’ll get into all of that a little later.

For now, I’m happy with the main sculpt and so I can move on to the next step. Of course, it’s worth noting that as the sculpting process is non-destructive, I don’t have to decide that I’m done with sculpting at this point in time. In fact, I could revisit sculpting at any stage in the future, including after the character has had UVs laid out and been texture mapped, rigged and animated.

As a result, you can see there are still a lot of benefits to using this sculpting technique. Although I am perhaps over-simplifying some of the complexities of this setup, you will nonetheless come to see that it is certainly achievable and definitely effective in production, as we discuss the setup in more detail in posts to follow.


Minotaur VIII : Sculpting and System Stability

Sculpting

I generally lay out UVs before sculpting commences…


This may not be an ideal solution in some cases, as vertex translation of the base mesh could occur when the higher-level sculpt data is propagated to the base level.

In other words, there is always the risk that high-level sculpting could inadvertently modify the basic form of the character. If you have a texture map applied using the current UV layout, stretching will occur as vertices are translated to match the new form when the base mesh snaps to the higher-level sculpt.
However, to avoid this side-effect I won’t apply a texture map to the model until the sculpt can be applied to the base-level mesh. So why not just do the UV layout after sculpting is done and you’re ready for texturing?
Well, you could, but I prefer having an idea of what my UV layout will look like before completing the sculpt; I can then bake out normal map tests while sculpting. This gives me a clear indication of whether my UVs are spaced out enough to provide adequate room for sculpt details at a reasonable texture size.


My UVs can then be tweaked accordingly before arriving at a final layout.

System stability is also a big issue for me.

Sometimes the systems I use to create models have a limited amount of RAM, which could be as low as 8GB. Although this might be adequate in many circumstances, it requires that I adopt a different approach for the level of skin detail needed for this model. Basically, it means having to cut the model up into smaller components, or “dice” the model, in order to sculpt micro-level details and maintain a workable, interactive 3D environment.

BACK LOWER BODY Level 2 Sculpt

Typically, this might mean having to separate the head, the torso, the lower body and so on into separate files, but I don’t like doing this because it can result in hard edges where the components of the model were separated. Instead, I keep the model unified and use the system’s resources for rendering the model in the realtime OpenGL viewport, which is particularly important for sculpting. In this case performance might be compromised at the highest multi-resolution (or subdivision) level, which can be counteracted by hiding enormous amounts of geometry and concentrating on only small portions of the mesh. Of course, this is the purpose of the highest subdivision level, to create micro-level details, so there is no problem with hiding two thirds of the model, thereby reducing the viewport vertex count from hundreds of millions to a million or less. This hiding (or masking) re-establishes realtime viewport interaction.

You might be aware that in order for such a high-level mesh to be usable it will need to be baked to produce a normal map. However, normal map baking is a product of rendering, and rendering requires additional RAM. The Minotaur at its highest subdivision level uses about 2GB to 3GB of total RAM (depending on OS configuration) to open and display the file; rendering the model in this state is not an option, as the amount of RAM required will increase by three to four times, which would exceed the available 8GB of RAM on the current system. At that point swap space (or virtual memory) will be used, making the system unstable as other software and services compete for available resources.

FRONT LOWER BODY Level 2 Sculpt
FRONT MINOTAUR Level 2 Sculpt

Keeping your 3D program’s RAM usage below 50% of your total system RAM will provide a much more stable environment, where crashing during a render (and wasting time in the process) can be avoided.

  • With the model’s UVs laid out, I am free to jump back into Edit mode once all highest-level sculpting is completed.
  • In Edit mode I can delete entire portions of the model, such as everything but the head, return to Object mode and render a normal map for the head without compromising system stability, as the amount of object data has been substantially reduced by dicing.
  • Since the UVs are already in place I can repeat this process for the other model components (arms, legs, torso etc.) until I have several high-resolution maps with the model’s components already in their correct positions.
  • As long as all the maps are rendered with the same image aspect ratio and pixel aspect ratio, the files can easily be imported into a single multi-layer document and exported as a single high-resolution normal map that retains the model’s micro-level details. This map can then be applied to the original model, which can then be collapsed to the base level for further production. A scripted sketch of the per-part bake follows below.
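
For a modern Blender, the per-part bake could be scripted along these lines; this is a rough sketch only, assuming Cycles, hypothetical names, and that the part’s material already has an image texture node selected as the bake target.

import bpy

bpy.context.scene.render.engine = 'CYCLES'

part = bpy.data.objects["Head"]  # hypothetical diced part
bpy.context.view_layer.objects.active = part
part.select_set(True)

# Bakes into the active image texture node of the part's material
bpy.ops.object.bake(type='NORMAL')

# Save the baked result next to the .blend file (names are hypothetical)
bpy.data.images["head_normal"].save_render(filepath="//head_normal.png")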

As you can see, using this method the model’s vertex order is retained, no additional vertices are added or merged (which would have modified the UV layout), and you have the benefit of working in a stable 3D environment.

MINOTAUR FRONT Sculpt Level 5
The final image is the start of the current highest-level sculpt. As you can see, veins are starting to appear at this level, and so too are pores, which will only become clearer in later renders.

Minotaur VII : Laying out UVs

UV Layout

Students I’ve worked with have often asked me why the term UV is used, and the answer I give them is the same answer I was given 15 years ago when I was learning about UVs for the first time: “The term UV is used because it is not XYZ.” As ambiguous as that may sound, it is probably the most fitting description of UVs I’ve ever heard.


As with the term XYZ in 3D, UVs also relate to dimensions, but as you can imagine U and V relate to only two dimensions. Although it is not a given rule, most 3D applications will use the U dimension to represent width and V to represent height. But U and V dimensions are not simply 1-to-1 pixel matches for the width and height of a bitmap image. They are in fact used to quantify “texture space”, which comprises two dimensions. As you are aware, textures are 2-dimensional bitmaps, procedural maps or other map types that are wrapped around a 3D object. UVs provide the crucial method by which texturing and mapping artists translate these 2-dimensional maps into a 3D environment.


In much the same way that a vertex represents a point on a 3D model in 3-dimensional XYZ space, a UV represents a point on a 3D model translated into 2-dimensional texture space.

When you view a bitmap used as a texture for a model in a 3D application’s UV editor, it will be forced to fit into a square editing area; this area is often referred to as 0 to 1 texture space. This simply means that the square area measures from a starting value of 0 (at the UV texture space origin) to a value of 1 in both the horizontal and vertical axes, in floating point numbers. As the amount being measured is the same in both dimensions (i.e. 0 to 1), the area forms a square shape and is as such referred to as 0 to 1 texture space. The bitmap that you create to use as a texture, and subsequently (with the aid of UVs) intend to wrap around your 3D model, must fit within this texture space.

Various 3D applications have different methods for achieving this, and as such it is important that you try to avoid letting your 3D software decide how to make your bitmap fit into this square space. The most obvious way to achieve this is to create bitmaps that are square; in other words, the bitmap’s width must match its height. Furthermore, in order to make efficient use of the machine on which these textures will be rendered (realtime or pre-rendered), the dimensions should be powers of 2, for example 16 x 16, 32 x 32, 64 x 64, 128 x 128, 256 x 256, 512 x 512, 1024 x 1024, 2048 x 2048, 4096 x 4096 etc. Using bitmaps with power-of-two dimensions will also be particularly useful for graphics displays that use the mipmap technique for displaying textures.
You can read more about UV mapping in my “Understanding UV’s” page.
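
Incidentally, a power of two is easy to verify programmatically; here is a tiny Python sketch of the check.

def is_power_of_two(n: int) -> bool:
    # A power of two has exactly one bit set, so n & (n - 1) clears it
    return n > 0 and (n & (n - 1)) == 0

assert is_power_of_two(1024)      # e.g. a 1024 x 1024 color map
assert not is_power_of_two(1000)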

UV unwrapping should only be attempted after the modeling phase is completed and the edge loops represent the model’s target topology, as the key to a good UV layout is creating a layout that:

  1. Matches the model’s form as closely as possible
  2. Does not have overlapping UVs
  3. Minimizes stretching
  4. Uses texture space as efficiently as possible

UV editing has come a long way since its first implementations. Certain areas of the mesh need to be isolated, as they require extra detail or form separate pieces of the same model. One of the major advancements in UV editing is automatic unwrapping; the first implementation I used was a 3D Studio MAX plugin called “Pelt”. As you can see, the map on the left of the above image is starting to resemble what the pelt of this Minotaur might look like. This is what is meant by “the UV layout should be as true to the original model’s form as possible”. From this layout we can tell, just by looking at it, that it comes from a character that has two legs, arms with fingers and a torso. These isolated UV components floating around in texture space are called UV shells.

The subtle red lines that flow across the Minotaur on the right represent the seams along which the UV shells are separated to form the flat shells you see on the left. In other words, the outer edges of the shells correspond to the red lines you see on the 3D model.

The image shows the UV shells laid out to match the model’s form as closely as possible, but the shells are outside of 0 to 1 texture space (represented here as the light grey square area). These shells then need to be proportionately scaled to utilize the 0 to 1 texture space as efficiently as possible.

The completed UV layout is in the final image. The shells are laid out to correspond to the appropriate side of the model (i.e. left ear on the left side, right hoof on the right side). Starting from the bottom are the ears and hooves, with the tail in the middle; the main shell consists of the limbs, torso and neck; the head is to the left (and is the second-largest island); and the rows of molars (lower jaw, upper jaw) sit on either side of the main shell. The buccal cavity (mouth interior) is in the upper left corner, with the major canines at the top of the layout and the tongue in the middle.
The red area on the right is empty space reserved for the eyes, which will eventually be joined to this mesh, but only after sculpting is completed; this is to reduce the amount of geometry that is subdivided when sculpting the face.


Minotaur VI : Head Modelling and Adaptation

The Head

Many adjustments were required to turn the original cow head into a more bull-looking head, amongst which flared nostrils, larger horns and droopy ears are the most obvious.

Finally for the head, I added some pitch-fork type teeth, with a particular pose in mind for the end shot.

This is still part of the modelling phase, as I’m working with a base-level mesh. Next comes UV unwrapping, then sculpting.