
How Did Blender’s Sculpting Technology Shape Up Over Time?

The Rise of Ngons

When Blender's polygon engine changed to include Ngons, as opposed to only supporting triangles and quadrangles (from Blender version 2.63, circa 2012), a new mesh system known as BMesh was introduced and brought with it several new features.

Besides the obvious advantages of creating organic and some inorganic model types faster, Ngons also brought an addition to Blender's sculpting toolset in the form of Dyntopo.

Ngons 101

Although Ngons are nothing new in the world of 3D, their utilization was for many years considered bad practice among many 3D professionals. This bad rap primarily stems from their unpredictable behaviour when it comes to deformation during animation. As you can imagine, this is not of primary concern when ngons are utilized within a sculpt workflow, as deformation of a mesh there typically serves only rudimentary purposes. There is generally no particular reason to retain transformation or deformation history when sculpting, as this would typically be applied or baked into the mesh once sculpting concludes. Many 3D applications will simply remove this historical data by default, so as to further prevent unpredictable results.

As it became more evident that Ngons could benefit 3D sculpting more than impair it, from around the mid-2000's it became more of a necessity for Blender's internal mesh editing system to be readdressed.

Building Meshes in the Early to Mid 2000’s

Prior to the massive uptake of 3D sculpting from as late as 2009, modelling was considered the de facto go-to for building meshes. There were various options for producing outcomes matching different purposes, such as box-modelling for organic surfaces, vertex pushing for real-time models or even NURBS modelling for industrial design, to name a few popular choices. However, the main consideration was that regardless of your choice in terms of modelling a mesh, that mesh would be intrinsically linked to the final outcome (in some way or another). Therein resided the temperament of the era, advocating the use of quadrangles and triangles as part of the mesh building process.

However, a new implementation of 3D sculpting was to change this fundamentally towards the latter part of the 2000's.

By decoupling the process of mesh building from the final outcome, a new genre of 3D artist was ushered in. Unconcerned with the technicalities of modelling or the limitations of the final outcome, sculpting became truly modular, and with this we saw many new (and some old) sculpting applications rise to the foreground, performing primarily one task: sculpting. 3D-Coat, Mudbox, ZBrush and of course Sculptris are some names that may come to mind. Of those, Sculptris (the only defunct sculpting software on that list) arguably accelerated sculpting into this new era by implementing the first stable, non-commercial form of dynamic tessellation in 2009, when it reached version 1.0.

Although Sculptris was acquired by Pixologic (the makers of ZBrush) shortly after reaching version 1.0, dynamic tessellation had already made its waves in the 3D community, and we were hungry for it in our beloved Blender.

A Match Made in Heaven

In a timeline almost parallel with Sculptris' introduction of dynamic tessellation to the 3D world, Blender developers were readdressing the limitations of the old mesh editing system. This system was slowly being replaced with BMesh, and early adopters were able to start testing and working with the new system, obtainable from GraphicAll, in 2011 (and possibly even earlier, in less stable implementations).

BMesh is the underlying mesh editing system used by Blender. It is essentially a programmatic representation of the topology that comprises meshes rendered in the 3D viewports. It has its own API (Application Programming Interface) which exposes very low-level, almost C-like data structures for mesh manipulation. Fortunately, if you are only concerned about making art with Blender, you'd never need to know any of that.
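For the curious, though, here is a minimal sketch (not from the original post) of that API used from the Python console of a 2.8-or-later Blender, creating a single five-sided ngon; the object and mesh names are illustrative:

import bpy
import bmesh

# Build one five-sided face with the BMesh API; the pre-2.63 mesh system
# could only have stored this as triangles and quadrangles.
bm = bmesh.new()
coords = [(0, 0, 0), (2, 0, 0), (2.5, 1.5, 0), (1, 2.5, 0), (-0.5, 1.5, 0)]
bm.faces.new([bm.verts.new(co) for co in coords])

# Write the BMesh data into a regular mesh and link it to the scene.
mesh = bpy.data.meshes.new("NgonMesh")
bm.to_mesh(mesh)
bm.free()
obj = bpy.data.objects.new("Ngon", mesh)
bpy.context.collection.objects.link(obj)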

Prior to the inclusion of BMesh, all geometry rendered in a 3D viewport in Blender could only be made up of four-sided quadrangles or three-sided triangles. In order to construct a complex 3D model, these simple "building blocks" must be arranged to represent the shape or form of the model. Of course, this is all relative to the degree of control you want when building your models, as much of the process of assembling the building blocks would typically be automated to some degree.

Destructive vs Non-destructive Sculpting

To clarify, BMesh is not responsible for making sculpting in Blender possible. However, we can certainly credit it with bringing to Blender a more natural and spontaneous approach to sculpting, affording artists the kind of creative freedom that alleviates many technical considerations.

Sculpting in Blender predated BMesh by approximately half a decade, having officially premiered in Blender 2.43 in early 2007. Fundamentally, the difference that BMesh brought to sculpting was a destructive approach to realtime retopologizing, much akin to dynamic tessellation. This, it can be said, was the inspiration for what was to become Dyntopo.

Blender's sculpting toolkit was, at the time of its inception, very much on par with that of its commercial counterparts, with Maya having introduced sculpting as early as Maya 3, released in early 2000. Although this timeline might lead you to think that Blender was slow to catch up, this was not in fact the case. Sculpting just never really took off in 3D until many years later, with the advent of dynamic topology. Prior to this, Blender's modelling toolkit pioneered an impressive array of intuitive tools, making it a formidable choice for any serious modeller. The point is simply that the focus back then was different: it was more about creating beautiful, minimalist topology quickly and efficiently. Arguably, Blender delivered in this respect more so than its proprietary counterparts.

Sure, sculpting was available, but as was the case with most high-end 3D suites of the time, to fulfil the promise of being able to create highly detailed models the trade-off was typically an exceptionally high-resolution mesh. This was often the result of geometry being subdivided in areas where it was simply not necessary; one of the side-effects of non-destructive sculpting at the time.

As you would expect, the great thing about sculpting with non-destructive techniques is that your model retains a history. That history is a hierarchy of the deformations applied to the model: sculpting the low-resolution model contributes to the outcomes of the medium-resolution model, which in turn contributes to the outcomes of the higher resolutions, and so forth. There are certainly still many benefits to this approach to sculpting; if you are not already familiar with them you can read more about it here.

Nonetheless, it was with the advent of Dyntopo that a new breed of 3D artist emerged.

Enter Dyntopo

Dyntopo, although it is a portmanteau of Dynamic Topology, is that and much more. Unlike typical, non-destructive sculpting, which only affects the shape of the model, a Dyntopo mesh can have both its shape and its topology influenced by certain sculpt brushes. Although the idea of shape or deformation refactoring topology might seem contradictory, given that topology is typically preserved under deformation, bear in mind that the topology here is effectively 'dynamic', and therein resides its namesake. In other words, the old topology is erased and rebuilt, ideally in realtime, with each brushstroke. It is in this refactoring of topology that the destructive nature of this sculpting technology resides, as well as its greatest asset: the ability to increase a model's resolution only where it is necessary.

Also worth noting is that preservation of topology is typically not an objective artists pursue when working with Dyntopo; in fact, the opposite is true. As previously noted, working with Dyntopo relieves artists of the technical requirements associated with modelling.

To create a mesh that deforms predictably for animation or other such circumstances from one created using Dyntopo, the mesh should first be retopologized.

Retrospectively working with Dyntopo and BMesh

Fortunately, if you are using a version of Blender greater than 2.62, then BMesh is not something you really need to be too concerned about. As BMesh was intended to replace the old mesh system, Blender versions 2.63 and greater use it by default; this means that geometry can consist of triangles, quadrangles, ngons or any combination thereof. All you need to do is use Blender's extensive set of polygon modelling and sculpting tools to create whichever type of polygon you desire and let Blender figure out the rest. The only time you need to be aware of which mesh system you are using is if you wish to import a model created with the new BMesh system into a version of Blender older than 2.63, for whatever that reason may be. In that case, your model might need to be quadrangulated or triangulated, then either saved to a legacy format or exported to a format such as OBJ. Of course, this will disregard all animation, rigging and various other Blender specifics.
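If you ever need to script that downgrade, something along these lines should work in versions of Blender that still ship the legacy OBJ exporter; the file path is just an example:

import bpy

obj = bpy.context.active_object

# Non-destructively triangulate any ngons so the geometry survives
# a round trip to a pre-BMesh version of Blender.
obj.modifiers.new(name="Triangulate", type='TRIANGULATE')

# Export to a neutral format such as OBJ (Blender specifics are discarded).
bpy.ops.export_scene.obj(filepath="/tmp/legacy_model.obj")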

Dyntopo on the other hand can be turned on and off when working with Blender’s sculpt tools. The results, as noted above, are that you will effectively be using a destructive or non-destructive sculpting technique, respectively.

Enabling Dyntopo in older Blender versions required clicking the Enable Dyntopo button. This was available from sculpt mode, as a tool option for applicable brushes.

In more recent versions of Blender the button has been replaced with a checkbox. Accessing Dyntopo remains relative to the sculpt mode interface, as well as a part of tool configurations. Over the years, improvements have been made to various aspects of the tool. Significantly, this includes a greater degree of control over how detailing in dynamically generated topology relates to camera proximity or even the size of the brush.
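As a rough scripted equivalent of that checkbox and its detail settings (assuming an object is active; property names as in recent Blender releases):

import bpy

# Enter Sculpt Mode and toggle dynamic topology on if it isn't already.
bpy.ops.object.mode_set(mode='SCULPT')
if not bpy.context.object.use_dynamic_topology_sculpting:
    bpy.ops.sculpt.dynamic_topology_toggle()

# Choose how detail is calculated: relative to view/camera distance,
# relative to brush size, or at a constant resolution.
sculpt = bpy.context.scene.tool_settings.sculpt
sculpt.detail_type_method = 'RELATIVE'  # or 'BRUSH', or 'CONSTANT'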

Retopologizing a Dyntopo Model

Not all models created with dynamic sculpting need to be retopologized. A few of the reasons you might consider retopologizing your model include:

  • You want to create a workable UV layout that can be used for texture painting/mapping.
  • Your model is intended for animation, especially character-based animation.
  • You want to export your model to another 3D application or games engine, where working with a high poly count could significantly impair performance.

Retopologizing is a modelling technique that is used to create a mesh with more suitable, usually realtime, topology. This is in contrast to the topology of a sculpted mesh and is a technique used for various reasons including the aforementioned.

There are many free and commercial tools for assisting with or automating the process of retopologizing; however, if you have a background in 3D modelling this will certainly be to your advantage. Sometimes the simplest options provide the best solutions. Bearing this in mind, in terms of retopologizing you could find all the solace you need in the snap to faces button. Once this button is enabled, it's just a simple case of extruding vertices to create the desired edge loops, then selecting edges and creating polygons. It's always very tempting to create a fully quadrangulated mesh through retopologizing; however, this is not always necessary, and in certain instances, when the mesh is intended for a static shot, using ngons in the model could speed up the modelling process and have no visible effect on the quality of the rendering.
Once a retopologized mesh is created, it's a simple case of creating a normal map if desired and/or using the Multires and Shrinkwrap modifiers to retain the sculpted details while maintaining a manageable, deformable model. If you are feeling a bit lost at this point, you can read more about it here.
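A hedged sketch of that modifier setup through Blender's Python API; "Retopo" and "Sculpt" are hypothetical object names standing in for your clean mesh and your Dyntopo sculpt:

import bpy

retopo = bpy.data.objects["Retopo"]
sculpt = bpy.data.objects["Sculpt"]

# Multires supplies subdivision levels for the recovered detail to live on.
retopo.modifiers.new(name="Multires", type='MULTIRES')
bpy.context.view_layer.objects.active = retopo
for _ in range(3):
    bpy.ops.object.multires_subdivide(modifier="Multires")

# Shrinkwrap pulls the subdivided surface onto the sculpt's detail.
sw = retopo.modifiers.new(name="Shrinkwrap", type='SHRINKWRAP')
sw.target = sculpt
sw.wrap_method = 'NEAREST_SURFACEPOINT'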

The Future of Sculpting in Blender

Blender's sculpting technologies are actively being developed, and sculptors have received a rapidly expanding toolset over the past decade that rivals any proprietary solution. One could even say that things have come full circle, in the sense that where the technology once deviated from a non-destructive workflow emphasising the use of the Multires modifier, we are now seeing new hybrid workflows that utilize aspects of both destructive and non-destructive techniques. But does that mean sculpting is becoming more of a technical than an artistic skill? Not in the least. In fact, the emphasis in the question might be misguided, given that many technical artists embrace such cross-overs and create not just art but the tools with which to make it.

If you want to learn more about some of the new and exciting tools added to Blender’s sculpting toolkit, keep an eye on the Blender Developers Blog.


How To Export Skeletal Animation From Blender To Unreal Engine

Now that you have Unreal Engine and Editor up and running, it’s time to start migrating your digital assets from Blender into the Editor. In this post we will discuss some of the prerequisites for building certain assets in Blender, then exporting and finally importing and setup within Unreal Editor.

Rigged and Animated Character from Blender Imported into Unreal Editor/Engine

Build the model in Blender

You are not limited to Blender in terms of building your models for Unreal Engine 4 (UE4); however, we will be focusing on Blender as it provides a great deal of versatility in terms of content creation. We will focus on an animated 3D character that will be exported via the FBX file format, then imported into UE4. When building assets such as this for a realtime engine, there are certain considerations to take note of. Working with an application such as Blender in conjunction with UE4 requires a certain degree of understanding as to what role each software plays in the production pipeline. In other words, there are certain tasks that Blender performs well which should be completed before a model is exported for UE4. Among these considerations are:

  • Topology
  • Non-overlapping UVs
  • Texture mapping
  • Rigging and Animation

Model Considerations

Topology: Build your models without non-manifold geometry and with edge loops placed with consideration for accurate deformation during animation. Although it will require some experience to get this right, you can start by learning the basics in this free course.

Non-overlapping UVs and Texturing: Your model should have its UVs laid out before it is exported from Blender. When unwrapping your model's UVs for export to a real-time application, it's generally best to maximize usage of the 0 to 1 UV texture space and keep your textures square, with power-of-two dimensions from 2^4 (16) up to 2^13 (8192) pixels. You can read more about understanding UVs here.
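As a quick sanity check on those dimensions, a small Python helper like this (not from the post) captures the rule: square, power of two, between 16 (2^4) and 8192 (2^13) pixels:

def is_valid_realtime_texture(width: int, height: int) -> bool:
    # Power-of-two test: a power of two has exactly one bit set.
    power_of_two = width > 0 and (width & (width - 1)) == 0
    return width == height and power_of_two and 16 <= width <= 8192

print(is_valid_realtime_texture(2048, 2048))  # True
print(is_valid_realtime_texture(1000, 1000))  # False, not a power of two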

Rigging and Animation: Blender has a vast toolset for creating simple to exceptionally complex animations, supporting the content creation process with advanced rigging features. Although you can create animation in UE4, and will certainly encounter times when this is necessary, mastering animation in Blender will significantly raise the quality of your final output. Learning animation is a time-consuming process; if it is not in your interests, simply utilize readily animated models.

Exporting a Model and Rig

Once you have your model prepped with all the above criteria checked, it’s time to begin the export process. In the interest of keeping things simple, we are not going to focus on export settings and the finer details involved therein. Simply, we will examine the process as an overview in order to equip you with the tools to get an animated character from Blender into Unreal Engine 4. Taking a research-based approach to learning, from that point, will help accelerate your project to the next level.

As mentioned in a previous post, when exporting a model, Location and Rotation transforms should be 0 for all axes.

Scale transforms should be at 1. This is applicable both to the model and to the armature.
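If you prefer to script this step, a minimal sketch via Blender's Python API might look like the following; "Character" and "Armature" are hypothetical object names:

import bpy

bpy.ops.object.select_all(action='DESELECT')
for name in ("Character", "Armature"):
    obj = bpy.data.objects[name]
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj

# Bake location and rotation into the data and reset scale to 1.
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)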

When choosing a unit of measurement such as Unit System, Metric or Imperial, it is considered best practice to remain consistent with the same system throughout the entire project. Doing so could contribute towards creating models that are imported at the correct scale, ultimately saving you some time.

You will not require any animation for this first step. All that is required is that the model is bound to the rig and adequately weight painted prior to exporting. With the rig selected, go to frame zero and enter the model's Rest Position if it is not already in rest pose. If there are any objects in the scene other than the model and rig, such as lights, cameras or other objects, simply delete them. Ideally, just the 3D model and rig should remain.

With your rig selected, enter Pose mode from the 3D Viewport (this is the interaction mode, not to be confused with the Pose Position toggle; the armature itself should remain in Rest Position) and rename the top-level parent bone, that is, the highest bone in the rig's hierarchy, to Root.
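Scripted, finding and renaming that top-level bone could look like this sketch; "Armature" is a hypothetical object name:

import bpy

rig = bpy.data.objects["Armature"]

# Bones with no parent sit at the top of the hierarchy.
top_level = [bone for bone in rig.data.bones if bone.parent is None]
if top_level:
    top_level[0].name = "Root"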

To reiterate the process: we will first be exporting just the model and rig. Once we are happy with how this has imported into UE4, we will then continue to export the animation.

As we are only interested in the Armature and Mesh, select those object types from Blender's FBX export options dialog. If you have already deleted all other object types as previously noted, then your object types will have been implicitly set.

In the interest of separating animation out of this file, if any already exists, it is recommended to uncheck the Bake Animation setting at first.

When exporting your media to UE4, keeping things as simple as possible is key. Therefore, if you are able to create a workflow that exports only Deform Bones successfully, checking the Only Deform Bones option can significantly reduce file sizes and speed up the import/export process.

Bear in mind that if you choose this option for the initial exporting of the character and rig, you should keep this option checked for exporting all remaining animations.
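Mirroring those settings through Blender's Python FBX exporter might look like this sketch; the file path is just an example:

import bpy

bpy.ops.export_scene.fbx(
    filepath="/tmp/character_rig.fbx",
    object_types={'ARMATURE', 'MESH'},  # armature and mesh only
    use_armature_deform_only=True,      # the Only Deform Bones option
    bake_anim=False,                    # leave Bake Animation unchecked for now
)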

You are now ready to export the file. Save the file with a unique name.

Project Setup

Launch Unreal Editor to configure a new project. If you have not installed UE4 yet start here. Select Game for the type of project and Blank for the Project Template.

Choose Project settings that match your system's capabilities as well as your intended output. You can always change these settings at a later stage. There is no need to include Starter Content, as we will be importing our own content. We won't be covering Blueprints in this particular post.

Level Setup

Once you have your project set up, the launcher will exit and the Editor will open.

The primary areas we will be focusing on within the editor are:

Content Browser: This is, by default, located at the bottom of the screen.

Viewport: The viewport gives a close representation of what your final output may look like. It's worth noting, however, that your game's performance should not be measured by that of the viewport.

Preview: A preview creates a much more accurate representation of your application's performance and appearance, and therefore the final outcome.

From the Content Browser click Add New and choose New Folder. A new folder will appear within the Content Browser. Rename the folder to your liking; it will be used to store the assets you import into your scene.

Import the Mesh and Rig

With the new folder you just created open, click the Import button within the Content Browser. A file dialog will appear, navigate to where you exported your files from Blender.

Select the FBX file you exported with no animation. This file should only contain the 3D mesh and Rig.

You will be presented with FBX Import Options within Unreal Editor. Ensure that Skeletal Mesh and Import Mesh are checked, Import Animations should be unchecked.

If you chose to export from Blender with Only Deform Bones selected, you may receive a warning within Unreal. This is not an error; it is simply a warning and should be expected given the noted export configuration. You are free to close the dialog, as it will have no impact on your workflow in this instance.

Several new assets will now appear within the Content Browser. Depending on how you exported your asset from Blender, you should as a minimum requirement see the 3D Mesh object, a skeleton (a representation of your Blender Armature), a material node and a physics node.

Import the Animation

At this point you are now ready to begin importing your animation.

From the Content Browser click the Import button again, this time select the FBX you exported with the animation.

It's worth noting at this point that, since you have already imported your 3D mesh, in the interest of keeping your assets small and manageable it is possible to delete the mesh from this file entirely and retain only the armature with animation. This is something you could have done in Blender before exporting the animation.

Either way you can always simply delete the duplicate 3D mesh from the Unreal Editor Content Browser.

When importing the animation Unreal should be able to detect the rig it applies to. It will, therefore, select the rig, as can be seen from within the FBX Import Options dialog under the Skeleton section. You can also click on the drop-down menu to manually select the appropriate rig.

Ensure that Import Animations is checked this time.

Again you will notice that several new assets have been created. To preview the animation double click the Animation Sequence asset.

To place your animated character into the Level, click and drag the Animation Sequence asset into the Viewport. You will be able to place it on the ground plane.

Create and Edit Materials

Now that you have imported your animated character into your Level, it's time to turn your attention to editing its materials.

When your 3D mesh was imported, Unreal would by default have created a new material for it.

Simply double click this material from the Content Browser and the Material Editor will open with the noted material loaded for editing.

Let’s now examine a quick material node setup.

Before we start setting up our material you will need some textures to apply to the material. Textures can either be created by hand, from photo references or by baking from Blender. How you choose to create your model’s textures will be determined by the type of outcome in terms of the aesthetic and technical requirements of your project. You can learn more about the texturing process in this free course.

To import your textures into the project simply follow the same procedure for importing any asset, from the Content Browser click Import.

Once your textures have been added to your project it is time to connect them to your model’s material. Double click the material in the Content Browser to open the Material Editor.

Depending on which Shading Model you have selected for your material, certain material inputs will be available and others not. If your model is an organic character, you could choose the Subsurface shading model, thereby making the Subsurface Material Input available.

To add the textures you imported to the material inputs, hold the "T" key on your keyboard and left-click in an empty area within the Material Editor's graph-like background. A new Texture Sample node is created.

In the Texture Base section of the material's Details panel (by default on the left-hand side of the screen), you will find a drop-down menu allowing you to select one of the textures you imported. If you are connecting the model's base color, choose your diffuse/color texture; if you are connecting a normal map, choose the Normal map texture you baked from Blender, and so forth. You can read more about the different texture types here.

With the correct texture selected, you are now ready to connect the texture to your material. There are a variety of possibilities in terms of how to make this connection, but for the sake of simplicity, let's assume you are attempting to connect a diffuse/color map to your material. In this event, click and drag from the Texture Sample RGB output and a connector will appear. Hover over the material's Base Color input and you will notice a green checkmark appear, indicating a valid connection. When you see this, stop dragging and the connection will be made.

Continue to connect and experiment with other textures in a similar way, until you have connected all of the textures you have imported. You can preview how the changes you make to your material affect your model as you edit. Simply click the Apply button in the Material Editor and position the editor such that you can see both it and the viewport simultaneously.

Once you are satisfied with your material, click Apply then Save in the Material Editor. You can now close the Material Editor, and your materials will be visible on the model within the current Level.

Exclusive offer for Learners

Jumpstart your learning with this exclusive limited offer for RabbitMacht learners. Get the elephant model used in this post with textures, rig and animation at a phenomenal 50% discount! Apply the coupon code below at checkout and get your Royalty Free model to use in your own commercial and non-commercial projects.

*This offer expires at the end of August 2020 and is only available for the first 10 customers.


How to Build Unreal Engine 4 on Ubuntu with Blender Assets

The original Unreal game

Although the source code for Unreal Engine has been made freely available in more recent times, creating and editing levels using the engine is nothing new. In fact, ever since the publication of Unreal, the first game to use the engine, in 1998, the intrinsic link between gaming and games development has been evident within the Epic Games ecosystem.

If you can recall that far back, one of the most impressive elements of Unreal was UnrealEd, a level editor distributed at no extra cost with the game. However, let's not forget the original game's beautiful lighting, compelling gameplay and that incredible fly-by intro sequence that sets the tone for all the mystery and intrigue that is to follow.

An early screenshot of UnrealEd.

The Unreal Engine has progressed significantly since the initial inception of UnrealEd. In this post we will be setting up Unreal Engine 4, currently the latest version of Unreal publicly available, soon to be replaced by version 5 in 2021. Although you could easily head on over to Epic's site and download a copy of the engine for easy installation, we will instead be building the engine from the source code. There are many benefits to building the engine from source, one being that we are not restricted to the operating systems supported on the official download page, such as Windows. We will be building the Unreal 4 Engine on Ubuntu 20.04 LTS Focal Fossa.

Ubuntu 20.04 LTS Focal Fossa

Technically speaking, it's not entirely necessary to use this version of Ubuntu, or Ubuntu at all. As previously noted, building from source removes such limitations, so you could use just about any modern Linux-based distro. However, for the sake of convenience and simplicity, we'll be using Ubuntu's most current LTS (Long Term Support) version.

In keeping with the Open Source ecosystem of this project, we will not be relying on proprietary Autodesk products (with all due respect). Instead, we will be utilizing the Open Source Blender 3D content creation suite to develop and import assets into our Unreal project. Of course, one of the main benefits of this approach is that getting up and running will not cost you a dime in terms of software purchases, licences or the like. However, it is worth noting that in the long term, if you do happen to make a game that starts making money, then depending on how you distribute it, UE4 licensing could result in some costs.

Now that we have an idea of where we're heading, let's jump right in and get started with building UE4 on Ubuntu.

Build from source

When building a project from source code, you will need to take into consideration the programming language(s) used to develop that code. Without this consideration, source code is nothing but ASCII text, similar to that in a word processing document. In order to make something useful of the source code, it needs to be converted into something that a machine can understand; this refers to the process of compiling, which forms a part of the build process.

Building an application from source code will generally involve several steps. Simply put, in the context of what we are doing, we will:

  • Resolve dependencies
  • Create online accounts
  • Clone the codebase
  • Configure resources
  • Make and install from binaries

Take your first steps into the increasingly popular field of 3D animation, with this free beginner course in modelling, texturing and character development fundamentals

Resolving Dependencies

Before we get started with accessing and downloading the source code there are a few additional tools we will need in order to ensure the build is successful. The process of installing the requirements needed to build an application from source can be referred to as resolving the software’s dependencies.

  • In terms of UE4, the source code is written in C++ and as a result, you will need the tools for compiling a C++ application.
  • As UE4 makes extensive use of hardware acceleration in order to render very sophisticated 3D graphics in realtime, you will need the latest drivers for your Nvidia or AMD graphics card. This is not as much a dependency as a software requirement, as this only becomes relevant after the application has been built.
  • Finally neither a dependency nor a software requirement, but arguably an essential component within any developers toolkit would be a solution for Version Control Systems (VCS). In this case, the recommendations would be Git and GitHub.

As previously mentioned, the operating system we will be using is Ubuntu 20.04 LTS. Although it might be possible to build UE4 on other versions of Ubuntu, the process is somewhat simplified on 20.04; on older releases you are more likely to run into trouble, such as graphics driver difficulties after successful builds on 16.04, or failed builds on 18.04.

build-essential

Make sure build-essential is installed on your system. You can do this through Synaptic or by entering the following command in a Terminal:

sudo apt-get install build-essential

Build-essential is a meta-package; that is to say, it is not essentially an application itself as much as a tool for installing other software. In this case, the other software is primarily for the purpose of building applications developed in C++. Among the software it installs are g++ (the GNU C++ compiler) and various development libraries, as well as make, a utility that assists with the compilation process. Of course, you could simply install these packages individually based on your skill level and proficiency with regard to C/C++ development. If you don't wish to install build-essential, you could also rely on the setup process noted below to resolve these dependencies within the build toolchain. Nonetheless, it's worth noting in the event you encounter difficulties or wish to expand your C++ development skills.

You might also have noticed from the above screenshot that Bumblebee is not installed on this system. After encountering issues with the Nvidia 660M discrete card not initializing on this system, removing Bumblebee resolved the issue. If you are utilizing an Nvidia card, it is recommended that you install the latest drivers supported by that card. As a result, the system noted for this installation uses the version 440 Nvidia drivers. It is also essential that you use drivers that support the Vulkan API. OpenGL and Direct3D alone will not suffice for UE4.

As you will need an Integrated Development Environment (IDE) for the purposes of developing code for your game or editing the UE4 source code, Visual Studio Code is an official recommendation. You can obtain it from Ubuntu's Software Center; it is also available as a Snap.

Nvidia 440 drivers

Git is essentially a source code management tool that you install locally on the system you will build UE4 on. You do not strictly need git to build UE4; however, there are some key benefits to using it, which we will look at shortly. To install git, you can again do so through the Terminal with the following command:

sudo apt-get install git

This will install all the requirements for working with git through the Command Line Interface (CLI). Git forms one of the main components of working with Version Control Systems (VCS), but to have an effective solution for software development you will also need a remote hosting and VCS solution. There are several providers that can fulfil this requirement in various paid or unpaid capacities; however, GitHub is the service provider we will be using. As a result, you will need to visit github.com and sign up for an account if you do not already have one. You will not require more than a basic free account. Once you have git and GitHub set up, you are ready to start retrieving the UE4 source code.

Finally, with regard to accounts, make sure you have signed up for an unrealengine.com account. This account is needed in order to access the UE4 source code.

Get the Source

Log into your Unreal Engine account; under your user profile you will find a link to your account settings.

Click the PERSONAL link. This section lets you customize characteristics about your account, as well as allows you to connect 3rd party software.

In the panel on the left is a button called CONNECTIONS.

Click this button to customize which 3rd party software your Unreal Engine account can access. In this section, you will find an option to link your GitHub account. Click the CONNECT button and follow the prompts to connect your Unreal Engine and GitHub accounts. Ultimately, you will need to become a collaborator on the Unreal Engine GitHub repository in order to obtain the source code. Don't worry if you don't have any experience with code; you will not be able to make changes to the main source code that easily. However, if your intent is to modify the source code, then you should probably consider forking the Unreal Engine source code and editing that. We'll discuss options for accessing the code shortly, but first you will need to finalize the connection between the accounts by visiting GitHub and clicking the Authorize EpicGames button.

Once you have authorized Epic Games access to your GitHub account, you should then receive an email inviting you to join the @EpicGames organization. This email will be sent to the address associated with your GitHub account, worth noting particularly if you are using different emails for your UE4 and GitHub accounts. Accept the invitation and you will finally have access to the Unreal Engine source code.

Now that you have access to the EpicGames repository as a collaborator, when you visit the repository at https://github.com/EpicGames/UnrealEngine you will notice that the UnrealEngine repo is now accessible.

Read through the Readme file, which has a friendly welcome message as well as some very useful links for learning how to use the engine.

You are now ready to Download, Clone or Fork the engine’s source code. Before proceeding there are a few suggestions that should be taken into consideration regarding these options.

  • Fork: This option is particularly useful if you wish to modify the source code. Choosing to create a Fork will replicate the repository within your personal GitHub account. You are then able to modify and change the source to your requirements. This does not download the code onto your local workstation and is therefore not recommended as a standalone solution if your interests are primarily in creating games or realtime applications with UE4.
  • Download: This might initially seem like the best option and although there is no particular reason not to utilize this option if your interests are primarily to get up and running with potentially as little overhead as possible, you will be defaulting on the benefits of the VCS provided.
    To download the source you will first need to choose a branch. By default, the release branch is selected as this is a well maintained and tested branch with updates regularly being added and merged. If you are somewhat familiar with VCS you might be tempted to download the master branch, however, this branch is primarily the master for development and therefore not subject to as much testing as release. You can also choose to download older versions of the engine and the developer teams also have their own branches prefixed with dev-branchname. However, these branches tend to be considered as bleeding-edge and should not be used for the purposes of production as much as for development.
  • Clone: Cloning the source perhaps provides the most versatility in terms of the three options as this will download the source onto your local machine, allow you to modify and test your changes locally as well as benefit from working with fully-fledged VCS enabling you to do things such as checkout other branches on the same system, modify, stage, commit and push remotely if you initially forked.
git clone https://github.com/EpicGames/UnrealEngine.git

To clone the repository navigate to the desired directory through a CLI and run the command above.

As you can see from this screenshot, cloning required approximately 3.41GiB of disc space. Depending on the speed of your internet connection, this could take a considerable amount of time to obtain. Although this might seem like a lot of data, be aware that it is only a small part of the requirements for building and running the engine; in order to use the engine you should have at least 100 gigabytes of free space on your drive. That is not a typo, and it does not accommodate the additional space requirements of your personal projects.

Compile Binaries

At this point you will have obtained a copy of the source code and installed all the required dependencies, and you are now ready to start setting up the engine and installing it on your system.

To clarify, the source code you have downloaded is in a human-readable format, which makes it possible to edit and maintain. You will now need to convert this code into a machine-readable format through the process of compilation; this will result in executable binaries, finally allowing you to launch and run the engine.

If you have followed all the steps to this point, compiling the binaries will be quite straightforward and will only require minimal knowledge of the Command Line Interface.

Open a terminal window and navigate to the root directory of your UE4 source code. You can do this by changing to the appropriate directory.

cd UnrealEngine

Then run the Setup shell script. This step might take some time depending on your system’s configuration as it will download a native toolchain that will ensure the UE4 codebase compiles and links successfully on your system.

./Setup.sh

Once setup has successfully completed, you will then need to generate the Unreal Engine project files.

./GenerateProjectFiles.sh

Finally, you will need to run make to build the binaries.

make

This step will require substantial system resources and time, depending on the machine you are compiling on. You will need at least 8GB of RAM and a multi-core processor (8 cores or more) to complete this step in an hour or less. Once this step has completed successfully and no errors have been returned, you are finally ready to launch the Unreal Engine 4 Editor. Congratulations!

Launching the Editor

Well done for getting this far; it's now time to launch the Editor and start your journey towards learning how to make your first game. Navigate to the directory where the binaries were created and launch the editor.

cd Engine/Binaries/Linux

Now launch the editor.

./UE4Editor

The first time you launch the editor it will need to compile shaders, which will take some time. Fortunately, this only needs to happen the first time the editor is launched.

At the Create New Project screen, select Games then Next, and you will be presented with the Select Template screen. Choose Blank Project, then make sure Blueprint is selected. No Starter Content is required, as we will be importing our own mesh from Blender.

Blueprint is Unreal Engine's visual scripting language. With it, you can create gameplay through a node-based system, using simple drag and drop techniques to create connections between nodes in the editor. This means that as a designer you are not necessarily limited to having to learn a programming language in order to develop your game.

Importing a Mesh into the Editor

Now that we have UE4 up and running on Ubuntu we’re going to use Blender to export a 3D asset that we will import into the editor. In the interest of keeping things simple for now, we are just going to discuss the basic concepts of importing a static mesh.

Within the Editor locate the Content Browser section in the bottom half of the screen and click the Add New button.

Click New Folder to create a new folder within the project, that will be used to store the imported asset.

You will need an asset to import into your project. There are various options for importing 3D meshes into UE4, although the official file format for this is usually considered to be FBX. The FBX file format supports animation as well as many other features, but we are just using it to familiarize ourselves with the import/export workflow.

In order to prepare your mesh for exporting from Blender, make sure that all transforms have been applied. In other words, if your mesh has any rotation, translation or scale attributes, these will need to be baked into the mesh so that your mesh transforms are all zero. If your object moves while trying to apply its transforms, you might need to enter the object's Edit mode, select the object's vertices and then translate the object back into the correct position. Essentially, your object's center of mass should be as close as possible to the origin without intersecting the ground plane.

Apply the object's transforms by selecting the object in the 3D viewport, then clicking the viewport object menu and applying all transforms from the Apply menu.

Next, export the mesh by choosing FBX from the export options.

The default settings are ideal for our purposes; the only option you need to change is to ensure that the Selected Objects option is checked in the FBX export settings. Save your exported mesh, and it's time to import it into the Unreal Editor.
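The scripted equivalent of that export, assuming your mesh is selected, would be roughly as follows; the path is just an example:

import bpy

# use_selection mirrors the Selected Objects checkbox in the export dialog.
bpy.ops.export_scene.fbx(filepath="/tmp/static_mesh.fbx", use_selection=True)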

From the Content Browser in the Unreal Editor click the Import button, to import the FBX file you just exported from Blender.

When importing this file again accept the default settings. If all goes well your assets will now appear in the new project folder you previously created.

In completing this final step you will have successfully imported your 3D model into the Unreal Editor. From the Content Browser, click and drag your asset into the editor viewport. Your model is now visible within the Unreal 4 3D rendering engine's editor. Click the Play button to preview what your asset will look like in-game.

With the techniques you have learned here, try importing other models as well as other types of assets, including texture maps.

Visit the Asset Store for More Realtime Models made in Blender

The elephant model used in this post, as well as other realtime models, are available from the RabbitMacht Store to use in your own projects.

This series of models comes with 4K textures, non-overlapping UVs and normal maps; they are rigged and available in high-resolution as well as realtime versions.


Minotaur XIV : FK and Pose Rigs

Forward Kinematics (FK) Rigging

If you have been following the other posts on the Minotaur, you might have noticed that this is not the first post that mentions rigging. In fact, the very first post for the Minotaur ("Minotaur") mentions rigging, and another post on skinning ("Minotaur XI") also mentions rigging. As you can imagine, rigging is not exclusively reserved for the purposes of animating a character; however, it is primarily used to fulfil that purpose.

  • In the first post on the Minotaur, a rig was used to pose large portions of the model by deforming the mesh in such a way as to avoid geometry intersecting itself, ultimately this rig was used as a tool for modelling.
  • In the post on skinning, a rig was used to pose the final version of the modelled character for a wide action shot. A rig like this is not suitable for animation and is intentionally kept simple; once the character is in the desired pose, the deformation is baked into the geometry and the rig is discarded.
    Deformations that would usually be done with weight painting (and would be visible for that particular pose) can then be added with lattice deformers, sculpting and modelling tools.

In the following posts, we will be discussing creating a Forward Kinematics (FK) Rig, a Controller Rig and skinning the Minotaur to the FK rig all for the purpose of creating animations.

Take your first steps into the increasingly popular field of 3D animation, with this free beginner course in modelling, texturing and character development fundamentals

The above video demonstrates the Minotaur attached to an FK rig and posed for a turntable. The render is of the realtime model (<7k polys) as seen from the 3D viewport, and has incomplete textures.
One of the most important technical qualities of a character set up for animation is having multiple levels of detail, of which at least one of the mid to low levels provides a close-to-final representation of the character's deformed geometry while still maintaining realtime playback in the 3D viewport.
Relying on non-realtime rendered viewport previews (also known as Playblasts) can significantly hinder the process of creating animations; maintaining a responsive 3D environment in which to create your animations is crucial.

The above video is a demonstration of the Minotaur moving from one pose to another, driven by an FK rig, while scrubbing the playback head in Blender. Note the character's realtime responsiveness to the timeline (at the bottom of the frame) as the mouse moves back and forth. This is true even in an older version of Blender.
When the Minotaur was bound to its armature, which comprised solely an FK rig, almost every bone in the rig was enabled for deformation. The controller rig is then built on top of the FK rig once weight painting has reached a reasonable representation of what the finished product will look like.

As mentioned, the FK rig is then posed in various ways to test how the mesh deforms; it is in these poses that weight painting occurs. Typically, it should not be necessary to paint weights on a character in its default/bind pose.

Although the term weight painting implies a superficial task related to the surface of the mesh, I prefer to think of weight painting as an extension of the modelling process. It is true that weight painting is performed only on the surface or "skin" of the mesh, but the objective of the task is to modify the volume of the skin, muscle tissue, flesh etc. that is affected by the bones that are rotated to create that deformation. As a result, we are simulating the deformation of a volume by means of a tool that addresses the surface of the model. This effectively results in displacing vertices by moving them towards or away from the area of deformation.

In Blender, we paint vertices red if we would like them to be more affected by a bone's deformation; in Maya, we would paint them white. Regardless of what software you use, the principle remains the same: we are effectively modelling what we would like the areas surrounding the armature's/skeleton's joints to look like when those bones are rotated into a position other than their rest positions. We do this to ensure that every time a bone is rotated into a particular position, the volume of geometry surrounding that bone and its joints will fold, wrinkle and deform the same way.
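In Blender's Python API, the scripted counterpart of painting a few vertices red is assigning weights to a bone's vertex group, roughly as sketched below; the object name, group name and vertex indices are all hypothetical:

import bpy

obj = bpy.data.objects["Minotaur"]
group = obj.vertex_groups["UpperArm"]

# Give these vertices full influence (weight 1.0) from the UpperArm bone.
group.add([10, 11, 12], 1.0, 'REPLACE')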

This character has two layers of FK bones, one used to deform the Minotaur and another used to deform the Minotaur's armour.

A rig like this is far too complex and cumbersome to animate with the simple rotations and single translatable parent that FK allows for, so in order to make the animation process more intuitive, a controller rig will be created for the FK rig.

The controller rig, as the name implies, is responsible for providing the controls used to transform the rotations of the FK rig. Sometimes the FK rig might be referred to as the base rig as it will be the most low-level rig and ultimately the tool that provides the link between the animation system and the rendered character.

One fundamental difference between the base rig and the controller rig is that the controller rig will primarily be used for the purposes of creating translation transforms, often by means of Inverse Kinematics (IK). This is in contrast to the base rig, which will primarily generate rotation transforms. The combination of these transforms results in the keyframe data that eventually makes the character animated.
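To make the distinction concrete, here is a hedged sketch in Blender's Python API: a rotation keyframed directly on an FK bone, and an IK constraint letting a translatable controller drive the same chain. All names are hypothetical:

import bpy

rig = bpy.data.objects["Armature"]
pbone = rig.pose.bones["forearm"]

# FK/base rig: rotate the bone and keyframe that rotation.
pbone.rotation_mode = 'XYZ'
pbone.rotation_euler.x = 0.5
pbone.keyframe_insert(data_path="rotation_euler", frame=1)

# Controller rig: an IK constraint translates a target bone's position
# into rotations for the chain beneath it.
ik = pbone.constraints.new('IK')
ik.target = rig
ik.subtarget = "hand_controller"  # a controller bone in the same armature
ik.chain_count = 2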

Another level of complexity making up this character’s rig will be that of a third rig for the purposes of dynamic secondary animation. However, this is something that will be addressed at a later stage.


Minotaur XIII : Texturing, Materials and UVs

Texturing

The Minotaur consists of several textures and materials that are composited together; this render depicts the current painting status of the Minotaur's color texture channel.

The above image took approximately 6 minutes to render using the Blender Internal (BI) renderer. This includes 3 Sub-Surface Scattering (SSS) passes (each with its own material), a color map, a normal map, and 28K polys subdivided 3 times at render time. Although there is still a lot of work that needs to be addressed, particularly regarding the specularity/reflections pass and completion of texture painting for the color and normal maps, I find the render times and quality from BI to be very reasonable and certainly something I am pleased to work with.

Materials and SSS Considerations

The main reason for having multiple materials composited for the Minotaur is so that three layers of Sub-Surface Scattering (SSS) can be addressed independently. These layers represent the epidermal skin layer, the subdermal skin layer, and the backscattering layer.

  • The Epidermal skin layer is the outermost layer of skin and as such will tend to be the layer that most prominently shows the current texture map that is being painted in the previous rendering.
  • The Subdermal layer is used to represent the fatty tissue that exists under the epidermal layer. Its texture map will differ most significantly in that it will also include the color of the Minotaur's veins. The material's primary function is to create the impression of the character having volume, as opposed to appearing like an empty shell.
  • The Back Scatter layer is the SSS layer that is most discernible as it adds a reddish tinge simulating blood vessels within the Minotaur’s body. This will be particularly noticeable in areas where the Minotaur’s volume is significantly less so that it is easier for light to pass through it, such as in his ears.

The following two images demonstrate the three materials composited together, with SSS properties. The first image is of a low resolution render followed by the same material and lighting setup on a high-resolution model.

Low Poly SSS composite
Hi-res SSS composite

As you can see, the SSS material properties affect the renderings with significant differences based on mesh density. This is yet another benefit of using actual geometry displacement rather than relying on normal or bump textures for surface variation (as would be the case in the first of the two renderings).
Fortunately, rendering high-density geometry that is subdivided at render time (see the Minotaur XI Skinning post for more details regarding setup) is a feasible option in Blender.

Multiple UV Layouts

As I mentioned in a previous post, this character will require multiple UV layouts so that more texture space can be allocated to certain areas of the model that will be featured in some close-up shots.
One of the downsides of multiple UV layouts at this stage is having to re-bake the character's normals with the new UV layout. Although this is not a problem for me, as I save my working files incrementally, it does mean having to revisit previously saved files, which some people might find problematic (depending on your file saving habits). As the character's UVs are adequately laid out, I will only need to add one additional UV layout.
The following image shows my current progress with the color texture channel. As you can see, I prefer to work on one side of the character as I lay down a base for the details that follow, then mirror and modify this base in an image editor before painting more detail and variation into the texture. I use composites of photographs laid down in an image editor (the GIMP in this case), then export the image as a PNG and paint over it with the clone and texture paint tools in Blender.
This texture is approximately 10% complete in this image. I hope to have more posts on this map as it develops.


Minotaur XII : Optimizing a Poly Count for Rendering

Optimization Testing

Early multi-angle, animation test with the Minotaur character.

Synopsis

The model and armor take about 4.5min/frame to render. Using the rendering technique outlined in my previous post on skinning ("Minotaur XI"), the Minotaur in this animation is subdivided 3 times at render time, then targeted to a high-resolution sculpt which was baked at level 03 to displace the subdivided geometry (see "Minotaur XI: Proxy Model Setup" for details). The high-res sculpt is saved externally and linked to the current render file; this reduces the file size for this particular character from approximately 0.5GB to 100MB. Smaller file sizes help clear up unnecessary RAM usage, which has been reduced from 8GB (RAM) + 3.5GB (swap space) to a current usage of 4.1GB at render time and 2.1GB when loaded (Blender startup uses about 1.3GB of RAM for this setup). This reduction in RAM usage accounts for the reduced render time, previously 30min/frame and now down to 4.5min/frame. This makes a vast difference in pre-rendered animation when you consider that approximately 25 frames are required to account for only a second of animation.

Testing Criteria

Only two separate passes, 1) character and 2) armor, were used in this render. No texture maps have been completed yet, as this render is mainly used to gather data on three main categories:

  • How geometry is being displaced at render-time over the entire mesh
  • How normal mapping affects the displaced geometry
  • And render timings on optimized models.

Armour Geometry Targeting Displacement and Normal Mapping

Several basic renders were also created testing the same criteria in the armor, the results follow.

The preceding image is of the Minotaur's right shoulder guard. The lighting is particularly "unflattering" in these images, as certain areas of the geometry are highlighted for consideration. Any areas that indicate stretching of the normal map will need to be addressed with multiple UV layouts, but this will likely only happen at a much later stage, when the camera has been locked down for the final shots.

The following image is a shot of the right shoulder guard from the back of the character. It’s evident from this test that geometry displacement did not recess the polygons comprising the holes in the strap adequately, as was the case in the sculpt data. Custom transparency maps will need to be used to compensate for this lack of displacement on the character’s armor straps.

The preceding image is of the lower body area with the toga armour between the legs. The sculpt data on this geometry is exceptionally dense and, as a result, detail loss during optimization is a serious consideration. However, the geometry displaced remarkably well when the Simple algorithm for calculating subdivisions was chosen (as opposed to the standard Catmull-Clark method). Subsequently, the toga armor straps only required a single level of subdivision (the lowest of all the character's components). The toga is also planned to be a hairy surface in the final render, so a large amount of detail would have been wasted on more subdivisions.
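Switching the subdivision algorithm amounts to a single setting on the Subdivision Surface modifier. A minimal sketch of the change against the 2.6x Python API, assuming the strap object is active and its modifier carries the default "Subsurf" name:

    import bpy

    # "Subsurf" is Blender's default modifier name and may differ
    # in your own file.
    mod = bpy.context.object.modifiers["Subsurf"]
    mod.subdivision_type = 'SIMPLE'  # as opposed to 'CATMULL_CLARK'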


Minotaur XI : Proxy Model Setup

Skinning is the process of attaching geometry to an underlying skeleton or armature. We then use the armature to pose the model and to simplify the process of animating it. There were several issues to consider when skinning the Minotaur character, the main one being inconsistent Normals.
This post covers a method I discovered for correcting this issue without resorting to a lattice/cage deformer, while still allowing for the (very important) proxy object method of rigging.

What’s so bad about inconsistent Normals?

As every polygon has two sides, the software you are using needs to know which of those two sides points away from the mesh. A polygon's Normal should be perpendicular to its surface and point away from the outer surface of the mesh. This is not only important for skinning but also for sculpting.
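As a quick illustration of "perpendicular to its surface": a triangle's normal can be computed as the normalized cross product of two of its edge vectors. A minimal sketch using Blender's mathutils module (the vertex values are arbitrary):

    from mathutils import Vector

    # Three vertices of a triangle (arbitrary example values)
    v0, v1, v2 = Vector((0, 0, 0)), Vector((1, 0, 0)), Vector((0, 1, 0))

    # The cross product of two edge vectors is perpendicular to the face;
    # reversing the edge order flips the normal's direction.
    normal = (v1 - v0).cross(v2 - v0).normalized()
    print(normal)  # Vector((0.0, 0.0, 1.0))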

📝 If you are feeling a bit lost at this point, you can brush up your knowledge on Normals in my free 3D course

1. Easy Fix

The best way to ensure consistent Normals in a 3D package is to Apply or Bake all transforms of the model, then select the model's faces, normals or vertices (depending on your 3D software) and recalculate the direction of the selected components' Normals.
In Blender this is really easy, as Ctrl-N will automate this process for your selection. In other software you might need to turn on "show surface normals" to check that the object's Normals are pointing in the correct direction and, if not, select the erroneous components and choose "flip normal". This will reverse the direction of the selected component's normal.
You might come across inconsistent Normals when sculpting with a tool that translates geometry away from or into the surface of the mesh. For example, a stroke along the surface of your mesh (with a sculpt tool such as "Draw") could start out concave but end up convex. In this case the Normals are possibly inconsistent and need to be addressed.
If this problem is not addressed and the same model is used in skinning, that model is likely to suffer from poor deformations.
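For those who prefer scripting, the Easy Fix can be expressed in a few lines. A sketch against the 2.6x Python API, assuming the model is the active object and you are starting in object mode:

    import bpy

    # Apply (bake) the object's transforms first
    bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

    # Recalculate all Normals so they point away from the mesh
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.normals_make_consistent(inside=False)  # what Ctrl-N does
    bpy.ops.object.mode_set(mode='OBJECT')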

Problems Arising From The Modifier Stack

Although this problem might be trivial at times and fixing it is simply a matter of performing the steps outlined above (see Easy Fix), sometimes the above method will not be practical, as it does not respect an object's modifier stack. If you are using one or more modifiers in your object's stack, such as Multiresolution or another modifier that deforms the object at a component level, reversing the Normals of the base mesh will affect everything in the stack above it.


Secondly, if you are using a multi-resolution mesh you will probably know that working with a mesh at its highest level of subdivision is simply not practical. The problem is that in order to recalculate the object's Normals you first need to bake the modifiers into the object's mesh data, and recalculating the Normals of a high resolution mesh baked from a Multiresolution modifier is impractical and sometimes not even possible (for lack of system resources).

2. Normal and Displacement Maps Method

If you have come across this problem, one of the most common solutions seems to be to bake out a normal and a displacement map from the sculpt data. However, I found that this method produces results that are in some ways vastly different from renderings that include the highest level of sculpt data.
As you can see in the image below, the results are not completely unusable, but they demand too much of a compromise on quality to be used as a sole solution.

The above image demonstrates the results of this method. A single subdivision from the sculpt data is baked into the mesh, meaning that the mesh used for the final render is a realtime mesh. Since the multires has been applied/baked into the mesh, the Normals can be recalculated safely. The model then has a Normal map and a Displacement map (both previously baked from the highest level of sculpt data) applied to it. A Subdivision Surface modifier is then added to the model and its levels are increased for render time (the viewport model remains realtime).
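The viewport/render split mentioned above comes from the two separate level settings on the Subdivision Surface modifier. A minimal sketch, assuming the realtime model is the active object:

    import bpy

    obj = bpy.context.object
    mod = obj.modifiers.new("Subsurf", 'SUBSURF')
    mod.levels = 0         # the viewport keeps the realtime resolution
    mod.render_levels = 3  # only the renderer sees the subdivided mesh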

Cons

As you can see the results are not too bad, but substantial detail is lost in the lower body and the outline of the model is exceptionally smooth in an undesirable way.

Pro’s

The main benefit of this method is that it computes optimally for realtime viewport interactivity and has a relatively short render time. If your character is not a focal point and nothing closer than a wide shot is required, you'll probably be able to achieve useful results from this method.

3. Weight Transfer and Mesh Deform Method

One of the methods I am familiar with for dealing with this problem is to create two models: a realtime model and a high resolution model from the sculpt data. The realtime model is then skinned and animated; before render time, the high resolution model is bound to the same (or an identical) rig and the weights of the realtime model are transferred to it. As a result, the only process intensive task performed on the high resolution model is rendering. No manual production tasks need be performed on the high resolution model, which would be impractical. This tool set has existed in Maya since version 6.5 (if I'm not mistaken).
I was expecting to use this method in Blender; however, it slowly became undeniably apparent that Blender does not (as of version 2.63) have a weight transfer option matching the usability I'd previously been accustomed to.

The issue is being addressed by user 2d23d and you can read about it at this post on blenderartists.org
The addon looks very promising and I sincerely hope it continues to be developed, as at present it is unable to handle exceptionally high levels of geometry, which made it unusable as a solution in this particular situation.
Other methods are suggested in the above thread, such as the use of a "mesh deform" modifier, which I think was added to Blender during the Apricot project that resulted in the open source game Yo Frankie!
Unfortunately, the mesh deform modifier proved to be the most cumbersome and difficult method (particularly as weight transfer takes only a couple of minutes in Maya). Creating the deformation cage took a total of 10 hours, and the results were unfortunately still unusable. I would recommend that anybody attempting this method creates a cage from scratch and does not try to convert a base mesh into a cage, especially if the model is not symmetrical or has a lot of sharp edges.

If I had been able to apply the weight transfer method I would have ended up with a result similar to the one below.

The above image is a rendering of the actual sculpted model at its highest level of resolution. As you can imagine, this is a mammoth-sized polygonal model; for example, the cracks in the skin are geometry and not a normal map. Looking at this rendering and the final render below, it's difficult to tell them apart. The most notable difference is in the character's tail, which you can see faintly behind the character's left calf. The highest level sculpt render (above) shows protrusions extending from the end of the tail; the same protrusions created from the realtime model (below) do not extend as far. This could be corrected by using a level 05 sculpt for the shrinkwrap target (method explained below) and increasing the Subdivision Surface levels at render time, but in this case it would not warrant the additional time added onto the render, as the end of the tail will mainly be covered in fur.

4. ShrinkWrap Method

The method that I finally settled on turned out to be exceptionally simple and takes only a few minutes to set up; a script sketch of the finished modifier stack follows the steps below.

  1. Bake a Normal Map from the highest level of sculpt data (and a Displacement Map if desired).
  2. Create a Realtime model and a High Resolution Model to be used as a reference at render time.
    The High res model does not need to be baked at the highest level of sculpt data. I chose level 3 of 5 because at level 3 all indentations in the mesh are visible, thereby breaking up the smooth outline problem mentioned in earlier render tests (see Normal and Displacement Maps Method).
  3. After ensuring that the Normals are facing the correct direction for both models (see Easy Fix), place the models on different layers and hide the high res model to speed up viewport interactivity, then apply the Normal and Displacement maps to the realtime model as usual.
  4. Select the realtime model and add a Subdivision Surface modifier (to increase the poly count at render time, as this will be the final model used for rendering), then add a Shrinkwrap modifier and target it to the high res model. Order is important here, as the surface needs to be subdivided at render time before it can be shrinkwrapped to the high res model.
  5. Bind/Parent the realtime model to an armature, with the armature modifier completing the stack. Once again, order must be respected; this ensures that the armature deforms a representation of the high res model (by means of the other two modifiers) at render time.
    As you can see, with this method the high res model remains hidden and out of the way: it requires no manual, process intensive work such as weight painting or binding (to cages or armatures), and no proxy mesh data transfer is required either. The longest part of this setup is baking the Normal and Displacement maps.
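For clarity, steps 4 and 5 amount to building the following modifier stack, in this order. This is only a sketch against the 2.6x Python API, and the object names are hypothetical placeholders:

    import bpy

    realtime = bpy.data.objects["Minotaur_realtime"]  # hypothetical names
    highres = bpy.data.objects["Minotaur_highres"]
    rig = bpy.data.objects["Minotaur_armature"]

    # 1) Subdivide at render time only
    sub = realtime.modifiers.new("Subsurf", 'SUBSURF')
    sub.levels = 0
    sub.render_levels = 3

    # 2) Snap the subdivided surface onto the level 3 sculpt
    wrap = realtime.modifiers.new("Shrinkwrap", 'SHRINKWRAP')
    wrap.target = highres

    # 3) The armature deforms the result of the two modifiers above
    arm = realtime.modifiers.new("Armature", 'ARMATURE')
    arm.object = rig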

The image above is the final result. As you can see, the highest level of sculpt detail is retained; for example, the bumpy and folded knee area in this render can be compared with the initial rendering for the Normal and Displacement Maps Method, where the knees are extremely smooth. Also note that the model's outline is no longer smooth, either.
Another benefit of using this method, as opposed to rendering high resolution geometry from a viewport (such as in the weight transfer method), is reduced render times. This model takes approximately 5 minutes to render, compared to the image shown in the "Weight Transfer and Mesh Deform Method" section, which takes approximately 30 minutes to render (as my system has to resort to swap space in an effort to cache 8GB (RAM) + 3.6GB (Swap Space) of polygonal data).
PS. You might have noticed that the Minotaur suddenly has some color, though I've not mentioned anything about it until now. That's because I've started texturing him, but as you can see this is not yet completed.
So you can expect a post on materials, textures and rigging soon!


Minotaur X : Sculpting Armour

This image is of the highest level multi-res sculpt which, at level 5, is about 1.4 million polys for this model. Since this is just a test render, mainly to see what the character currently looks like when wearing the armour, the armour is rendered at a low resolution. The Minotaur's lower-body draping clothing goes up to level 7; it is rendered here at level 4 in the front down to level 1 at the back. The depth of field effect that this creates is merely coincidental, and no post production is done on this image (as it is for testing purposes only).
The Minotaur sculpt data is about 95% complete in this picture; attention still needs to be directed towards the hands and the mouth area (where removal of symmetry has begun). The final stage of the Minotaur's sculpt can only be completed when the armour sits flush against the character's skin, which would subsequently cause indentations in the flesh and possible abrasions. The highest level sculpt is not created with symmetry, but in order to emphasize the effect, lower level sculpts often need to be readdressed so that the highest level data does not look like it has been painted onto the model, but rather is solidly integrated into the character's anatomy.


Minotaur IX : High Resolution Sculpting

These are some level 5 renders. This model won’t go beyond level 5. In these two renders the top half of the Minotaur is almost done and the bottom is only sculpted to level 4.

It’s worth noting at this point that the technique used for sculpting this model is not typical, as sculpting is performed non-destructively. In other words, sculpting does not alter the model’s topology by means of Blender’s Dyntopo technology; instead, sculpting is performed by simply displacing existing vertices on the model.

This basically means sculpting on top of a Multiresolution Modifier.
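In practical terms the setup is nothing more exotic than a Multiresolution modifier subdivided a few times before sculpting begins. A minimal sketch, assuming the base mesh is the active object:

    import bpy

    obj = bpy.context.object
    obj.modifiers.new("Multires", 'MULTIRES')

    # Each call adds one subdivision level; sculpting then displaces
    # the vertices of whichever level is currently selected.
    for _ in range(5):
        bpy.ops.object.multires_subdivide(modifier="Multires")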

Although this technique has, over the years, tended to fall out of favor against the almighty Dyntopo, it still has its benefits. I would not say that either technique is better or worse than the other; they are simply different approaches to solving similar problems. As such, one will be beneficial in some areas where the other will be less so.

Dyntopo

One of the most alluring characteristics of Dyntopo is the natural approach it lends itself to. When using Dyntopo you don’t need to be concerned with technical matters such as topology, the propagation of high level sculpt data to the base mesh, or the implications of the inverse. You don’t need to concern yourself with the even distribution of subdivided polygons over your model and how topology can be used to influence the weighting of that distribution, thereby salvaging valuable system resources and forcing detail only into areas where it is necessary. Not at all: Dyntopo handles adding detail for you by subdividing polygons along the abstraction of a “paintstroke”, which is of course exactly what you, as an artist, should expect.

Nonetheless, with all of these pros there must be some cons. Well, I wouldn’t go so far as to call them cons as much as caveats, of which the most obvious is the lack of consideration for continuous edge loops, quads and, in general, all matters relating to generating decent topology for an animatable character.

Ultimately, what this translates to is that if you wish to utilize your character for animation purposes, it is not recommended that the model (you have just spent all this time sculpting) is itself physically included in that pipeline. Typically, what you would do in this case is retopologize the model and bake a normal map from the sculpt data. Sounds easy enough? Well, that depends on your skill level and on whether you have any fancy tools at your disposal.

Either way you look at it, topology certainly plays an important part in ensuring that characters intended for animation deform in a predictable manner. Whether you attain good topology through retopologizing (as previously discussed) or take the non-destructive approach outlined for the Minotaur character is, at the end of the day, a matter of what suits your needs.

Level 5 Sculpt data on a Multiresolution character

Non-destructive Sculpting

For the purposes of this character, I opted for the non-destructive approach simply because I wanted the benefit of not having to complete the sculpting process before being able to apply the first useful texture map to the character. I also like working on more than one area of interest at a time; this way I can experiment with sculpting and texture mapping simultaneously and immediately see how each aspect of character development influences another. These effects can easily be visualized through a software render, which gives me an arguably more accurate representation of the final output. That, to me, can also be quite a natural approach to character development as a whole.

In the above image you can clearly see that this model has not been sculpted with Dyntopo, as even at the higher levels of sculpting the edge loops continue to follow the base model’s topology to a large degree. With this technique, the high res sculpted data effectively forms part of the production pipeline.

LOD

Another benefit of this technique is that if you plan on exporting your character to a game engine, you will already have various Levels Of Detail (LOD) models, each with the same UV layout and vertex ordering. An LOD variant can simply be created by collapsing the model at a predetermined multiresolution level. For example, from this model I would effectively be able to create 5 LODs with a few simple clicks and retain complete control over each model’s topology, without having to rely on a time-saving plugin that automates retopologizing. That can be quite a big plus in terms of game engines, particularly for next gen engines that can crunch through millions of polys per second and utilize what would typically have been film quality assets.

As the vertex ordering is exactly the same on all of the LODs, you will also be able to use the same rig, weight maps and, subsequently, animations on all of them, but we’ll get into all of that a little later.
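A single LOD variant could, for instance, be generated along these lines. A sketch under the assumption that the character carries a Multiresolution modifier named "Multires" and is the active object:

    import bpy

    # Duplicate the character so the original sculpt stays intact
    bpy.ops.object.duplicate()
    lod = bpy.context.object  # the duplicate becomes the active object

    # Collapse the duplicate at a chosen level, e.g. level 2 of 5
    lod.modifiers["Multires"].levels = 2
    bpy.ops.object.modifier_apply(modifier="Multires")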

For now, I’m happy with the main sculpt and so I can move on to the next step. Of course, it’s worth noting that as the sculpting process is non-destructive, I don’t have to decide that I’m done with sculpting at this point in time. In fact, I could revisit sculpting at any stage in the future, including after the character has had UVs laid out and been texture mapped, rigged and animated.

As a result, you can see there are still a lot of benefits to using this sculpting technique. Although I am perhaps over-simplifying some of the complexities of this setup, you will nonetheless come to see that it is certainly achievable and definitely effective in production, as we discuss the setup in more detail in posts to follow.


Minotaur VIII : Sculpting and System Stability

Sculpting

I generally lay out UVs before sculpting commences…


This may not be an ideal solution in some cases, as vertex translation of the base mesh could occur when the higher level sculpt data is propagated to the base level.

In other words, there is always the risk that high level sculpting could inadvertently modify the basic form of the character. If a texture map is applied using the current UV layout, stretching will occur as vertices are translated to match the new form when the base mesh snaps to the higher level sculpt.
To avoid this side-effect, I won’t apply a texture map to the model until the sculpt can be applied to the base level mesh. So why not just do the UV layout after sculpting is done and you’re ready for texturing?
Well, you could, but I prefer having an idea of what my UV layout will look like before completing the sculpt; I can then bake out normal map tests while sculpting. This gives me a clear indication of whether my UVs are spaced out enough to provide adequate room for sculpt details at a reasonable texture size.


My UVs can then be tweaked accordingly before arriving at a final layout.

System stability is also a big issue for me.

Sometimes the systems I use to create models have a limited amount of RAM, which could be as low as 8GB. Although this might be adequate in many circumstances, it will require that I adopt a different approach for the level of skin detail needed on this model. Basically, it will mean having to cut the model up into smaller components, or “dice” the model, in order to sculpt micro-level details and maintain a workable, interactive 3D environment.

BACK LOWER BODY Level 2 Sculpt

Typically, this might mean having to separate the head, the torso, the lower body etc. into separate files, but I don’t like doing this because it could result in hard edges in the areas where the components of the model were separated. Instead I keep the model unified and use the system’s resources for rendering the model in the realtime OpenGL viewport, which is particularly important for sculpting. Performance might be compromised at the highest multi-resolution (or subdivision) level, but this can be counteracted by hiding enormous amounts of geometry and concentrating only on small portions of the mesh. Of course, this is the purpose of the highest subdivision level, to create micro-level details, so there is no problem with hiding two thirds of a model, thereby reducing the viewport vertex count from hundreds of millions to 1 million or less. This hiding (or masking) re-establishes realtime viewport interaction.

You might be aware that in order for such a high level mesh to be usable, it will need to be baked to produce a normal map. However, normal map baking is a product of rendering, and rendering requires additional RAM. The Minotaur at its highest subdivision level uses about 2GB to 3GB of total RAM (depending on OS configuration) to open and display the file; rendering the model in this state is not an option, as the amount of RAM required will increase to three to four times that amount, which would exceed the available 8GB of RAM on the current system. At that point swap space (or virtual memory) will be used, making the system unstable as other software and services try to compete for the available resources.

FRONT LOWER BODY Level 2 Sculpt
FRONT MINOTAUR Level 2 Sculpt

Keeping your 3D program’s RAM usage below 50% of your total system’s RAM will provide a much more stable environment, where crashing during a render (and wasting time in the process) can be avoided.

  • With the model’s UVs laid out, I am free to jump back into edit mode once all of the highest level sculpting is completed.
  • In edit mode I can delete entire portions of the model, such as everything but the head, return to object mode and render a normal map for the head without compromising system stability, as the amount of object data has been substantially reduced by dicing.
  • Since the UVs are already in place, I can repeat this process for the other model components (arms, legs, torso etc.) until I have several high resolution maps with the model’s components already in their correct positions.
  • As long as all the maps are rendered with the same image aspect ratio and pixel aspect ratio, the files can easily be imported into a single multi-layer document and exported as a single high resolution normal map that retains the model’s micro-level details. This map can then be applied to the original model, which can in turn be collapsed to the base level for furthering production.

As you can see, using this method the model’s vertex order is retained, no additional vertices are added or merged (which would have modified the UV layout), and you have the benefit of working in a stable 3D environment. A rough script sketch of the per-component bake follows.
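The bake step for each diced component could be scripted roughly as follows. This is only a sketch against Blender 2.6x’s Blender Internal bake settings, assuming the component is the active object with an image already assigned to its UV layer:

    import bpy

    render = bpy.context.scene.render
    render.bake_type = 'NORMALS'     # bake a normal map
    render.use_bake_multires = True  # bake from the Multires sculpt data
    render.bake_margin = 4           # pixels of padding at UV seams

    # Bakes into the image assigned to the active object's UV layer
    bpy.ops.object.bake_image()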

MINOTAUR FRONT Sculpt Level 5
The final image is the start of the current highest level sculpt. As you can see, veins are starting to appear at this level, and so too are pores, which will only become clearer in later renders.