The Magic of Photogrammetry

March 18, 2016

[Image: Hellblade – 3D-scanned face of the actress playing Senua]

Photogrammetry is a fascinating process you may or may not have heard of before, but it is becoming much more common. The technique has appeared in a number of places; for games, The Vanishing of Ethan Carter was one of the first high-profile uses. The most mainstream example, however, is Star Wars: Battlefront. In fact, at GDC this past week there was a talk specifically on photogrammetry in Battlefront! With my background in photography and my interest in video games, this process combines two of my interests at once.

What is it?

Photogrammetry (for video games) is the process of producing 3D models and textures from source material captured with digital photography. But photos are two-dimensional, right? Yes, that’s true! The trick lies in the software (Agisoft PhotoScan is the most common), which detects unique points of the object in each picture. By taking a number of photos of an object from many angles, these unique points can be matched across the photos, letting the software figure out where (and at what angle) each photo was taken. That alone sounds clever, but the real magic is what comes next: with the camera positions known, every matched point can be triangulated in 3D, and all those points together ultimately produce a textured 3D model!
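To make the “unique points” idea concrete, here’s a rough sketch in Python using OpenCV. This is not what PhotoScan does internally – it’s purely an illustration, and the file names are placeholders:

```python
# Illustrative only: OpenCV's ORB features standing in for the "unique
# points" a photogrammetry package detects. File paths are placeholders.
import cv2

img1 = cv2.imread("lamp_angle_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("lamp_angle_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)          # detect up to 5000 keypoints
kp1, des1 = orb.detectAndCompute(img1, None)  # keypoints + descriptors, photo 1
kp2, des2 = orb.detectAndCompute(img2, None)  # keypoints + descriptors, photo 2

# Match descriptors between the two photos; each good match is one
# "unique point" seen from two different angles.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} shared points between the two photos")
# With enough shared points (and the camera intrinsics), the relative
# camera pose can be recovered (e.g. cv2.findEssentialMat + cv2.recoverPose)
# and each matched point triangulated into 3D.
```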

Why not just do this the old-fashioned way?

The traditional method of creating a 3D object and its textures is typically based on producing textures that tile (repeat) or are symmetrical. While this works well for some (often man-made) objects, others don’t look natural when they are uniform. Texture detail is also something that, traditionally, an artist has to create by hand: the more detail, the more time invested. Photogrammetry extracts that detail from megapixels of photos, giving it far more fidelity than the typical artist could ever afford to put in.

[Image: Waterfall from The Vanishing of Ethan Carter – this level of unique, asymmetrical detail would be incredibly time-consuming to produce traditionally]

So how does it work?

To illustrate the process, I’ve selected a wall-mounted lamp in my backyard. It seemed reasonably easy to photograph, plus it isn’t perfectly symmetrical (some broken bits) and has a distinct texture (dirty).

1. Take photos

[Image: DSC00594 – one of the source photos of the lamp]

There is a whole technique to doing this, but fundamentally you just need to get good coverage of your object. Too few photos and you don’t capture enough information; too many and your computer will struggle to process them. It’s also worth mentioning you want to shoot on an overcast day (or inside in an evenly-lit room), because uneven light casts shadows, which means some angles will carry less usable detail.
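There’s no single correct photo count, but as a back-of-the-envelope illustration of what “good coverage” means (the step sizes below are made-up example values, not a rule):

```python
def plan_orbit(azimuth_step_deg=20, elevation_rings_deg=(15, 45)):
    """One photo every azimuth_step_deg degrees around the object,
    repeated at each elevation ring. Values are illustrative only."""
    return [(az, el)
            for el in elevation_rings_deg
            for az in range(0, 360, azimuth_step_deg)]

print(len(plan_orbit()), "photos")  # 18 angles x 2 rings = 36,
                                    # in the ballpark of the 33 I took
```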

2. Load them into the software and align them (Generating unique points)

[Image: Lamp photos aligned, with unique points visible]

Once you have your photos (in my case, I took 33), you load your photogrammetry software and drop them in. I’m demonstrating with Agisoft PhotoScan, as it offers a free trial. Aligning the photos will take a while (depending on your computer and the number/quality of photos). Once done, you get a neat sphere showing where all your photos were taken around the central object.

Before you move on, you can remove points you don’t want: stray points, or points that belong to the background. The more you remove here, the more consistent and clean your object will end up.
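If you’d rather script this than click through the GUI, the Pro edition of PhotoScan exposes a Python API. Here’s a rough sketch of step 2, going from the 1.2-era API reference – treat the exact parameter names as assumptions and check the docs for your version:

```python
# Runs inside PhotoScan Pro's built-in Python console.
import glob
import PhotoScan

doc = PhotoScan.app.document
chunk = doc.addChunk()

# Drop in the source photos (the path is a placeholder for wherever yours live).
chunk.addPhotos(glob.glob("lamp_photos/*.JPG"))

# Detect unique points in each photo, match them across photos,
# then solve for each camera's position and angle.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()
```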

3. Create a dense cloud of points

[Image: The dense cloud – that’s not a texture, that’s a PILE of unique points!]

[Image: A closer zoom in, with individual points visible]

Now that you have your initial points, you can generate a dense cloud. This, again, can take a while. When done, you should have a clear view of what your object looks like. What you see is not a texture, just a pile of unique points! For my lamp, that’s 17 million points (that’s a lot!). You can also do another pass of removing unnecessary points here, which will help produce a smoother mesh.
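The scripted version is a single call (same caveat as before: names are per the 1.2-era API). The quality setting drives both how many points you get and how long you wait:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk  # the active chunk from step 2

# Higher quality = more points (my lamp ended up at ~17 million) and
# more processing time; the filter controls how aggressively noisy
# points get discarded.
chunk.buildDenseCloud(quality=PhotoScan.HighQuality,
                      filter=PhotoScan.AggressiveFiltering)
```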

4. Create the mesh

[Image: The lamp mesh, shown as a wireframe]

With the dense cloud created and cleaned up, it’s time to generate a mesh. Yep, more wait time. The result, however, is pretty detailed: my lamp mesh has 1,189,000 faces. Note for those not into 3D modeling: that is a ridiculous number of faces for something as basic as a lamp, and it will ultimately need to be scaled down to a reasonable count. As a source mesh, though, it has all the detail we’ll ever need (and then some).
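In script form (still hedged against your PhotoScan version), mesh generation and the eventual scaling-down are each one call:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Build the mesh from the dense cloud. Arbitrary suits a closed object
# like the lamp; HeightField is for terrain-style scans.
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 face_count=PhotoScan.HighFaceCount)

# Decimate toward a sane budget; the 50,000 here is an arbitrary
# example, not a recommendation.
chunk.decimateModel(face_count=50000)
```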

5. Create the texture

[Image: The mesh with the texture applied]

Now that we have the mesh, we can generate the texture. PhotoScan lifts all the visual detail from the photos and maps it onto the mesh, producing a glorious texture map. It does, however, break the texture up into many small pieces on the UV map, which isn’t very user-friendly; you can mitigate this by increasing the texture size, or by reworking it in a 3D program after the fact. In all honesty, it’s the texture that has the greatest visual impact for the end user: everything looks unique.
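Scripted, texturing is two calls: lay out the UVs, then project the photo detail onto them (same 1.2-era API caveat, and the 4096 size is just an example):

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Lay out UV coordinates, then bake colour from the source photos.
# A larger size (e.g. 8192) keeps the many small texture islands
# from getting blurry.
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=4096)
```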

6. Import into the 3D program of your choice (to fine-tune/clean up, rig, etc.)

Once you’ve exported the mesh and texture file, you can import them directly into 3ds Max or Maya and start manipulating them.
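The export itself can also be scripted (same API caveat as the earlier snippets); OBJ is the lowest-common-denominator format that 3ds Max, Maya, Blender, and friends all read:

```python
import PhotoScan

chunk = PhotoScan.app.document.chunk

# Write the mesh, with its texture map saved alongside it.
chunk.exportModel("lamp.obj", format="obj", texture_format="jpg")
```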

So why don’t we do this for everything?

Well, for one, it’s still relatively new. With the success of the process in Star Wars: Battlefront, you will start to see more mainstream uptake of the technology. A good example is Hellblade, where developer Ninja Theory used the process to capture the face of the protagonist (and takes it to 11 by combining photogrammetry with in-engine live facial and motion capture).

Secondly, it’s worth noting that only a handful of development houses have been willing to take on the financial burden of a photogrammetry rig. To capture faces quickly and effectively, they use a LOT of DSLR cameras firing in sync (by a lot, I mean 40-80!), so it’s expensive to get started. But with proven results and reduced production time, it will inevitably become cheaper and more common in the years to come.

[Image: Ten24’s capture stage – that’s a LOT of DSLRs!]

Lastly, as I mentioned above, you get a lot of detail (mesh and texture) – more than you will need. Depending on the application (a game for mobile phones, for example), it is likely not an efficient process: phones aren’t powerful enough to handle the detail, so it’s wasted. Heck, if a game had a bunch of objects as detailed as my lamp example above, it would struggle to run even on powerful gaming PCs. The detail always needs to be scaled back (unless your end result is a video).
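To put rough numbers on “scaled back” (the budgets below are hypothetical, just to show the size of the gap):

```python
source_faces = 1_189_000  # the raw lamp mesh from step 4

# Hypothetical in-game face budgets for a small prop like a lamp.
budgets = {"PC/console prop": 10_000, "mobile prop": 1_500}

for platform, budget in budgets.items():
    print(f"{platform}: keep {budget / source_faces:.2%} of the source faces")
# PC/console prop: keep 0.84% of the source faces
# mobile prop: keep 0.13% of the source faces
```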

Anyhow, I hope that gives you some insight into this fascinating process, as I can guarantee you’ll see it used more and more in the coming years.

For more detailed information: