CWG Animated Maps – Shrinking the City

Creating the CWG Animated Maps was not only a proud moment for us, it was also a huge learning opportunity, allowing us to explore mapping data and processing software to shrink our world. Our animated event routes for the CWGs wouldn't have been possible without the base of mapping data for our guide route. Here's an insight into how BlueSky and RUAS helped us recreate the host city.

An abstract CGI model of Birmingham in a raised container showing the starting point of the Commonwealth Games.
Still taken from Time Trial – Tabletop Miniature

BlueSky

Norfolk-based UAV specialists BlueSky offer a bespoke range of aerial services throughout the UK using the latest specialist equipment. BlueSky provided us with nadir data of each map location, which formed the base of our animations. On top of the nadir data, they shared highly detailed data called Metrovista, which covered our Marathon route in central Birmingham.

Marathon – Nadir and Metrovista Data

Metrovista vs Nadir Data

The Metrovista mesh was generated using proprietary software from dense Digital Surface Models (DSMs) and aerial imagery from different sensors. For the mesh model over the central region of Birmingham, the aerial imagery was acquired with a hybrid sensor comprising a nadir camera and four oblique cameras capturing images at 5cm resolution. For the other regions, aerial imagery was acquired with a large-format nadir-looking camera; each image consists of approximately 450 million pixels at an image resolution of 15cm.
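
For a sense of scale, here's a quick back-of-the-envelope calculation in Python. It assumes a roughly square frame, which real aerial frames are not, so treat the footprint as approximate:

```python
# Rough ground footprint of one large-format nadir frame, assuming a
# roughly square image (an assumption; real frames are rectangular).
pixels = 450_000_000          # ~450 million pixels per image
gsd_m = 0.15                  # 15 cm ground sample distance

side_px = pixels ** 0.5       # ~21,200 px per side
side_m = side_px * gsd_m      # ~3.2 km per side
area_km2 = (side_m / 1000) ** 2

print(f"~{side_m / 1000:.1f} km per side, ~{area_km2:.1f} km^2 per frame")
```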

Owing to the two different sensors used for the project, two different approaches were applied by the mesh-generation software. Common to both, however, was the use of a dense DSM: a regular grid of 3D points at a spacing of 25cm, representing the topography of the area with 16 points per square metre. These DSM points are used to generate a geometric model of the topography, including natural terrain, buildings, vegetation, water and transportation routes.
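
To show where the 16-points-per-square-metre figure comes from, here's a toy Python sketch of such a grid; the 25cm spacing is the only project number used, and the patch size is arbitrary:

```python
import numpy as np

# A regular DSM grid: one 3D point every 25 cm in both directions,
# which works out to 16 points per square metre.
spacing = 0.25                         # grid spacing in metres
density = (1 / spacing) ** 2           # 16 points per m^2

# A toy 10 m x 10 m patch of the grid (heights would come from the DSM).
xs, ys = np.meshgrid(np.arange(0, 10, spacing), np.arange(0, 10, spacing))
print(f"{density:.0f} points/m^2, {xs.size} points in a 10 m x 10 m patch")
```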

Birmingham city centre rendered in high CGI detail.
Still from Marathon – Metrovista Central

This typically generates a very dense geometric model with far more polygon faces than can be incorporated into the final textured mesh, so the face count must be reduced to make the model manageable. Once the model has been thinned to an acceptable level, balancing the reduced face count against the geometric representation of features, it is textured.
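
BlueSky's thinning software is proprietary, but the general idea can be sketched with an open-source library such as Open3D; the file names and the 10% target below are illustrative assumptions, not project settings:

```python
import open3d as o3d

# Illustrative decimation step: reduce the face count of a dense mesh.
# "dense_mesh.obj" is a hypothetical input file.
mesh = o3d.io.read_triangle_mesh("dense_mesh.obj")
print(f"before: {len(mesh.triangles):,} faces")

# Thin the mesh to ~10% of its faces, trading detail for manageability.
target = max(len(mesh.triangles) // 10, 1)
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
print(f"after:  {len(simplified.triangles):,} faces")

o3d.io.write_triangle_mesh("thinned_mesh.obj", simplified)
```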

All captured imagery goes through an aerial triangulation adjustment. This process generates an exterior orientation for each image, accurately replicating the position and orientation of the camera's focal plane at the moment of exposure. The adjustment also ties the aerial images to an existing network of ground control points, ensuring that any data derived from the imagery is accurate and will fit other third-party datasets.
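
To make "exterior orientation" concrete, here's a minimal Python sketch of what it enables: projecting a ground point into an image using the camera's position and rotation at the moment of exposure. All numbers are invented for illustration:

```python
import numpy as np

# Hypothetical exterior orientation for one exposure.
camera_pos = np.array([406000.0, 289000.0, 1200.0])  # easting, northing, alt (m)
R = np.eye(3)                  # rotation of camera axes relative to the ground
focal_px = 10000.0             # focal length expressed in pixels

ground_pt = np.array([406150.0, 289200.0, 140.0])    # a point on the ground

# Transform into the camera frame, then apply the pinhole projection.
p_cam = R @ (ground_pt - camera_pos)
x = focal_px * p_cam[0] / -p_cam[2]   # -Z is the viewing direction (nadir)
y = focal_px * p_cam[1] / -p_cam[2]
print(f"image coordinates: ({x:.1f}, {y:.1f}) px from the principal point")
```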

The BlueSky software adopts different strategies depending on whether both nadir and oblique imagery are available or just nadir; a high-level check in the algorithm determines which strategy is applied. Once every polygon face of the geometric model has texture information assigned, the data is organised for export to OBJ format.
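
The OBJ format itself is simple enough to sketch by hand. This minimal Python example writes one textured triangle; it only illustrates the file layout, not how BlueSky's exporter works internally:

```python
# OBJ stores vertices (v), texture coordinates (vt), and faces (f)
# whose indices pair the two. Indices are 1-based.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
uvs = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
faces = [((1, 1), (2, 2), (3, 3))]

with open("mesh.obj", "w") as f:
    f.write("mtllib mesh.mtl\n")  # hypothetical material file for the texture
    for v in vertices:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for u in uvs:
        f.write(f"vt {u[0]} {u[1]}\n")
    for face in faces:
        f.write("f " + " ".join(f"{vi}/{ti}" for vi, ti in face) + "\n")
```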

RUAS

RUAS is the UK's leading drone service and training provider in aerial technology. As a special request, we asked RUAS to capture drone imagery for the Road Race event taking place in Warwickshire.

Drone Image of Warwick Castle
Warwick Castle – Drone Image by RUAS

The Data Capture

To undertake this task, the RUAS team was equipped with a DJI Matrice 300 RTK airframe and a gimbal-mounted Zenmuse P1 camera, which hosts a 45Mpx full-frame sensor paired with a 24mm lens. To make sure the data was accurately scaled and georeferenced, the DJI D-RTK 2 base station was used for all missions, meaning every image was geotagged with precise geolocation and orientation out of the box, without the need for Ground Control Points (GCPs).
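
As a rough guide to what that sensor and lens combination delivers, here's a standard ground-sample-distance calculation; the sensor and image figures are the published P1 specs, while the flight height is purely an assumed example:

```python
# Ground sample distance (GSD) for the Zenmuse P1 with a 24 mm lens.
sensor_width_mm = 35.9      # full-frame sensor width (published spec)
image_width_px = 8192       # P1 image width at 45 Mpx (published spec)
focal_mm = 24.0             # lens used on this project
height_m = 80.0             # assumed flight height above ground

gsd_cm = (sensor_width_mm * height_m * 100) / (focal_mm * image_width_px)
print(f"GSD at {height_m:.0f} m: {gsd_cm:.2f} cm/px")
```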

Although this project was mainly about visual representation rather than survey-grade accuracy, RTK positioning also adds an extra level of safety, as it reduces the interference from other radio sources that is so common in built-up areas.

A detailed computer-generated aerial shot of Warwick Castle along the Road Race route.
Still from Road Race – Warwick Castle

Unlike typical drone mapping, where the camera simply faces downwards in a nadir position, 3D modelling requires the introduction of oblique imagery in order to render building facades and other features accurately and in good detail.

For this, two methods were available to us: (1) a "crosshatch" flight pattern with a 35-degree camera angle, or (2) the DJI M300 RTK's unique Smart Oblique feature. In the testing phase of the project, we captured data with both methods; both produced good 3D models and each had its pros and cons in the field, but we eventually settled on Smart Oblique, as tracking the gradual movement of the drone and keeping VLOS (Visual Line of Sight) was much easier.
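
For the curious, here's a rough Python sketch of what option (1) amounts to: two "lawnmower" grids flown at right angles with the camera pitched to 35 degrees. The area and line spacing are invented illustration values, not the mission settings used:

```python
def lawnmower(width, height, spacing):
    """Yield (x, y) waypoints for parallel lines 'spacing' metres apart."""
    x, direction = 0.0, 1
    while x <= width:
        ys = (0.0, height) if direction > 0 else (height, 0.0)
        for y in ys:
            yield (x, y)
        x += spacing
        direction *= -1

area_w, area_h, line_spacing = 300.0, 200.0, 40.0
first_pass = list(lawnmower(area_w, area_h, line_spacing))
# Second pass: the same grid rotated 90 degrees (axes swapped) to crosshatch.
second_pass = [(y, x) for (x, y) in lawnmower(area_h, area_w, line_spacing)]
print(len(first_pass) + len(second_pass), "waypoints, gimbal pitch -35 deg")
```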

Post-processing

After all the flights were completed, the next step was to organise and process the data. During the flights, RUAS captured almost 27,000 45Mpx images that had to be rendered into a detailed mesh with good-quality textures. It was clear from the beginning that this would have to be done in sections, as no consumer-level workstation on the market could handle that amount of data at once. To be sure the chosen settings would work as planned, a couple of test renders were run on smaller datasets to confirm that both mesh and texture quality would be up to par.
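
A minimal sketch of that sectioning step, assuming Python and a hypothetical image folder; since the images were captured sequentially along the route, consecutive files map roughly to geographic sections:

```python
from pathlib import Path

# Split the full image list into chunks small enough to process one
# at a time. The folder name is hypothetical; the chunk count of 10
# mirrors the number used on the project.
images = sorted(Path("road_race_images").glob("*.JPG"))
n_chunks = 10

chunk_size = max(1, -(-len(images) // n_chunks))  # ceiling division
chunks = [images[i:i + chunk_size] for i in range(0, len(images), chunk_size)]
for i, chunk in enumerate(chunks, start=1):
    print(f"chunk {i:02d}: {len(chunk)} images")
```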

A computer-generated aerial shot showing the location of the Road Race finish point.
Still from Road Race – Finish Marker

Once happy with the output quality, the data was split into 10 separate chunks, batch processing was configured, and "play" was pressed. Once all 10 chunks had been rendered, cleaned up and textured, they were decimated for easier handling and then merged into a single 3D model.
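
A hedged sketch of that final assembly step, again using Open3D as a stand-in since the actual toolchain isn't named; the file names are hypothetical:

```python
import open3d as o3d

# Load each decimated chunk and merge everything into one model.
merged = o3d.geometry.TriangleMesh()
for i in range(1, 11):                       # the 10 rendered chunks
    chunk = o3d.io.read_triangle_mesh(f"chunk_{i:02d}_decimated.obj")
    merged += chunk                          # TriangleMesh supports +=

merged.remove_duplicated_vertices()          # tidy seams between chunks
o3d.io.write_triangle_mesh("road_race_full.obj", merged)
```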