Categories
Uncategorized

A-Frame Part Four: Elevation Data

So to the final part of the test A-Frame app (see https://gitlab.com/nickw1/aframe-expts/-/tree/master/4 for full source code): elevation data. Until quite recently, it was awkward to incorporate elevation data into an AR or VR app, as the most readily-available source of this data is NASA SRTM .hgt files, but these are arranged in latitude/longitude tiles (1×1 degree) rather than more application-friendly XYZ tiles. Indeed, the first version of Hikar (0.1) incorporated some rather messy, nasty code to deal with the fact that the OSM data and height data were in different projections and tiling systems.

Around a year ago, however, I was introduced on the OSM mailing list to the Terrarium PNG tiles. These, based on SRTM data, were originally prepared by Mapzen, are in standard XYZ format and have been made freely available on Amazon S3. They work by encoding the elevation in the RGB colour channels of the image, so can be read with any PNG-reading library. I have used pngjs, a JavaScript library to read and decode PNG images; the current version (on GitHub, but NOT the npm package) also provides a useful convenience method, getRGBA8Array(), to load the uncompressed pixel data into an array of RGBA values.

The elevation (metres) is encoded as:

(red*256 + green + blue/256) - 32768
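This decoding can be expressed as a one-line helper (my own illustration, not part of pngjs or the demo code):

```javascript
// Decode a Terrarium-encoded pixel (red, green, blue channels, each 0-255)
// into an elevation in metres. Sea level is encoded as r=128, g=0, b=0.
function decodeElevation(r, g, b) {
    return (r * 256 + g + b / 256) - 32768;
}
```

Note that the offset of 32768 means elevations below sea level are perfectly representable as well.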

So how to read these as tiles? Remember that in the second post, I introduced the Tiler class, which is used to manage downloading XYZ-format tiles, and also mentioned that Tiler's readTile() method is overridden in subclasses to read the specific type of data we're interested in. Originally we looked at a subclass to read in JSON data (used for the OSM data), but now we need to create a DemTiler subclass to read Terrarium DEM data:

const Tiler = require('./tiler');
const PNGReader = require('./pngjs/PNGReader');

class DemTiler extends Tiler {

    constructor(url) {
        super(url);
    }

    async readTile(url) {
        const res = await fetch(url);
        const arrbuf = await res.arrayBuffer();
        return new Promise( (resolve, reject) => {
            const reader = new PNGReader(arrbuf);
            reader.parse( (err, png) => {
                if(err) {
                    reject(err);
                    return;
                }
                const imgData = png.getRGBA8Array();
                const elevs = [];
                for(let i=0; i<imgData.length; i+=4) {
                    elevs.push(Math.round((imgData[i]*256 + imgData[i+1] + imgData[i+2]/256) - 32768));
                }
                resolve ({ w: png.getWidth(),
                           h: png.getHeight(),
                           elevs: elevs });
            });
        });
    }
}

module.exports = DemTiler;

As you can see, this DemTiler class extends Tiler and overrides readTile(). For ease of programming, readTile() is declared as an async function, and the Tiler code which manages downloading tiles awaits its associated promise. Because PNGReader is not promise-based, we have to create our own Promise object and resolve it with the loaded data as soon as it's available.

Note that, when using the fetch API, we obtain the response as an array buffer; this means we can safely deal with binary data such as PNG images. We then use the getRGBA8Array() method of pngjs's PNGReader object and calculate the elevations from the pixels using the equation above. Finally we resolve the promise with an object containing the width and height of the image, plus the elevations themselves.

We now move on to our Terrarium A-Frame component, which utilises the elevations we have returned to generate a terrain. How does this work? We’ll start with the update() method, which runs if we reset any of the component’s parameters:

    update: async function(oldData) {
        this.system.setTilerUrl(this.data.url);
        this.system.setTilerZoom(this.data.zoom);
        this.system.color = this.data.color;
        this.system.opacity = this.data.opacity;
        this.system.setDefaultElevation(this.data.defaultElevation);
        this.system.setParams(this.data.yExag);
        if(this.data.lon >= -180 && this.data.lon <= 180 && this.data.lat >= -90 && this.data.lat <= 90) {
            const cameraPos = this.system.adjuster.lonLatToCamera(this.data.lon, this.data.lat);
            document.getElementById(this.data.cameraId)
                .setAttribute('position', cameraPos);
            
            const demData = await this.system.updateLonLat(this.data.lon, this.data.lat);
            
            this.system.firstLoad = false;
            this.notifyDemLoaded(demData, true);
        }
    },

Some of this code is fairly basic and self-explanatory; for example, note how the properties include the URL, default zoom level, and default colour and opacity of our terrain. More interesting is the yExag (Y-axis exaggeration) parameter. I have found that if you do not exaggerate the Y coordinates by a given factor, the terrain does not have enough prominence in a VR environment. An exaggeration of 3 looks best, based on tests in areas I am familiar with.

We then validate our latitude and longitude, and convert the lat/lon to a camera position using this.system.adjuster.lonLatToCamera(). (What is this doing? The Adjuster is a class which now contains much of the coordinate-conversion logic, and accounts for Y-axis exaggeration. It converts lon/lat first to Spherical Mercator, and then to camera coordinates.) We need to do this because, once we've loaded our terrain, we need to adjust the camera position to the ground position (as measured by the terrain) at our current latitude and longitude.
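I won't show the whole Adjuster class here, but the core of the conversion is presumably something like this sketch (lonLatToSphMerc is the standard Spherical Mercator formula; the mapping of Mercator y to negated camera z is the one we'll meet again when going the other way):

```javascript
const EARTH_RADIUS = 6378137; // WGS84 equatorial radius, metres

// Standard lon/lat (degrees) to Spherical Mercator (EPSG:3857) conversion.
function lonLatToSphMerc(lon, lat) {
    const x = EARTH_RADIUS * lon * Math.PI / 180;
    const y = EARTH_RADIUS * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI / 180) / 2));
    return { x: x, y: y };
}

// Sketch of what Adjuster.lonLatToCamera might do: Mercator x maps to
// camera x, Mercator y maps to the negated camera z, and the y (elevation)
// component is filled in later, once the DEM has loaded.
function lonLatToCamera(lon, lat) {
    const sphMerc = lonLatToSphMerc(lon, lat);
    return { x: sphMerc.x, y: 0, z: -sphMerc.y };
}
```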

Let's now dig deeper into the code which actually loads the terrain, starting with the call to updateLonLat() with our new longitude and latitude. This straightaway converts lon/lat to Spherical Mercator and then loads the tiles (similar to our OSM code):

updateLonLat: async function(lon, lat) {
    const sphMerc = this.tiler.lonLatToSphMerc(lon, lat);
    return await this.updateSphMerc(sphMerc);
},

updateSphMerc: async function(sphMerc) {
    const dems = [];
    const newData = await this.tiler.update(sphMerc);
    newData.forEach ( data => {
        const dem = this.loadTerrariumData(data);
        if(dem != null) {
            dems.push(dem);
        }
    });
    return dems;
},

As you can see, we’re using the tiler to download new tiles, as before. We then loop through our returned tiles and call our system’s loadTerrariumData() to actually generate the terrain. Looking at this method:

loadTerrariumData: function(data) {
    let demData = null;
    if(data !== null) {
        const geom = this.createDemGeometry(data);
        geom.geom.computeFaceNormals();
        geom.geom.computeVertexNormals();
        const mesh = new THREE.Mesh(geom.geom, new THREE.MeshLambertMaterial({
            color: this.color,
            opacity: this.opacity
        }));
        const demEl = document.createElement('a-entity');
        demEl.setObject3D('mesh', mesh);
        this.el.appendChild(demEl);
        const dem = new DEM(geom.geom, geom.realBottomLeft);
        this.dems[`${data.tile.z}/${data.tile.x}/${data.tile.y}`] = dem;
        demData = { dem: dem, tile: data.tile };
    }
    return demData;
},

loadTerrariumData() creates a terrain for a given tile; the data parameter will contain the width, height and array of elevations for the current tile. The first step is to create a three.js geometry; this is done with the createDemGeometry() method:

createDemGeometry: function(data) {
    const demData = data.data;
    const tile = data.tile;
    const topRight = tile.getTopRight();
    const bottomLeft = tile.getBottomLeft();

    const centre = [(topRight[0] + bottomLeft[0]) / 2,
        (topRight[1] + bottomLeft[1]) / 2];
    const xSpacing = (topRight[0] - bottomLeft[0]) / (demData.w - 1);
    const ySpacing = (topRight[1] - bottomLeft[1]) / (demData.h - 1);
    const realBottomLeft = [bottomLeft[0], bottomLeft[1]];
    const geom = new THREE.PlaneBufferGeometry(topRight[0] - bottomLeft[0], topRight[1] - bottomLeft[1], demData.w - 1, demData.h - 1);
    const array = geom.getAttribute("position").array;
    let i;
    for(let row = 0; row < demData.h; row++) {
        for(let col = 0; col < demData.w; col++) {
            i = row * demData.w + col;
            array[i*3+2] = -(centre[1] + array[i*3+1]); // must read the original y before overwriting it below
            array[i*3+1] = demData.elevs[i] * this.adjuster.yExag;
            array[i*3] += centre[0];
        }
    }

    return { geom: geom, realBottomLeft: realBottomLeft };
},

The data parameter passed to this method contains the actual DEM data (array of heights, and width and height in pixels of the Terrarium tile) together with a Tile object defining the X, Y and Z of our current tile.

We obtain the bottom-left and top-right coordinates of our tile in Spherical Mercator (using methods of the Tile class), along with the centre point, also in Spherical Mercator. Next we create a THREE.PlaneBufferGeometry (a plane geometry stored as a buffer, for efficiency); this expects, as arguments, the width and height of the plane (in OpenGL units, which here are equivalent to metres) and the number of inter-vertex 'edges', or segments, in the x and y directions. The number of segments will equal the width and height of the plane (in vertices) minus one.

Since our Terrarium tiles are 256×256 pixels, the geometry will be 256×256 vertices, so there will be 255 segments in each direction. This creates a plane geometry in the xy-plane with (0,0,0) at the centre, with x and y increasing by (width/255) and (height/255) per vertex as we head right and upwards, respectively. The z coordinates will initially be zero, i.e. it will be a flat plane facing the viewer along the z axis.
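The vertex arithmetic is worth sanity-checking. For a w×h grid of elevation samples, the plane needs (w−1)×(h−1) segments, giving w×h vertices and a position attribute of w×h×3 floats (a small bookkeeping helper of my own, just to make the numbers concrete):

```javascript
// Vertex/segment bookkeeping for a plane geometry built from a
// w x h grid of elevation samples.
function planeGridInfo(w, h) {
    return {
        segmentsX: w - 1,
        segmentsY: h - 1,
        vertices: w * h,
        positionArrayLength: w * h * 3 // x, y, z per vertex
    };
}
```

For a 256×256 Terrarium tile this gives 65536 vertices and a position array of 196608 floats.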

However, what we need is a plane actually representing terrain at a particular location within our world, and on the xz-plane, not just a flat upright surface on the xy-plane centred on the origin. The transformation needed is shown below.

So to do this we need to loop through each vertex of the generated plane, and adjust our vertices using the centre coordinate (in Spherical Mercator) of our current XYZ tile, changing y coordinates to z coordinates and negating them. Finally, we set the y coordinate of each vertex in the plane geometry to the corresponding value from the elevations array, multiplied by our Y-axis exaggeration, which was discussed earlier.
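Expressed as a per-vertex function, the transform performed by the loop looks like this (a restatement for clarity, not how the code is actually factored; centre is the Spherical Mercator centre of the tile, and the plane-local y must be read before it is overwritten):

```javascript
// Transform one plane-local vertex (localX, localY, 0) into a world-space
// vertex: x is offset by the tile centre, the negated Mercator y becomes
// the OpenGL z, and the exaggerated elevation becomes the OpenGL y.
function transformVertex(localX, localY, elev, centre, yExag) {
    return [
        localX + centre[0],
        elev * yExag,
        -(centre[1] + localY)
    ];
}
```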

As a result, the plane geometry now has x, y and z coordinates for each vertex, with the y coordinate representing elevation: we have a terrain!

The terrain geometry is then returned from the method, alongside the bottom-left coordinate, which is needed later for overlaying the OSM ways onto the terrain. Note the realBottomLeft. Here, I am accounting for the fact that later I might want to include some 'padding' round the tile to avoid tile-edge artefacts, in which case the 'real' bottom left will be slightly down and to the left of the original bottom left of the tile.

So, if we now jump back to our loadTerrariumData() method:

const geom = this.createDemGeometry(data);
geom.geom.computeFaceNormals();
geom.geom.computeVertexNormals();
const mesh = new THREE.Mesh(geom.geom, new THREE.MeshLambertMaterial({
    color: this.color,
    opacity: this.opacity
}));
const demEl = document.createElement('a-entity');
demEl.setObject3D('mesh', mesh);
this.el.appendChild(demEl);
const dem = new DEM(geom.geom, geom.realBottomLeft);
this.dems[`${data.tile.z}/${data.tile.x}/${data.tile.y}`] = dem;
demData = { dem: dem, tile: data.tile };

As can be seen, we calculate the normals on our new plane (necessary to be able to apply lighting correctly, as the normals indicate the "up" direction of our plane's faces and vertices), and create a THREE.Mesh: an actual 3D object with a material, used to render our geometry.

Note how we use the material type THREE.MeshLambertMaterial. Lambert material is described on the three.js website as “a material for non-shiny surfaces, without specular highlights” – in other words, ideal for ground cover such as grass. I originally tried THREE.MeshStandardMaterial, thinking that the word “standard” would imply that it would be fairly general purpose… but in fact, it produced unacceptably high levels of shininess and glare from the light source, even if the light source was dimmed and its angle reduced. Changing the MeshStandardMaterial parameters – for example, to minimise metalness or roughness – did not produce the desired effect, but then I discovered Lambert material and found it was fine for my purposes. The differences are shown below.

Lambert material.
Standard material.

So we create our material and then create a DEM object. The DEM class (my own custom class) represents a DEM as pure data; it contains functionality to find the elevation of a particular point on the DEM, using bilinear interpolation – we use it later when projecting the OSM ways onto the DEM.
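I won't reproduce the whole DEM class, but the bilinear interpolation at its heart works like this (a sketch of the idea, not the class's actual code; fx and fy are the fractional positions of the query point within its grid cell, and h00…h11 the four surrounding elevations):

```javascript
// Bilinear interpolation of a height within one grid cell of a DEM.
// h00 = bottom-left, h10 = bottom-right, h01 = top-left, h11 = top-right;
// fx, fy are fractions (0..1) across the cell.
function bilinear(h00, h10, h01, h11, fx, fy) {
    return h00 * (1 - fx) * (1 - fy) +
           h10 * fx       * (1 - fy) +
           h01 * (1 - fx) * fy +
           h11 * fx       * fy;
}
```

At a cell corner the result collapses to that corner's elevation; at the cell centre it is the mean of all four.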

So that’s the essentials of the Terrarium component; there are some other features of the component not discussed here but they are not especially interesting and similar to functionality in the OSM component.

Having loaded our DEM data and rendered a terrain, we now need to apply the DEM to our OSM data. Luckily, this was quite an easy task, as I had already developed code to do this in other languages – as described above – so it was just a case of converting this code to JavaScript. The first step is to emit a demloaded event from our Terrarium component:

this.el.sceneEl.emit('demloaded', {
    demData: demData,
    yExag: this.system.adjuster.yExag,
    elevation: moveToGround ?
        this.system.getElevation(this.data.lon, this.data.lat, this.data.zoom) +
            10 * this.system.adjuster.coordAdjust * this.system.adjuster.yExag :
        -1
});

Note how the event contains data that the OSM component will need: the DEM data itself, the Y exaggeration (the idea is only to store the Y exaggeration in one place – the Terrarium component – and to transmit it to any other code that needs it, minimising the chance of bugs), and the elevation. (Why is this last parameter needed? It’s not used by the OSM component itself, but by the camera controls component – which also subscribes to the event – and is used to set the camera Y coordinate to the ground elevation at the latitude and longitude we originally passed in, so the camera is placed on the ground rather than above – or worse, below – it).

Let's look now, then, at the OSM component. As soon as I started writing this, I realised that its functionality would have to be significantly different from that of the original OSM component discussed in Part Two, as it relies on the Terrarium data having been loaded. The approach is: load the Terrarium data first and then, when and only when it's all loaded, load the OSM data and apply the DEM to it, to add elevation to the OSM data. However, the original OSM component (from Part Two) just downloads OSM straight away. I could have implemented logic in the original OSM component to behave differently depending on whether Terrarium data was going to be applied or not, but decided that would be much too messy – so I decided to create a new osm-data-3d component, designed specifically to have a DEM applied to it to produce 3D OSM ways, and separated out all the functionality common to the original OSM component and the new osm-data-3d component into its own class – OsmLoader.

Thus, the OsmLoader class is responsible for loading OSM data and creating models from it, and can be used by both OSM components:

const OsmWay = require('./osmway');

class OsmLoader {
    constructor(system) {
        this.system = system;
        this.drawProps = {
            'footway'      : { color: '#ffff00', width: 5 },
            'path'         : { color: '#ffff00', width: 5 },
            'steps'        : { color: '#ffff00', width: 5 },
            'bridleway'    : { color: '#ff8080', width: 5 },
            'byway'        : { color: '#ff8080', width: 10 },
            'track'        : { color: '#ff8080', width: 10 },
            'cycleway'     : { color: '#00ffff', width: 5 },
            'residential'  : { width: 10 },
            'unclassified' : { width: 15 },
            'tertiary'     : { width: 15 },
            'secondary'    : { width: 20 },
            'primary'      : { width: 30 },
            'trunk'        : { width: 30 },
            'motorway'     : { width: 60 }
        };
    }

    loadOsm(osmData, tileid, dem=null) {
        osmData.features.forEach ( f => {
            if(f.geometry.type == 'LineString' && f.geometry.coordinates.length >= 2) {
                const line = [];
                f.geometry.coordinates.forEach ( coord => {
                    // If we have a DEM, look up the height at this point,
                    // raising it slightly so the way sits above the terrain
                    const h = dem ?
                        dem.getHeight(coord[0], coord[1]) +
                            this.system.adjuster.yExag * 2 :
                        0;
                    line.push([coord[0], h, -coord[1]]);
                });

                const g = this.makeWayGeom(line,
                    this.drawProps[f.properties.highway] ?
                        (this.drawProps[f.properties.highway].width || 5) :
                        5);

                const color = this.drawProps[f.properties.highway] ?
                    (this.drawProps[f.properties.highway].color || '#ffffff') :
                    '#ffffff';

                const mesh = new THREE.Mesh(g, new THREE.MeshBasicMaterial(
                    { color: color }
                ));

                this.system.el.setObject3D(`${tileid}:${f.properties.osm_id}`, mesh);
            } else if(f.geometry.type == 'Point' && f.properties.name) {
                const h = dem ?
                    dem.getHeight(f.geometry.coordinates[0],
                        f.geometry.coordinates[1]) : 0;

                const p = {
                    x: f.geometry.coordinates[0],
                    y: h + 20 * this.system.adjuster.yExag,
                    z: -f.geometry.coordinates[1]
                };

                // Two labels, one either side of the POI, facing opposite ways
                [p.z + 1, p.z - 1].forEach ( (textZ, i) => {
                    const textEntity = document.createElement("a-entity");
                    textEntity.setAttribute("text", {
                        value: f.properties.name
                    });

                    textEntity.setAttribute('position', {
                        x: p.x,
                        y: p.y,
                        z: textZ
                    });
                    textEntity.setAttribute('scale', {
                        x: 1000,
                        y: 1000,
                        z: 1000
                    });

                    textEntity.setAttribute('rotation', { y: 180 * i });
                    this.system.el.appendChild(textEntity);
                });
            }
        });
    }

    makeWayGeom(vertices, width=1) {
        return new OsmWay(vertices, width).geometry;
    }
}

module.exports = OsmLoader;

Note how the loadOsm() method acts differently depending on whether or not a DEM is passed in. If it is not, we simply create our OSM geometries with the Y coordinate always set to zero. If it is, we use the DEM object (created in the Terrarium component and passed to the OSM component via the event) to find the elevation for each vertex of our OSM ways.

Note how we also create text entities for each point of interest (a GeoJSON feature with a Point geometry) which has a name. We do this by creating a new <a-entity> for each POI and adding a 'text' attribute to it, containing the POI name. Note that two entities are created, one either side of the POI location and facing in opposite directions; the second entity is rotated by 180 degrees around the Y axis. This gives us two text labels, facing north and south respectively, so that the text appears the right way round whichever direction (well… out of north and south, that is) you view it from.
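The placement logic can be pulled out as a tiny pure function (a hypothetical helper for illustration, not how the component is actually factored):

```javascript
// Compute the two label placements for a POI at position p: one unit
// either side on the z axis, with the second label rotated 180 degrees
// about Y so each faces away from the POI.
function labelPlacements(p) {
    return [p.z + 1, p.z - 1].map((z, i) => ({
        position: { x: p.x, y: p.y, z: z },
        rotationY: 180 * i
    }));
}
```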

(I've now discovered three.js sprites, which seem to be a rather better way of doing this… but more of that in a later post!)

This then leaves the osm-data-3d component itself really quite simple:

const OsmLoader = require('./osmloader');
const Adjuster = require('./adjuster');

AFRAME.registerSystem ('osm-data-3d', {

    init: function() {
        this.tilesLoaded = [];
        this.osmLoader = new OsmLoader(this);
        this.adjuster = new Adjuster();
    },

    loadData: async function(url, dem) {
        const tileIndex = `${dem.tile.z}/${dem.tile.x}/${dem.tile.y}`;
        if(this.tilesLoaded.indexOf(tileIndex) == -1) {
            const realUrl = url.replace('{x}', dem.tile.x)
                                .replace('{y}', dem.tile.y)
                                .replace('{z}', dem.tile.z);
            const response = await fetch(realUrl);
            const osmData = await response.json();
            this.tilesLoaded.push(tileIndex);
            return osmData;
        }
        return null;
    },

    applyDem: async function(osmData, dem) {
        await this.osmLoader.loadOsm(osmData,`${dem.tile.z}/${dem.tile.x}/${dem.tile.y}`, dem.dem);
    }

});

AFRAME.registerComponent ('osm-data-3d', {
    schema: {
        url: { type: 'string' }
    },

    init:function() {
        this.el.sceneEl.addEventListener('demloaded', async(e) => {
            this.system.adjuster.yExag = e.detail.yExag;
            for(let i=0; i<e.detail.demData.length; i++) {
                const osmData = await this.system.loadData(this.data.url, e.detail.demData[i]);
                if(osmData !== null) { // loadData() returns null for already-loaded tiles
                    await this.system.applyDem(osmData, e.detail.demData[i]);
                }
            }
        });
    },
});

In the init() function, we register an event listener for the demloaded event, which was emitted by our Terrarium component. In that listener, we save the Y exaggeration that was passed in, so that the osm-data-3d component uses the same Y exaggeration as the Terrarium tiles. We then loop through the DEM data, load the corresponding OSM tile (with the same XYZ as the DEM tile), and apply the DEM to each OSM tile, making use of OsmLoader's loadOsm() method to give the OSM ways an elevation (y) component, as discussed above.

So that’s the full demo! As I’ve already mentioned, you can see it online at https://hikar.org/aframe/. There are still a number of flaws, in particular:

  • Limited in scope. No OSM data aside from highways and POI names is shown; you will not, for instance, see lakes, rivers or the sea. However, the purpose of this exercise was simply to be able to work with OSM and elevation data in the browser; given the eventual aim is to produce a web-based version of Hikar rather than a full-featured VR app, this is intentional.
  • Artefacts from superimposing ways on the terrain. Sometimes, a way will appear to “bury into” the terrain and disappear underground for a while. This is not, unfortunately, due to a tunnel: the most common cause is insufficient nodes within the way, meaning that a given way segment may cross (usually small) ridges and valleys on its journey between one node and the next, thus either disappearing underground or seeming to “float”. This is an issue which also needs to be resolved in Hikar: the only real difference between the way rendering in Hikar and in a VR app is that in Hikar the terrain is transparent. Obviously, improved surveying and more nodes in ways will help here – but it would be nice to have a better algorithm to ‘drape’ way segments over the terrain in between nodes. This is on the to-do list.
  • Artefacts at tile boundaries. In some places, small ‘gaps’ appear at tile boundaries, caused by elevation differentials at the edges of adjacent tiles. One way round this would be to create a one-vertex-wide “padding” round each tile and set the elevation of this “padding” on a given tile edge equal to the elevation of the vertices of that edge; tiles would thus overlap, and small ‘border’ effects would appear, but this, I think, would be less of an issue than ‘gaps’.
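To make the padding idea concrete, here is one possible sketch (not implemented in the demo): extend a w×h elevation grid by one sample on each side, clamping to the nearest edge value.

```javascript
// Sketch (not implemented in the demo): pad a w x h elevation grid with a
// one-sample border, clamping to the nearest edge value, so that adjacent
// tiles can overlap slightly rather than leaving gaps.
function padElevations(elevs, w, h) {
    const padded = [];
    for (let row = -1; row <= h; row++) {
        const r = Math.min(h - 1, Math.max(0, row));
        for (let col = -1; col <= w; col++) {
            const c = Math.min(w - 1, Math.max(0, col));
            padded.push(elevs[r * w + c]);
        }
    }
    return padded; // (w+2) x (h+2) samples
}
```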


A-Frame Part Three: Movement Controls

Part Three of the development of the A-Frame OSM terrain demo brings a couple of items of additional functionality:

  • the ability to react to the camera moving, to allow downloading of new data when you move to a new tile;
  • vertical movement. This is not provided by default in A-Frame; a custom component needs to be provided.

So to start with the HTML:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>A-Frame Test</title>
<script src="https://aframe.io/releases/1.0.3/aframe.min.js"></script>
<script src='js/bundle.js'></script>
</head>
<body>
<a-scene background="color: skyblue">

<a-camera near="0.1" far="50000" wasd-controls="acceleration:2.6" position-manager vertical-controls id="camera1"></a-camera>


<a-plane position="-80000 -5 -6630000" rotation="-90 0 0" width="100000" height="100000" color="#7bcba4" shadow></a-plane>

<a-entity id='dem1' osm-data="url: https://www.hikar.org/fm/ws/tsvr.php?x={x}&y={y}&z={z}&way=highway; cameraId: camera1; lat: 51.05; lon: -0.72; zoom: 13"></a-entity>

</a-scene>
</body>
</html>

What has changed here? You’ll notice that the <a-camera> entity now has two additional components: position-manager and vertical-controls. The former is used to manage the position; for now, this involves detecting when the camera has moved a certain distance, which can be used to inform our osm-data component of a new camera position, triggering it to load new data if we’ve moved into a new tile. The latter simply adds vertical movement controls to our application.

So looking at the position-manager component:

AFRAME.registerComponent('position-manager', {

    init: function() {
        this.lastPosition = { x:0, y:0, z:0 };
    },
    
    tick: function(time, delta) {
        this.newPosition = this.el.getAttribute('position');

        const dx = this.newPosition.x - this.lastPosition.x;
        const dy = this.newPosition.y - this.lastPosition.y;
        const dz = this.newPosition.z - this.lastPosition.z;
        const dist = Math.sqrt(dx*dx + dy*dy +dz*dz);

        if(dist>=0.0001) {
            this.el.sceneEl.emit('cameramoved', {
                x: this.newPosition.x,     
                y: this.newPosition.y, 
                z: this.newPosition.z
            });
        }

        this.lastPosition = { x: this.newPosition.x, 
                          y: this.newPosition.y, 
                          z: this.newPosition.z }; 
    }
});

How does this work? Crucially, it makes use of an A-Frame component’s tick() method. tick() runs every time the scene re-renders, so is a good place to put code which needs to execute frequently to monitor our application’s state.

Looking at the code above, we compare the new camera position with the last recorded position each time tick() is called. If we have moved at least 0.0001 metres (we do not wish this code to run if we stay still!), we emit a cameramoved event. Event emitting (based on the DOM event model) is a useful way to broadcast messages from a component without coupling it unnecessarily to another, as we can then add an event listener in some other component to handle that event. Note here how we emit the event not from the parent entity (the camera entity) itself, but from the overall A-Frame scene (this.el.sceneEl). We need to do this because our OSM component is not a component of the camera entity; when broadcasting events, only components which are children of the entity broadcasting the event can receive them.

So the natural next step is to look at the code to handle this event. This is, predictably enough, in our OSM component. We set it up in that component’s init() so that we can immediately begin receiving cameramoved events.

init:function() {
        this.el.sceneEl.addEventListener("cameramoved", e=> {
            const sphMerc = this.system.cameraToSphMerc(e.detail);
            this.system.updateSphMerc(sphMerc);
        });        
    },

This code is simple enough. Note how we included the camera position within the event when we emitted it; we retrieve it using the detail property of the event object. We then convert the camera position to Spherical Mercator (camera x -> Spherical Mercator x, camera z negated -> Spherical Mercator y, as we saw last time) and trigger a position update in our OSM component, which will check whether we’ve entered a new XYZ tile and download data if needed.
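The conversion itself is trivial: the system's cameraToSphMerc() is presumably something like this sketch.

```javascript
// Convert an A-Frame camera position back to Spherical Mercator:
// camera x maps directly to Mercator x, and the negated camera z gives
// Mercator y (the reverse of the lon/lat -> camera mapping).
function cameraToSphMerc(pos) {
    return { x: pos.x, y: -pos.z };
}
```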

Next we’ll look at the vertical-controls component:

const KEY_Q = 81, KEY_Z = 90;

AFRAME.registerComponent('vertical-controls', {
    init: function() {
        window.addEventListener('keydown', this.onKeyDown.bind(this));
        window.addEventListener('keyup', this.onKeyUp.bind(this));
        this.vertVelocity = 0.0; 
    },

    onKeyDown: function(e) {
        this.vertVelocity = e.keyCode==KEY_Q ? 50: (e.keyCode==KEY_Z ? -50 : 0.0);
    },

    onKeyUp: function(e) {
        this.vertVelocity = 0.0;
    },

    tick: function (time, delta) {
        // m/s to m/ms
        this.el.object3D.position.add(
            new THREE.Vector3(0, this.vertVelocity*delta*0.001, 0)
        );
    }
});

This is fairly simple; essentially we are adding simple keydown and keyup event listeners to the window, just as in a non-A-Frame JavaScript application, and setting the vertical velocity to either +50 or -50 metres per second depending on whether we press Q or Z. In our tick(), we use the delta parameter (the number of milliseconds passed since the last tick()) to calculate how far to actually move the camera. Note how we can change the camera’s position by accessing the three.js Object3D representing the entity and adding a vector to its position; obviously, we need only change the y component.
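As a concrete check of the arithmetic: at 60 frames per second, delta is around 16 ms, so a velocity of 50 m/s moves the camera 50 × 16 × 0.001 = 0.8 metres per frame.

```javascript
// Vertical displacement per frame: velocity is in metres/second but
// delta is in milliseconds, hence the 0.001 factor.
function verticalDisplacement(velocity, deltaMs) {
    return velocity * deltaMs * 0.001;
}
```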

I’m not sure if this is the ‘best’ way to do this, incidentally; given that A-Frame’s stock wasd-controls component is already listening for key events, we’re basically creating two key listeners, which seems inefficient. Longer-term, extending wasd-controls in some way and overriding its key-press behaviour seems the way to go… but given this is just a demo app, this’ll do for now!

So… that’s our OSM component with vertical controls, and with the functionality to load new data from the server when we move to a different tile. In the fourth and final post on this, we’ll cover the most difficult part… terrain and attempting to overlay OSM data on it!


A-Frame Part Two: OSM data

So in the previous post we saw how we could create a very basic custom A-Frame component: a simple polyline made up of triangles, which could form the basis for an OSM Way. In Part Two, we will look at how we can render tiled OSM vector data on a flat plane. The end product will look something like this:

Tiled OSM data rendered on a flat plane with A-Frame.

How does this work? We need the following:

  • a custom A-Frame component to download and render OSM data;
  • an OSM vector tile server;
  • a client-side tiling system which keeps track of which tiles have been downloaded already.

I already have the second, as I use the Freemap API – a GeoJSON-based “xyz” OSM tile server – to power a range of projects including Hikar, OpenTrailView and MapThePaths. So I will begin by looking at the first, our custom A-Frame component – starting with the HTML:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>A-Frame Test</title>
<script src="https://aframe.io/releases/1.0.3/aframe.min.js"></script>
<script src='js/bundle.js'></script>
</head>
<body>
<a-scene background="color: skyblue">

<a-camera near="0.1" far="50000" wasd-controls="acceleration: 2.6" id="camera1"></a-camera>


<a-plane position="-80000 -5 -6630000" rotation="-90 0 0" width="100" height="100" color="#7bcba4" shadow></a-plane>

<a-entity id='osm1'  osm-data="url: https://www.hikar.org/fm/ws/tsvr.php?x={x}&y={y}&z={z}&way=highway; cameraId: camera1; lat: 51.05; lon: -0.72; zoom: 13"></a-entity>

</a-scene>
</body>
</html>

The interesting code is our custom entity with the ID of osm1. This contains a custom component, osm-data, which includes url, cameraId, lat, lon and zoom properties. Hopefully the meaning of each is apparent: we need a URL to download the data from, the ID of the camera entity to position, a default latitude and longitude, and a zoom level for our vector tiles, which determines how large each tile will be. Zoom level 0 covers the whole world in one tile, while a tile at zoom level 13 is around 3×3 km at these latitudes.
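As a rough check of that “3×3 km” figure, the ground coverage of a Web Mercator tile can be computed from the zoom level and latitude. A minimal sketch, assuming standard spherical Mercator tiling (tileSizeMetres is a hypothetical helper, not part of the app):

```javascript
// Approximate ground width of one XYZ tile, shrinking with latitude
// because Mercator tiles cover less ground away from the equator.
function tileSizeMetres(zoom, latDegrees) {
    const worldCircumference = 2 * Math.PI * 6378137; // ~40075 km
    return (worldCircumference / Math.pow(2, zoom)) *
        Math.cos(latDegrees * Math.PI / 180);
}

tileSizeMetres(13, 51); // ~3079m: roughly the "3x3 km" quoted above
```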

So having looked at the HTML, let’s look at the JavaScript for the component.

AFRAME.registerComponent ('osm-data', {
    schema: {
        url: { type: 'string' },
        cameraId: { type: 'string' },
        lat: { type: 'number' },
        lon: { type: 'number' },
        zoom: { type: 'int' },
        defaultElevation: { type: 'number', default: 500 }
    },

    init:function() {
        
    },

    update: async function(oldData) {
        this.system.setTilerUrl(this.data.url);
        this.system.setTilerZoom(this.data.zoom);
        this.system.setDefaultElevation(this.data.defaultElevation);
        if(this.data.lon >= -180 && this.data.lon <= 180 && this.data.lat >= -90 && this.data.lat <= 90) {
            this.system.updateLonLat(this.data.lon, this.data.lat);
            const cameraPos = this.system.lonLatToCamera(this.data.lon, this.data.lat);
            document.getElementById(this.data.cameraId)
                .setAttribute('position', cameraPos);
        }
    }
});

This time, the init() function contains no code (for now), because we’ve moved it all into update(). As described in the A-Frame documentation, a component’s update() function runs whenever any of its properties change. We want the component to respond when its properties are set dynamically from JavaScript: for instance, whenever we update the latitude and longitude, the correct data for the new area should be downloaded.

So what do we do next? We check the latitude and longitude are within range, and then we call the updateLonLat() method of the system. A-Frame uses the entity-component-system architecture, whereby each component has a corresponding system in which the detailed logic is placed. This is accessed from the component via this.system.
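The pairing can be illustrated with a stripped-down sketch. This is purely illustrative: a stub registry stands in for AFRAME, and real A-Frame attaches the system when the scene initialises, but the idea – that this.system inside a component resolves to the system registered under the same name – is the same:

```javascript
// Stub registry imitating A-Frame's component/system pairing.
const registry = { systems: {}, components: {} };

function registerSystem(name, definition) {
    registry.systems[name] = definition;
}

function registerComponent(name, definition) {
    // wire this.system to the system registered under the same name
    definition.system = registry.systems[name];
    registry.components[name] = definition;
}

registerSystem('osm-data', {
    updateLonLat: function(lon, lat) { return `moving to ${lon},${lat}`; }
});

registerComponent('osm-data', {
    update: function() {
        // detailed logic is delegated to the system, as in the real code
        return this.system.updateLonLat(-0.72, 51.05);
    }
});

registry.components['osm-data'].update(); // "moving to -0.72,51.05"
```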

So let’s move onto this updateLonLat() method of our system:

updateLonLat: async function(lon, lat) {
        const newData = await this.tiler.updateLonLat(lon, lat);
        for(let k in newData) {
            this.loadOsmData(k, newData[k]);
        }
    },

What’s this doing? Firstly we use async/await to allow us to write asynchronous code sequentially: remember that if we await the results of a function we are waiting for a promise returned from that function to be resolved. (Ideally of course we should use a try/catch here, just in case the promise is rejected!)
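Such a try/catch might look like the following standalone sketch (updateLonLatSafely is a hypothetical wrapper, not the repo’s code; the tiler is passed in so the function can be tested in isolation):

```javascript
// If the promise rejects (e.g. a network failure), we log and return an
// empty result instead of letting the rejection propagate.
async function updateLonLatSafely(tiler, lon, lat) {
    try {
        return await tiler.updateLonLat(lon, lat);
    } catch(e) {
        console.error(`Could not load tiles: ${e.message}`);
        return {}; // no new tiles this time
    }
}
```

The component’s update() could then loop over the returned object exactly as before.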

We then make use of another object, the tiler. This is not part of A-Frame itself; this is a custom class I have created to allow easy management of tiled data. Because I may want to use this class in other components, or even outside of an A-Frame application completely, it’s entirely separate and independent. Let’s look at it:

const GoogleProjection = require('../../common/GoogleProjection');
const Tile = require('../../common/Tile');

class Tiler {
    constructor(url) {
        this.tile = new Tile(0, 0, 13); 
        this.tilesLoaded = [];
        this.url = url;
        this.sphMerc = new GoogleProjection();
    }

    setZoom(z) {
        this.tile.z = z;
    }

    async updateLonLat(lon, lat) {
        const loadedData =  await this.update(this.sphMerc.project(lon,lat));
        return loadedData;
    }

    async update(pos) {
        const loadedData = {};
        const t = this.needNewData(pos);
        if(t) {
            console.log(`need new data...`);
            const tilesX = [t.x, t.x-1, t.x+1], tilesY = [t.y, t.y-1, t.y+1];
            for(let ix=0; ix<tilesX.length; ix++) {    
                for(let iy=0; iy<tilesY.length; iy++) {    
                    const data =
                        await this.loadTile(new Tile(
                            tilesX[ix], 
                            tilesY[iy], 
                            t.z)
                        );
                    if(data != null) {
                        loadedData[`${t.z}/${tilesX[ix]}/${tilesY[iy]}`] = data;
                    }
                }
            }
        }
        return loadedData;
    }

    needNewData(pos) {
        if(this.tile) {
            const newTile = this.sphMerc.getTile(pos, this.tile.z);
            const needUpdate = newTile.x != this.tile.x || newTile.y != this.tile.y;
            this.tile = newTile;    
            return needUpdate ? newTile : false;
        }
        return false;
    }

    async loadTile(tile) {
        const tileIndex = `${tile.z}/${tile.x}/${tile.y}`;    
        if(this.tilesLoaded.indexOf(tileIndex) == -1) {
            const tData = await this.readTile(this.url
                .replace("{x}", tile.x)
                .replace("{y}", tile.y)
                .replace("{z}", tile.z)
            );
            this.tilesLoaded.push(tileIndex);
            return tData;
        }
        return null;
    }

    async readTile(url) {
        return null;
    }

    projectLonLat(lon, lat) {
        return this.sphMerc.project(lon,lat);
    }
}

module.exports = Tiler;

To summarise, what it’s doing is downloading XYZ-tiled data from a given location. To do this, it:

  • converts latitude/longitude to Spherical Mercator coordinates (using GoogleProjection which is a general-purpose Spherical Mercator conversion class which I’ve used in other projects);
  • works out which XYZ tile for a given zoom level corresponds to these coordinates;
  • downloads not only the current tile but the 8 surrounding ones, to ensure that data some distance from our current location is downloaded (so it doesn’t look like we’re on the ‘edge of the world’). We also check that the tile hasn’t already been downloaded; we keep an array tilesLoaded which stores the IDs (z/x/y) of each tile downloaded already.
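The conversion from latitude/longitude to an XYZ tile (which GoogleProjection’s getTile() performs) boils down to the standard slippy-map formulae. A minimal sketch, assuming standard Web Mercator tiling (lonLatToTile is a hypothetical name, not the repo’s API):

```javascript
// Standard slippy-map conversion: longitude maps linearly to x, latitude
// goes through the Mercator projection before mapping to y.
function lonLatToTile(lon, lat, zoom) {
    const n = Math.pow(2, zoom);
    const latRad = lat * Math.PI / 180;
    const x = Math.floor((lon + 180) / 360 * n);
    const y = Math.floor(
        (1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2 * n
    );
    return { x: x, y: y, z: zoom };
}

lonLatToTile(-0.72, 51.05, 13); // { x: 4079, y: 2740, z: 13 }
```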

Note that the specific method to actually download a tile is readTile(). The code required for this will depend on exactly what data we download; the default implementation simply returns null – essentially it’s the equivalent of an abstract method in languages like Java.

We thus need to subclass Tiler to deal with specific types of data. Here we use a JsonTiler subclass that will parse JSON received from the web, and override readTile():

const Tiler = require('./tiler');

class JsonTiler extends Tiler {
    constructor(url) {
        super(url);
    }

    async readTile(url) {
        const response = await fetch(url);
        const data = await response.json();
        return data;
    }
}

module.exports = JsonTiler;

Having discussed the Tiler, we’ll now return to the updateLonLat() method of our system. To repeat:

updateLonLat: async function(lon, lat) {
    const newData = await this.tiler.updateLonLat(lon, lat);
    for(let k in newData) {
        this.loadOsmData(k, newData[k]);
    }
},

Our tiler returns a hashmap-style object of newly-loaded tiles, with the tile ID as the key and the parsed JSON data as the value. We loop through this object and call the loadOsmData() method of the system on each tile, to actually generate our 3D OSM way objects. How does this work?

loadOsmData: async function(tileid, osmData) {
    if(osmData !== null) {
        osmData.features.forEach( (f, i) => {
            if(f.geometry.type == 'LineString' &&
                f.geometry.coordinates.length >= 2) {
                // build the line in OpenGL coordinates: Spherical Mercator
                // northing becomes negative z
                const line = [];
                f.geometry.coordinates.forEach( coord => {
                    line.push([ coord[0], 0, -coord[1] ]);
                });

                const g = this.makeWayGeom(line,
                    this.drawProps[f.properties.highway] ?
                        (this.drawProps[f.properties.highway].width || 5) :
                        5);

                const color = this.drawProps[f.properties.highway] ?
                    (this.drawProps[f.properties.highway].color || '#ffffff') :
                    '#ffffff';

                const mesh = new THREE.Mesh(g, new THREE.MeshBasicMaterial(
                    { color: color }
                ));

                this.el.setObject3D(`${tileid}:${f.properties.osm_id}`, mesh);
            }
        });
    }
},

For each tile, it loops through each GeoJSON object returned from the OSM data, and checks that the current object is a LineString with at least two coordinates. If so, we call a further method makeWayGeom() (not shown) which creates a way geometry (a polyline made up of triangles), much as was described in Part One. We then look up the correct colour for that way (roads are in white, footpaths in yellow, and bridleways and tracks in red) in the this.drawProps object (not shown, but it can be seen in the repo), and finally create a THREE.Mesh object using the geometry together with a THREE.MeshBasicMaterial in the chosen colour. (Note that, as we’ll see later, other material types exist which allow us to take advantage of lighting, but we don’t wish to apply lighting effects to our OSM ways – for this simple demo at least – as that would be overkill.)
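The this.drawProps lookup itself isn’t shown in the post, but based on the colour scheme just described it might look something like this (the values below are illustrative assumptions; see the repo for the real object):

```javascript
// Hypothetical drawProps table matching the colour scheme described above
// (illustrative values only; the real object lives in the repo).
const drawProps = {
    footway:   { color: '#ffff00' }, // footpaths in yellow
    path:      { color: '#ffff00' },
    bridleway: { color: '#ff0000' }, // bridleways in red
    track:     { color: '#ff0000' }
    // roads fall through to the white / width-5 defaults below
};

// The same fallback logic as loadOsmData(), factored into a helper
function wayStyle(highway) {
    const props = drawProps[highway];
    return {
        color: props ? (props.color || '#ffffff') : '#ffffff',
        width: props ? (props.width || 5) : 5
    };
}

wayStyle('footway');     // { color: '#ffff00', width: 5 }
wayStyle('residential'); // { color: '#ffffff', width: 5 }
```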

We then call A-Frame’s setObject3D() to add the mesh to the current entity (this.el). An important point is that a single A-Frame entity can have multiple three.js Object3Ds associated with it, as long as each has a separate key. So here, we’re specifying that the <a-entity> which contains our OSM component will have multiple Object3Ds, one for each OSM way. As you can see from the code, we use a unique key consisting of the tile ID and the OSM way ID (a single OSM way may span multiple tiles, which is why the key needs the tile ID as well as the way ID).

Lastly, returning right back to the update() method of our component, we convert the lon/lat properties to a camera position, and set the A-Frame camera’s coordinates to that position, so that we view the specified location:

const cameraPos = this.system.lonLatToCamera(this.data.lon, this.data.lat);

document.getElementById(this.data.cameraId)
    .setAttribute('position', cameraPos);

When doing this, we have to be careful: in A-Frame, the ground plane has x increasing eastward, y vertical, and z decreasing northward (as per the standard OpenGL coordinate system), while Spherical Mercator has x increasing eastward but y increasing northward! If you look at the full code, you can see the logic that deals with this.

So that’s that… a basic, and terrain-less, OSM way renderer using A-Frame. You can see the code at https://gitlab.com/nickw1/aframe-expts/-/tree/master/2.

Categories
Uncategorized

A-Frame OSM demo now live

Just a quick update… I’ve now made the A-Frame OSM demo live. It’s available at https://hikar.org/aframe/. Do note it’s not perfect – it’s really just a proof of concept and has a number of issues, to be returned to later – but it does show how OSM data can be overlaid on a terrain using A-Frame. A decent graphics card is advised, as it may crash otherwise, particularly after moving some distance from the original location. Also note you can set the start latitude and longitude via query string parameters in the URL, e.g. https://hikar.org/aframe/?lat=...&lon=....

See https://gitlab.com/nickw1/aframe-expts/-/tree/master/4 for the full source code. The next posts will continue to detail how it was developed.

Categories
Uncategorized

A-Frame Part One: A Way Component

In the next four posts I’m going to detail how I developed the A-Frame OSM demo that I mentioned last time; by explaining how it was built, I’m hoping it will be a useful resource for others who might want to experiment with A-Frame and OSM data. Just one disclaimer – do bear in mind I’ve only just started playing with A-Frame, so what’s described here might not necessarily be the “best” way of doing it!

This first post simply details how to produce a custom component, and will produce a scene like this:

A custom Way component, together with the built-in <a-sphere> and, in the distance, a text component reading ‘Hello World’.

Not that impressive, I’m sure you’re thinking… but a journey of a thousand miles has to begin with a single step. It uses a custom ‘way’ (polyline) component: the red path leading from the sphere to the text which you might be able to make out in the distance.

Why is this example relevant? Hikar needs vector renderings of OSM ways in order to superimpose them onto the camera, so a web-based version of Hikar would need this. Looking through some pre-existing A-Frame components that can render OSM data, there are a few interesting ones, such as the A-Frame Tangram component which will render a Tangram map into an A-Frame scene – but this is a bit different to what I’m trying to achieve. What I’m looking for is a way of rendering OSM-derived polylines (without a full map) in 3D with elevation data included so I figured I’d start to get into A-Frame by creating a simple ‘way’ 3D polyline component, which isn’t included in the defaults for A-Frame.

So how to do this? I won’t explain A-Frame basics in detail – you can visit the site for that – but I will explain what I did to create the custom component, as it might be useful for beginners to A-Frame who wish to create their own components. Starting with the HTML:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>A-Frame Test</title>
<script src="https://aframe.io/releases/1.0.3/aframe.min.js"></script>
<script src='js/way.js'></script>
</head>
<body>
<a-scene background="color: #fafafa">

<a-plane position="0 0.5 -3" rotation="-90 0 0" width="100" height="100" color="#7bcba4" shadow></a-plane>

<a-sky color="skyblue"></a-sky>

<a-sphere position="0 1 -3" radius="0.5" rotation="0 0 0" color="#4cc3d9" shadow></a-sphere>

<a-entity position="50 1 -50" text="value: HELLO WORLD; font: roboto; color: magenta" scale="10 10 100" material="side: double"></a-entity>

<a-entity way='vertices: 0 0.51 -3, 3 0.51 -6, 3 0.51 -9, 1 0.51 -12, 12 0.51 -24, 24 0.51 -30, 36 0.51 -42, 48 0.51 -48; width: 1' material='color: #ff0000'></a-entity>

</a-scene>
</body>
</html>

We create an <a-scene> to represent our scene, together with an <a-plane> for the ground, some <a-sky>, an <a-sphere> and a text entity reading ‘HELLO WORLD’. Note how the text is some way from the camera, at position x=50, y=1, z=-50 (i.e. 50 units to the right of, and 50 units ‘into the screen’ from, our default viewpoint).

The interesting part is our custom entity:

<a-entity way='vertices: 0 0.51 -3, 3 0.51 -6, 3 0.51 -9, 1 0.51 -12, 12 0.51 -24, 24 0.51 -30, 36 0.51 -42, 48 0.51 -48; width: 1' 
material='color: #ff0000'></a-entity>

Note how we’re creating an entity containing a ‘way’ component (our custom component, to be described below) together with a standard ‘material’ component describing the colour (red) of our entity. Note also how the ‘way’ component includes two properties: vertices (a string of vertex coordinates making up the way) and a width (in OpenGL units).

So how do we write our custom component? Here is the JavaScript for the ‘way’ component, way.js:

AFRAME.registerComponent('way', {
    schema: {
        width: { default: 1, min: 0 },
        vertices: { type: 'string' }
    },

    init: function() {
        const vertices = this.data.vertices.split(",")
            .map( vcoords => vcoords
                                .trim()
                                .split(" ")
                                .map( xyz => parseFloat(xyz))
                );
        const realVertices = [];
        const faces = [];
        let dx, dz, len, dxperp, dzperp;
        const k = vertices.length-1;
        for(let i=0; i<k; i++) {
            dx = vertices[i+1][0] - vertices[i][0];
            dz = vertices[i+1][2] - vertices[i][2];
            len = distance(vertices[i], vertices[i+1]);
            dxperp = -(dz * (this.data.width/2)) / len;
            dzperp = dx * (this.data.width/2) / len;
            realVertices.push (new THREE.Vector3(
                vertices[i][0] - dxperp, 
                vertices[i][1], 
                vertices[i][2] - dzperp)
            );
            realVertices.push (new THREE.Vector3(
                vertices[i][0] + dxperp, 
                vertices[i][1], 
                vertices[i][2] + dzperp)
            );
        }
        realVertices.push (new THREE.Vector3(
                vertices[k][0] - dxperp, 
                vertices[k][1], 
                vertices[k][2] - dzperp )
        );
        realVertices.push (new THREE.Vector3(
            vertices[k][0] + dxperp, 
            vertices[k][1], 
            vertices[k][2] + dzperp )
        );

        for(let i=0; i<k; i++)     {
            faces.push(new THREE.Face3(i*2, i*2+1, i*2+2));
            faces.push(new THREE.Face3(i*2+1, i*2+3, i*2+2));
        }

        const geom = new THREE.Geometry();
        geom.vertices = realVertices;
        geom.computeBoundingBox();
        geom.faces = faces;
        geom.mergeVertices();
        geom.computeFaceNormals();
        geom.computeVertexNormals();

        const mesh = new THREE.Mesh(geom); 
        this.el.setObject3D("mesh", mesh);
    }
});

function distance(v1,v2) {
    const dx = v2[0] - v1[0];
    const dy = v2[1] - v1[1];
    const dz = v2[2] - v1[2];
    return Math.sqrt(dx*dx + dy*dy + dz*dz);
}

We’re using the standard AFRAME.registerComponent() function to register our custom component, and then writing an init() function which will initialise it, given the input properties.

So how does it work? First we parse the vertices property of our component and produce an array from it. These vertices are simply the points making up our way (polyline). However what we don’t want is a “pencil-thin” line. We need a thicker line of a given width, to better represent a road, path and so on. There is no primitive in A-Frame or three.js for such a polyline, so we need to create our own by building it up from triangles as shown below:

So the code loops through each input vertex (shown in blue above), and calculates two adjacent points per input vertex perpendicular to the current line segment and the specified width apart. The adjacent points are shown by the numerical indices (e.g. 0 and 1 for the first input vertex, 2 and 3 for the second, and so on) in the diagram above. Each of these numbered points is added to the realVertices array – as this will be the real vertices used to define our geometry for our final line.
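The offset maths can be checked in isolation. The sketch below mirrors the dxperp/dzperp calculation from the component (simplified to the horizontal plane; the component’s len also includes the y difference):

```javascript
// Rotate the segment direction 90 degrees in the ground plane and scale
// it to half the requested width, giving the two offset points.
function perpOffset(v1, v2, width) {
    const dx = v2[0] - v1[0];
    const dz = v2[2] - v1[2];
    const len = Math.sqrt(dx * dx + dz * dz);
    return {
        dxperp: -(dz * (width / 2)) / len,
        dzperp: (dx * (width / 2)) / len
    };
}

// a segment heading along +x gets its offsets purely in z, as expected:
perpOffset([0, 0, 0], [10, 0, 0], 2); // dxperp 0, dzperp 1
```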

Once we’ve done that, we need to define the faces of our polyline. Each face is a constituent triangle, and to define a face, we specify the vertex indices making up that face, so that the first triangle consists of vertices 0-1-2, the second, 1-3-2, the third, 2-3-4 and so on (in OpenGL, which our code is ultimately based on, triangles are defined anticlockwise).

Finally, we create a three.js THREE.Geometry object using these vertices and faces, and then create a THREE.Mesh using that geometry. Lastly, we call setObject3D() on the component’s parent entity (this.el) to attach the mesh we’ve just created (an Object3D is a three.js type representing a 3-dimensional object), and we’re done!

Categories
Uncategorized

Porting Hikar to the web?

After spending this long, dark winter mostly on other projects, primarily OpenTrailView, I’ve been thinking it’s high time to get back and do some further development on Hikar, which last saw a release (0.3.1) in the early autumn, just after I presented it at SOTM Heidelberg 2019. However, while browsing the web for interesting things – as you do – I came across this project, AR.js. Developed by WebGL and three.js expert Jerome Etienne, this looks really exciting. For a while it has featured marker-based AR, but now – thanks to contributor Nicolò Carpignoli and his GeoAR.js project – it also features geographic augmented reality, and already offers the ability to show points-of-interest markers in the browser using pure HTML and JavaScript.

This got me thinking – is now maybe the time to start thinking about a web-based version of Hikar (alongside the Android version, which I intend to continue developing – more on that in a later post)? Web APIs can now access device hardware such as the camera and sensors, and we’ve seen the advent of progressive web applications and service workers in recent years, so maybe it’s time to experiment with developing a version of Hikar using web technologies.

AR.js, including its geographic AR feature, is implemented using three.js and A-Frame. The latter first caught my attention at FOSDEM 2019 in Brussels, when Robert Kaiser presented a very interesting demo showing OpenStreetMap features in 3D (trees and buildings) overlaid on an OSM map rendered onto a ground plane. One of the highlights of the event, I thought A-Frame and OSM would be something interesting to play with, but being busy with the Hikar app at the time I never got round to it.

So what is A-Frame? As the website describes, at a very basic level it’s a framework for easily developing immersive (VR and AR) applications in the browser, in which entities within your 3D world are defined as HTML tags which can be accessed and manipulated from JavaScript using the standard DOM API. Entities consist of one or more components – for instance, a component could be a geometry (shape) or a surface material (colour, texture and so on). You can see some examples on the A-Frame website.

So, how to get started with this? To get familiar with A-Frame, I thought I’d develop a simple VR demo to overlay OSM data in 3D on a terrain. Given that Hikar needs to show OSM data in 3D and account for elevation, such a demo would then form the basis for a “Web Hikar”. I have now completed this (see repo; needs to be ‘tidied-up’ a bit before going live though), and the next few posts will detail how it was done. I developed a similar demo as long ago as 2009, using raw WebGL (just as it was released), and – as you will see – the A-Frame equivalent is quite a bit simpler!

Categories
Uncategorized

Welcome to the Hikar blog!

Welcome to the Hikar blog! This blog aims to explore my experiments and investigations with augmented reality and other immersive technologies using OpenStreetMap data, focused on outdoor users and in particular hikers.

First of all, let me introduce myself – my name is Nick Whitelegg, long-term OpenStreetMap contributor (username nickw), keen hiker and developer of several past and present OSM-related websites and applications, starting with Freemap, an England- and Wales-focused free mapping site for walkers, in 2004 (which originally used its own database before switching to OSM). More recently, augmented and virtual reality – and immersive technologies in general – have captured my interest, and this will be the prime focus of this blog.

The main things I’ve been developing include OpenTrailView (https://opentrailview.org), an effort to produce a fully FOSS StreetView-like application using OSM data and aimed specifically at hiking trails, and Hikar (https://hikar.org), an augmented reality Android app which superimposes OSM data on the device’s camera feed together with virtual signposts.

Anyway, that’s probably enough for an introduction… so watch for updates!