Dealing with volatile data sets is difficult enough. The complexity increases by an order of magnitude when you’re also integrating data from multiple live sensors.

Situational Awareness Data Volatility

Written by TerraLens Engineering Team, Kongsberg Geospatial

Data volatility is defined as "the rate of change in the values of stored data over a period of time".

When it comes to situational awareness data, 'time' really is the key. Depending on the use case for your geospatial system, milliseconds can make the difference between the correct location of a target and a complete miss. For use cases where geological time scales are in play, continental drift may be a factor, but in most places where our products are used, real-time is what matters.

We’ve all been there: “in 500m, turn left”… but that exit is closed! GPS providers must react rapidly to freeway exits closed by construction or risk the ire of their users. In that moment when a decision needs to be made, if the data in use isn’t accurate, the consequences can range widely: from being late for Christmas dinner, to being “too late” to the emergency room.

Maps can and should change, especially if the needs of the user dictate high resolution data. Ask any Special Operations team or Search and Rescue organization in the world, and they will tell you that up-to-date satellite data is worth lives.

In the commercial world, companies operating low-altitude Beyond Visual Line of Sight flights with unmanned aircraft are constantly at risk from obstacles that "weren't there yesterday". Obstacle databases can be provided by industry groups or government organizations, but they are only useful if updated regularly; most are valid for only a period of hours. UAS platforms are now typically outfitted with detect-and-avoid solutions of some sort to offset the fact that an unplanned construction crane in their flight path is a real possibility.

High resolution data along coasts is especially at risk of inaccuracy due to tidal changes, which in some places alter water levels by as much as 20m. Vector data is at risk from this as well, unless it is viewed at the same point in the tidal cycle at which it was measured, and only for that specific location. An excellent example of the importance of coastal waterlines comes from the D-Day invasions, which were planned for low tide so that demolition teams could clear obstacles placed by the Axis powers to hinder or damage any landing craft. The low-water times at the five landing beaches differed by more than an hour.

Other users, such as Air Traffic Control, have little use for surface data, relying instead on airspace definition data that is refreshed on a set periodicity (such as the 56-day cycle used by NAV CANADA). By building tools that allow our clients and users to update any portion of their dataset on whatever schedule they require, we answer this challenge easily.

What air traffic controllers care about even more than their aeronautical data is the accuracy of their on-screen flights. Depending on the hardware used, some aircraft may not update on screen often: once every 8 seconds for a spinning radar, once every 4 seconds for data feeds relayed by satellite, perhaps even once per second for self-reported ADS-B data.

Many technologies involved in situational awareness can now measure at much higher frequencies than that, providing 20, 50, even 200 updates per second. Now the question becomes: How long should your data last on screen? How long is that data valid for?

Radar operators will be very familiar with the concept of "coasting", where a track that is lost by the radar system is not immediately eliminated from its internal tracking processes. Instead the track is dead reckoned, estimating where it may be for a period of time, before the system ghosts (freezes) the track at the last estimated position and finally removes it altogether when the data is deemed no longer useful. As you can imagine, two different radars with two different data volatility values can get complex. Once you add in cameras, RF sensors, audio sensors, lidar, self-broadcasting GPS, telemetry from INU and IMU hardware… the problem becomes exponentially more convoluted, requiring a very sophisticated solution.
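The coasting lifecycle described above can be sketched in a few lines of Python. This is a minimal illustration of the general technique, not the IRIS implementation: the state names, thresholds, and flat-earth dead reckoning are all simplifying assumptions.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not real system configuration):
COAST_SECONDS = 10.0   # dead reckon this long after the last detection
GHOST_SECONDS = 30.0   # then freeze ("ghost") the track at its last estimate
# beyond GHOST_SECONDS the track is dropped entirely

@dataclass
class Track:
    x: float          # last known position (arbitrary planar units)
    y: float
    vx: float         # last measured velocity (units per second)
    vy: float
    last_seen: float  # timestamp of the last detection (seconds)

def track_state(track: Track, now: float) -> str:
    """Classify a lost track by the time elapsed since its last detection."""
    age = now - track.last_seen
    if age <= COAST_SECONDS:
        return "COASTING"
    if age <= GHOST_SECONDS:
        return "GHOSTED"
    return "DROPPED"

def estimate_position(track: Track, now: float) -> tuple[float, float]:
    """Dead reckon while coasting; once ghosted, the position freezes
    at the estimate from the end of the coasting window."""
    age = min(now - track.last_seen, COAST_SECONDS)
    return (track.x + track.vx * age, track.y + track.vy * age)
```

A real tracker would of course use a filtered state estimate (e.g. a Kalman filter) rather than straight-line extrapolation, but the state progression from coasting to ghosting to removal is the same idea.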

At Kongsberg Geospatial, we have been working to help minimize this problem for our users for decades. In our IRIS application, each and every sensor that provides input to our system has its own time sensitivity criteria. An incoming UAV from a staring radar or electro optical sensor may expire in seconds, a self-reported fleet vehicle position that only reports when moving may be valid for 30 minutes, a ship at sea received via AIS might be valid for an hour, and a shipping container which reports its position by satellite 3 times a day could be valid for 8 hours.
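Per-sensor time sensitivity like the examples above boils down to a validity window per data source. The sketch below shows the idea in Python; the sensor names and durations echo the paragraph but are illustrative assumptions, not IRIS configuration.

```python
from datetime import timedelta

# Hypothetical validity windows per sensor type (illustrative values
# mirroring the examples in the text, not real configuration):
VALIDITY = {
    "staring_radar":       timedelta(seconds=5),
    "fleet_vehicle":       timedelta(minutes=30),
    "ais_ship":            timedelta(hours=1),
    "satellite_container": timedelta(hours=8),
}

def is_expired(sensor: str, report_age: timedelta) -> bool:
    """A report is stale once its age exceeds its sensor's validity window."""
    return report_age > VALIDITY[sensor]
```

With each incoming report tagged by its source, the display layer can then age out symbols on a per-sensor basis rather than applying one global timeout to every feed.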

By lending our expertise to our clients and rigorously ensuring we understand the volatility of their data, we give them confidence that their products will be the authoritative source they need to do their jobs.

You will only ever be as accurate as your source data allows... but the people and tools you choose to use to solve your problems can help you a lot.

Build Your Own Real-Time Solution with TerraLens®

Real-Time Geospatial SDK

TerraLens® powers real-time C2 systems and displays in the following defense platforms:

US Navy AEGIS  ·  Joint Battle Command Platform (JBC-P)  ·  NATO AWACS  ·  TRITON (BAMS) System  ·  US Navy Littoral Combat Ship  ·  THAAD  ·  NASAMS  ·  Ground/Air Task Oriented Radar (GATOR)  ·  Radiant  ·  Sentry  ·  AEGIS Ashore  ·  AN/TPS-6x/7x


Kongsberg Geospatial is a fully owned subsidiary of Kongsberg Defence & Aerospace
Kongsberg Geospatial Ltd.
United States and Canada