Press "Enter" to skip to content

The confusing shared visual representation

You just finished a novel by your favorite author, and while reading you built a visual image of the scenery the author described. A while later, a film adaptation by a well-known Hollywood director arrives at a cinema nearby. Being a big fan, you will surely go out to see this movie! But after the opening scene you find yourself confused and disappointed: the depiction created by the director doesn't match yours at all.

Or you wake up in the middle of the night during a holiday: you have no idea where you are, and it takes a few seconds to realize that your surroundings are the hotel you checked into the day before.

This phenomenon is well described in a recent article (//www.sciencedaily.com/releases/2011/09/110928131800.htm), which even offers a good explanation for it!

In Fire Captain training in the Netherlands you are taught not to compose a mental image based on the verbal information you receive from the dispatch unit over the radio. Even an on-site explanation by an informant should be listened to critically. The reason is to prevent the same confusion described in the first two paragraphs: composing an image based on information you haven't confirmed yourself creates a great risk. As a firefighter you want to march into action; the decisions you make are not the best possible ones, but the most acceptable ones for that moment in time. If you construct a plan on an image you never confirmed, you may find yourself confused.

Recent developments in information management for first responders increasingly facilitate the composition of a shared visual representation. The idea behind this concept is to create a common user interface with a visual representation on which all available information is displayed. We have some serious concerns with that approach:

  1. The assumption is made that every single player understands the way the information is presented. GIS-based applications are generally used for these visualizations; that is fine for people who work with building plans in their daily lives, but volunteer firefighters may only be confronted with this type of visualization while on duty for the fire department. This adds to the confusion.
  2. Not all first responders have the time to interpret and analyze all the data shown on the interface, select what they need for their job, and realize that the actual situation might differ from what they concluded from the shared visual representation (see the first two paragraphs).
  3. The level of detail that can be shown on these interfaces is very high, but that creates an expectation. Experience in Amsterdam has shown that the satellite navigation system sometimes shows the wrong location for a given address; more than once the unit rushing to the scene focused completely on the navigation system and drove past the actual address.
  4. With the level of detail of these interfaces comes another issue: the data you display on them has to be of equally high quality. The recently released Dutch building register, for example, contains a building that is roughly 320 km² in size!

So with all these great techniques and available datasets we actually create more confusion! The techniques assume that everybody understands the visual representation, and the datasets are assumed to be correct. In my previous blog post I already expressed my concerns about open data.

Pushing all the data and information into one representation is done to give it context. Most bits and pieces of data and information have no context themselves; deep knowledge of their structure is needed to understand them. So what could we do if the bits and pieces of information supplied actually contained the meta-data to describe them? Then we could use intelligent software agents to collect the bits and pieces of information that are needed for our current situation and present them in a way that makes sense to us, without adding to the confusion!
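A minimal sketch of how such an agent could behave is shown below. The record fields, the sample data and the incident context are all hypothetical; the only point is that meta-data attached to each piece of information lets software select what is relevant to the situation at hand.

```python
# Minimal sketch: information items that carry their own descriptive meta-data,
# and an "agent" that selects only what matters for the current incident.
# All field names and sample records are hypothetical.
from dataclasses import dataclass

@dataclass
class InfoItem:
    payload: str    # the actual piece of information
    topic: str      # what it describes, e.g. "hazardous materials"
    location: str   # where it applies
    source: str     # who maintains it

def collect_for_incident(items, incident_location, topics_of_interest):
    """Return only the items that match the current incident's location and topics."""
    return [
        item for item in items
        if item.location == incident_location and item.topic in topics_of_interest
    ]

items = [
    InfoItem("LPG tank on premises", "hazardous materials", "Main St 12", "building register"),
    InfoItem("Road works until Friday", "accessibility", "Main St 12", "municipality"),
    InfoItem("Sprinkler system installed", "fire safety", "Canal Rd 3", "inspection report"),
]

# A crew responding to Main St 12 only sees what is relevant to that incident.
for item in collect_for_incident(items, "Main St 12", {"hazardous materials", "accessibility"}):
    print(item.payload, "-", item.source)
```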

The technique for including the meta-data is called linked data: linked data carries the vocabulary to describe itself and is machine readable. Another important feature is the linking to external datasets, which allows us to infer relationships and automatically enrich the data from these external sources. Since linked data is machine readable, it is very suitable for processing by intelligent software agents, which can collect information based on the needs of the operator and the context they have to operate in. A further important feature of linked data is its distributed nature: we do not have to build one large data management solution that contains all the information we might need; we simply reference external sources, which are kept up to date by their respective owners.
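To illustrate that pattern, the sketch below uses the Python library rdflib to load a remote linked-data source and select only what is relevant with a SPARQL query. The URL, the vocabulary and the property names are invented for this example; the point is referencing an external, owner-maintained dataset instead of copying it into our own system.

```python
# Minimal sketch of consuming a linked-data source with rdflib (pip install rdflib).
# The dataset URL and vocabulary below are hypothetical placeholders.
from rdflib import Graph

g = Graph()
# Reference an external source that its owner keeps up to date;
# we do not load it into a data management solution of our own.
g.parse("http://example.org/buildingregister/incident-area.ttl", format="turtle")

# Collect only the pieces that matter for the current incident,
# e.g. buildings with a hazardous-materials registration.
query = """
PREFIX ex: <http://example.org/buildingregister/def#>
SELECT ?building ?address ?hazard
WHERE {
    ?building a ex:Building ;
              ex:address ?address ;
              ex:hazardousMaterial ?hazard .
}
"""
for building, address, hazard in g.query(query):
    print(building, address, hazard)
```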

Fire department Amsterdam-Amstelland is already using linked data to build a small navigation assistant, as described in a previous post.