Vision technology lifts blinkers from tunnel vision
First published in ITS International
Tunnel entrances and exits pose particular challenges for vision-based systems
Sony’s Jerome Avenel looks at how advances in imaging technology are helping improve safety.
On 24 March 1999, a Belgian truck transporting flour and margarine through the 11.6km Mont Blanc tunnel caught alight when a cigarette stub entered the engine induction snorkel, igniting the paper air filter. The fire left 39 dead and many more injured. At the time, the Mont Blanc tunnel disaster was the world’s worst tunnel fire.
The fire’s intensity, which caused armoured metal cables to melt, was increased significantly by the confined tunnel environment, which prevented the dispersion of gases and heat and led to extreme temperatures. The same effect can be seen in similar subsequent disasters, such as Switzerland’s 2001 Gotthard Road Tunnel fire, which claimed 11 lives and left 128 people injured, and Austria’s 2000 Kaprun funicular fire, which claimed over 150 lives and burned so hot that only the train’s chassis remained.
The risks associated with road tunnels are further exacerbated by the increased likelihood of crashing in low and rapidly changing light levels.
In the aftermath, European legislation was passed requiring the owners of all tunnels longer than 300m to implement automatic incident detection systems that alert the authorities to both smoke and stopped vehicles.
How this is interpreted and implemented varies from country to country, with many going beyond the minimum standard. Sweden, for example, has implemented camera surveillance systems on almost all large tunnels, with one of the latest to gain this technology being the Norra Länken (Northern Link) motorway in Stockholm.
Here, the authorities worked with Swedish integrator ISG to install roughly 500 Sony FCB cameras to specifically monitor the entire motorway tunnel and road network.
To further minimise the risk of disaster, the EU has sought to implement regulations restricting access for vehicles transporting dangerous goods in potentially dangerous areas – tunnels and city centres. Indeed, approximately 30 tunnels around Paris and 30 in Belgium already ban vehicles transporting such substances.
Automatic identification of Hazchem is helping enforce bans on dangerous goods entering certain tunnels
Enforcing this ban, however, has proved difficult, and several trucks are caught ignoring the signs each year. Governments have therefore turned to technology to automatically monitor the trucks that enter tunnels. This allows them to track vehicles that pose a risk but enter legally, ensuring the correct emergency procedures can be followed (for example, using foam rather than water to fight chemical fires), and to administer hefty fines to drivers and companies that enter tunnels illegally.
Here, one such example comes from the French integrator Survision, which has worked with the European authorities to test an optical character recognition system akin to ANPR, but for the orange dangerous-materials signs used on the front of lorries.
Its systems use two Sony FCB cameras to detect both the dangerous material signage and the number plate. Upon identification, the cameras instantly send an alert to digital road signs by the entrance of the tunnel to warn the driver to either turn back or continue if permitted. The vehicle can also be tracked throughout the tunnel using a series of cameras, ensuring the authorities are aware of any incident instantly and precautions can be taken accordingly.
ITS applications are, by their very nature, a challenging environment for imaging. A sharp image is needed for near-instant analysis, even though vehicles are often moving at speed. Coupled with this, cameras placed outside, in the run-up to the tunnel, have to cope not just with extremes of light (both midnight and midday) but with fast-changing light levels too, be it from moving cloud cover or shadows cast by large vehicles. Additionally, cameras inside tunnels need to deliver video quality that enables the algorithms to run accurately, despite a low-light environment that is also subject to shadows and to glare from vehicle lights.
In such applications, where speed is vital, exposure time was traditionally cut by lighting the captured area at a high level, or by increasing the light sensitivity of the image sensor itself. But this often leads to glare in another part of the image.
Transmission distance vs throughput - the major tradeoffs
Image processing can partially correct these problems, but it also increases the computational load, slowing the system and making it harder to distinguish real features from illumination artefacts.
The move to CMOS sensors enables a greater range of features to counter this (for example, wide dynamic range – see below). However, fast-moving vehicles are subject to the blurring effects of traditional rolling-shutter CMOS architectures, where each frame is captured by scanning rapidly across the scene, so not all parts of the image are recorded at exactly the same instant.
One way around this issue is to overclock the sensor and run it at up to 120 frames per second (fps). However, European highways agencies have almost exclusively adopted GigE, which enables an exceptionally long transmission range but limits throughput to 1 gigabit per second – roughly equivalent to 24fps of 5-megapixel images.
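That 24fps figure can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below assumes 8-bit monochrome pixels and roughly 95% usable link throughput after protocol overhead – both assumptions for illustration, not vendor figures.

```python
# Rough check of the GigE bandwidth figure quoted above.
LINK_RATE_BPS = 1_000_000_000   # GigE: 1 gigabit per second
USABLE_FRACTION = 0.95          # assumed allowance for packet overhead
PIXELS = 5_000_000              # 5-megapixel sensor
BITS_PER_PIXEL = 8              # assumed 8-bit mono, uncompressed

bits_per_frame = PIXELS * BITS_PER_PIXEL            # 40 Mbit per frame
max_fps = LINK_RATE_BPS * USABLE_FRACTION / bits_per_frame

print(f"{bits_per_frame / 1e6:.0f} Mbit per frame")  # 40 Mbit per frame
print(f"~{max_fps:.0f} fps sustainable")             # ~24 fps sustainable
```

At 120fps, the same arithmetic shows the sensor would need roughly five times the bandwidth GigE can offer, which is why overclocking the sensor alone does not solve the problem.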
Alternatively, newer global-shutter CMOS sensors capture complete frames at the same instant and are already used in high-speed manufacturing processes to counter blur. They are starting to be adopted for ITS applications both for this reason and because they add another way to manage exposure: controlling the capture at the pixel level and, through the higher frame rates enabled by CMOS imagers, overcoming these problems of illumination consistency.
GigE does, however, have one very significant advantage: the ability to synchronise cameras using the IEEE 1588 precision time protocol. Each camera runs on its own clock, and IEEE 1588 dynamically assigns one as the master. The master clock is then used to regularly synchronise all cameras, giving accurate trigger timings for the likes of average speed cameras, as well as synchronisation with the lens or the lighting – which, critically for ITS applications, enables a reduced flash intensity and duration.
Using the precision time protocol it's possible to synchronise multiple elements in a system to the master clock (blue) and achieve microsecond firing accuracy
The protocol also allows for GPO control, acquisition scheduling and multiple command queues – even though few camera manufacturers implement this.
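At the heart of IEEE 1588 is a two-way timestamp exchange: the slave computes its offset from the master on the assumption that the network path delay is symmetric. The sketch below shows that core calculation with illustrative timestamps in microseconds; real camera implementations rely on hardware timestamping for sub-microsecond accuracy.

```python
# Minimal sketch of the IEEE 1588 offset calculation a slave device performs.
def ptp_offset_and_delay(t1, t2, t3, t4):
    """t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric network path (the protocol's core assumption)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Illustrative scenario: slave clock runs 100us ahead, path delay is 50us.
t1 = 1_000_000
t2 = t1 + 50 + 100   # arrival stamped by the fast slave clock
t3 = t2 + 10         # slave turnaround before sending Delay_Req
t4 = t3 + 50 - 100   # arrival stamped by the master clock

print(ptp_offset_and_delay(t1, t2, t3, t4))   # -> (100.0, 50.0)
```

Once the offset is known, the slave disciplines its clock towards the master, which is what makes the synchronised triggering and flash timing described above possible.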
Cameras in non-ITS systems running on shorter-range, higher-throughput standards, such as Camera Link, can use wide dynamic range techniques, which merge multiple shots taken in very quick succession, differing only in exposure time. The technique enhances dark sections and reduces glare in bright sections, optimising the overall image and creating a sharper picture, free from effects such as heat shimmer or blurring.
Again, this creates large file sizes, which are unsuited to GigE. ITS applications in environments such as tunnels can, however, still benefit from this feature by focusing on regions of interest – programming the image sensor to send only portions of an image so that bandwidth is optimised. In turn, this allows more imaging subsystems, more sensors and more captures, so individual processes can be monitored more effectively.
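The bandwidth saved by a region-of-interest readout is easy to estimate. The figures below are illustrative assumptions (a roughly 5-megapixel sensor cropped to a horizontal strip across the carriageway), not vendor specifications.

```python
# Rough sketch of the bandwidth saved by region-of-interest (ROI) readout.
SENSOR_W, SENSOR_H = 2592, 1944   # assumed ~5-megapixel full frame
ROI_W, ROI_H = 2592, 400          # assumed strip across the carriageway

full_pixels = SENSOR_W * SENSOR_H
roi_pixels = ROI_W * ROI_H
fraction = roi_pixels / full_pixels

print(f"ROI is {fraction:.0%} of the full frame")
print(f"{full_pixels // roi_pixels} such ROI streams fit in one full-frame budget")
```

On these assumptions the ROI consumes about a fifth of the full-frame bandwidth, so four such streams fit in the link budget that one full-frame stream would need – which is exactly the "more sensors, more captures" benefit described above.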
ITS vision systems are changing, moving from simple revenue and enforcement applications to incorporate deep learning and analytics that monitor the way people are driving. It is likely we’ll see such systems implemented in the near future, especially in high-risk environments such as long tunnels, to identify erratic driving or issues such as overloaded vehicles.
Also, as camera systems are used in litigation to prosecute a driver or company, there is a need to ascertain who was driving. Drivers are not required to self-incriminate, so it falls to the authorities to prove who was at the wheel. Here, advances in polarised-light sensors are likely to make their way into ITS cameras in the coming years. By further reducing glare, these will make driver identification easier, even in these particularly challenging and high-risk environments.