Just a few years ago, terms such as ‘embedded’ and ‘polarisation’ were buzzwords.
But now they are real and present examples of vision technology in action – and, Adam Hill finds, the ITS industry is waking up to a number of possible applications
Every aspect of the intelligent transportation systems industry moves quickly – but developments in camera technology change with a rapidity which can appear quite bewildering. And with ITS providers constantly searching for an edge against fierce competition, advances in vision technology have the ability to be game-changers. A number of these were on show at Vision 2018 in Stuttgart, Germany – which gave ITS International the chance to catch up with the experts.
“Embedded, which was just a buzzword several years ago, has grown into a strong trend in computer vision to ease customers’ pain over cost pressure and scalability,” says Enzio Schneider, senior product manager at Basler. “The embedded approach enables customers to provide more competitive, smaller and tailor-made solutions at a reasonable price to meet most of the vision industry’s needs.”
Axel Krepil, vice president, sales at Framos, agrees with this assessment. “The biggest development is the trend towards embedded vision, enabling all kinds of machines and devices to see,” Krepil says. “Based on criteria like performance, availability and connectivity, there is great potential in embedded solutions for automotive and infrastructure applications. With a focus on technologies like 3D, smart sensors, artificial intelligence (AI) and modular design, the embedded approach allows companies to create systems that are both standardised and customised, meeting specific needs and differentiating from the competition.”
This not only saves development time and accelerates time to market, but also brings cutting-edge solutions for traffic and infrastructure, with imaging technology increasing safety and convenience, Krepil insists.
Jools Hudson, marketing manager at Gardasoft, sees three major developments which have come to the fore over the last couple of years. The first of these is the trend for replacing multiple camera configurations with a single camera. “Today’s machine vision cameras are far more advanced than cameras of the past,” she says. “This frequently enables systems with multiple cameras to be replaced by a single unit capable of watching several lanes at once.”
The second big change is the move from analogue to digital. “Analogue cameras have inherent performance limitations for ITS systems,” she explains. “New, digital cameras with GigE Vision and CoaXPress interfaces permit a long distance between cameras and computers.”
The third significant development which Hudson identifies is the switch to industrial-grade cameras: “ITS cameras have to withstand variable and harsh conditions. Industrial cameras provide a tougher solution that remains operational in all weather conditions.”
Polarisation is another idea which has come to the fore in recent years. Lucid Vision Labs was first to market this year with its Phoenix polarisation camera, featuring Sony’s IMX250MZR/MYR sensor. “Image sensors now offer even higher resolutions, more sensitive pixels and faster frame rates,” says Torsten Wiesinger, general manager EMEA, Lucid Vision Labs. “Also, new sensing technologies beyond the visible – such as polarisation, SWIR, LWIR, UV, multispectral, et cetera – have become more popular and allow us to significantly enhance inspection capabilities. For example, two years ago nobody really talked about polarisation, and now several camera manufacturers are offering polarisation cameras.”
The introduction of 2.5GBASE-T and 5GBASE-T standards is another interesting development, Wiesinger says. These allow data transmission over twisted-pair copper wire at speeds of 2.5 Gbit/s and 5 Gbit/s respectively, creating two intermediate standards between Gigabit Ethernet and 10 Gigabit Ethernet. “The standards allow camera-to-computer connections of up to 100m on unshielded low-cost Cat 5e and Cat 6 twisted-pair cables,” he adds. Lucid presented its new 5GBASE-T PoE Atlas camera, featuring the new 31.4 MP Sony IMX342 sensor, at Vision 2018 in Stuttgart, Germany in November.
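To put those link speeds in context, a back-of-envelope calculation shows why the intermediate standards matter for high-resolution cameras. The sketch below assumes 8-bit monochrome pixels and roughly 90% usable throughput after protocol overhead; only the 31.4 MP figure comes from the article, and the efficiency factor is an assumption, not a vendor specification.

```python
# Rough link-budget sketch: estimate the maximum frame rate a copper
# Ethernet link can sustain from sensor resolution and bit depth.
# The 0.9 efficiency factor (protocol overhead) is an assumption.

def max_frame_rate(megapixels, bits_per_pixel, link_gbps, efficiency=0.9):
    """Frames per second the link can carry at the given usable throughput."""
    bits_per_frame = megapixels * 1e6 * bits_per_pixel
    return (link_gbps * 1e9 * efficiency) / bits_per_frame

# A 31.4 MP sensor at 8 bits/pixel over the three copper standards:
for name, gbps in [("1GBASE-T", 1.0), ("2.5GBASE-T", 2.5), ("5GBASE-T", 5.0)]:
    print(f"{name}: ~{max_frame_rate(31.4, 8, gbps):.1f} fps")
```

Under these assumptions, Gigabit Ethernet limits such a sensor to a few frames per second, while 5GBASE-T lifts the ceiling to roughly 18 fps – which is why the intermediate standards are attractive for camera links that cannot justify 10 Gigabit infrastructure.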
Teledyne Imaging offers VIS, infrared, LIDAR and fusion (a combination of VIS and infrared) cameras and sensors that enable enhanced detection and improved object classification. Manuel Romero, senior product manager at Teledyne Dalsa, sees machine vision, AI and deep learning systems as key developments in improving the safety and reliability of intelligent transport systems over the last two years. “The addition of infrared sensors to autonomous cars, in particular, has improved the differentiation of humans from inanimate objects, helping mitigate the chance of pedestrian accidents,” Romero explains. “Equipped with better vision and deep learning technology, intelligent transport systems are now capable of making smarter real-time decisions by mimicking the pattern recognition abilities of human intelligence to train on certain features or patterns and comprehensively evaluate big data.”
Given that technology moves so fast, and that competition is fierce, it is no surprise that companies are eagerly looking for what they see as the next major area which the industry should be concentrating on.
“We are seeing a major shift in the industry, driven by many emerging technologies such as the industrial Internet of Things (IoT), AI, deep learning, embedded vision, et cetera,” says Lucid’s Wiesinger. “In addition to IEEE 1588, more standardised time-sensitive networking (TSN) technologies are being developed by the IEEE 802.1 working group that will offer deterministic operations between Ethernet-connected devices. These solutions will provide critical real-time communication on the same physical network, with guaranteed latency and low-jitter connectivity. Another interesting initiative is the OPC UA Vision standard led by the VDMA to enable easy integration of machine vision in Industry 4.0 and smart factories of the future.”
ITS systems have been evolving fast due to the increased demand for imaging intelligence, says Hudson at Gardasoft. “This has sparked many hardware and software innovations to satisfy the demand,” she continues. “One of the most exciting trends is the integration of machine vision and machine learning into ITS systems. ITS systems of the future will use machine learning technology to analyse traffic patterns and help alleviate congestion.”
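One simple form such traffic-pattern analysis can take is statistical anomaly detection on detector counts. The sketch below is purely illustrative – the detector data, function name and threshold are all hypothetical, not part of any vendor's product – flagging intervals whose vehicle count sits well above the historical norm.

```python
# Illustrative sketch only: flag potentially congested intervals by
# comparing hypothetical loop-detector vehicle counts against the
# sample mean. Real ITS analytics would use far richer models.

from statistics import mean, stdev

def congested_intervals(counts, threshold=2.0):
    """Return indices whose count exceeds mean + threshold * stdev."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if c > mu + threshold * sigma]

# Hourly counts from a hypothetical detector; the value at index 8 spikes.
counts = [120, 115, 130, 125, 118, 122, 127, 124, 410, 126, 121, 119]
print(congested_intervals(counts))  # flags the spike at index 8
```

A production system would replace the flat z-score rule with a model trained per detector and time of day, but the principle – learning what normal traffic looks like and reacting to deviations – is the same one Hudson describes.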
At Basler, Enzio Schneider points out that there are many markets – such as security, IoT and smart cities – which have not yet been fully penetrated by computer vision companies. “Due to decreasing prices for embedded processing boards, image sensors and off-the-shelf cameras, computer vision solutions are becoming much more attractive for these markets,” he says. “The same trend has enabled the vision industry to assimilate the traffic market over the past 10 years.”
For Axel Krepil at Framos, imaging and vision technology will be everywhere, enabling machines to see in all industrial and consumer markets. “Robotics, IoT and industrial processes, as well as the smart home, mobile solutions and entertainment, will benefit from new vision technologies,” Krepil adds. “But real-time vision applications show huge potential in autonomous vehicles, ride-sharing systems, smart parking, enhanced traffic-monitoring and control programmes, and networked infrastructure. Imaging is a basic technological necessity required to solve a variety of challenges in automotive, traffic and infrastructure management.”
This leads us on to perhaps the most significant question: are we becoming smarter when it comes to the applications that we are using the technology for?
“Of course,” continues Krepil unequivocally. “Imaging technology enables machines to see; intelligent processing algorithms help them to understand and react autonomously, as seen in self-driving cars or drones avoiding crashes through intelligent recognition of their environment. Embedded camera systems with high image recognition accuracy – even in poor light conditions for 24/7 operations – provide image data to be evaluated by the algorithms in real time, improving traffic safety, traffic flow and overall inspection and monitoring tasks for the management of today’s smart and modern cities.”
Lucid’s Torsten Wiesinger agrees that the future is bright. “With the emergence of OPC UA TSN and the OPC Vision Initiative, industrial companies can use a single Ethernet network for both time-critical applications such as image capture and less time-critical IT systems,” he explains. “Since the OPC UA TSN standard can be applied to computer-based nodes on the network, including cameras, PCs, PLCs and server-based systems, it will be especially useful in developing edge-based and cloud-based network applications.”
At Gardasoft, Jools Hudson is similarly upbeat. “The increasing adoption of machine learning and AI technology is certain to have a profound impact on ITS systems,” she insists. “As the capability to gain intelligence from a captured image grows, so the demand for sophisticated intelligence will continue to grow. ITS systems will increasingly be able to reliably identify information such as number of vehicle occupants, seat belt compliance, driver in-cab activity and make and model of vehicle.”
For Schneider at Basler it is a nuanced question. He suggests that the notion of becoming ‘smarter’ depends on what you are talking about. “If you mean, do we understand the applications better – the answer is probably no, because R&D costs need to be spent efficiently, so high complexity is not really helpful,” he says. “This will lead to specialised companies providing easy-to-use toolboxes to reduce the effort on the customer side. On the other hand, this means that just a few people – on the customer side – actually deal with the application itself. The provider of the toolbox is going to be too far away to understand the details of the various applications.”
But, he says, if the question is whether we are in general getting smarter by using the various applications (for instance, in traffic, security or retail), then the answer is “not necessarily”. “If you gather big data, it does not help if you don’t have the capabilities and capacity to analyse it,” Schneider concludes. “However, this will be supported in the long run by AI, neural networks and machine learning.”
So… there is agreement on many issues, but a divergence of opinion across some key areas – proof, if it were needed, that predictions are a tricky business. But today’s vision products offer a tantalising glimpse into the ITS solutions of the future.