Seoul Robotics has introduced SENSR 3.0, the latest iteration of the SENSR 3D perception platform.
When embedded into 3D sensors such as lidar, SENSR uses deep learning to detect, track and identify hundreds of objects to within 4cm accuracy.
The company says 3D systems are now in the same price bracket as 2D cameras, making them attractive to ITS partners that want to avoid collisions and make traffic flow more efficiently.
Included with the software are QuickTune, a new snap-to-point tool to expedite sensor calibration, and QuickSite, a cloud-based simulation tool that enables users to virtually design and scale 3D systems. QuickSite accounts for sensor positions and angles, letting users adjust placements virtually until coverage is optimal, thus reducing installation times.
"Imagine what we can accomplish if we can more accurately perceive our world beyond what’s visible to the human eye: roadways will be safer for both drivers and pedestrians, stores are optimised based on the customer journey, and airports experience reduced wait times," said William Muller, vice president of business development at Seoul Robotics.
"Those are just the immediate benefits that come from installing 3D systems."
The company says during set-up QuickTune finds "commonalities in an environment, such as a wall or corner, and automatically calibrates multiple sensors in a system to speed up deployments".
Users can also select a specific spot for sensors to calibrate around, which is useful for multi-sensor installations: just a few clicks are needed, reducing calibration time.
The SENSR platform is both hardware- and sensor-agnostic, making it compatible with a range of different systems and scalable even across large footprints.
It is also verified to run on multiple versions of Ubuntu.
Seoul Robotics will be showcasing SENSR 3.0 during CES 2023 at booth #5408 in the West Hall of the Las Vegas Convention Center.