Xilinx and Motovis have revealed they are collaborating on a solution that pairs the Xilinx Automotive (XA) Zynq® system-on-chip (SoC) platform with Motovis’ convolutional neural network (CNN) IP for the automotive market, specifically for vehicle perception and control in forward camera systems. The solution builds upon Xilinx’s corporate initiative to provide customers with “robust platforms to enhance and speed development”.
Motovis develops advanced AI technology embedded in automotive chips to provide autonomous driving products and solutions. The company has deployed full-stack autonomous driving software and hardware systems, including parking and cruising as well as front-view and surround-view applications, based on accurate environmental perception, sensor fusion, localization, path planning and vehicle control algorithms.
Forward camera systems are a critical element of advanced driver-assistance systems because they provide the advanced sensing capabilities required for safety-critical functions, including lane-keeping assistance (LKA), automatic emergency braking (AEB), and adaptive cruise control (ACC).
Government regulators and consumer watch groups worldwide, including the European Commission (through its General Safety Regulation), the National Highway Traffic Safety Administration and NCAP, have issued formal mandates or strong guidance regarding automakers’ implementation of LKA and AEB in new vehicles produced from 2020 through 2025 and onward.
The solution, which is available now, supports a range of parameters necessary for the European New Car Assessment Program (NCAP) 2022 requirements by utilizing convolutional neural networks to achieve what the companies call a “cost-effective combination of low-latency image processing, flexibility and scalability”.
“This collaboration is a significant milestone for the forward camera market as it will allow automotive OEMs to innovate faster”, said Ian Riches, vice president for the Global Automotive Practice at Strategy Analytics. “The forward camera market has tremendous growth opportunity, where we anticipate almost 20 per cent year-on-year volume growth over 2020 to 2025.
“Together, Xilinx and Motovis are delivering a highly optimized hardware and software solution that will greatly serve the needs of automotive OEMs, especially as new standards emerge and requirements continue to grow”.
The forward camera solution scales across the 28nm and 16nm XA Zynq SoC families using Motovis’ CNN IP, a combination of optimized hardware and software partitioning capabilities with customizable CNN-specific engines that host Motovis’ deep learning networks.
The solution supports image resolutions of up to eight megapixels. For the first time, OEMs and Tier-1 suppliers can layer their own feature algorithms on top of Motovis’ perception stack to differentiate and future-proof their designs.
“We are extremely pleased to unveil this new initiative with Xilinx and to bring to market our CNN forward camera solution. Customers designing systems enabled with AEB and LKA functionality need efficient neural network processing within an SoC that gives them flexibility to implement future features easily”, said Dr Zhenghua Yu, CEO, Motovis.
“With Motovis’ customizable deep learning networks and the Xilinx Zynq platform’s ability to host CNN-specific engines that provide unmatched efficiency and optimization, we’re helping to future-proof the design to meet customer needs”.
Xilinx and Motovis will be speaking at the Xilinx Adapt 2021 virtual event on September 15, 2021. Adapt 2021 will feature keynotes with appearances from partners and customers, along with a series of more than 100 presentations, forums, product training and labs.
Source: Strategy Analytics, “ADAS Semiconductor Demand Forecast”, August 2021 (2025 forward camera processor market).