László is a serial entrepreneur. AImotive evolved from his first venture, Kishonti Ltd., which quickly became a leading high-performance graphics and computing solutions company. The firm then turned its focus to automotive as artificial intelligence and autonomous driving started to gain global momentum. László was educated in Economics and Finance.
AImotive is the leading global provider of AI-powered self-driving technology. Using cameras as primary sensors, our solutions mimic the visual capabilities of human drivers. This approach results in technology that can readily scale, helping make autonomous driving a reality around the world.
Tell us about the journey that led you to AImotive.
AImotive grew from my first company, Kishonti Ltd., which quickly became a leading high-performance graphics and computing solutions company. When I saw that the mobile industry was plateauing and the AI and autonomous industry was gaining momentum, I set up a team in-house to explore the possibilities. Our expertise in high-performance parallel computing, GPU computing and graphics was extremely useful because self-driving is such a data- and performance-intensive process. As the small team grew, it was eventually spun out to found AImotive.
What are the top misconceptions people have about autonomous driving technologies?
The most dangerous misconception is probably that people overestimate the capabilities of current production advanced driver assistance systems (ADAS). There is currently no self-driving system deployed, and the driver should keep their eyes on the road at all times. For now, the driver is still responsible for what the vehicle does.
Another common misconception concerns artificial intelligence in general rather than self-driving specifically.
Current AI solutions utilize what’s called narrow AI, networks that are only able to complete a small selection of very well-defined tasks. General AI, the kind you see in movies, is extremely far away. AI is a powerful tool, without which self-driving would probably be impossible. However, it has its limits in both algorithm complexity and processing power.
Tell us about your team.
The AImotive team currently has almost 180 members worldwide. The majority work at our headquarters in Budapest, where development happens. Our offices in Helsinki, Finland and Mountain View, California serve as testing and sales hubs. We also have a representative in Japan with two test cars, and will set up a similar satellite office there over the summer of 2018. The team is constantly growing and we should hit around 220 by the end of the year. We currently have around 20 PhDs in the company, a number I’m sure will also rise.
Talk us through the three main challenges that the autonomous vehicle industry is facing. How is AImotive tackling these challenges?
Testing is probably the biggest and most important challenge. Estimates suggest that a self-driving system would have to cover 5,000,000,000 miles (8,000,000,000 km) in testing to be proven safe. That is at a minimum 150,000 car-years: a fleet of 150 self-driving prototypes would need 1,000 years to complete the tests. Calculating with two engineers in each car, that means a cost of at least $18 billion.
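The figures above can be checked with a quick back-of-the-envelope calculation. This is a minimal sketch, assuming roughly 33,000 test miles per car per year and a fully loaded cost of about $60,000 per engineer per year; neither assumption is stated in the interview.

```python
# Back-of-the-envelope check of the self-driving testing estimates.
REQUIRED_MILES = 5_000_000_000
MILES_PER_CAR_YEAR = 33_000          # assumed average annual mileage per prototype
ENGINEERS_PER_CAR = 2
COST_PER_ENGINEER_YEAR = 60_000      # assumed fully loaded cost, USD

car_years = REQUIRED_MILES / MILES_PER_CAR_YEAR        # total car-years of testing
fleet_years = car_years / 150                          # calendar years for a 150-car fleet
total_cost = car_years * ENGINEERS_PER_CAR * COST_PER_ENGINEER_YEAR

print(f"{car_years:,.0f} car-years")                   # on the order of 150,000
print(f"{fleet_years:,.0f} years for a 150-car fleet") # on the order of 1,000
print(f"${total_cost / 1e9:.1f} billion")              # on the order of $18B
```

Under these assumptions the numbers line up closely with the interview's figures, which shows how sensitive the total cost is to per-car utilization and staffing.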
Simulated testing can dramatically reduce this number, because scenarios can be repeated easily under different weather patterns and lighting conditions. Simulated scenarios are also far denser in interesting events than the endless, monotonous kilometers of real-world road testing.
The second major difficulty is the lack of regulation for the development and deployment of self-driving cars. Nearly all current regulation concentrates on testing, and while the technology may be ready in a couple of years, it could be several years later that it is actually deployed because of regulatory issues.
The third issue is one of processing power and hardware. Current prototypes have GPU-based systems built into them. These consume a huge amount of power and take up a lot of space. To bring consumption down from 1500 W to around 50 W in a smaller form factor without sacrificing processing power is an extreme engineering challenge. The chips will also have to be automotive grade. Our answer to this challenge is aiWare, our AI accelerator design, which is showing great improvements in power consumption compared to GPU-based systems.
What kind of testing procedure does the aiDrive technology go through? Tell us about aiSim.
The development of aiDrive happens in a unique development pipeline that relies heavily on aiSim. When new feature requests are made, either internally or by our partners, our safety and requirement engineering teams analyze the feature. Their guidelines form the basis of development work. The safety team will also define safety scenarios for the system to be virtually tested on.
Once development is complete, the feature is first tested in aiSim, our photorealistic simulator. aiSim is so embedded in the development process that every code change triggers a series of tests on a batch of 1,600 scenarios. We use both fixed-time-step and real-time simulation to test different aspects of the technology. Fixed-time-step simulation runs on any heterogeneous hardware setup, even a developer's laptop, while real-time simulation gives us insight into the runtime of the algorithms and visual information on the behavior of the car, and allows for hardware-in-the-loop testing. If the simulated tests are successful, real-world tests start on a closed test course and then move to public roads in Hungary and our other testing locations.
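The difference between the two simulation modes can be illustrated with a toy loop. This is a minimal sketch of the general fixed-time-step versus real-time idea, not AImotive's actual aiSim code; all names are hypothetical.

```python
import time

def simulate(scenario_steps, dt, real_time=False):
    """Advance a toy vehicle simulation by a fixed time step `dt`.

    In fixed-time-step mode the loop runs as fast as the hardware allows,
    so results are bit-for-bit reproducible on any machine. In real-time
    mode each step is padded out to wall-clock duration `dt`, which exposes
    algorithm runtime and allows hardware-in-the-loop testing.
    """
    sim_time = 0.0
    position = 0.0
    speed = 10.0  # m/s, hypothetical constant-speed vehicle
    for _ in range(scenario_steps):
        start = time.perf_counter()
        position += speed * dt          # deterministic physics update
        sim_time += dt
        if real_time:
            elapsed = time.perf_counter() - start
            time.sleep(max(0.0, dt - elapsed))  # pad step to wall-clock dt
    return sim_time, position

# 100 steps of 10 ms: one simulated second, 10 m travelled.
# The physics result is identical in both modes; only wall-clock
# duration differs.
print(simulate(100, 0.01))
```

Because the physics update depends only on `dt` and never on wall-clock time, the same scenario produces the same trajectory whether it runs on a laptop or a full hardware rig, which is what makes batch regression testing feasible.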
Naturally, data from all these tests – real-world and simulated – is fed back to our development teams to improve the system. aiSim can also generate ground-truth data for preliminary neural network training. This allows our teams to examine possible solutions in the simulator. However, before using these networks in the real world they have to be trained on real-world data.
What factors does the aiDrive technology consider when making decisions about conditions and people around the car while on the road?
This is a very complex question. The preparations begin at the very start of our pipeline, with safety and requirement engineering. These requirements define development and the safety scenarios used in simulated testing. We test at both the modular and the full-system level. The former includes recognition tests along with decision-making, control and vehicle-dynamics tests. The system is verified on several different levels that consider a range of environmental and special factors. Different weather conditions, times of day and lighting conditions are all taken into consideration, such as dense fog, clear noon, or cloudy skies with a wet road surface. Special situations include, for example, unexpected objects or animals on the road, or road works and construction sites. During testing we also inject different faults, such as sensor failures and software and hardware errors, to verify how the vehicle would behave in such cases.
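The fault-injection idea described above can be sketched as a wrapper around a sensor interface. This is a minimal illustration under assumed names (`CameraStub`, `FaultInjector`, `perception_step` are all hypothetical), not AImotive's test harness; real harnesses would script faults per safety scenario.

```python
class CameraStub:
    """Hypothetical sensor returning a frame of pixel data."""
    def read(self):
        return [0.5] * 4  # placeholder frame

class FaultInjector:
    """Wrap a sensor and deterministically drop every Nth reading.

    Scripted (rather than random) faults keep each safety scenario
    reproducible across test runs.
    """
    def __init__(self, sensor, drop_every):
        self.sensor = sensor
        self.drop_every = drop_every
        self._count = 0

    def read(self):
        self._count += 1
        if self._count % self.drop_every == 0:
            return None  # emulate a dropped frame / failed sensor
        return self.sensor.read()

def perception_step(frame):
    """Toy consumer that must tolerate missing sensor data."""
    if frame is None:
        return "fallback"  # e.g. coast on the last known estimate
    return "ok"

injector = FaultInjector(CameraStub(), drop_every=5)
results = [perception_step(injector.read()) for _ in range(100)]
print(results.count("fallback"), "faults injected out of 100 reads")
```

The test then verifies not that faults never happen, but that the downstream consumer degrades gracefully every time one does.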
Naturally, there is also a regulatory aspect, as road traffic laws differ from country to country. We also test the behavior of aiDrive in situations that characteristically lead to accidents when humans are driving. These scenarios are created based on data from national and international databases such as the NHTSA Pre-Crash Scenario Typology and the Euro NCAP active safety coverage. ISO standards for ADAS functional performance testing are also observed. These factors all shape the functional and performance testing of our systems, which together define what the software is capable of under given conditions. In performance testing we observe how different curve radii, banked roads, or even different weight distributions affect the self-driving software stack. Functional testing is aimed at developing new and existing features, and we are safety-conscious in this regard as well: only versions that have been verified in the simulator can be tested on public roads.
How does aiDrive technology leverage computer vision and AI?
As far as we know, AImotive was the first startup that set out to replace human drivers with a system that works the same way they do. aiDrive currently uses AI in recognition and decision-making tasks. AI is great for recognition because it makes systems more robust: if trained properly, an AI-based system can recognize objects that are partially occluded, or recognize lanes at a greater distance ahead. In decision making, the incorporation of behavioristic elements is most important. This means aiDrive makes predictions about what other actors around it are going to do, and plans its trajectories accordingly. An AI-based solution is much more robust because it can adapt its training to different scenarios rather than relying on decision trees.
Our vision-first approach is based on the fact that our current road networks are set up around visual cues. This means that cameras provide the greatest information density for self-driving cars. A vision-first solution can, in essence, mimic the capabilities of a human driver. We use other sensors for redundancy and safety but the backbone of the technology is artificial intelligence and a vision-first sensor setup.
Tell us about your partner companies, and the collaborative work you are involved in.
We are working with Groupe PSA, the French OEM, on a Level 4 highway autopilot project. This fruitful collaboration has allowed us to test our systems in France, alongside our own testing permits in Hungary, Finland, California and Nevada. We are also members of the Samsung DRVLINE Platform which was announced at CES 2018. Our other partners include Volvo and a Chinese OEM. Regarding aiWare, our artificial intelligence accelerator architecture, we are working with VeriSilicon and Global Foundries on producing test chips to run in our cars in H2 2018.
How do you see the marketplace for autonomous vehicles shaping up? What changes if any would you like to see in the community?
The first marketplaces to open up to L4 and L5 autonomous vehicles will be those less affected by regulation, such as closed-loop and commercial logistics. It is also important to understand that the L4 and L5 systems under development increase the safety of the L2/L3 systems entering production.
Developers and OEMs must also be prepared for fragmented regulatory changes. Current patterns already indicate that regulation for self-driving will arrive at different times in different regions, and may also be limited to single roads. For example, self-driving systems are currently not allowed in the Netherlands; however, if certain highway sections were opened to such solutions, OEMs and developers would have to be able to support these regulatory changes.
What are your thoughts on the kind of regulation that the industry needs?
Looking at current regulations in the US, it seems that local governments are having difficulty understanding what is needed. I think at some point a government will lay down a fixed test, similar to a driver’s license for humans, to allow self-driving cars on the road.
This doesn’t contradict the previous answer, it only augments it. A fixed test would be the next step in the process, after fragmented regulations. In the long run it would be best if regulations became international, putting clear guidelines in place.
What can we expect from AImotive in the near future?
Our current focus is on scalability in development, in testing and in deployment. We will continue to rely heavily on simulation technology to accelerate development and ensure safe and economical testing. With two new offices planned for 2018 in Japan and China, we are also looking to expand our road testing opportunities in Asia. We will continue our current collaborations and seek new partnerships, and also hope to bring aiDrive to urban environments in 2018.
Thank you, László! That was fun, and we hope to see you back on AiThority soon.