The inaugural Ride AI conference has just concluded in Hollywood, California. This meeting of minds brought together innovators and influencers on topics related to self-driving cars, spanning hardware, artificial intelligence, and human experience, all in the context of the global future of mobility. At Ride AI, luminaries from across the autonomous driving space met to discuss current focuses and future opportunities for getting people around without people being in control.
Many companies are focused on creating equivalents to, or improvements on, consumer-ready systems like Tesla’s Full Self-Driving or General Motors’ Super Cruise. But a recurring notion throughout the day was that expanding the autonomous capabilities of privately owned vehicles is only one small part of the overall pursuit.
The Toyota Research Institute (TRI) thinks autonomous driving development shouldn’t happen just for its own sake; there needs to be a purpose. TRI’s representatives see relatively near-term technologies less as a replacement for human drivers and more as an assistant or teacher. Using the self-drifting Supra we previously rode in as an example, TRI hypothesized how such capabilities could teach a driver to better control their car and intervene in scenarios, like a hazard-induced skid, that push the limits of a driver’s skill. TRI encouraged others in the space not to be constrained by narratives of what autonomy is or isn’t.
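To make the "assistant rather than replacement" idea concrete, here is a minimal sketch of shared-control blending, assuming an autonomous "expert" that gains steering authority only as a skid escalates beyond the driver's control. The function, inputs, and weighting scheme are illustrative assumptions, not TRI's actual system.

```python
def blended_steering(driver_input: float,
                     expert_input: float,
                     skid_severity: float) -> float:
    """Blend human and autonomous steering commands.

    driver_input, expert_input: normalized steering, -1.0 to 1.0.
    skid_severity: 0.0 (fully in control) to 1.0 (beyond driver skill);
    the expert's share of authority grows with it.
    """
    alpha = min(max(skid_severity, 0.0), 1.0)
    return (1.0 - alpha) * driver_input + alpha * expert_input

# Mild skid: the driver keeps most of the authority.
print(blended_steering(driver_input=0.3, expert_input=0.6, skid_severity=0.2))  # ~0.36
# Severe skid: the system's countersteer dominates.
print(blended_steering(driver_input=0.3, expert_input=0.6, skid_severity=0.9))  # ~0.57
```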
In that sense, work continues to create foundations on which autonomous applications can be built. Nuro is one example. Its early focus was on creating driverless delivery vehicles intended only to carry cargo, not human passengers. However, as Nuro developed, it realized its technology was matching or outpacing that of major mainstream automakers. Now it aims to license its autonomous vehicle tech stack to other producers, whether they build passenger vehicles, roadgoing delivery drones, or something else.
Likewise, Wayve is taking a hardware-agnostic approach, developing an AI-based driving brain that can be applied to vehicles of different types with different sensing systems. To meet its clients’ needs, Wayve is building adaptable logic for autonomy from Level 2 up to Level 4, able to function whether sensing comes from a relatively simple camera-only system or one combining cameras, radar, and lidar. Wayve can train its driving AI in accordance with its clients’ hardware parameters and user experience preferences.
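As a minimal sketch of what "hardware-agnostic" can mean in software terms, the illustration below separates the sensor suite from the driving policy behind a common interface. The class names, data shapes, and placeholder logic are assumptions for illustration, not Wayve's actual architecture.

```python
from dataclasses import dataclass
from typing import Dict, List, Protocol

@dataclass
class DriveCommand:
    steering: float  # normalized, -1.0 to 1.0
    throttle: float  # normalized, 0.0 to 1.0
    brake: float     # normalized, 0.0 to 1.0

class SensorSuite(Protocol):
    """Anything the client runs: camera-only, or camera + radar + lidar."""
    def observe(self) -> Dict[str, List[float]]: ...

class CameraOnlySuite:
    def observe(self) -> Dict[str, List[float]]:
        return {"camera": [0.2, 0.5, 0.1]}  # stand-in for image features

class FullSensorSuite:
    def observe(self) -> Dict[str, List[float]]:
        return {"camera": [0.2, 0.5, 0.1],
                "radar": [12.0, 3.4],   # stand-in for radar returns
                "lidar": [8.7, 9.1]}    # stand-in for point-cloud features

class DrivingBrain:
    """One policy; which sensors it consumes is a training-time
    parameter, not a hard-coded assumption."""
    def act(self, observation: Dict[str, List[float]]) -> DriveCommand:
        # Placeholder: a real system would run a model trained on the
        # client's specific sensor configuration here.
        signal = sum(sum(values) for values in observation.values())
        return DriveCommand(steering=0.0,
                            throttle=min(signal / 100.0, 1.0),
                            brake=0.0)

# The same brain pairs with whichever hardware a client chooses.
brain = DrivingBrain()
for suite in (CameraOnlySuite(), FullSensorSuite()):
    print(brain.act(suite.observe()))
```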
User experience is vital for any brand looking to leverage autonomy; self-driving technology must assuage fears and provide tangible benefits for users, whether they are individuals or institutions. User experience is a key consideration for Waymo as it earns and retains riders. That shows up in small ways, like allowing a rider to choose the onboard music, correctly rendering details of surrounding traffic, or using interior sensors to alert a rider if they leave something behind.
But driving performance is paramount among Waymo’s user experience considerations, and that remains a challenge. Waymo’s goal is to deliver consistently safe, predictable, uneventful rides, but how a Waymo car achieves that somewhere like San Francisco is vastly different from how it does so in Los Angeles, even though both cities fall under the same California traffic regulations. Factors like terrain, driving pace, and road conditions differ considerably between the two, and a Waymo vehicle must navigate each with equal skill. As Waymo seeks to expand into global markets like Tokyo, it must learn entirely different driving styles in remarkably different places yet deliver that same user experience.
How autonomous vehicles might learn better driving behaviors is another complex part of the equation. Although sensor-strewn autonomous vehicles collect vast amounts of real-world road data every day, that alone is insufficient to generate logic for every potential scenario a self-driving system may encounter. Leaders from Mobileye and Bot Auto stressed the importance of AI simulations, and specifically the accuracy of those simulations, in continually and iteratively generating training knowledge for autonomous driving systems. Rather than treating distance driven in simulation as the key metric, measuring the accuracy of those simulations is at least as important. That accuracy can be benchmarked against typical road data, as well as by re-creating accidents and hazards. But no matter how precise AI simulations become, real-world data must remain part of the pursuit so that hardware faults and road scenarios beyond the AI’s grasp still factor into development.
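One way to read "measure accuracy, not mileage" is to score a simulator by how closely its re-creation of a logged scenario tracks the recorded vehicle path. The sketch below assumes a simple time-aligned trajectory comparison; the metric and the sample data are illustrative, not how Mobileye or Bot Auto actually evaluate their simulators.

```python
import math

def trajectory_error(real: list, simulated: list) -> float:
    """Mean Euclidean displacement (meters) between time-aligned real
    and simulated positions; lower implies higher simulation fidelity."""
    assert len(real) == len(simulated), "trajectories must be time-aligned"
    return sum(math.dist(r, s) for r, s in zip(real, simulated)) / len(real)

# A logged real-world maneuver ((x, y) positions in meters)...
real_log = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.3), (3.0, 0.6)]
# ...and the simulator's re-creation of the same scenario.
sim_run = [(0.0, 0.0), (1.1, 0.1), (2.1, 0.4), (3.2, 0.7)]

print(f"mean displacement: {trajectory_error(real_log, sim_run):.2f} m")
```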
The United States regulatory environment poses its own impediments to autonomous vehicle deployment. Fragmented and often distant oversight from different local and federal agencies leads to a confusing, inefficient path to getting technology on the road. China could serve as a model for structuring a regulatory framework for autonomous vehicles. That country’s government is relatively receptive to self-driving cars, establishing clear regulations and procedures that developers can reference to self-certify their work. With the rules laid out clearly and transparently, technologists and consumers alike face less ambiguity about an autonomous vehicle’s qualifications or capabilities. As a result, robotaxi services in China such as Baidu’s Apollo Go and Pony.ai are finding increasing popularity and integration on urban roads.
Big ideas and insights were in no short supply at the Ride AI conference. Yet that very diversity of thought pointed to a need for closer alignment among all the parties involved. With it, technology developers, automakers, and government regulators can reduce inefficiencies and build a shared understanding of what it takes to succeed in tomorrow’s autonomous vehicle industry. The first Ride AI provided an environment for those synergies to take hold, so that discussions at future editions of the conference can further accelerate autonomous mobility.