Dojo Will Secure Tesla’s Lead in Rolling Out Autonomous Taxi Platform, Says ARK Invest

Dojo will secure Tesla’s lead in rolling out a nationwide autonomous taxi platform, according to ARK Invest. Tesla’s projected increase in AI training capacity is the basis for that optimism.

Tesla gives update on Dojo

Earlier this week, Tesla launched a new Twitter account dedicated entirely to its work in artificial intelligence (AI). In a series of tweets, the manufacturer revealed several details about the progress of its Dojo supercomputer. This provoked a reaction from some analysts, among them ARK Invest, which closely follows the company’s developments in the AI field.

ARK Invest dives into the details

In response to the published information, ARK Invest analyst Frank Downing shared his assessment of the company’s progress and targets. He emphasized that Tesla projects its AI training capacity to reach 100 exaflops by the fourth quarter of 2024. Downing explained that this huge number implies over 20x scaling from the 4.6 exaflops disclosed at AI Day 2022 (14k Nvidia A100 GPUs), and over 50x the A100 capacity discussed in 2021. Based on the 5,760 A100s Tesla disclosed at CVPR 21, he notes that this works out to compound annual growth in training capacity of 273% from 2021 to 2024, if the company hits its target (a back-of-envelope version of this arithmetic is sketched below). The analyst writes:

  1. At AI Day 2022, Tesla projected having its first Dojo exapod in production by Q1 23, scaling up to 7 exapods in Palo Alto over time. The July 23 production date on the chart, assuming they’re not ignoring an already-in-production pod, implies Tesla is a bit behind on exapod #1, but plans a faster, larger ramp to ~28 exapods by Q1 24 to reach the capability of 100k A100s.
  2. If the new capacity is all Dojo (and not Dojo + Nvidia mixed), 300k A100 equivalent performance is just under some estimates of Nvidia’s total A100 SXM shipments over the last 12 months (NextPlatform estimates 350k server GPU sales of the variety Tesla is augmenting/replacing with Dojo; I’m assuming most of these were A100s as H100 started ramping recently).
  3. The chart appears to be using flops as the measure of training compute. In reality, a lot more goes into end-use performance than just flops. At AI Day 2022, Tesla estimated that 4 Dojo cabinets (0.4 exaflops) could replace 4,000 A100s (1.2 exaflops) for autolabelling. This would be made possible by the increased compute density and software optimizations Tesla expected to achieve. So either they haven’t been able to achieve the optimizations they thought they could, or 2024 training capability could be even higher than this chart suggests (in terms of A100 equivalent capability).

Additional assumptions:

- 28 exapods by Q1 24 assumes Tesla is exclusively adding Dojo capacity after July and hasn’t added Nvidia capacity since August. The number of exapods would be lower if they plan to ramp Nvidia capacity (i.e. H100s) in parallel, especially considering H100s are 4-6x more performant than A100s at AI training, per Nvidia.
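
To make these scaling claims easier to follow, here is a minimal back-of-envelope sketch in Python. It relies only on the figures quoted above (5,760 A100s at CVPR 21, 4.6 exaflops across roughly 14k A100s at AI Day 2022, and the 100 exaflop target for Q4 2024) and infers a per-A100 throughput from the 2022 disclosure; Tesla’s actual per-GPU accounting is not public, so the outputs are rough approximations rather than Downing’s exact calculation.

    # Back-of-envelope check of the figures quoted above.
    # Assumption (not from Tesla or ARK): per-A100 training throughput is
    # inferred from the AI Day 2022 disclosure of 4.6 exaflops on ~14k A100s.
    a100s_2021 = 5_760            # A100s disclosed at CVPR 21
    a100s_2022 = 14_000           # A100s disclosed at AI Day 2022
    exaflops_2022 = 4.6           # training capacity at AI Day 2022
    exaflops_2024_target = 100.0  # projected capacity for Q4 2024

    ef_per_a100 = exaflops_2022 / a100s_2022               # ~0.00033 exaflops per GPU
    exaflops_2021 = a100s_2021 * ef_per_a100                # ~1.9 exaflops implied for 2021

    scale_vs_2022 = exaflops_2024_target / exaflops_2022   # ~22x ("over 20x")
    scale_vs_2021 = exaflops_2024_target / exaflops_2021   # ~53x ("over 50x")

    # Compound annual growth over the three years from 2021 to 2024.
    cagr = (exaflops_2024_target / exaflops_2021) ** (1 / 3) - 1    # ~2.75, i.e. ~275%

    # A100-equivalent count of the 2024 target, for comparison with the
    # ~350k shipment estimate mentioned in point 2.
    a100_equivalents_2024 = exaflops_2024_target / ef_per_a100      # ~300,000

    print(f"{scale_vs_2022:.0f}x vs 2022, {scale_vs_2021:.0f}x vs 2021, "
          f"CAGR {cagr:.0%}, ~{a100_equivalents_2024:,.0f} A100 equivalents")

Run under these assumptions, the numbers land close to Downing’s: roughly 22x the 2022 capacity, just over 50x the implied 2021 capacity, a compound annual growth rate in the mid-270% range (he quotes 273%), and about 300k A100 equivalents. And, as point 3 stresses, flops are not the whole story: the autolabelling comparison (0.4 exaflops of Dojo standing in for 1.2 exaflops of A100s) implies roughly a 3x effective advantage per flop, so the A100-equivalent figures could understate practical capability if those optimizations hold.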

Cathie Wood expects Dojo to secure Tesla’s lead in rolling out a nationwide autonomous taxi platform

Quoting Downing’s tweet, ARK Invest CEO Cathie Wood concluded that the development could be a major win for Tesla. She wrote that the build-up in compute increases the likelihood that Tesla will be the first in the industry to deploy an autonomous taxi platform in the US. In her opinion, such a platform would be an opportunity to earn SaaS-like (software-as-a-service) margins.
