Tesla first introduced Autopilot in late 2014, and it has been pushing the boundaries of self-driving technology ever since. While new Tesla vehicles come equipped with basic safety features, Autopilot, and especially the Full Self-Driving option reintroduced in early 2019, is where genuine self-driving capabilities come into play.
Hardware 3 is Tesla’s next-generation Autopilot and Full Self-Driving (FSD) computer.
In this article, we’ll take a closer look at the Tesla FSD chip, also known as Hardware 3, as well as what to expect from Hardware 4. This post gets technical in places, but I will do my best to explain the main elements in plain English.
Technical Details of Hardware 3
To begin, here is a broad overview of the board. The board provides full redundancy, meaning any single system on it can fail and the computer will continue to function normally.
All of the cameras connect to the right side of the board, while the power supply and various input and output connections attach to the left side. In the center of the board, Tesla placed two chips.
Tesla uses two chips for redundancy and to cross-check their results, rather than to boost performance. The operating system is stored on flash memory chips beneath and slightly to the left of the main chips.
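As a rough sketch of what this cross-referencing could look like in software (the `plan` function here is a hypothetical placeholder, not Tesla's actual code):

```python
# Minimal sketch of lockstep redundancy: the same input is evaluated by two
# independent compute paths, and a result is accepted only when both agree.

def plan(frame: bytes) -> int:
    """Hypothetical planning step; returns a command code (placeholder math)."""
    return sum(frame) % 256

def redundant_plan(frame: bytes) -> int:
    a = plan(frame)  # chip A
    b = plan(frame)  # chip B (in hardware, a physically separate SoC)
    if a != b:
        raise RuntimeError("cross-check failed: chips disagree")
    return a

print(redundant_plan(b"\x01\x02\x03"))  # both paths agree -> 6
```

In real hardware the two paths run on physically separate silicon, so a fault in one chip shows up as a disagreement rather than a silent wrong answer.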
The capacity of each flash chip is unknown at the moment, but given that micro-SD cards with 500 GB capacities are now available, it could be quite large. On the left and right sides of each main chip sit four LPDDR4 RAM chips.
Because the main chips are fabricated by Samsung, some people have assumed that the RAM is made by Samsung as well, but that is not the case.
Tesla chose Micron over Samsung because Micron’s LPDDR4 RAM has a higher clock rate, 2133 MHz, versus Samsung’s 1600 MHz. LPDDR4 is the low-power variant of DDR4, the memory currently used in desktop and laptop computers. LPDDR4 is a little slower than DDR4, although depending on the module, it can exceed DDR3 in specific situations. LPDDR4 is also the type of memory found in today’s smartphones.
Full System on a Chip (SoC)
Now let’s look at the complete system on a chip (SoC). Tesla packed a CPU, a GPU, a neural processor, and several other components into a single chip. Tesla explains the entire pipeline by following the data from the cameras.
First, data comes in at a maximum rate of 2.5 billion pixels per second, roughly equivalent to 21 Full HD 1080p displays at 60 frames per second. This is far more data than the currently installed sensors provide.
This data first lands in the DRAM we covered earlier, which is one of the chip’s first and most significant bottlenecks because it is the slowest component. The data is then sent into the chip and processed by an image signal processor capable of handling one billion pixels per second (roughly 8 Full HD 1080p screens at 60 frames per second).
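The quoted pixel rates are easy to sanity-check with a little arithmetic, assuming standard 1920x1080 displays at 60 frames per second:

```python
# Back-of-the-envelope check of the pixel rates quoted above.
PIXELS_PER_1080P60 = 1920 * 1080 * 60   # ~124.4 million pixels/s per display

input_rate = 2.5e9   # chip input: 2.5 billion pixels/s
isp_rate   = 1.0e9   # image signal processor: 1 billion pixels/s

print(round(input_rate / PIXELS_PER_1080P60, 1))  # ~20.1 displays' worth
print(round(isp_rate / PIXELS_PER_1080P60, 1))    # ~8.0 displays' worth
```

The numbers line up with Tesla's "21 displays" and "8 screens" figures within rounding.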
This section of the chip converts raw RGB data from the camera sensors into usable data, improving the tone and reducing noise along the way. The neural network processor, or NPU, is the most fascinating part of the entire chip. As the first step in the process, the data is stored in the SRAM array.
Tesla’s design includes a total of 64 MB of SRAM, split into two 32 MB banks, one for each of the two neural network processors. Tesla considers this enormous SRAM capacity one of its most significant advantages over any other chip it might have used.
Because the frames are not low-quality JPEGs but large, enhanced, lossless frames, this is just enough capacity to store, render, and process a single frame from all cameras and sensor inputs together.
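A rough budget check suggests why a 32 MB bank can be enough; the camera count, resolution, and bit depth below are illustrative assumptions, not confirmed Tesla figures:

```python
# Could one uncompressed frame from every camera fit in a 32 MB SRAM bank?
CAMERAS = 8
WIDTH, HEIGHT = 1280, 960        # assumed ~1.2 MP per camera
BYTES_PER_PIXEL = 2              # assumed 16-bit raw samples

frame_set_bytes = CAMERAS * WIDTH * HEIGHT * BYTES_PER_PIXEL
print(frame_set_bytes / 2**20)   # 18.75 MiB -> fits within a 32 MB bank
```

Under these assumptions, a full set of uncompressed frames fits with room to spare for intermediate results.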
The data travels over the chip’s main interconnect, known as the network on a chip (NoC), and into the LPDDR4 DRAM, which has a bandwidth of 68 gigabytes per second and is used to store the data.
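For a sense of scale, here is how the 68 GB/s DRAM bandwidth compares with the raw camera stream, assuming a hypothetical 2 bytes per pixel:

```python
# How much of the DRAM bandwidth does the raw camera stream consume?
pixel_rate = 2.5e9          # pixels/s into the chip
bytes_per_pixel = 2         # illustrative assumption
dram_bandwidth = 68e9       # bytes/s

stream = pixel_rate * bytes_per_pixel     # 5e9 bytes/s raw stream
print(round(dram_bandwidth / stream, 1))  # DRAM sustains ~13.6x that rate
```

Even so, DRAM remains the bottleneck relative to on-chip SRAM, which is why Tesla leans so heavily on the latter.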
The neural network processor is a fantastic tool. Although a large amount of data passes through it, some computational jobs have not yet been ported to a neural network processor, or are simply incompatible with one. This is where the graphics processing unit (GPU) enters the picture.
This chip’s GPU has modest performance (according to Tesla): it operates at 1 GHz and can deliver 600 GFLOPS. Tesla says the GPU currently handles various post-processing tasks, which can include generating human-readable images and video.
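To put 600 GFLOPS in context, dividing it by the ISP's one billion pixels per second gives a rough per-pixel compute budget:

```python
# Rough compute budget for GPU post-processing.
gpu_flops = 600e9           # 600 GFLOPS
isp_pixels_per_s = 1e9      # ISP output: 1 billion pixels/s

print(gpu_flops / isp_pixels_per_s)  # 600.0 floating-point ops per pixel
```

Several hundred operations per pixel is plenty for tone mapping or rendering a visualization, though far short of what the NPUs provide for neural network inference.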
The CPU also performs general-purpose processing tasks that are not suited to the neural processor. According to Tesla, the chip contains 12 ARM Cortex-A72 64-bit cores operating at 2.2 GHz, though a more accurate description would be that it contains three 4-core CPU clusters. Tesla’s decision to use ARM’s older Cortex-A72 architecture is perplexing, however.
Elon Musk and his team explained that this is what was available when they began designing the chip two years earlier. Perhaps including three older CPU clusters rather than one or two newer, more powerful ones was a cost-cutting measure for Tesla, which would make sense if multithreaded performance matters more to them than single-task performance.
Multithreading normally takes a little more programming effort to appropriately divide jobs, but hey, this is Tesla, so it’s probably a piece of cake for them. In any case, this chip’s CPU performance is 2.5 times better than Tesla’s prior HW2 version.
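As a toy illustration of dividing work across cores (not Tesla's software, just the general pattern):

```python
# Splitting independent chunks of work across a thread pool, then gathering
# the results -- the basic shape of the multithreading trade-off above.
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    return sum(chunk)  # placeholder per-chunk work

chunks = [[1, 2], [3, 4], [5, 6]]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(process_chunk, chunks))
print(results)  # [3, 7, 11]
```

The extra effort is in carving the workload into independent chunks; once that is done, many slower cores can keep pace with fewer fast ones.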
What to expect from Tesla Hardware 4?
All we know for now is that it will be geared toward enhancing safety. That suggests it won’t be focused on teaching existing cars new tricks, though it doesn’t rule out the possibility.
Here’s a list of possible HW4 updates and upgrades, sorted from most likely to most speculative:
- Tesla will most likely employ a newer CPU core, probably the Cortex-A75, depending on when Tesla began work on the architecture. The improved processing efficiency would let Tesla save power and die area, leaving room for additional crucial components.
- The neural processing units will likely be upgraded, with even more SRAM.
- Tesla may switch to LPDDR5, which would bring significantly faster speeds and lower power usage. However, if the HW4 chip is still in development, or to save money, Tesla may choose LPDDR4X instead. LPDDR4X saves power by running at a lower voltage, and it could still deliver a performance improvement if multiple chips are used in parallel.
- Depending on whether the chip’s processing capability can handle the full resolution and frame rate the cameras are capable of, Tesla’s HW4 may add more cameras and sensors with better resolution and perhaps even a higher frame rate.
- A better image signal processor (ISP). Tesla aimed to make its chip as inexpensive and powerful as possible, which is why there is a big gap in HW3 between what the chip’s input can take in and what the ISP can handle. That gap argues for a beefier or secondary ISP, depending on which option takes less power, takes less space, or costs less.
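For a rough sense of the memory options above, here is a per-channel bandwidth comparison assuming a 32-bit channel and typical JEDEC transfer rates (not confirmed HW4 specifications):

```python
# Theoretical peak bandwidth per memory channel: transfer rate (MT/s)
# times channel width in bytes.
CHANNEL_BYTES = 4   # 32-bit channel
lpddr4_mts = 4266   # LPDDR4 at a 2133 MHz clock -> 4266 MT/s (double data rate)
lpddr5_mts = 6400   # a common LPDDR5 speed grade

print(round(lpddr4_mts * CHANNEL_BYTES / 1000, 1))  # 17.1 GB/s per channel
print(round(lpddr5_mts * CHANNEL_BYTES / 1000, 1))  # 25.6 GB/s per channel
```

LPDDR4X would land at the same transfer rates as LPDDR4, with the savings coming from its lower I/O voltage rather than added speed.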
Conclusion
The HW3 computer from Tesla is a monster. It can process seven times as many frames and run neural networks seven times as large. Tesla has finally created a strong, powerful processor capable of performing a wide range of tasks.
2022 should be an intriguing year for Tesla Autopilot Full Self-Driving capabilities, with strong bespoke hardware and specialized AI software to match.
They’re nearly halfway through the four-year development cycle for Hardware 4, which is expected to be completed in late 2022 and will be incorporated into the Cybertruck.