Tesla FSD Beta 10.69 goes live with a ton of new improvements

Earlier, Elon Musk said the Tesla FSD Beta 10.69 update would roll out to all Tesla users on August 20, but that did not happen due to a number of major code changes. Now, a Tesla owner has informed us that FSD Beta 10.69 has started rolling out with dozens of new features and improvements. Tesla’s FSD Beta program currently has about 100,000 public testers in North America. Let’s take a look at the FSD Beta 10.69 release notes.


Tesla FSD Beta 10.69 Notes:

  • A new “Deep Lane Guidance” module has been added to the Vector Lanes neural network, which fuses features extracted from the video streams with coarse map data, namely lane counts and lane connectivity. Compared to the previous model, this architecture achieves a 44% lower error rate on lane topology, enabling smoother control before lanes and their connectivity become visually apparent. This provides a way to make every Autopilot drive as good as someone driving their own commute, while remaining general enough to adapt to road changes.
  • Improved overall driving smoothness, without sacrificing latency, by better modeling system and actuation delays in trajectory planning. The trajectory planner now independently accounts for the delay from steering command to actual steering actuation, as well as the delays from acceleration and braking commands to actuation. This yields a more accurate model of how the vehicle will drive, allowing better downstream controller tracking and smoothness as well as more accurate responses during demanding maneuvers. (A minimal sketch of this kind of latency compensation appears after this list.)
  • Improved unprotected left turns with a more appropriate speed profile when approaching and exiting median crossover regions in the presence of high-speed cross traffic (“Chuck Cook style” unprotected left turns). This is achieved by allowing an optimizable initial jerk that mimics the harsh pedal press a human applies when needing to pull out in front of high-speed objects. The lateral profile approaching these safety regions has also been improved, allowing a pose that aligns well for exiting the region. Finally, interactions with objects that are entering or waiting inside the median crossover region have been improved with better modeling of their future intent.
  • Added control for arbitrary low-speed moving volumes from the Occupancy Network. This also enables finer control for more precise object shapes that are not easily represented by cuboid primitives, and required predicting the velocity of every 3D voxel. We can now control for the slow-moving UFO. (A toy sketch of consuming per-voxel velocities appears after this list.)
  • The Occupancy Network has been upgraded to use video instead of images from a single time step. This temporal context makes the network robust to temporary occlusions and enables prediction of occupancy flow. Ground truth was also improved through semantics-driven outlier rejection, hard example mining, and a 2.4x increase in dataset size.
  • Upgraded to a new two-stage architecture to produce object kinematics (e.g. velocity, acceleration, yaw rate) where network compute is allocated O(objects) instead of O(space). This improves velocity estimates for far-away crossing vehicles by 20% while using one-tenth of the compute.
  • Improved smoothness of protected right turns by improving the association of traffic lights with slip lanes and of yield signs with slip lanes. This reduces false slowdowns when no relevant objects are present, and improves the yielding position when they are.
  • Reduced false slowdowns near pedestrian crossings. This was done by improving the understanding of pedestrians’ and cyclists’ intent based on their motion.
  • A full update of the Vector Lanes neural network improves the geometric error of ego-relevant lanes by 34% and of crossing lanes by 21%. Information bottlenecks in the network architecture were removed by increasing the size of the per-camera feature extractors, the video module, and the internals of the autoregressive decoder, and by adding a hard attention mechanism, which greatly improves the fine position of lanes.
  • Made the speed profile more comfortable when creeping for visibility, allowing smoother stops when protecting against potentially occluded objects.
  • Improved animal recall by 34% by doubling the size of the auto-labeled training set.
  • Enabled creeping for visibility at any intersection where objects may cross the ego vehicle’s path, with or without traffic controls.
  • Improved the accuracy of stopping positions in critical scenarios with crossing objects by allowing dynamic resolution in trajectory optimization, letting it focus on areas where finer control is essential.
  • Improved the recall of forking lanes by 36% by having topological tokens participate in the attention operations of the autoregressive decoder and by increasing the loss applied to fork tokens during training.
  • Reduced the velocity error for pedestrians and cyclists by 17%, especially when the ego vehicle is turning, by improving the onboard trajectory estimates used as input to the neural network.
  • Improved object detection recall, eliminating 26% of missed detections of far-away crossing vehicles, by tuning the loss function used during training and improving label quality.
  • Improved object future path prediction in high-yaw-rate scenarios by incorporating yaw rate and lateral motion into the likelihood estimation. This helps with objects turning into or out of the ego vehicle’s lane, especially at intersections or in cut-in scenarios. (A generic yaw-rate rollout is sketched after this list.)
  • Increased speed when entering highways by better handling of upcoming map speed changes, which increases confidence when merging onto the highway.
  • Reduced latency when starting from a stop by accounting for the lead vehicle’s jerk (its rate of change of acceleration).
  • Red light runners are identified more quickly by evaluating their current kinematic state against their expected braking profile. (A back-of-the-envelope version of this check appears after this list.)
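
A few of the items above are concrete enough to sketch in code. First, the latency compensation from the trajectory-planning item: before optimizing a new plan, the planner can roll the measured vehicle state forward through the known command-to-actuation delays, so that new commands are planned from the state at which they will actually take effect. Everything below (the delay values, the `VehicleState`/`ActuationDelays` names, the kinematic bicycle model) is an illustrative assumption, not Tesla’s implementation.

```python
# Minimal sketch of actuation-latency compensation in trajectory planning.
# Delay values and the kinematic bicycle model are illustrative assumptions.
import math
from dataclasses import dataclass, replace

@dataclass
class VehicleState:
    x: float = 0.0        # position (m)
    y: float = 0.0
    heading: float = 0.0  # yaw (rad)
    speed: float = 0.0    # longitudinal speed (m/s)

@dataclass
class ActuationDelays:
    steering: float = 0.10  # steering command -> actuation (s), assumed
    accel: float = 0.30     # accel/brake command -> actuation (s), assumed

def predict_state_at_actuation(state: VehicleState, in_flight_steer: float,
                               in_flight_accel: float, delays: ActuationDelays,
                               wheelbase: float = 2.9, dt: float = 0.01) -> VehicleState:
    """Integrate forward through the command-to-actuation delays under the
    commands already in flight, so the planner optimizes from the state the
    car will be in when its new commands start to act."""
    s = replace(state)  # copy, leave the measured state untouched
    t = 0.0
    while t < max(delays.steering, delays.accel):
        s.x += s.speed * math.cos(s.heading) * dt
        s.y += s.speed * math.sin(s.heading) * dt
        # Each channel only evolves while its own in-flight command is acting.
        if t < delays.steering:
            s.heading += s.speed / wheelbase * math.tan(in_flight_steer) * dt
        if t < delays.accel:
            s.speed = max(0.0, s.speed + in_flight_accel * dt)
        t += dt
    return s

# Plan from the compensated state rather than the raw measurement:
start = predict_state_at_actuation(VehicleState(speed=15.0), in_flight_steer=0.02,
                                   in_flight_accel=-1.0, delays=ActuationDelays())
print(start)
```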
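
Second, the Occupancy Network item says velocity is now predicted for every 3D voxel. Here is a toy of how such per-voxel velocities might be consumed downstream to flag slow, arbitrarily shaped volumes; the grid shapes and thresholds are assumptions:

```python
# Toy sketch: flag occupied, slow-moving voxels from per-voxel velocity
# predictions. Grid shapes and thresholds are illustrative assumptions.
import numpy as np

def slow_moving_voxels(occupancy: np.ndarray,  # (X, Y, Z) occupancy probability
                       velocity: np.ndarray,   # (X, Y, Z, 3) velocity (m/s)
                       occ_thresh: float = 0.5,
                       speed_max: float = 1.5) -> np.ndarray:
    """Boolean mask of voxels that are occupied and moving slowly. A planner
    could treat such voxels as slow obstacles of arbitrary shape (the
    'slow-moving UFO' case) instead of fitting a cuboid around them."""
    speed = np.linalg.norm(velocity, axis=-1)  # per-voxel speed magnitude
    return (occupancy > occ_thresh) & (speed < speed_max)

# Random data standing in for network outputs:
occ = np.random.rand(64, 64, 16)
vel = np.random.randn(64, 64, 16, 3) * 0.5
print(slow_moving_voxels(occ, vel).sum(), "occupied slow-moving voxels")
```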
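
Third, the future-path item folds yaw rate into the likelihood estimation. A standard, generic way to see why yaw rate matters for path prediction is a constant-turn-rate-and-velocity (CTRV) rollout; this is a textbook model, not the network described above:

```python
# Generic CTRV (constant turn rate and velocity) rollout: a textbook way to
# fold yaw rate into short-horizon path prediction for a tracked object.
import math

def ctrv_rollout(x, y, heading, speed, yaw_rate, horizon=3.0, dt=0.1):
    """Predict future (x, y) positions assuming constant speed and yaw rate.
    A nonzero yaw rate bends the path, capturing objects turning into or out
    of the ego lane instead of extrapolating straight ahead."""
    path, t = [], 0.0
    while t < horizon:
        if abs(yaw_rate) > 1e-6:
            # Closed-form arc update for turning motion.
            x += speed / yaw_rate * (math.sin(heading + yaw_rate * dt) - math.sin(heading))
            y += speed / yaw_rate * (math.cos(heading) - math.cos(heading + yaw_rate * dt))
            heading += yaw_rate * dt
        else:
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
        path.append((x, y))
        t += dt
    return path

# A vehicle at 10 m/s turning at 0.3 rad/s sweeps a visible arc over 3 s:
print(ctrv_rollout(0.0, 0.0, 0.0, 10.0, 0.3)[-1])
```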
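
Finally, the red-light-runner item evaluates a vehicle’s current motion against its expected braking profile. A back-of-the-envelope version of that check: from v² = 2ad, stopping at the line requires a deceleration of v²/(2d); if that exceeds any plausible braking, flag the vehicle early. The 6 m/s² threshold below is an assumption:

```python
# Back-of-the-envelope red-light-runner check: compare the deceleration
# required to stop at the line against a plausible braking profile.
def likely_red_light_runner(speed: float,             # current speed (m/s)
                            dist_to_stop_line: float,  # remaining distance (m)
                            max_plausible_brake: float = 6.0) -> bool:
    """From v^2 = 2*a*d, stopping at the line requires a = v^2 / (2d).
    If that exceeds what a driver plausibly brakes at, the vehicle is
    unlikely to stop for the light."""
    if dist_to_stop_line <= 0.0:
        return speed > 0.5  # already past the line and still moving
    required_decel = speed ** 2 / (2.0 * dist_to_stop_line)
    return required_decel > max_plausible_brake

# 20 m/s (~45 mph) with 15 m to the line needs ~13.3 m/s^2 of braking:
print(likely_red_light_runner(speed=20.0, dist_to_stop_line=15.0))  # True
```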

