Foresight Enhances Obstacle Detection With Data Pipeline Automation

About Foresight Automotive

Foresight Automotive is at the forefront of the autonomous vehicle (AV) industry. Its disruptive technology is designed to improve driving safety by providing reliable, accurate sensing capabilities with the lowest possible rate of false alerts. Foresight Automotive targets the Advanced Driver Assistance Systems (ADAS), semi-autonomous, and autonomous vehicle markets, and predicts that its systems will revolutionize automotive safety by combining an automotive-grade, cost-effective platform with advanced technology.

Driving Your AI Projects to Scalability

The struggle that many companies face to reach production is real. Scalability is a critical factor when it comes to production. In fact, according to an Accenture survey, 75% of executives working in AI globally believe they risk going out of business in five years if their organizations don’t succeed in scaling their AI projects to the production phase.

This makes sense, given that AI will increasingly become a competitive necessity over the coming years. Moving AI projects to production is a daunting task that can end in triumph or failure, which is why scaling to production is considered an extremely difficult step for many enterprises. Autonomous vehicles raise the stakes even further: training a system to respond correctly to every possible scenario on the road is crucial not only because of the cost, but because it can mean the difference between life and death.

As Eran Shlomo, CEO of Dataloop says:

Developing AI models is an easier task than actually implementing them in production, and this is exactly why many enterprises struggle at this stage. Data is the key and the nucleus of every AI enterprise. Training a model on a specific dataset in a controlled environment is one thing; navigating live, real-world data is another challenge entirely. For an experienced driver, making a left-hand turn is virtually effortless. For a machine, however, it requires accurate predictions of timing, obstacle identification, a dynamic understanding of the immediate environment including other cars and pedestrians, and much more. While humans take these variables in stride, the inevitable discrepancies between training data and real-world scenarios make it all the more challenging for an AI system. This is where you need humans working alongside AI: they navigate the real world every day and can guide machines toward effectively navigating that world too.

A sobering example of this is Tesla’s deadly crash. Tesla is known for building some of the safest cars in the world: its vehicles have achieved the lowest overall probability of injury of any vehicle tested in the U.S., and its systems were trained on billions of miles of real-world data, more than 1 billion of which were driven with Autopilot engaged. Yet the fatal crash is believed to have occurred because the system was ‘tricked’ into operating without a driver. The system failed to ensure the driver was paying attention; it couldn’t even tell whether there was a driver at all. This is where human intervention needs to work with the AI: Tesla’s Autopilot requires a fully attentive driver, with hands on the wheel every 10 seconds.

In fact, recently, a Tesla owner posted on social media that the new Tesla Model Y came with a driver monitoring camera enabled. On the car screen the following was written, “The cabin camera above your rearview mirror can now detect and alert driver inattentiveness while Autopilot is engaged.” According to Jake Fisher, senior director of auto testing at CR, “if this new system proves effective, it could be a major improvement for safety.” This could be a huge breakthrough for Tesla, providing even more safety assurance, and a solution to prevent driver distraction.

Where Autonomous Driving Stands Today

It’s important to note that many drivers are still very wary of self-driving vehicles. Despite this, the industry’s projected growth to more than $65.3 billion by 2027 bodes well for public safety, as more than 90% of all vehicular accidents are caused by human error.

Despite our weaknesses as humans, the human brain is wired to process massive amounts of information and stimuli and make split-second decisions. The same is demanded of autonomous vehicles: they must process and understand the fixed rules of the road, recognizing traffic signs, road markings, and speed limits, all while identifying, assessing, and reacting to endless variables, from foreign objects and other cars to pedestrians, trees, and ever-changing weather conditions.

As autonomous technology moves toward production, it matures and becomes more integrated and intelligent as a whole. Sensor technologies such as LiDAR and radar are also advancing, which will lower costs, improve depth perception, and make automated simulations ever closer to reality. It will also create the opportunity to process data in real time and enable swift decisions.

Foresight’s Journey to Dataloop

Foresight develops advanced driver assistance systems (ADAS) and autonomous driving solutions for the autonomous vehicle industry. Its systems are designed to provide real-time data about a vehicle’s surroundings while in motion. Building and preparing ML models requires a great deal of data, which quickly becomes overwhelming to annotate and validate manually. With the help of Dataloop, Foresight’s data teams were able to quickly and easily create event-driven automation pipelines and models to process their data. The automation pipelines were built to boost the effectiveness and productivity of their existing annotation services. It was a process, though; it didn’t happen overnight. Here is the journey Foresight took to reach its goal of scaling to production, successfully.

Before: Pinpointing the Weakness

Before Dataloop, Foresight was handling multiple projects across many teams. A research team managed the data, working from an open dataset: a high volume of images annotated by outsourced labeling teams. This is a popular route in the industry, since thousands, even millions of open-source datasets (including video data) covering thousands of object categories are readily available, and in the autonomous industry such datasets are fairly easy to find. One would assume this would be a cost-saver; for a company trying to scale, it achieves quite the opposite. Open datasets are a good starting point for building a model, but they’re not enough. If your datasets aren’t entirely accurate, your entire AI project is doomed to fail.

Foresight hired an internal team of 15 annotators, but this only created management overload. The biggest challenge was that they were working manually and were limited in their ability to validate at a high level. By partnering with Dataloop, they were able to scale their team and verify the quality and consistency of their labeling outputs.

What did this mean on the ground? Slow, manual annotation and the overhead of managing the taskforce hindered results and kept the team from reaching its full potential.

Manual labeling of datasets requires significant resources, time, and money, and it often means a drop in quality as well.

The data Foresight was working with was simply unreliable. It became clear that assistance was needed on a greater scale, and that required automation.

Autonomous vehicles begin and end with data. From the moment a vehicle’s sensor captures an image, a sound, or even a tactile sensation, a complex process of recognition, action determination, and response begins.

After: Using Automation to Drive Success

With the help of Dataloop’s platform, which integrates automatic active learning at every step of the automation pipeline, and human-in-the-loop functions for data validation, Foresight was able to start scaling efficiently.

What did this process look like?

The first step was eliminating manual annotation of the data. Next, Dataloop deployed models that were continuously fed with new, authentic, validated data coming directly from Foresight’s production environment. This ensured that the models produced outputs grounded in the ground truth, even in highly dynamic and diverse settings. Validating against ground truth ensures the best possible test results.
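The feedback loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in (ToyModel, human_review, RETRAIN_BATCH are illustrative names, not Dataloop API calls); the point is simply that every validated production item accumulates into the training set, which periodically refreshes the deployed model.

```python
RETRAIN_BATCH = 2  # hypothetical: retrain after every N validated items

class ToyModel:
    """Stand-in model: 'predicts' a label and counts retraining rounds."""
    def __init__(self):
        self.retrain_count = 0
    def predict(self, item):
        return {"item": item, "label": "obstacle"}
    def retrain(self, training_set):
        self.retrain_count += 1

def human_review(item, prediction):
    # In production a human annotator validates or corrects the label;
    # this stand-in simply accepts the prediction as ground truth.
    return prediction

def run_pipeline_step(item, model, training_set):
    """One pass of the annotate -> validate -> accumulate -> retrain loop."""
    prediction = model.predict(item)            # model pre-annotates the item
    reviewed = human_review(item, prediction)   # human validates the annotation
    training_set.append(reviewed)               # validated ground truth accumulates
    if len(training_set) % RETRAIN_BATCH == 0:
        model.retrain(training_set)             # periodically refresh the model
    return reviewed

model, data = ToyModel(), []
for item in ["frame_1", "frame_2", "frame_3", "frame_4"]:
    run_pipeline_step(item, model, data)
print(model.retrain_count)  # 2
```

The design point is that validation sits inside the loop, not after it: no item reaches the training set without passing human review.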

When it comes to autonomous vehicles, the cost of error is high: you must get as close as possible to 100% accuracy, because a mistake can have a critical impact on your business and, more importantly, on people’s lives.

Foresight uses Dataloop for model monitoring, ensuring its models produce outputs with supreme accuracy, even in the most dynamic and diverse settings.

How does this look in action?

The deployed models are constantly evaluated to raise their confidence level, which is embedded into the annotation process. During this process there is a direct channel between managers and annotators: they can comment on any missing annotation in real time, and all open issues are displayed on a live dashboard, where you can comment and give feedback on an annotation if you notice a mistake. If the process detects an inconsistency between the annotator and the model (on a tree, for example), Dataloop sends the item for model training. The model has now been fed the example, it will recognize trees on the road, and the new version is ready to be deployed. All incoming data needs to be recognized, verified, and validated in a manner fast enough and smart enough to ensure that all safety and technical requirements are met.
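The inconsistency check described above boils down to routing any item where the model disagrees with the human annotator, or is simply unsure, into the next training round. This sketch uses hypothetical data shapes and names, not Dataloop’s actual API:

```python
def find_disagreements(human_labels, model_predictions, conf_threshold=0.5):
    """Return item IDs where the model's prediction disagrees with the
    human annotator, or where the model was too unsure to decide.
    These items are candidates for the next model-training round."""
    flagged = []
    for item_id, truth in human_labels.items():
        pred = model_predictions.get(item_id)
        if pred is None or pred["confidence"] < conf_threshold or pred["label"] != truth:
            flagged.append(item_id)
    return flagged

human = {"frame_001": "tree", "frame_002": "car", "frame_003": "pedestrian"}
preds = {
    "frame_001": {"label": "pole", "confidence": 0.91},        # disagreement
    "frame_002": {"label": "car", "confidence": 0.97},         # agreement
    "frame_003": {"label": "pedestrian", "confidence": 0.31},  # low confidence
}
flagged = find_disagreements(human, preds)
print(flagged)  # ['frame_001', 'frame_003']
```

Only the items in `flagged` would be sent for retraining; the rest already match the ground truth and add little new signal.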

How do you know your models are performing well?

By continuously comparing model output with human annotations, you’re able to ensure your model has been fed all the possible scenarios and is on par with your annotation dataset. At this point you can check whether the model requires more data, and which data. You can filter annotations by model name or by confidence level, which helps you better understand your model and lets you adjust your training set and your data-acquisition process to improve performance. This drastically expedited the process at Foresight: not just annotation time, but the entire process from start to finish.
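Filtering annotations by model name or confidence level, as described above, amounts to a simple predicate over annotation records. The field names below are hypothetical, not Dataloop’s query syntax:

```python
def filter_annotations(annotations, model_name=None, min_confidence=0.0):
    """Keep only annotations from a given model at or above a confidence floor."""
    return [
        a for a in annotations
        if (model_name is None or a["model"] == model_name)
        and a["confidence"] >= min_confidence
    ]

annotations = [
    {"model": "obstacle_v1", "label": "car", "confidence": 0.95},
    {"model": "obstacle_v1", "label": "tree", "confidence": 0.40},
    {"model": "obstacle_v2", "label": "pedestrian", "confidence": 0.88},
]

# High-confidence output from one model version:
strong = filter_annotations(annotations, model_name="obstacle_v1", min_confidence=0.5)
print([a["label"] for a in strong])  # ['car']

# Anything left below the floor is a candidate for human review
# and, ultimately, for new training data.
weak = [a for a in filter_annotations(annotations, model_name="obstacle_v1")
        if a["confidence"] < 0.5]
print([a["label"] for a in weak])  # ['tree']
```

Slicing the annotation set this way is what turns "the model needs more data" into "the model needs more examples of *these* low-confidence classes."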

As a centralized solution, the platform is used by multiple teams across the company. Foresight’s teams research in order to develop new models, so they need a steady supply of data to keep their existing models performing well.

How did this whole process help Foresight?

OEM providers like Foresight need high-quality training data to power effective AI systems at scale, and quality assurance for the data annotation process can pose a problem. In the automotive industry, everything depends on good quality control, since cars are inherently dangerous on their own. Quality control lets you spot problems before release, ensuring industry standards are met and preventing potential accidents as well as expensive recalls. When the models are a matter of life and death, as with autonomous vehicles, quality isn’t just important; it’s a must.

Edge cases

When it comes to autonomous driving, we’re dealing with life-critical situations that demand validation. An edge case that is harmless in one context can cause much larger problems in another. Misidentifying a person as an inanimate object may not matter in some applications, but for a self-driving vehicle it is catastrophic. Furthermore, a camera’s image degrades in poor lighting and bad weather, and even more so in heavy fog, rain, and snow. Foresight’s technology uses 3D video analysis, advanced image-processing algorithms, and sensor fusion. This provides accurate obstacle detection in harsh lighting and weather conditions, up to and including complete darkness, rain, haze, fog, and glare, and enables the analysis of different terrains through 3D sensor fusion.

Foreseeing possible collisions and obstacles is essential in autonomous driving. Generating the many datasets involved is extremely labor-intensive, and human involvement is crucial when it comes to edge cases. Because edge-case instances can escalate, prioritizing your annotation team to handle them is key to success. This ensures the vehicle can safely handle unusual or atypical conditions.

Foresight sends us its image and video edge cases in order to identify cars, people, and objects that could potentially interfere or cause damage. Most of the videos Foresight sends are night-vision footage and/or taken at night, because they want to train their models to recognize objects in those conditions. They also send Dataloop images and videos captured in differing weather conditions, through both their regular and night-vision cameras, to identify these objects as well.

Video analysis

Foresight is able to automatically track an object throughout a video using our tracker plugin. This allowed them to easily and automatically duplicate annotations across video frames and sequenced images, making multiple-object tracking easy. Additionally, Dataloop’s video interpolation automatically adjusts the position and size of an annotation based on the average differences between fixed keyframes. This means Foresight was able to tag frames consecutively, saving the time of going from frame to frame manually.
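The interpolation idea is straightforward: given an object’s bounding box in two hand-labeled keyframes, the boxes in between can be estimated linearly. This is a minimal sketch of that idea under simple assumptions (linear motion, boxes as x/y/width/height tuples), not Dataloop’s actual implementation:

```python
def interpolate_box(box_a, box_b, frame_a, frame_b, frame):
    """Linearly interpolate a bounding box (x, y, w, h) between two
    hand-labeled keyframes, so intermediate frames need no manual box."""
    t = (frame - frame_a) / (frame_b - frame_a)  # 0.0 at frame_a, 1.0 at frame_b
    return tuple(a + t * (b - a) for a, b in zip(box_a, box_b))

# Keyframes: the annotator draws boxes only at frames 0 and 10.
start = (100.0, 50.0, 40.0, 40.0)   # object's box at frame 0
end   = (200.0, 70.0, 60.0, 40.0)   # object's box at frame 10
mid = interpolate_box(start, end, 0, 10, 5)
print(mid)  # (150.0, 60.0, 50.0, 40.0)
```

Two manual boxes thus yield annotations for every frame in between; the annotator only steps in where the object’s motion stops being roughly linear.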

Originally, with Foresight’s research team, the video data used to differ substantially; now, with Dataloop, everything comes from one source. It’s like comparing apples with apples, and the data is no longer annotated by an anonymous student body.

Annotation tools

With the host of annotation tools and features in Dataloop’s robust annotation studio, labeling became fast, easy, and accurate for Foresight. Dataloop’s AI annotation assistant expedites segmentation with a magic toolset, allowing Foresight to create pixel-level semantic segmentation. With Dataloop, you can define every pixel in an image and scroll over pre-labeled objects easily. The process is simple: every segmented pixel is assigned a unique value from the label list, so you don’t need to carefully work around the edges.
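The "unique value per pixel" idea can be illustrated with a toy mask. The label list here is hypothetical; a real segmentation mask works the same way, just at image resolution:

```python
from collections import Counter

# Hypothetical label list; each class gets a unique integer value.
LABELS = {"background": 0, "road": 1, "car": 2}

# A segmentation mask assigns one label value to every pixel,
# so a tiny 4x6 "image" is just a grid of integers.
mask = [[LABELS["background"]] * 6 for _ in range(4)]
for row in range(2, 4):              # bottom half of the image is road
    for col in range(6):
        mask[row][col] = LABELS["road"]
for row in range(2, 4):              # a car occupying part of the road
    for col in range(1, 3):
        mask[row][col] = LABELS["car"]

# Per-class pixel counts follow directly from the value map.
counts = Counter(value for row in mask for value in row)
print(dict(counts))  # {0: 12, 1: 8, 2: 4}
```

Because class membership is encoded in the pixel value itself, object boundaries fall out of the data: wherever the value changes, the edge is, with no manual outlining required.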

Dataloop helped Foresight accelerate their team’s productivity by about 200%.

Conclusion

To overcome the challenge of accelerating to production, companies must develop smart strategies. All AI applications will inevitably face scenarios they don’t recognize, and in those moments they won’t function as intended. Eliminating time-consuming annotation tasks wherever possible is key to letting your algorithms act more independently, free of constant human supervision or control. Automation lets your annotation team, like the one at Foresight, focus its efforts instead on identifying and addressing edge cases.

Powered by Dataloop: Foresight Hits Big Wins

In the autonomous vehicle space, it's all about safety and ensuring that your results are foolproof. The way to accomplish this is to keep your pipeline constantly and automatically updated with new, authentic, and validated data that comes directly from your production environment. It’s all about model accuracy and eliminating failed detections. This is why Foresight was able to reach supreme accuracy regardless of setting and circumstance.

“The team at Dataloop provides a powerful platform with a suite of tools. Thanks to Dataloop, we're able to successfully test our algorithms and improve our ADAS and autonomous driving features."
David Lempert, VP R&D at Foresight Automotive