
Scaling Your AI to Production

This is the third and last post in our “Computer Vision Use Case Breakdown” series.

As mentioned in countless articles and in every pitch deck for an MLOps product around the world, 92% of all AI projects fail to reach production or to keep operating there. In the first blog of this series, I introduced the basics of computer vision theory. The second blog focused on different use cases we encounter daily. Both blogs cover common failure points for projects.

In this article, I’ll tackle other complexities related to scaling, as well as industry-focused challenges, such as regulations and data costs, that commonly cause these kinds of projects to fail.

Growing Pains – Use Case Validation

Computer vision projects tend to have multiple stages, each with its own unique challenges. The first stage is use case validation, the proof of concept (POC). Here is a quick recap of some topics that need to be addressed at this stage:

Challenge #1: Technical – As discussed in the previous blogs, there are multiple ways to frame a use case (detection, classification) and different architectures for each. The data science team needs to find the solution that best balances costs and results.

Challenge #2: ROI – The business team should calculate the return on investment for customers who adopt the new solution instead of the existing one (headcount, work time, lost revenue, employee safety, and more).

In some cases, the above challenges have further layers of complexity:

  • Cost of data collection: Can I easily collect as much data as I want, or is every collection phase expensive? For example, every medical MRI scan collected can cost a lot, while video of football games can be almost free for some companies.
  • Cost of error: Some mistakes have costly consequences (risk to human life or expensive equipment), while other mistakes are far cheaper, such as missing a rotten apple or mislabeling a certain object.
  • Industry-related issues such as:
    • Regulations (GDPR, HIPAA)
    • Approval process (FDA, ACEA, etc.)
    • Personal information (faces, text, audio)
    • Security – Network access, data storage location

Growing Pains – Real Environment 

This phase is typically tested by deploying the first version of the model in a real-world environment. It plays out differently for organizations that build a solution internally and for those that sell the solution to other companies.

Internally: There is much more control over the data collection environment and collection methods, so this step is smaller; it will usually cover only a partial view of the world and expose the model to much less variance. The bigger the cost of error, the more important it is to do this step internally.

For example, most of the big retail companies in the world have an innovation store (some are even operational) where they test all of the new technologies they plan to introduce in their stores. I had the opportunity to visit a couple of these stores and it feels almost futuristic to shop in them.

Externally: This step can be a “make or break” moment for companies. The sales team has finally convinced a customer that the solution holds real value, and now it has to prove that value.

Where can it go wrong?

  • Data distribution differs from the training set (background, number of objects per frame, undersampled classes)
  • New variance in the data – every scene has its own variance (lighting conditions, angles, seasonality, and more), and we want to capture the full variance of the real-world data. The same street sign looks different by day and night, or in rain and sun, so we need to make sure all of those conditions are taken into consideration.
  • Other issues can relate to the hardware, the network, or any other aspect of this kind of project.

The consequences of an error in this phase can be devastating: small companies invest a lot of resources in these customer POCs and need them to succeed in order to take the next step and raise more money.

Growing Pains – Production

This step usually comes later in the process, once companies have built procedures, learned many of the challenges and variances, and accumulated several ongoing environments (internal or external).

It now becomes a completely different game from the ever-changing project it used to be. Instead of making big adaptations for each batch of new samples, there is a need to constantly monitor (a minimal drift-check sketch follows this list):

  1. Data deviation – changes in the data distribution over time
  2. Edge cases – new types of failures in production
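
To make the monitoring concrete, here is a minimal sketch of a drift check. It assumes you can reduce each item to a simple statistic (mean brightness here) and that you kept a reference window from training time; the window sizes and threshold are illustrative, not recommendations.

```python
import numpy as np
from scipy.stats import ks_2samp

def brightness(images: np.ndarray) -> np.ndarray:
    """Mean pixel intensity per image -- a deliberately simple drift signal."""
    return images.reshape(len(images), -1).mean(axis=1)

def drift_alert(reference: np.ndarray, production: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Two-sample KS test: a low p-value suggests the production
    distribution has deviated from the training-time reference."""
    _, p_value = ks_2samp(brightness(reference), brightness(production))
    return p_value < p_threshold

# Example: a darker production batch (night-time footage) triggers an alert.
rng = np.random.default_rng(0)
train_batch = rng.normal(128, 20, size=(500, 64, 64)).clip(0, 255)
prod_batch = rng.normal(90, 20, size=(500, 64, 64)).clip(0, 255)
print(drift_alert(train_batch, prod_batch))  # True -> investigate / retrain
```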

The common way to do this is by incorporating human beings into the production flow. The ability to keep validating model predictions for a “never-ending” project presents a new set of challenges (an anomaly-selection sketch follows this list):

  1. Anomaly detection – defining which items are “different”
  2. Human-in-the-loop pipelines – managing human work inside machine-based flows
  3. Data at scale tends to break interfaces
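
As a concrete example of the first challenge, here is a minimal sketch of anomaly selection. It assumes items already have embeddings (e.g., from a pretrained backbone); k and the percentile cutoff are illustrative choices.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def select_anomalies(embeddings: np.ndarray, k: int = 5,
                     percentile: float = 99.0) -> np.ndarray:
    """Indices of items whose mean distance to their k nearest neighbors
    is above the given percentile -- candidates for human review."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(embeddings)
    dists, _ = nn.kneighbors(embeddings)   # column 0 is the item itself
    scores = dists[:, 1:].mean(axis=1)     # mean distance to true neighbors
    return np.where(scores > np.percentile(scores, percentile))[0]

rng = np.random.default_rng(1)
normal = rng.normal(0, 1, size=(1000, 128))  # in-distribution embeddings
odd = rng.normal(0, 5, size=(10, 128))       # scattered outliers
print(select_anomalies(np.vstack([normal, odd])))  # mostly indices >= 1000
```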

The Dataloop That Grows With You 

As you can tell, each step of the way has its own difficulties, and that is where Dataloop comes in handy. The main goal of our platform is to adapt the flows to each step of the way, regardless of the data type or the question you want your model to answer.

Here are a few examples of how the pipeline works for customers at different steps of their journey.

POC – quick results, the ability to select items smartly, and the ability to test different questions and tools.

One example is informative selection: anomalies are detected in the high-dimensional space and displayed in a 2D view.
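
A minimal sketch of that idea, with IsolationForest standing in for the anomaly scorer and PCA for the projection (both are stand-ins, not the platform’s actual internals):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
embeddings = np.vstack([rng.normal(0, 1, (500, 64)),   # typical items
                        rng.normal(0, 5, (15, 64))])   # scattered outliers

# Score in high dimension first -- projecting before scoring can hide anomalies.
labels = IsolationForest(random_state=0).fit_predict(embeddings)  # -1 = anomaly

xy = PCA(n_components=2).fit_transform(embeddings)  # 2D view for humans
plt.scatter(xy[:, 0], xy[:, 1], c=(labels == -1), cmap="coolwarm", s=10)
plt.title("Anomalies scored in 64-D, displayed in 2-D")
plt.show()
```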

The next step of the way is to get a first model out there in the real world. As mentioned earlier, there are two approaches, both described below, and typically the goal is to advance to the second one as soon as possible.

Approach #1 – The model is not yet good enough to support the annotation work. The results should be validated against human ground truth, and in cases where the model fails, we want to keep the image and use it to retrain the model.
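
A minimal sketch of that validation flow for a detection model, assuming axis-aligned boxes as (x1, y1, x2, y2) and the common 0.5 IoU convention; the batch contents are made up.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def failed(predictions, ground_truth, iou_threshold=0.5):
    """An image fails if any ground-truth box has no matching prediction."""
    return any(all(iou(gt, p) < iou_threshold for p in predictions)
               for gt in ground_truth)

batch = [  # (image_id, predicted boxes, human ground-truth boxes)
    ("img_001", [(10, 10, 50, 50)], [(12, 11, 49, 52)]),    # good match
    ("img_002", [(0, 0, 20, 20)], [(100, 100, 160, 160)]),  # miss
]
retrain_set = [img for img, preds, gts in batch if failed(preds, gts)]
print(retrain_set)  # ['img_002'] -> feed back into training
```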

The disadvantage of this approach is that you don’t accelerate your annotation work, which in many cases is the expensive part. However, it improves your dataset and your model, and it can prevent costly errors.

Approach #2 – The model can already help reduce annotation time, but it still needs to be improved.

In this case, you accelerate your annotation work when the model’s results are good enough, and you can improve your model’s results with ease.
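
A minimal sketch of this routing, assuming the model emits a per-item confidence; the 0.8 threshold is illustrative and should be tuned against your own precision measurements.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    image_id: str
    label: str
    confidence: float

def route(predictions, threshold=0.8):
    """High-confidence predictions become drafts a human only corrects;
    the rest go to manual annotation from scratch."""
    drafts, manual = [], []
    for p in predictions:
        (drafts if p.confidence >= threshold else manual).append(p)
    return drafts, manual

drafts, manual = route([Prediction("img_1", "apple", 0.95),
                        Prediction("img_2", "apple", 0.42)])
print([p.image_id for p in drafts])  # ['img_1'] -> verify and correct
print([p.image_id for p in manual])  # ['img_2'] -> annotate from scratch
```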

It is critical to remember that keeping track of your data at all times is what makes your model reproducible. It makes changes traceable, and it is the main source of truth about the real-world environment and the model’s results.

Human in the Loop – The flows in this second step are similar to the flows at scale; the differences are the data volumes and the fact that you handle much less variance. The goal here is to validate that the newly collected data works well with the existing model and that there is no significant data deviation over time.

The ability to filter items randomly or based on the model’s results helps check this in real time, and it keeps a human in the loop of machine improvement.
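
A minimal sketch of building such a review queue: a small random sample gives an unbiased drift check, and the lowest-confidence items surface likely model failures. The rate and count are illustrative.

```python
import random

def review_queue(items, random_rate=0.02, low_conf_count=50, seed=0):
    """items: list of (item_id, model_confidence) pairs from production."""
    rng = random.Random(seed)
    random_sample = {i for i, _ in items if rng.random() < random_rate}
    low_confidence = {i for i, _ in sorted(items, key=lambda x: x[1])[:low_conf_count]}
    return sorted(random_sample | low_confidence)

items = [(f"item_{n}", random.Random(n).random()) for n in range(5000)]
print(len(review_queue(items)))  # ~150 items for human review out of 5000
```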

Crawl, Walk, Run! Implementing an End-to-End Solution for Your ML Operations

At Dataloop, we improve the chances of our users and customers succeeding with their AI projects. We help companies at every step of the way to cover the end-to-end flow of ML operations, from data collection to putting the model into production and active learning. Instead of being overwhelmed by managing all your data flows at once, we suggest proceeding in a smart way.

How can you do this?

1. Add Human Knowledge: We provide a robust set of AI-based annotation tools to quickly get data prepared for training machine learning models. We offer this for various industries including retail, automotive, precision agriculture, security, robotics, and more. The tools required for this are classification, key points, bounding boxes (2D, 3D), ellipse, polyline, polygon, and pixel-level semantic segmentation. We also cover data types such as image, video, text, audio, and LiDAR.
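
For a concrete picture, here is a hedged sketch of what one prepared item might look like, loosely COCO-style; the field names are illustrative, not Dataloop’s internal format.

```python
import json

annotation_record = {
    "item": "shelf_cam3_000124.jpg",  # hypothetical file name
    "annotations": [
        {"type": "classification", "label": "stocked_shelf"},
        {"type": "box", "label": "cereal_box",
         "coordinates": [120, 45, 210, 160]},  # x1, y1, x2, y2 in pixels
        {"type": "polygon", "label": "price_tag",
         "coordinates": [[130, 170], [180, 170], [180, 195], [130, 195]]},
    ],
}
print(json.dumps(annotation_record, indent=2))
```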

2. Capture the Variance – After the use case is validated, you can take the next step with ease by simply managing your data (a query sketch follows this list):

  1. Filter, query, sort 
  2. Manage metadata
  3. Create versions
  4. Smart image selection
  5. Support millions of items per dataset
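
For example, querying items by metadata with Dataloop’s Python SDK (dtlpy). This is a hedged sketch: the project, dataset, and metadata field names are hypothetical, and exact signatures can vary between SDK versions.

```python
import dtlpy as dl

dl.login()  # browser-based login
project = dl.projects.get(project_name='my-cv-project')       # hypothetical
dataset = project.datasets.get(dataset_name='production-v2')  # hypothetical

# Query: all items tagged with a given camera in user metadata.
filters = dl.Filters()
filters.add(field='metadata.user.camera', values='cam-3')

for item in dataset.items.list(filters=filters).all():
    print(item.name)  # e.g., version these into the next training set
```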

These functionalities will help you take the next step, ensure your model is production-ready, and increase your data observability and sharing while maintaining experiment isolation.

The above is critical to the engineering part of the process, but the operational side requires many capabilities as well:

  • Work status visibility
  • Easy work distribution
  • No Code automation
  • Quality Management
  • Complex-task simplification (Recipe flexibility, versions)
  • Training and teaching
  • Performance tracking

These are all supported to increase your annotation team’s productivity.

3. Automate While Keeping the Human in the Loop: When you’re scaling, you’ve got two options: automate your processes, or be ready to pay. Data growth is exponential between the steps, and unless you build your infrastructure correctly, you will have to choose your poison (a back-of-envelope cost sketch follows this list):

  1. Massive reduction in accuracy score
  2. A huge increase in annotation costs
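
To see why, here is a back-of-envelope sketch with made-up numbers; the per-item cost, growth factor, and automation rate are all illustrative, so plug in your own.

```python
items_per_month = 10_000
growth_per_step = 10         # POC -> pilot -> production, roughly 10x each
cost_per_item = 0.30         # USD for fully manual annotation (made up)
automated_fraction = 0.85    # share absorbed by pre-annotation + QA sampling

for step in ("POC", "Pilot", "Production"):
    manual = items_per_month * cost_per_item
    automated = manual * (1 - automated_fraction)
    print(f"{step:>10}: manual ${manual:>9,.0f}/mo | automated ${automated:>8,.0f}/mo")
    items_per_month *= growth_per_step
```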

The Dataloop platform is built to automate processes and run every single automated operation with no external integrations (a generic sketch of this flow follows the list):

  1. Pre-processing
  2. Running ML models
  3. Post-processing
  4. Human annotation
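
The shape of that flow, sketched in plain Python. This is not the Dataloop pipeline API, just an illustration of how the four stages chain together and how uncertain items end up with a human:

```python
def pre_process(item):
    item["image"] = f"resized({item['image']})"        # e.g., resize/normalize
    return item

def run_model(item):
    item["label"], item["confidence"] = "apple", 0.62  # stub prediction
    return item

def post_process(item):
    item["needs_human"] = item["confidence"] < 0.8     # routing decision
    return item

def human_review(item):
    if item["needs_human"]:
        item["label"] = "pear"                         # human-corrected label
    return item

item = {"image": "frame_0042.jpg"}
for step in (pre_process, run_model, post_process, human_review):
    item = step(item)
print(item)
```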

The combination of automation (using models to pre-annotate), data management (filtering items and annotations smartly, e.g., based on model results), and an AI-based annotation studio is the cure for exponential data growth. Get your journey started today, and build your own data loop!
