
Exploring Computer Vision and AI in Defense Environments


08/17/2021

Understanding Computer Vision and Artificial Intelligence

For years we've been reaping the small-scale rewards of Artificial Intelligence (AI) research. Checks can now be read into an ATM without human interaction, auto-focusing cameras know where the faces are in a shot, social media can auto-tag friends based on their faces and more.

As the world’s access to computing power expands, so does the democratization of AI. New real-world use cases of AI are identified every day, which is why MetroStar constantly expands our capabilities. We aim to expand our partners' access to the same efficiencies the world is seeing in everyday life. In this blog, we discuss a few of the ways we have begun to leverage AI to help our Defense clients.

Let's use coffee as an example. It's disappointing to brew a whole pot of coffee only to have your cup taste bitter. A good machine that handles the tedious steps can make every cup the perfect cup, freeing you to focus on the most important part of coffee. AI does the same for us with the most important part of our world: data.

We no longer need to spend hours manually sifting through mountains of data to find the rich insights.

Modern technology enables us to leverage machines that parse through hours of video data to highlight the specific events we care about, translate foreign languages in seconds, and even understand commands to perform specific tasks when called upon. While we are still far from true generalized AI, modern Machine Learning (ML) has proven this technology is adept at solving specific, cumbersome tasks.

Now, let's dive deeper.

AI and Deep Learning are supported by Artificial Neural Networks (or Neural Networks). Neural Networks are designed to mimic the representation of neurons firing in the brain through the real-world application of advanced mathematics. Both Neural Networks and the math supporting them have been in practice for decades. The difference between then and now is our easy access to computing power.

Computer Vision is what you get when image or video data is fed into a Deep Learning model. Historically, the only way to use video data was to have someone watch the footage and take notes. Missing any information meant hitting the rewind button and hoping for better luck on the next pass. Unless, of course, you're using computing power to help you.

Computer Vision not only makes your life easier but can also add another layer of security. Imagine the movie cliché of a security guard watching TV only to miss trespassers or escapees on the wall-mounted monitors. With Computer Vision, our protagonists will have to get more creative with their escape routes, since the security system no longer hinges on one person's attention span.

The primary value of Computer Vision lies in human assistive AI: intelligence that helps people do their jobs more efficiently by combining computing power and brainpower. With use cases ranging from improving aircraft Heads Up Display (HUD) systems to video feed analysis and metadata extraction, this emerging technology has proven invaluable to our clients. Human assistive AI ensures we make the most intuitive and secure use possible of our clients' data.

Applying Computer Vision

A part of our Client Solutions Group (CSG) team focuses on Computer Vision and its ability to ingest video data and produce real-time insights. Tasks that would typically require several analysts, parsing through hours of video, are accomplished in minutes. Computer Vision creates opportunities for our clients' analyst teams to gain actionable insights in real-time through human assistive AI and techniques such as Object Detection, Video Time Series Analysis, Object Tracking, and other applications of Deep Neural Networks.
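To make the triage idea concrete, here is a heavily simplified sketch of Video Time Series Analysis. A hypothetical detector (stubbed out here; in practice it would be a Deep Neural Network) scores every frame, and we collapse those per-frame scores into a short list of event windows an analyst actually needs to review. All names and numbers are illustrative:

```python
def frames_to_events(scores, threshold=0.5, fps=30):
    """Collapse per-frame detection scores into (start_sec, end_sec) events."""
    events = []
    start = None
    for i, score in enumerate(scores):
        if score >= threshold and start is None:
            start = i                              # event begins
        elif score < threshold and start is not None:
            events.append((start / fps, i / fps))  # event ends
            start = None
    if start is not None:                          # event runs to end of clip
        events.append((start / fps, len(scores) / fps))
    return events

# Fake detector output: 12 frames, with activity in frames 3-5 and 9-10.
scores = [0.1, 0.2, 0.1, 0.9, 0.8, 0.7, 0.1, 0.0, 0.2, 0.95, 0.9, 0.1]
print(frames_to_events(scores, threshold=0.5, fps=30))
```

An hour of footage becomes a handful of timestamped windows; the analyst only reviews those, rather than watching the whole clip.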

1. How Do We Train Specialized Models?

Transfer learning is a technique in Deep Learning used to fine-tune powerful generalized models for specific use cases. This technique allows a model to take on generalized tasks learned during the initial training phase and focus on the most crucial subject matter at hand.

Transfer learning substantially cuts down training time for these robust networks.

This technique enables highly generalized models (built for a broad task) to be fine-tuned for more specific use cases. For example, you can take a model that has learned to find people in an image and train it further to understand what those people are doing. This fine-tuning allows the new, specialized model to capitalize on the broad capabilities learned by the first model, refining the objective to be more useful for real-world applications. Essentially, you are not teaching an old dog (your model) a new trick but expanding "play dead" into "play dead, then roll over."
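Here's a toy, pure-Python sketch of that freeze-and-fine-tune pattern. The "pretrained" extractor is just a fixed pair of tanh features standing in for a deep backbone, and the head is a two-weight linear layer trained by gradient descent; everything here is illustrative, not our production setup:

```python
import math
import random

random.seed(0)

# Frozen "pretrained" feature extractor: never updated during fine-tuning.
# In a real pipeline this would be a deep network backbone.
def features(x):
    return [math.tanh(x), math.tanh(2 * x - 1)]

# New task-specific head: the only weights we fine-tune.
w = [0.0, 0.0]

# Toy regression task whose target is expressible in the frozen features,
# mimicking a specialized task built on general-purpose ones.
xs = [random.uniform(-2, 2) for _ in range(50)]
ys = [3.0 * f[0] - 1.5 * f[1] for f in map(features, xs)]

def loss():
    return sum((sum(wi * fi for wi, fi in zip(w, features(x))) - y) ** 2
               for x, y in zip(xs, ys)) / len(xs)

before = loss()
for _ in range(300):                       # gradient descent on the head only
    grads = [0.0, 0.0]
    for x, y in zip(xs, ys):
        f = features(x)
        err = sum(wi * fi for wi, fi in zip(w, f)) - y
        for j in range(2):
            grads[j] += 2 * err * f[j] / len(xs)
    for j in range(2):
        w[j] -= 0.1 * grads[j]
after = loss()
print(f"head-only fine-tuning: loss {before:.3f} -> {after:.3f}")
```

Because only the small head is trained while the backbone stays frozen, the specialized model inherits the backbone's general capabilities at a fraction of the training cost.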

These retrained models are made readily available through interactive dashboards, giving our clients' analysts direct access to accelerate their workflows. As our clients work with these models, and through Active Learning (more on that below), we enable further training through feedback loops that close gaps not caught in the initial training.

2. How Can We Work with Large Image Datasets?

One of the advantages of neural networks is their ability to grow stronger over time as they gain access to more data. Our dashboards' feedback loops enable our clients' analysts to identify fringe cases: inputs where the model struggles to perform. We flag these examples for review by our data science team and add them to the model's training dataset so it can learn from past mistakes. This continued-training technique is called Active Learning, and it is built on the premise that some tasks are easier for a Neural Network to learn than others.

Suppose a network has no problem telling housecats from jungle cats but can't reliably tell wolves from dogs. Should you spend time and resources labeling more felines? Of course not.

Active Learning allows a model to recycle correct predictions from unseen data as new training and testing data, so humans only need to spend time labeling any objects the model struggles with. Just like the brain, your models should always be actively learning.
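A simplified sketch of that idea (an assumed uncertainty-based selection strategy, not Onyx's exact implementation): confident predictions are recycled as pseudo-labels for further training, while the least-confident items are routed to human analysts. The `triage` function, thresholds, and file names are all hypothetical:

```python
def triage(predictions, confident=0.9, budget=2):
    """Split model predictions into auto-labeled and human-review queues."""
    auto, review = [], []
    for item, label, conf in predictions:
        if conf >= confident:
            auto.append((item, label))     # recycle as new training data
        else:
            review.append((item, label, conf))
    # Spend the labeling budget on the items the model is least sure about.
    review.sort(key=lambda r: r[2])
    return auto, [item for item, _, _ in review[:budget]]

preds = [
    ("img1.jpg", "dog",  0.98),
    ("img2.jpg", "wolf", 0.51),
    ("img3.jpg", "cat",  0.95),
    ("img4.jpg", "dog",  0.60),
    ("img5.jpg", "wolf", 0.55),
]
auto, to_label = triage(preds)
print("auto-labeled:", auto)
print("send to analysts:", to_label)
```

The two confident cat-versus-dog predictions flow straight back into the training set, and the human labeling budget goes to the ambiguous wolf images where the model needs help most.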

3. Bringing It All Together: What is Onyx?

We've learned a bit more about Computer Vision, the techniques that make it effective, and how it can impact society and work.

Our Computer Vision work is performed securely in our open-source-based Machine Learning environment—Onyx. Onyx is explicitly built to solve the challenges of training AI in high-security environments.

Because of our work in the Defense community, we've built Onyx as a hardened environment that can run both on-premises and across cloud platforms. This means that no matter where our clients need to host their experiments, Onyx can be configured to accommodate their requirements. That's the power of secure cloud platforms!

These Computer Vision systems require large amounts of labeled data to generalize well to real-world data outside the training environment. Onyx's open-source tools allow for large-scale data ingestion and labeling, bringing AI experiments from concept to testing as rapidly and reliably as possible.

In conclusion, the techniques we covered are only one part of how we solve mission-critical challenges for our clients. In general, Computer Vision and Deep Learning are rapidly evolving technologies with new use cases being discovered every day.

That was a brief overview. Looking for a more in-depth conversation about Computer Vision and Onyx? Reach out!

Contact CSG

About the Author 

Wilson is an experienced Data Scientist with a Master's in Applied Data Science from the Indiana University School of Informatics and Computing in Indianapolis, Indiana. Prior to his work in data science, he worked as an Account Executive developing business-facing technology adoption strategies at companies like LinkedIn, Gartner, and CDW. This included executive consultation, ROI analysis, and end-user adoption and training strategies.  

Currently, Wilson works within our Client Solutions Group (CSG) on Team AI with a focus on developing Computer Vision models. He works across defense agencies to develop systems that scale the model development process and enable deep learning interpretability. He also builds Computer Vision models for tasks such as object detection, classification, time series analysis, and more.