Deep learning has gained more traction and dominance in recent years, with its use now spanning multiple industries and products, security included. But why? We asked our Machine Learning Engineers to give us the low-down on the current state of artificial intelligence.
The popularity of artificial intelligence started to grow back in 2012, when a deep learning model won an image recognition competition by a huge margin. People immediately started to take AI and its potential more seriously. Since then, significant improvements have been made to the technology, which means that deep learning can now successfully manage highly complex tasks, such as the false alarm filtering Calipsa performs. However, despite the fact that AI is now being used for more and more tasks, it is often still seen as a buzzword and many find the technology difficult to comprehend.
So what exactly is AI, how does it work, what are its limitations and how is it evolving?
What's what: Deep learning, machine learning and artificial intelligence
First and foremost, it’s important to distinguish between the different terms associated with artificial intelligence.
- Artificial intelligence - Any technique that enables computers to mimic human intelligence using logic, if-then rules, decision trees, and machine learning (including deep learning).
- Machine learning - A subset of AI that includes abstruse statistical techniques that enable machines to improve at tasks with experience. The category includes deep learning.
- Deep learning - The subset of machine learning composed of algorithms that permit software to train itself to perform tasks, like speech and image recognition, by exposing multilayered neural networks to vast amounts of data.
At Calipsa, we use machine learning to power our false alarm filter. This is because machine learning is best suited to tasks that have a lot of data, where each instance of the task is similar to the others, and where each task has a clearly defined input and expected output - as is the case with false alarm filtering.
How does AI learn, grow and improve?
The technology works by applying a set of parameters to a given input to produce a desired output. At each learning step, the error between what the model produces and the correct answer is measured. The parameters are then updated with respect to this error, so that on the next try the error should be smaller.
A single learning step can happen in as little as a second. However, when dealing with lots of data, a single learning step doesn't shift overall performance by much.
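To make the measure-error-then-update loop concrete, here is a deliberately tiny sketch in Python. It learns a single parameter `w` so that `w * x` matches a toy target relationship (`y = 3 * x`); the data, learning rate, and update rule are illustrative assumptions, not Calipsa's actual model.

```python
# Toy sketch of the learning loop described above: apply the parameter
# to an input, measure the error against the correct answer, then nudge
# the parameter to shrink that error on the next try.
w = 0.0              # the model's single parameter, starting from a guess
learning_rate = 0.01

# (input, correct answer) pairs following the target relationship y = 3 * x
data = [(x, 3 * x) for x in range(1, 11)]

for epoch in range(100):          # many passes over the data
    for x, y in data:             # one learning step per example
        prediction = w * x
        error = prediction - y    # how far off the model currently is
        w -= learning_rate * error * x  # update w to reduce the error

print(round(w, 2))  # w has converged to roughly 3.0
```

Each individual step only nudges `w` slightly, which is exactly why real models, with millions of parameters and far more data, need many steps before overall performance moves noticeably.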
What are the biggest limitations with AI today?
Although deep learning has gained traction across a vast number of real-world use cases, it still has limitations in its current state. One of the biggest is that it is very data hungry, and it performs extremely poorly when there are not many examples to learn from. For deep learning to be effective, it requires significant quantities of data. Additionally, as deep learning models get more complex, they often take longer to train, so it can take longer to see performance gains, or you end up spending more resources to speed up the learning.
Another limitation is that deep learning models are not good at explaining why they make the decisions they make. This has slowed their adoption in areas where interpretability is important, such as medicine and credit scoring. In a more general sense, AI is also fairly bad at acting in the physical world - so Terminator-type scenarios are a very long way off.
What improvements are being made?
Despite these limitations, AI, machine learning and deep learning techniques are constantly evolving, and ongoing research aims to improve their efficiency and effectiveness. This includes:
- How to reduce the amount of data required to train a deep learning model. This includes areas such as semi-supervised learning, where you provide many examples to learn from but only some with labels; the model then tries to bootstrap itself.
- How to speed up the learning process. The industry keeps developing and selling better graphics processing units (GPUs); a better GPU allows you to train the same model in a shorter amount of time. With the rise of cloud computing, it's also becoming easier for people to train models: you can pay, say, £1 an hour to train a model instead of having to find the upfront capital for a GPU, which can easily run into the thousands of pounds.
- How to make deep learning models more interpretable. For specific use cases, developers also find ways of better explaining the models. For example, in Calipsa's case, we highlight where the model thinks it saw movement, as well as highlighting the objects that contributed to the model returning a True response.
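The semi-supervised "bootstrap" idea in the first bullet can be sketched with pseudo-labelling: train on the few labelled examples, let the model label the unlabelled ones itself, then retrain on the enlarged set. The toy data and the nearest-centroid classifier below are illustrative assumptions chosen for brevity, not a real deep learning pipeline.

```python
# Pseudo-labelling: one simple form of semi-supervised learning.
def nearest_centroid_predict(centroids, x):
    """Classify x by the closest class centroid (1-D toy features)."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

def fit_centroids(points):
    """'Train' by computing the mean feature value per class label."""
    by_label = {}
    for x, label in points:
        by_label.setdefault(label, []).append(x)
    return {label: sum(xs) / len(xs) for label, xs in by_label.items()}

# A few labelled examples ...
labelled = [(1.0, "cat"), (2.0, "cat"), (8.0, "dog"), (9.0, "dog")]
# ... and many more unlabelled ones.
unlabelled = [1.5, 2.5, 7.5, 8.5, 0.5, 9.5]

# Step 1: train on the small labelled set.
centroids = fit_centroids(labelled)

# Step 2: label the unlabelled data with the model's own predictions.
pseudo = [(x, nearest_centroid_predict(centroids, x)) for x in unlabelled]

# Step 3: retrain on labelled + pseudo-labelled data combined.
centroids = fit_centroids(labelled + pseudo)
print(centroids)
```

Real systems typically only keep pseudo-labels the model is confident about; this sketch keeps them all for simplicity.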
In the context of security, deep learning brings huge potential and has the power to transform performance. However, it shouldn't be seen as a replacement for existing processes. Even with the improvements being made, security professionals shouldn't look to artificial intelligence as a full solution, but rather as an extension and enhancement of current operations. By building a collaboration between humans and technology, security companies will be able to add more value for customers.
The Calipsa False Alarm Filtering Platform is powered by deep learning algorithms to filter out over 85% of CCTV false alarms. Get in touch to find out how it could benefit your business.