Yonatan Geifman is the CEO and Co-Founder of Deci, which transforms AI models into production-grade solutions on any hardware. Deci has been recognized as a Tech Innovator for Edge AI by Gartner and included in CB Insights’ AI 100 list. Its proprietary technology set new performance records at MLPerf in collaboration with Intel.
What initially attracted you to machine learning?
From a young age, I was always fascinated by cutting-edge technologies – not just using them, but truly understanding how they work.
This lifelong fascination paved the way toward my eventual PhD studies in computer science, where my research focused on Deep Neural Networks (DNNs). As I came to understand this critical technology in an academic setting, I began to truly grasp the ways AI can positively impact the world around us. From smart cities that can better monitor traffic and reduce accidents, to autonomous vehicles that require little to no human intervention, to life-saving medical devices – there are endless applications where AI could better society. I always knew I wanted to take part in that revolution.
Could you share the genesis story behind Deci AI?
It is not difficult to recognize – as I did during my PhD – how beneficial AI can be in use cases across the board. Yet many enterprises struggle to capitalize on AI’s full potential, as developers continually face an uphill battle to build production-ready deep learning models for deployment. In other words, it remains extremely difficult to productize AI.
These challenges can largely be attributed to the AI efficiency gap facing the industry. Algorithms are growing exponentially more powerful and require ever more compute, but in parallel they need to be deployed cost-efficiently, often on resource-constrained edge devices.
My co-founders, Prof. Ran El-Yaniv and Jonathan Elial, and I founded Deci to address that challenge. And we did it in the only way we saw possible – by using AI itself to craft the next generation of deep learning. We embraced an algorithmic-first approach, working to improve the efficacy of AI algorithms at the earliest stages of development, which in turn empowers developers to build and work with models that deliver the highest levels of accuracy and efficiency for any given inference hardware.
Deep learning is at the core of Deci AI, could you define it for us?
Deep learning is a subfield of machine learning, which is itself a branch of AI, and it is set to empower a new era of applications. Deep learning is heavily inspired by how the human brain is structured, which is why when we discuss deep learning, we discuss “neural networks”: models built from stacked layers of simple learned units that extract insights directly from raw data such as images, video, and sensor readings. This is especially relevant for edge applications (think cameras in smart cities, sensors on autonomous vehicles, analytics solutions in healthcare), where on-site deep learning models are crucial for generating such insights in real time.
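To make the term concrete, a neural network in code is simply a stack of layers of learned units. The following minimal PyTorch sketch is purely illustrative – the layer sizes and input shape are arbitrary assumptions, and it is not Deci’s technology:

```python
# Minimal sketch of a "neural network": a few stacked layers of simple units.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    def __init__(self, num_inputs: int = 784, num_classes: int = 10):
        super().__init__()
        # Stacked ("deep") layers, loosely inspired by neurons in the brain.
        self.layers = nn.Sequential(
            nn.Linear(num_inputs, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

model = SmallClassifier()
dummy_input = torch.randn(1, 784)   # e.g. a flattened 28x28 image
logits = model(dummy_input)         # raw scores, one per class
print(logits.shape)                 # torch.Size([1, 10])
```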
What is Neural Architecture Search?
Neural Architecture Search (NAS) is a technological discipline aimed at obtaining better deep learning models.
Google’s pioneering work on NAS in 2017 helped bring the topic into the mainstream, at least within research and academic circles.
The goal of NAS is to find the best neural network architecture for a given problem. It automates the design of DNNs, aiming for higher performance and lower losses than manually designed architectures. An algorithm searches a space of millions of candidate model architectures to yield an architecture uniquely suited to that particular problem. To put it simply, it utilizes AI to design new AI, based on the specific needs of any given project.
Teams use it to simplify the development process, reduce trial-and-error iterations, and end up with a model that best serves the application’s accuracy and performance targets.
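As a rough illustration of the search idea only (this is not AutoNAC), a toy NAS loop can be written as a search over a tiny space of architectures, scored with a cheap proxy objective. Real NAS methods also train each candidate (or a weight-sharing supernet) and measure accuracy and latency on the target hardware:

```python
# Toy NAS sketch: enumerate a small architecture space and keep the candidate
# with the best proxy score. Illustrative only; depths, widths, and the proxy
# objective are arbitrary assumptions.
import itertools
import time
import torch
import torch.nn as nn

def build_model(depth: int, width: int, num_inputs: int = 784, num_classes: int = 10) -> nn.Module:
    """Build a small MLP described by two architecture parameters."""
    layers, in_features = [], num_inputs
    for _ in range(depth):
        layers += [nn.Linear(in_features, width), nn.ReLU()]
        in_features = width
    layers.append(nn.Linear(in_features, num_classes))
    return nn.Sequential(*layers)

def proxy_score(model: nn.Module, batch: torch.Tensor) -> float:
    """Cheap stand-in objective: favor fast, small models (lower is better)."""
    start = time.perf_counter()
    with torch.no_grad():
        model(batch)
    latency = time.perf_counter() - start
    num_params = sum(p.numel() for p in model.parameters())
    return latency + 1e-8 * num_params

search_space = list(itertools.product([1, 2, 3, 4], [32, 64, 128, 256]))  # (depth, width)
batch = torch.randn(64, 784)

best_depth, best_width = min(
    search_space, key=lambda arch: proxy_score(build_model(*arch), batch)
)
print(f"selected architecture: depth={best_depth}, width={best_width}")
```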
What are some of the limitations of Neural Architecture Search?
Traditional NAS’s main limitations are accessibility and scalability. Today, NAS is mostly used in research settings and is typically carried out only by tech giants like Google and Facebook, or at academic institutions like Stanford, because traditional NAS techniques are complicated to run and require a lot of computational resources.
That’s why I’m so proud of our achievements in developing Deci’s groundbreaking AutoNAC (Automated Neural Architecture Construction) technology, which democratizes NAS and enables companies of all sizes to easily build custom model architectures with better-than-state-of-the-art accuracy and speed for their applications.
How does training object detection models differ based on image type?
Surprisingly, the domain of the images does not dramatically affect the training process of object detection models. Whether you are looking for a pedestrian on the street, a tumor in a medical scan, or a concealed weapon in an x-ray image taken by airport security, the process is pretty much the same. The data you use to train your model needs to be representative of the task at hand, and the model size and structure might be affected by the size, shape, and complexity of the objects in your images.
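As a hedged illustration of that point, here is a generic fine-tuning step using torchvision’s off-the-shelf Faster R-CNN (torchvision ≥ 0.13 API). The class count, image sizes, and dummy targets are placeholders; the same loop would be fed street scenes, medical scans, or security x-rays without structural changes. This is not Deci’s pipeline:

```python
# Generic detection fine-tuning step; only the dataset and num_classes change per domain.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

num_classes = 2  # background + "pedestrian" (or "tumor", or "weapon")
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the classification head for the task's own classes; the rest of the
# architecture and the training procedure stay the same across domains.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
model.train()

# Dummy batch standing in for any domain's data: a list of images plus
# per-image target boxes (x1, y1, x2, y2) and labels.
images = [torch.rand(3, 480, 640) for _ in range(2)]
targets = [
    {"boxes": torch.tensor([[50.0, 60.0, 200.0, 220.0]]), "labels": torch.tensor([1])}
    for _ in images
]

loss_dict = model(images, targets)  # classification + box-regression losses
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
print({k: float(v) for k, v in loss_dict.items()})
```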
How does Deci AI offer an end-to-end platform for deep learning?
Deci’s platform empowers developers to build, train, and deploy accurate and fast deep learning models to production. In doing so, teams can leverage the most cutting-edge research and engineering best practices with one line of code, shorten time to market from months to a couple of weeks, and guarantee success in production.
You initially started with a team of 6 people, and you are now serving large enterprises. Could you discuss the growth of the company, and some of the challenges you’ve faced?
We are thrilled with the growth we have achieved since starting in 2019. Now, with over 50 employees and more than $55 million in funding to date, we are confident we can continue helping developers realize and act on AI’s true potential. Since launching, we’ve been included on CB Insights’ AI 100, made groundbreaking achievements, such as our family of models that deliver breakthrough deep learning performance on CPUs, and solidified meaningful collaborations with big names like Intel.
Is there anything else that you would like to share about Deci AI?
As I mentioned before, the AI efficiency gap continues to cause major obstacles for AI productization. “Shifting left,” or accounting for production constraints early in the development lifecycle, reduces the time and cost spent on fixing potential obstacles when deploying deep learning models in production down the line. Our platform has proven able to do just that by providing companies with the tools needed to successfully develop and deploy world-changing AI solutions.
Our goal is simple – make AI widely accessible, affordable and scalable.
Thank you for the great interview; readers who wish to learn more should visit Deci.