Introduction to Deep Learning: Our First Blog Post
Perceptrons, datasets, and neural nets: if you think you've stumbled across a biology article, you're mistaken. We're going to run through the fundamental principles of deep learning at a very high level, giving you a smooth introduction to the field without the mathematical baggage that usually comes with it.
AI Fundamentals?
The emergence of libraries and services such as FastAI, PyTorch, and AWS shifts the focus of deep learning away from nitty-gritty mathematical details, opening up the field to the general practitioner instead of the privileged few with enormous in-house clusters. To put this into relatable terms, we have been thrust into the Wild West era of AI. The ease of use that comes with FastAI v2, in particular, lets anyone train a state-of-the-art model, say, an image classifier that tells dogs from cats, in about five lines of code.
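To make the "five lines" claim concrete, here is a sketch along the lines of the well-known pets example from the fastai documentation. The dataset URL, the `is_cat` labeling rule, and the choice of `resnet34` are all assumptions borrowed from that example rather than anything prescribed here, and running it requires the `fastai` package, a network connection to download the data, and ideally a GPU:

```python
from fastai.vision.all import *

# Download the Oxford-IIIT Pets dataset (filenames of cat images start
# with an uppercase letter, which is what the label function relies on)
path = untar_data(URLs.PETS) / 'images'

def is_cat(filename):
    return filename[0].isupper()

# Build dataloaders, fine-tune a pretrained ResNet for one epoch
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2,
    label_func=is_cat, item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)
```

The heavy lifting (transfer learning, data augmentation, sensible training defaults) is hidden behind those few calls, which is exactly why the fundamentals matter when something goes wrong.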
Within thirty minutes of reading documentation, anybody with a Python script, a unique dataset, and a decent computer can create a highly accurate classification model, one that could outperform some state-of-the-art models from just a few years ago. If it's so easy, though, why bother with the fundamentals of deep learning?
To put it simply: it has become easy to write deep learning code (you no longer have to write low-level NumPy code), but getting that code to do exactly what you want still requires insight into the core principles behind how neural networks function.
Writing and designing deep learning algorithms differs greatly from traditional software engineering. You cannot simply grab some code from an online forum and expect the model to fit a given dataset. You, as the deep learning practitioner, know your model best, and only with a solid grasp of the fundamentals can you expect to solve most deep learning problems.
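Those fundamentals start with the perceptron mentioned at the top of this post. As an illustrative sketch (not part of any library discussed here), the classic perceptron learning rule fits in a few lines of NumPy; here it learns the linearly separable AND function:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Classic perceptron rule: nudge weights toward misclassified points."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            err = yi - pred          # 0 if correct, +1/-1 if wrong
            w += lr * err * xi
            b += lr * err
    return w, b

def predict(X, w, b):
    return (X @ w + b > 0).astype(int)

# AND gate: linearly separable, so the perceptron is guaranteed to converge
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
```

Every modern network is, at heart, many of these simple units stacked and composed, which is why understanding one unit well pays off later.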