Introduction to Hardware Accelerators in Artificial Intelligence

Aarushi Ramesh
4 min read · Jan 8, 2020


A doodle/sketch of the hyped-up topic: machine learning and GPUs!

Ok so last semester one of my professors decided to tell us a joke, and I thought it was the funniest thing ever:

“What is the circumference of a jack-o-lantern divided by its diameter??”

Pumpkin pi!!!!!!!!!!!!

lol, I thought it was funny!!!!!!!!!!!! (it's ok if you didn’t) :)

Happy New Year 2020! Even though we’re already a week into 2020 :)

Ok so I was reading this article a while ago about implementing AI algorithms in hardware chips, and I was like WOW?!!! That sounds VERY VERY cool and interesting, and it could be valuable and have a positive impact on so many other industries.

So recently, deep learning and machine learning have become pretty big things. This is because they can be applied in practically any context. However, they also involve computationally intensive algorithms and huge amounts of data, so we need efficient tools to accelerate the tasks of Artificial Intelligence. But what are deep learning, machine learning, and artificial intelligence? I feel like these are the main words everyone is using lol, but we need to understand why we use these types of “learning”. We use ML (abbreviation for machine learning, cuz I’ve already used it 100 times lol) to make a machine intelligent without explicitly programming it.

So let's start with machine learning. It’s one of the big new technologies out right now. If you haven’t heard of the term, that's totally cool! It is a pretty new thing, and many industries (not even tech related btw) are using machine learning to do tasks humans used to do, more efficiently.

All about Machine Learning

Machine Learning is a branch of study all about training a machine (a computer, for example) to complete tasks without explicitly programming it. Image classification is an excellent example. If you want a computer to classify a specific image as a cat, you would train it to learn certain features of a cat that distinguish it from other animals. Another example is detecting whether an email is spam or not. You basically need to feed large amounts of data into your machine learning model so it can learn patterns from that data and accurately predict on future data. This requires lots of algorithms and processing, which is where deep learning comes into play. Deep learning uses models called neural networks to process, classify and make predictions on data sets. In order to get accurate results, you need LOTS of data. And when you have more data, it’s gonna take a long time to analyze it efficiently. This is where accelerating the ‘analysis’ of data comes into play: and hardware processors can take care of that.
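Just to make the “learning from data” idea a little more concrete, here’s a tiny toy sketch in Python (completely made-up numbers and my own example, not from any particular library): a mini logistic-regression-style spam classifier that nudges its weights a little on every pass over the data.

```python
import numpy as np

# Toy "spam vs. not spam" data: each row is an email described by 3
# made-up numeric features, and y says whether it was spam (1) or not (0).
X = np.array([[3.0, 1.0, 0.0],
              [0.0, 0.2, 1.0],
              [2.5, 0.8, 0.1],
              [0.1, 0.0, 0.9]])
y = np.array([1, 0, 1, 0])

w = np.zeros(X.shape[1])  # the model's weights, starting at zero
b = 0.0
lr = 0.1                  # learning rate: how big each adjustment is

for epoch in range(1000):
    z = X @ w + b                     # a score for each email
    p = 1.0 / (1.0 + np.exp(-z))      # squash into a 0..1 "spam probability"
    grad_w = X.T @ (p - y) / len(y)   # how far off we are, per weight
    grad_b = np.mean(p - y)
    w -= lr * grad_w                  # learn: take a small step against the error
    b -= lr * grad_b

# Predict on a new, unseen (also made-up) email.
new_email = np.array([2.8, 0.9, 0.05])
prob = 1.0 / (1.0 + np.exp(-(new_email @ w + b)))
print(f"Probability this email is spam: {prob:.2f}")
```

Even in this tiny example, the work is basically matrix math repeated over and over, and real models do this across millions of examples and parameters. That’s the part that needs accelerating.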

Making a processor/microcontroller do so many “intelligent” things sounds like such an amazing feat. In order to run complex neural networks and process that much data and information, you need powerful and efficient processors. But which processors do we use? Which ones are the most efficient?

Central Processing Unit

A CPU is basically the brain of a computer: the Central Processing Unit. It executes and performs all of the instructions (in a program, software, application, etc.), such as logical operations, arithmetic, and I/O (input/output, i.e. communication between devices). A long time ago, CPUs were built with a single core, which means they could only work on one task at a time. Due to advancements in technology, we can now build multi-core CPUs, which can work on several tasks at once.

A brief CPU model with the yellow rectangles representing the cores. A core consists of an Arithmetic Logic Unit (ALU) for computation, a cache (for memory) and control units.
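That single-core vs. multi-core difference is easy to feel in code. Here’s a rough Python sketch (my own toy example; the exact timings will totally depend on your machine) that runs the same chunks of work one at a time and then spread across the cores:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def slow_task(n):
    """Stand-in for some CPU-heavy work (just sums a lot of squares)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [5_000_000] * 4  # four chunks of work

    # One core at a time: tasks run back to back, like an old single-core CPU.
    start = time.perf_counter()
    serial = [slow_task(n) for n in jobs]
    print(f"one after another: {time.perf_counter() - start:.2f}s")

    # Spread across cores: a multi-core CPU can chew on several tasks at once.
    start = time.perf_counter()
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(slow_task, jobs))
    print(f"across the cores:  {time.perf_counter() - start:.2f}s")
```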

Graphics Processing Unit

A GPU is a Graphics Processing Unit, and it's designed differently compared to the CPU. A GPU has many, many more cores, and they are much smaller than the ones in a CPU. The cores are designed like this so that simple computations (since the cores are smaller) can be performed in parallel, and many tasks can be completed simultaneously. GPUs are used a lot in the gaming industry, for image processing and computer graphics (hence the term “Graphics Processing Unit”). In general, the design of the GPU makes highly parallel algorithms run more efficiently than they would on a CPU.

A great example of what GPUs have enabled is NVIDIA’s computing platform called CUDA! CUDA lets developers use the GPU for general-purpose computing, making their algorithms faster and more efficient.

A simple sketch of a GPU design — a bunch of cores!! Lots of cores, but in smaller sizes compared to the design of the CPU!
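If you want to try this out yourself, libraries like PyTorch are built on top of CUDA, and pushing work onto the GPU only takes a couple of lines. This is just a sketch, and it assumes you have PyTorch installed and an NVIDIA GPU with CUDA set up:

```python
import torch

# Two big matrices: a million entries each. On a GPU, the multiply below
# gets split across thousands of small cores, each handling a tiny piece.
a = torch.randn(1000, 1000)
b = torch.randn(1000, 1000)

if torch.cuda.is_available():   # only if an NVIDIA GPU + CUDA is set up
    a, b = a.cuda(), b.cuda()   # copy the data over to the GPU

c = a @ b                       # one matrix multiply, done in parallel
print(c.device)                 # "cuda:0" if it actually ran on the GPU
```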

So which one should you use? It totally depends on what you are working on. If you are working with deep learning and ML models, chances are that GPUs are a better fit. This is because ML requires a lot of matrix math, which is really effective when done in parallel. CPUs are better for more complex but sequential, step-by-step math or logic problems. There is also a huge cost factor: GPUs generally cost more than CPUs, so there are multiple factors to consider before coming to a decision.
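Here’s a rough way to see the matrix-math point for yourself (again assuming PyTorch and a CUDA-capable GPU; the exact numbers depend entirely on your hardware):

```python
import time
import torch

def time_matmul(device, size=4000):
    """Multiply two size x size random matrices on the given device and time it."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    a @ b                            # warm-up run (GPU setup isn't free)
    if device == "cuda":
        torch.cuda.synchronize()     # make sure the GPU is actually idle
    start = time.perf_counter()
    a @ b
    if device == "cuda":
        torch.cuda.synchronize()     # wait for the GPU to really finish
    return time.perf_counter() - start

print(f"CPU took {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU took {time_matmul('cuda'):.3f} s")
else:
    print("No CUDA GPU available on this machine")
```

On most machines with a decent GPU, the GPU run finishes way faster, because all those little cores each grab a piece of the matrix multiply.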

Originally published at https://rushiblogs.weebly.com.



Written by Aarushi Ramesh

Hello! I’m a student at the University of Texas at Austin. Welcome to my collection of thoughts. I like to write and blog. rushiblogs.weebly.com
