Harnessing the Horsepower of the GPU for Deep Learning Models

Jonathan Waka Simpungwe
2 min read · Jun 13, 2020

Graphics processing units (GPUs) have reached a high level of parallel computing capability, allowing efficient manipulation of large datasets. For algorithms that process huge volumes of data, GPUs are more efficient than central processing units (CPUs).

Machine learning and deep learning workloads process large volumes of data, so the use of GPUs becomes a necessity. In simple terms, consider the practical example of a horse pulling a set of loads with a rope: the rope is the machine learning/deep learning algorithm, the loads are the huge volumes of data to be processed, and the horse is the GPU that accelerates the workload. For the task to be completed at a much faster rate, a GPU (the horse) is required to accelerate the process.
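The kind of work a GPU accelerates is data-parallel: the same arithmetic applied to every element of a large array at once. A minimal sketch of that pattern, using NumPy on the CPU as a stand-in (on a GPU, frameworks such as PyTorch or CuPy run the same element-wise math across thousands of cores simultaneously):

```python
import numpy as np

# A large batch of values, like the activations in a deep learning layer.
x = np.random.rand(100_000).astype(np.float32)

# Scalar loop: one element at a time, the way a single CPU core works.
loop_result = np.empty_like(x)
for i in range(x.size):
    loop_result[i] = x[i] * 2.0 + 1.0

# Vectorized form: the whole array in one data-parallel operation --
# the same pattern a GPU spreads across thousands of cores at once.
vector_result = x * 2.0 + 1.0

# Both paths compute the same numbers; only the execution model differs.
assert np.allclose(loop_result, vector_result)
```

The speedup from the horse comes from exactly this structure: because each element is independent, the work can be split across as many cores as the hardware offers.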

In a nutshell, NVIDIA, AMD and IBM help push the boundaries of innovation in AI and play a major role in designing and implementing data-pipeline bandwidth and throughput specifically for artificial intelligence development, by integrating purpose-built GPUs, software stacks and processors. In addition, GPUs have enhanced and accelerated scientific, analytics and engineering applications.
