4 Myths About GPU Acceleration


Andrew Malinow

Principal Data Scientist | Area Practice Director-Analytics, AI & Machine Learning
August 13, 2019

A graphics processing unit (GPU) is a computer chip originally designed to render graphics, work that requires performing huge numbers of mathematical operations in parallel. That same design makes GPUs well suited to other data-centric, math-heavy workloads, and the technology community has embraced them, fueling a rapidly growing ecosystem of start-ups, products, and open-source libraries dedicated to accelerating compute- and data-intensive tasks.

Despite a steady increase in GPU development activities, misunderstandings about the value of using GPUs to accelerate workloads persist. In this post, I debunk four of these myths and explain why I think the technology has broader utility than standard artificial intelligence (AI) use cases might suggest.

Myth #1. GPU acceleration is just for AI, more specifically deep learning.

Mention GPU acceleration and most people think of rendering video game graphics, speeding up AI applications, or processing massive amounts of image data. But the reality is that GPUs also offer an effective way to accelerate the general computing workloads that drive business performance. With hundreds or thousands of specialized cores working in parallel, a simple GPU cluster (or even a single GPU) can improve the efficiency of database and analytics workflows at multiple junctures.

In fact, a single GPU can deliver the same performance as 100 central processing units (CPUs) and can process some types of structured and unstructured data much more quickly.[1] That’s because GPUs are better at performing simple instructions in parallel. That parallelism also means they are not subject to the same input/output-to-compute bottleneck as CPUs, which struggle to ingest and query data at the same time.

Myth #2. GPU acceleration is a fad.

Korean beauty masks, fidget spinners, male rompers: now those are fads. General-purpose GPU computing, by contrast, has been around since 2007, when NVIDIA released CUDA for its GeForce 8 series of chips. Equally important, NVIDIA has continued to develop and release new GPUs and improved microarchitectures for general-purpose computing.

Sales of these GPUs from NVIDIA and other manufacturers are expected to deliver a compound annual growth rate (CAGR) of over 35% through 2024.[2] That growth translates into a projected $80 billion in sales, which suggests that more than a few organizations are investing in GPU power.[3]

Current and expected growth is being sustained by large numbers of dedicated and passionate professional developers. They are using NVIDIA’s parallel computing platform and programming model, CUDA, to create GPU-accelerated software layers for general-purpose processing. With these layers, they are able to divide workloads so sequential code runs on CPUs and compute-intensive code runs in parallel on GPU cores.

The CUDA application programming interface (API) also lets developers put GPUs to work on general-purpose processing tasks well beyond graphics. Currently there are over 16,000 CUDA-focused repositories on GitHub, the world’s largest development community. More than 900 of these repositories combine CUDA and Python, and they allow developers to collaborate on algorithms, optimizations, new network architectures, standards, performance metrics, and other practical challenges using CUDA’s libraries, compilers, development tools, and runtime.[4]
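
To make that division of labor concrete, here is a minimal sketch using Numba, one of the open-source libraries from the CUDA-and-Python corner of that ecosystem. The ordinary Python function runs sequentially on the CPU, while the decorated kernel runs in parallel across thousands of GPU threads. The sketch assumes a CUDA-capable GPU plus the numba and numpy packages, and the function and variable names are illustrative only.

```python
# A minimal sketch of the CPU/GPU split described above, using Numba
# (one of many open-source CUDA + Python libraries). Assumes a
# CUDA-capable GPU plus the numba and numpy packages; names are illustrative.
import numpy as np
from numba import cuda


@cuda.jit
def scale_and_add(x, y, out):
    # Compute-intensive, data-parallel work: each GPU thread handles one element.
    i = cuda.grid(1)
    if i < x.size:
        out[i] = 2.0 * x[i] + y[i]


def run(n=1_000_000):
    # Sequential host code stays on the CPU: prepare the data, copy it to the
    # GPU, launch the kernel across thousands of threads, collect the result.
    x = np.random.rand(n).astype(np.float32)
    y = np.random.rand(n).astype(np.float32)

    d_x = cuda.to_device(x)
    d_y = cuda.to_device(y)
    d_out = cuda.device_array(n, dtype=np.float32)

    threads_per_block = 256
    blocks = (n + threads_per_block - 1) // threads_per_block
    scale_and_add[blocks, threads_per_block](d_x, d_y, d_out)

    return d_out.copy_to_host()


if __name__ == "__main__":
    print(run()[:5])
```

The same basic pattern scales from a single workstation GPU to a multi-GPU cluster, with the launch configuration and data partitioning adjusted to match.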

Support for CUDA is nurturing a large and growing GPU-acceleration ecosystem. This ecosystem includes thousands of applications running on GPUs in embedded systems, mobile phones, gaming devices, and computers.[5] While application-specific integrated circuits (ASICs) may one day become the default technology for compute-intensive processes (a topic for another day), GPUs remain the most practical option for accelerating these workloads today.

Myth #3. No one is really investing in GPUs.

A quick online search debunks this one. Accenture Labs, Adobe, Audi Business Innovations, Baker Hughes, Clemson University, Honda, Komatsu, NASA Ames, Oak Ridge National Laboratory, and Pinterest are among the many organizations that are using GPU servers to solve hard problems and gain competitive advantage.[6]

The $80 billion of market revenue mentioned earlier is being driven by ongoing investment in AI and the Internet of Things (IoT). Both involve collecting and processing massive amounts of data, which can be challenging for conventional CPU-based infrastructure.

As a McKinsey & Company survey indicates, many organizations are acting to leverage AI, machine learning, neural networks, and other data-intensive statistical methods. They see GPU-accelerated parallel processing as their best option for digging into immensely complicated problems and rapidly testing and modeling possible solutions, and they are investing accordingly.[7]

Myth #4. Only cutting-edge organizations benefit from GPU acceleration.

Like other brick-and-mortar companies, retail giant Walmart is investing in GPU acceleration because it needs more computing power to support both its digital transformation and its general computing workloads. It is using neural networks to further optimize its supply chain and its ability to forecast consumer demand. As the company’s chief data scientist, Bill Groves, puts it, “More compute power allows us to bring in more data and get better faster.”[8]

Other mainline brands, including Lowe’s, Ocado, Jet.com, Sephora, and Stitch Fix, are also investing in GPU acceleration.[9] Just like Walmart, they have decided that fast processing is the key to reducing inventory shrinkage and delivering in-store and online experiences that align with customer expectations.

Smaller organizations with modest budgets can benefit from GPU acceleration as well. With smart investment, they can add a GPU-accelerated layer to their Hadoop-based data platforms, increase performance significantly, and deliver real-time, actionable insights as part of an IoT or digital transformation strategy.
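
The specifics vary with each stack, but the pattern is usually the same: move a slice of the data into GPU memory and run the heavy aggregations there. The sketch below is one simplified illustration using the open-source RAPIDS cuDF library rather than any particular vendor’s product; it assumes sensor readings have already been exported from the data platform as a Parquet file, and the file and column names are hypothetical.

```python
# A minimal, illustrative sketch of GPU-accelerated analytics using the
# open-source RAPIDS cuDF library (a pandas-like dataframe that runs on
# the GPU). The file name and column names below are hypothetical.
import cudf


def summarize_readings(path="sensor_readings.parquet"):
    # Load the exported sensor data straight into GPU memory.
    df = cudf.read_parquet(path)

    # Aggregate on the GPU: average temperature and peak vibration per device.
    summary = df.groupby("device_id").agg(
        {"temperature": "mean", "vibration": "max"}
    )
    return summary


if __name__ == "__main__":
    print(summarize_readings().head())
```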

Compute-as-a-service models and cloud-based GPU instances can deliver these resources quickly and affordably. Parsec, an online game-streaming service, is using cloud GPUs to ramp up quickly while limiting its infrastructure costs, for example.[10]

This post is by no means the final word on GPU myths, but I hope it has given you some new information and a better understanding of how GPU acceleration can provide practical solutions for a range of processing challenges. If you would like to learn more, I encourage you to check out GitHub or one of the free GPU instances offered by providers such as Google or NVIDIA.

If you have a technical question or suggestion for a future post and would like to discuss it outside this public forum, please email me at Andrew.Malinow@eplus.com.


[1] Eric Mizell and Roger Biery, “Introduction to GPUs for Data Analytics,” Kinetica DB, Inc., 2017, p. 5.

[2] Ankita Ghutani and Preet Wadhwani, “GPU Market Size by Component,” Global Market Insights, January 2019.

[3] “Graphic Processing Unit (GPU) Market to Cross $80bn by 2024,” Global Market Insights, Inc., January 29, 2019.

[4] GitHub search conducted August 5, 2019: https://github.com/search?l=Python&p=2&q=cuda&type=Repositories

[5] “Graphics Processing Unit,” Wikipedia, July 29, 2019.

[6] “Deep Learning Success Stories,” a collection of case studies from NVIDIA, 2019; “Design and Visualization Success Stories,” a collection of case studies from NVIDIA, 2019.

[7] Hayley Dunning, “Supercomputers Use Graphics Processors to Solve Longstanding Turbulence Question,” Science X Daily, July 25, 2019.

[8] Brian Caulfield, “Walmart, NVIDIA Discuss How They’re Working Together to Transform Retail,” NVIDIA blog, July 11, 2019.

[9] Zeus Kerravala, “More Artificial Intelligence Options Coming to Google Cloud,” CIO, July 30, 2018; Eric Thorsen, “Revolutionizing Retail with Artificial Intelligence,” PowerPoint Presentation, March 30, 2018.


