Deep Learning and AI

Memory Requirements for Deep Learning and Machine Learning

February 8, 2024 • 7 min read


Deep Learning and Machine Learning Memory Requirements

Building a GPU workstation for Deep Learning and Machine Learning can be daunting, especially when choosing the right hardware for your target workload. There are a lot of moving parts depending on the types of projects you plan to run. Understanding machine learning memory requirements is a critical part of the building process, yet it is easy to overlook.

A GPU is typically understood to be a "must-have," but memory requirements often don't factor into that purchase decision, even though they can make or break your application's performance.

This article will walk you through how much RAM is needed for machine learning, which storage to consider, SSD or HDD, and the need for a GPU. We will guide you through the memory requirements you need to know when configuring a purpose-built machine learning workstation.

If you have any questions or special orders for graphics cards, contact our team.

Deep Learning vs Machine Learning

Before diving in, let’s first separate out deep learning and machine learning. 

Deep learning includes projects where we feed a dataset to train an AI model to "think." This includes training AI via neural networks, where the model must process and interpret data and come up with unique solutions. Deep learning is a specific type of machine learning.

Machine learning, on the other hand, is less about an AI program learning to think on its own and more about processing data to generate solutions that are predetermined or expected. Machine learning can be thought of as a complex algorithm for predicting outcomes. There is far more human involvement in machine learning, and memory can be moved and manipulated as needed.

Should You Use SSD or HDD For Machine Learning?


For machine learning projects and training AI, data is constantly moving as datasets are referenced. So, we should first address the need for an SSD or HDD since this is where the data will reside before processing.

SSDs are flash storage devices that are fast to access, often over fast PCIe lanes, but usually come at a higher cost per GB. HDDs are mechanical storage devices that are slower, limited by their physical spindle speeds, but generally offer higher capacity at a more affordable cost per GB.

When datasets will be referenced extensively, an NVMe SSD lets the user quickly read and write data as needed. However, for data that won't be moved frequently or will eventually land in permanent cold storage, an HDD will work just fine and cost far less. Data storage, especially NVMe SSDs, has come down in price, making it a no-brainer upgrade for anyone still stuck on the SATA interface. The speed of these SSDs reduces total runtime, saving the most valuable resource: time.
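To illustrate how much interface speed matters, here is a back-of-the-envelope comparison of how long one full pass over a dataset takes at typical sequential read speeds. The throughput figures are illustrative assumptions, not benchmarks of any specific drive:

```python
# Rough time to read a dataset once from disk at typical sequential speeds.
# Throughput numbers (GB/s) are illustrative assumptions, not measurements.
DATASET_GB = 500

drives = {
    "SATA HDD": 0.2,   # ~200 MB/s, limited by spindle speed
    "SATA SSD": 0.55,  # ~550 MB/s, limited by the SATA interface
    "NVMe SSD": 7.0,   # ~7 GB/s on PCIe 4.0
}

for name, gb_per_s in drives.items():
    minutes = DATASET_GB / gb_per_s / 60
    print(f"{name}: ~{minutes:.1f} min per full pass")
```

With these assumed speeds, a 500 GB dataset takes roughly 40 minutes per pass on an HDD versus about a minute on NVMe, and training typically makes many passes.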

If you want to know more about the differences between SSD or HDD, we have a great blog post that better explores these differences.

GPU Memory & CPU RAM for Deep Learning and Machine Learning


With that in mind, this next question is difficult to tackle. How much VRAM for deep learning is even necessary?

The optimal GPU depends on the project. A deep learning project that depends heavily on massive amounts of data being input and processed will ultimately carry a heavier memory load and thus require a GPU with more memory.

Processing more data benefits from more GPU memory, but going overkill in the memory department can waste budget better spent elsewhere. It all boils down to the type of model and its scale. Models like ChatGPT require massive amounts of memory and hundreds of GPUs to serve the masses, whereas a local machine learning enthusiast might need only 2 or 4 GPUs for local deep learning deployments, and some small businesses can get by with one or a few compute nodes.
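To put rough numbers on "more memory," a common back-of-the-envelope estimate for training is parameters × bytes per value, multiplied to account for gradients and optimizer state. The multiplier below is a simplified assumption for FP32 training with Adam; real frameworks add activation memory and overhead on top, so treat this as a lower bound:

```python
def estimate_training_vram_gb(num_params, bytes_per_param=4,
                              optimizer_multiplier=4):
    """Very rough VRAM estimate for training a model.

    optimizer_multiplier=4 assumes FP32 Adam: weights + gradients
    + two optimizer moment buffers. Activations and framework
    overhead are NOT included, so this is a lower bound.
    """
    return num_params * bytes_per_param * optimizer_multiplier / 1e9

# A 7-billion-parameter model: ~112 GB before activations,
# already far beyond a single 24 GB RTX 4090.
print(f"{estimate_training_vram_gb(7e9):.0f} GB")
```

Inference is much lighter (weights only, often in lower precision), which is why a consumer card can run models it could never train.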

Great GPUs for machine learning and deep learning include the high-end NVIDIA GeForce RTX GPUs. NVIDIA consumer GPUs are expensive, but not as expensive as the professional line of GPUs that prioritize enterprise support, scalability, and form factor. If your workload isn't too intensive, one or two (if you can fit them) RTX 4090 (24GB) or RTX 4080 (16GB) cards are more than enough.

However, if scalability and future upgrades are a consideration, NVIDIA RTX GPUs like the RTX 6000 Ada (48GB VRAM) and RTX 5000 Ada (32GB VRAM) are dual-slot cards that fit easily, up to 4 GPUs in a single workstation or up to 8 in a server. Talk to our team to configure a multi-GPU SabrePC workstation for machine and deep learning today.

How Much GPU Memory & RAM Is Needed for Deep Learning?

A general rule of thumb for deep learning is to have at least as much system RAM as you have total GPU memory, plus 25% more. While there isn't a single prescribed amount of RAM for deep learning projects, this is an easy way to stay ahead of issues and avoid worrying about scaling in the immediate future. If your training involves a data-intensive visual component, though, you may need more than you think. Keep in mind the size of the dataset you plan to train your model on, and any future plans to increase it.
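The rule of thumb above is easy to express directly; this sketch simply applies the 25% cushion to a few common VRAM sizes:

```python
def recommended_ram_gb(total_vram_gb, cushion=0.25):
    # System RAM >= total GPU memory plus a 25% cushion (rule of thumb).
    return total_vram_gb * (1 + cushion)

for vram in (16, 24, 48):  # e.g. RTX 4080, RTX 4090, RTX 6000 Ada
    print(f"{vram} GB VRAM -> at least {recommended_ram_gb(vram):.0f} GB RAM")
```

So a single RTX 4090 (24GB) suggests at least 32GB of RAM in practice (the next standard capacity above 30GB), and a 4x RTX 6000 Ada workstation (192GB total VRAM) suggests 256GB.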

  • Visual Projects - If your deep learning program will take in lots of visual data, such as live feeds to process, it will be running and storing large amounts of data and/or large files. Consider more RAM and GPU memory.
  • Text/Speech Projects - For a model that processes, interprets, and produces text on a small scale, file sizes are smaller, so you can opt for less. However, dataset size still plays a part; larger datasets and audio-processing projects will require more.
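For visual projects, a quick way to see why memory climbs so fast is to compute the footprint of a single batch of images. This sketch assumes uncompressed FP32 tensors, which is how frameworks typically hold a batch in GPU memory:

```python
def batch_memory_gb(batch_size, height, width, channels=3, bytes_per_value=4):
    # Uncompressed FP32 image batch as held in GPU memory.
    return batch_size * height * width * channels * bytes_per_value / 1e9

# 64 high-resolution 1024x1024 RGB images in one batch:
print(f"{batch_memory_gb(64, 1024, 1024):.2f} GB just for the input batch")
```

Roughly 0.8 GB for the inputs alone, before the model's weights, activations, and gradients; doubling resolution quadruples that figure, which is why visual workloads push memory requirements up so quickly.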

How Much GPU Memory & RAM Is Needed for Machine Learning?

Machine learning memory requirements work much like deep learning's, but with a lighter workload and less RAM and GPU memory required. As we stated before, machine learning involves a higher amount of human interaction, so there is less need for massive amounts of memory.

In general, though, you will still want to follow the deep learning rule and have at least as much RAM as you have GPU memory (plus a 25% cushion). We still recommend the NVIDIA RTX 4090 or 4080 for many machine learning projects, as they can handle the majority of workloads without trouble, just to be safe and cover all your bases.

  • Image-Based Projects - If your machine learning program will take in lots of visual data, even when your video or images are not from a live feed, you will be accessing and reading large files. Consider more RAM and GPU memory.
  • Text Projects - For a model that processes, interprets, and produces text on a small scale, file sizes are smaller, so you can opt for less. However, dataset size still plays a part; larger datasets and audio-processing projects will require more.
  • Analytics - For data analytics on organized data, there are instances where even a Google Colab instance without a dedicated GPU can provide enough compute. You may be able to run analytics-based machine learning with an SSD and a basic GPU, though it is always safe to configure with higher-end hardware.

Have Any Questions?

We hope you found this guide helpful in learning about the differences between deep learning and machine learning and their memory requirements. Please comment below if you have found something that works better for your own situation, or if you have any questions, feel free to contact us today.

Otherwise, you can browse other articles on the SabrePC blog. Keep a lookout for more helpful articles on the way soon!

