Analysis Of GPU And CPU For Machine Learning

This topic contains 0 replies, has 1 voice, and was last updated by Mehar Saleemi 1 year, 5 months ago.

Viewing 1 post (of 1 total)
  • Author
    Mehar Saleemi

    Time and cost are important for teams training complex machine learning models. In the cloud, different instance types can be employed to reduce the time required to process data and train models.
    Graphics Processing Units (GPUs) offer significant advantages over CPUs when it comes to quickly processing the large amounts of data typical of machine learning projects. However, it’s important to know how to monitor utilization to make sure you are not over- or under-provisioning compute resources (and that you aren’t paying too much for instances).
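    One common way to monitor GPU utilization on an AWS GPU instance is the nvidia-smi command-line tool, which ships with the NVIDIA drivers preinstalled on these instances. The sketch below is a hypothetical helper (not part of the lab) that queries per-GPU utilization and memory use, and falls back gracefully on machines without NVIDIA drivers:

```python
import shutil
import subprocess

def gpu_utilization():
    """Return a list of (util %, mem used MiB, mem total MiB) per GPU,
    or None when nvidia-smi is not available on this machine."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver/CLI here (e.g. a CPU-only instance)
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    # nvidia-smi emits one CSV line per GPU, e.g. "45, 1200, 16160"
    return [tuple(int(v) for v in line.split(","))
            for line in out.strip().splitlines()]

stats = gpu_utilization()
if stats is None:
    print("nvidia-smi not found; run this on a GPU instance")
else:
    for i, (util, used, total) in enumerate(stats):
        print(f"GPU {i}: {util}% utilized, {used}/{total} MiB memory")
```

    Polling this periodically during training is a quick sanity check: utilization near 0% suggests an over-provisioned (or CPU-bound) workload, while sustained 100% with full memory may mean the instance is under-provisioned.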
    AWS offers several GPU instance types for its Elastic Compute Cloud (EC2) that are aimed at compute-intensive applications requiring high performance. For example, using AWS’s newest GPU instance type, P3, Airbnb has been able to iterate faster and reduce costs for its machine learning models that draw on multiple types of data sources.
    You will take control of a P2 instance to analyze CPU vs. GPU performance, and you will learn how to use the AWS Deep Learning AMI to start a Jupyter Notebook server, which can be used to share data and machine learning experiments.
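    The CPU-vs-GPU analysis in the lab boils down to timing the same workload on both kinds of hardware. Below is a minimal, hypothetical timing harness (not the lab’s actual code) around a dense matrix multiply, the kind of workload where GPUs shine; on the P2 instance you would run the equivalent operation through a GPU-backed library such as TensorFlow or PyTorch and compare the elapsed times:

```python
import random
import time

def matmul(a, b):
    """Naive dense matrix multiply (pure Python, CPU-bound)."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def benchmark(n=60, repeats=3):
    """Mean wall-clock time to multiply two random n x n matrices."""
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]
    start = time.perf_counter()
    for _ in range(repeats):
        matmul(a, b)
    return (time.perf_counter() - start) / repeats

n = 60
elapsed = benchmark(n)
print(f"mean time per {n}x{n} matmul: {elapsed:.4f} s")
```

    Running the same benchmark on a CPU instance and a GPU instance (with a GPU-accelerated matmul) makes the speedup, and therefore the cost trade-off, concrete.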


