October 26, 2017
Today, Amazon Web Services, Inc. (AWS), an Amazon.com company (NASDAQ: AMZN), announced P3 instances, the next generation of Amazon Elastic Compute Cloud (Amazon EC2) GPU instances designed for compute-intensive applications that require massively parallel floating point performance, including machine learning, computational fluid dynamics, computational finance, seismic analysis, molecular modeling, genomics, and autonomous vehicle systems. The first instances to include NVIDIA Tesla V100 GPUs, P3 instances are the most powerful GPU instances available in the cloud.
P3 instances allow customers to build and deploy advanced applications with up to 14 times better performance than previous-generation Amazon EC2 GPU compute instances, and reduce the time to train machine learning models from days to hours. With up to eight NVIDIA Tesla V100 GPUs, P3 instances provide up to one petaflop of mixed-precision, 125 teraflops of single-precision, and 62 teraflops of double-precision floating point performance, as well as a 300 GB/s second-generation NVIDIA NVLink interconnect that enables high-speed, low-latency GPU-to-GPU communication. P3 instances also feature up to 64 vCPUs based on custom Intel Xeon E5 (Broadwell) processors, 488 GB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth using the Elastic Network Adapter (ENA).
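As a back-of-envelope check, the instance-level aggregates above follow from multiplying NVIDIA's published per-V100 throughput figures (an assumption on our part; this announcement only states the instance-level totals) by the eight GPUs in the largest P3 size:

```python
# Sanity check of the eight-GPU P3 aggregate throughput figures.
# Per-GPU numbers are NVIDIA's published Tesla V100 specs (an assumption;
# the announcement itself only gives the instance-level totals).
GPUS = 8
MIXED_TFLOPS_PER_GPU = 125.0   # Tensor Core mixed-precision throughput
FP32_TFLOPS_PER_GPU = 15.7     # single-precision
FP64_TFLOPS_PER_GPU = 7.8      # double-precision

mixed = GPUS * MIXED_TFLOPS_PER_GPU   # 1000 teraflops = 1 petaflop
fp32 = GPUS * FP32_TFLOPS_PER_GPU     # ~125 teraflops single-precision
fp64 = GPUS * FP64_TFLOPS_PER_GPU     # ~62 teraflops double-precision

print(mixed, fp32, fp64)
```

The stated "one petaflop of mixed-precision" and the 125 and 62 teraflop figures line up with these per-GPU specs to within rounding.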
“When we launched our P2 instances last year, we couldn’t believe how quickly people adopted them,” said Matt Garman, Vice President of Amazon EC2. “Most of the machine learning in the cloud today is done on P2 instances, yet customers continue to be hungry for more powerful instances. By offering up to 14 times better performance than P2 instances, P3 instances will significantly reduce the time involved in training machine learning models, providing agility for developers to experiment, and optimizing machine learning without requiring large investments in on-premises GPU clusters. In addition, high performance computing applications will benefit from up to 2.7 times improvement in double-precision floating point performance.”
Airbnb’s community marketplace provides access to millions of unique accommodations and local experiences in more than 65,000 cities and 191 countries. “At Airbnb, we’re using machine learning to optimize search recommendations and improve dynamic pricing guidance for hosts, both of which translate to increased booking conversions. These use cases are highly specific to our industry and require machine learning models that use several different types of data sources, such as guest and host preferences, listing location and condition, seasonality, and price,” said Nick Handel at Airbnb. “With Amazon EC2 P3 instances, we have the ability to run training workloads faster, enabling us to iterate more, build better machine learning models and reduce cost.”
Schrödinger’s mission is to improve human health and quality of life by developing advanced computational methods that transform the way scientists design therapeutics and materials. “Our industry has a pressing need for performant, accurate, and predictive models to extend the scale of discovery and optimization, complementing and going beyond the traditional experimental approach,” said Robert Abel, Senior Vice President of Science at Schrödinger. “Amazon EC2 P3 instances with their high performance GPUs allow us to perform four times as many simulations in a day as we could with P2 instances. This performance increase, coupled with the ability to quickly scale in response to new compound ideas, gives our customers the ability to bring lifesaving drugs to market more quickly.”
AWS Deep Learning AMIs (Amazon Machine Images) are available in AWS Marketplace to help customers get started within minutes. The Deep Learning AMI comes preinstalled with the latest releases of Apache MXNet, Caffe2 and TensorFlow with support for Tesla V100 GPUs, and will be updated to support P3 instances with other machine learning frameworks such as Microsoft Cognitive Toolkit and PyTorch as soon as these frameworks release support for Tesla V100 GPUs. Customers can also use the NVIDIA Volta Deep Learning AMI that integrates deep learning framework containers from NVIDIA GPU Cloud, or start with AMIs for Amazon Linux, Ubuntu 16.04, Windows Server 2012 R2, or Windows Server 2016.
With P3 instances, customers have the freedom to choose the optimal framework for their application. “We are excited to support Caffe2 on the new Amazon EC2 P3 instances. The unparalleled power and capability of P3 instances allow developers to train and run models very efficiently at high scale,” said Yangqing Jia, Research Scientist Manager at Facebook. “It will help new innovations get to customers in hours instead of days by taking advantage of the speed with P3 and our modular, scalable deep learning framework with Caffe2.”
Customers can launch P3 instances using the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. Amazon EC2 P3 instances are generally available in the US East (N. Virginia), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo) regions, with support for additional regions coming soon. They are available in three sizes, with one, four, and eight GPUs, and can be purchased as On-Demand, Reserved, or Spot Instances.
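As a sketch of what a programmatic launch might look like with the AWS SDK for Python (boto3), the request parameters for EC2's `run_instances` call can be assembled as below. The AMI ID is a placeholder, and the actual API call (shown in comments) requires AWS credentials and a region where P3 instances are available:

```python
# Sketch of launching a one-GPU P3 instance via boto3's run_instances call.
# The ImageId is a placeholder; substitute a real AMI, such as an AWS Deep
# Learning AMI from AWS Marketplace.
launch_params = {
    "ImageId": "ami-xxxxxxxx",     # placeholder, not a real AMI ID
    "InstanceType": "p3.2xlarge",  # 1 GPU; p3.8xlarge has 4, p3.16xlarge has 8
    "MinCount": 1,
    "MaxCount": 1,
}

# With credentials configured, the actual call would be:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   response = ec2.run_instances(**launch_params)
print(launch_params["InstanceType"])
```

The same launch can be expressed through the AWS Management Console or the AWS CLI; the size names map to the one-, four-, and eight-GPU configurations mentioned above.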