Building a Scalable End-to-End Deep Learning Pipeline in the Cloud (DeveloperWeek Global 2020)

Build a scalable end-to-end deep learning pipeline in the cloud by leveraging Cloud Native Orchestrators and a combination of AWS services, including Lambda, Batch, and SageMaker.

Key takeaways
  • The main challenges of using GPUs are their high cost and the operational burden of maintaining a cluster.
  • Cloud Native Orchestrators provide a convenient way to build scalable end-to-end deep learning pipelines.
  • Batch processing is useful for large amounts of data because the work can be fanned out and executed in parallel (see the AWS Batch sketch after this list).
  • The speaker recommends AWS Lambda for short processing tasks, AWS Batch for larger sets of parallel tasks, and SageMaker for GPU training jobs (a training-job sketch follows this list).
  • Serverless infrastructure is a good option for inference (a minimal handler sketch follows this list), but it may not be the best choice for training due to the cold start problem.
  • The overall recommendation is to combine these services (Lambda, Batch, and SageMaker) into a single scalable deep learning pipeline.
  • The cost of prediction is a key consideration; the speaker recommends AWS Inferentia-based (Inf1) instances to reduce inference cost.
  • The speaker also suggests storing the model and its versions in a repository (a minimal S3 layout sketch follows this list).
  • The speaker recommends implementing a feedback loop to improve the model over time.
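
As a rough illustration of the batch-processing takeaway, here is a minimal sketch of submitting an AWS Batch array job with boto3. The job name, queue, and job definition are hypothetical placeholders, not values from the talk.

```python
# Minimal sketch: fan out preprocessing as an AWS Batch array job.
# Queue and job definition names below are hypothetical placeholders.
import boto3

batch = boto3.client("batch")

response = batch.submit_job(
    jobName="preprocess-images",         # hypothetical job name
    jobQueue="dl-preprocessing-queue",   # hypothetical queue
    jobDefinition="preprocess-job-def",  # hypothetical job definition
    arrayProperties={"size": 100},       # 100 child jobs run in parallel
)
print("Submitted array job:", response["jobId"])
```

Each child job receives its shard index in the AWS_BATCH_JOB_ARRAY_INDEX environment variable, so the same container image can be pointed at a different slice of the data in every task.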
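For the GPU training recommendation, a minimal sketch of launching a SageMaker training job (SageMaker Python SDK v2 style) might look as follows; the container image, IAM role, and S3 paths are assumptions for illustration.

```python
# Minimal sketch: run a containerized training job on a GPU instance.
# Image URI, role ARN, and S3 paths are hypothetical placeholders.
import sagemaker
from sagemaker.estimator import Estimator

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/train:latest",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    instance_count=1,
    instance_type="ml.p3.2xlarge",         # GPU instance, billed per run
    output_path="s3://my-bucket/models/",  # model artifact destination
    sagemaker_session=sagemaker.Session(),
)

# SageMaker provisions the instance, runs the container, uploads the
# model artifact to S3, and shuts the instance down when training ends.
estimator.fit({"training": "s3://my-bucket/data/train/"})
```

Because the instance exists only for the duration of the job, there is no idle GPU cluster to maintain, which addresses the cost concern in the first takeaway.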
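For serverless inference, a minimal Lambda handler could look like the sketch below; the model format (a pickled scikit-learn-style model shipped in a Lambda layer under /opt) and the API Gateway proxy event shape are assumptions.

```python
# Minimal sketch: serverless inference behind API Gateway.
# Model path and predict() interface are illustrative assumptions.
import json
import pickle

# Load once at module scope so warm invocations skip this cost.
with open("/opt/model.pkl", "rb") as f:
    model = pickle.load(f)

def handler(event, context):
    features = json.loads(event["body"])["features"]
    prediction = model.predict([features])[0]
    return {
        "statusCode": 200,
        "body": json.dumps({"prediction": float(prediction)}),
    }
```

Loading the model at module scope means only the first (cold) invocation pays the deserialization cost, which mitigates, but does not remove, the cold start problem noted above.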
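For the model repository takeaway, one simple approach is a versioned prefix layout in S3, sketched below; the bucket name and key layout are assumptions, not the speaker's exact scheme.

```python
# Minimal sketch: store each model version under its own S3 prefix.
# Bucket name and key layout are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

def publish_model(local_path: str, version: str) -> str:
    """Upload a model artifact and return its versioned S3 URI."""
    key = f"models/classifier/{version}/model.tar.gz"
    s3.upload_file(local_path, "my-model-bucket", key)
    return f"s3://my-model-bucket/{key}"

# Example: publish_model("model.tar.gz", "2020-06-15-001")
```

Keeping every version addressable by a stable URI makes rollbacks trivial and gives feedback-loop retraining jobs a fixed place to read the current model from and write the next one to.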