Efficient Model Selection for Deep Neural Networks on Massively Parallel Processing Databases
Automate model selection for deep neural networks on massively parallel processing databases with Model Hopper parallelism, an approach that reduces manual analysis and human involvement while improving efficiency.
- Efficient model selection for deep neural networks on massively parallel processing databases is a challenging task.
- Model Hopper parallelism distributes data partitions across worker nodes; the data stays put while each model trains on one partition at a time and hops to the next worker, so every model still reads each partition sequentially (see the sketch after this list).
- The speaker presents a project that automates some model selection steps, reducing the need for analysis and human involvement.
- The project combines task and data parallelism, distributing computations across multiple machines.
- Model hopping avoids data motion, making training more efficient: only the comparatively small model state moves between machines, never the data.
- Gradient descent is a common optimizer in deep learning; its learning rate determines the size of each step (illustrated in the sketch after this list).
- The speaker discusses an implementation of Model Hopper on a massively parallel processing database, using Greenplum as an example.
- The speaker also covers Hyperband, a hyperparameter tuning method built on successive halving (sketched after this list).
- Deep learning data sets are extremely large, making efficient storage and memory management crucial.
- The speaker discusses how data and tasks are handled across multiple machines using libraries such as Keras and TensorFlow (see the worker-side sketch after this list).
- The speaker closes with key takeaways on the benefits of automating model selection, including reduced manual analysis and human involvement.
- The speaker looks forward to improving GPU efficiency and supporting more automated machine learning methods.
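To make the hopping idea concrete, here is a minimal sketch of Model Hopper parallelism as an in-memory simulation, under the assumption that the dataset is partitioned once across workers and never moves, while each model's state hops round-robin until it has seen every partition. The names (`train_on_partition`, the dict-based model state) are illustrative, not the project's actual API.

```python
# Minimal sketch of Model Hopper parallelism: data stays resident on its
# worker; only the small model state hops between workers each sub-epoch.
import numpy as np

rng = np.random.default_rng(0)

NUM_WORKERS = 4
NUM_MODELS = 4  # one hyperparameter configuration per model

# Data parallelism: shuffle once, then partition the dataset across workers.
X = rng.normal(size=(400, 8))
y = rng.normal(size=(400,))
partitions = [(X[i::NUM_WORKERS], y[i::NUM_WORKERS]) for i in range(NUM_WORKERS)]

# Each "model" is just its state: here, linear weights plus a learning rate.
models = [{"w": np.zeros(8), "lr": lr} for lr in (0.1, 0.03, 0.01, 0.003)]

def train_on_partition(model, Xp, yp):
    """One SGD pass over a local partition; only `model` crosses machines."""
    for xi, yi in zip(Xp, yp):
        grad = 2 * (xi @ model["w"] - yi) * xi
        model["w"] -= model["lr"] * grad
    return model

# Task parallelism: in each sub-epoch every worker trains a different model
# on its resident partition; then the model states hop to the next worker.
for sub_epoch in range(NUM_WORKERS):
    for worker in range(NUM_WORKERS):
        model_id = (worker + sub_epoch) % NUM_MODELS
        Xp, yp = partitions[worker]
        models[model_id] = train_on_partition(models[model_id], Xp, yp)
# After NUM_WORKERS sub-epochs, every model has sequentially scanned all data.
```

The round-robin schedule is the point: at any moment each worker trains exactly one model, and after a full rotation every model has read the entire dataset without any partition ever leaving its machine.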
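The learning-rate remark can be checked by hand with a toy objective. This sketch runs gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3; the learning-rate values are illustrative, not from the talk.

```python
# Gradient descent on f(w) = (w - 3)^2, showing how the learning rate
# scales each step: too small converges slowly, too large diverges.
def gradient_descent(lr, steps=20, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3.0)  # derivative of (w - 3)^2
        w -= lr * grad        # learning rate scales the step size
    return w

for lr in (0.01, 0.1, 1.05):
    print(f"lr={lr}: w after 20 steps = {gradient_descent(lr):.4f}")
# lr=0.01 creeps toward w = 3; lr=0.1 converges quickly;
# lr=1.05 overshoots the minimum and oscillates away from it.
```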
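Hyperband is built on successive halving, which the following compact sketch illustrates: train many configurations on a small budget, keep the best half, and double the budget each round. The `evaluate` function here is a stand-in for real validation loss, purely for illustration.

```python
# Successive halving: prune the weaker half of configurations each round
# while doubling the training budget given to the survivors.
import random

random.seed(0)

def evaluate(config, budget):
    """Stand-in for training `config` for `budget` epochs and returning loss."""
    return config["quality"] + random.gauss(0, 1.0 / budget)

configs = [{"id": i, "quality": random.random()} for i in range(16)]
budget = 1
while len(configs) > 1:
    scored = sorted(configs, key=lambda c: evaluate(c, budget))
    configs = scored[: len(configs) // 2]  # keep the better half
    budget *= 2                            # give survivors more epochs
print("winner:", configs[0]["id"])
```

Hyperband then runs several such brackets with different trade-offs between the number of configurations tried and the budget each one receives.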
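For the Keras/TensorFlow point, here is a hedged sketch of what a worker-side training step might look like under this scheme: the worker rebuilds the model from a serialized architecture, restores the hopped-in weights, trains only on its local partition, and hands back the updated weights. The function and variable names are illustrative assumptions, not the project's actual interface.

```python
# Worker-side step: rebuild model, restore state, train one pass locally,
# return only the updated weights (the data never leaves this machine).
import numpy as np
import tensorflow as tf

def train_one_hop(architecture_json, weights, local_X, local_y, lr):
    model = tf.keras.models.model_from_json(architecture_json)
    if weights is not None:
        model.set_weights(weights)  # resume the hopped-in model state
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=lr),
                  loss="sparse_categorical_crossentropy")
    model.fit(local_X, local_y, epochs=1, verbose=0)  # one pass over this partition
    return model.get_weights()      # only this state leaves the machine

# Example: a tiny model hopping across two simulated partitions.
base = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
arch = base.to_json()
rng = np.random.default_rng(0)
weights = None
for _ in range(2):  # two partitions = two hops
    Xp = rng.normal(size=(64, 8)).astype("float32")
    yp = rng.integers(0, 3, size=(64,))
    weights = train_one_hop(arch, weights, Xp, yp, lr=0.01)
```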