ARIMA-LSTM for GPU Resource Scheduling: A Review of Research Papers

This article reviews several research papers that apply hybrid ARIMA-LSTM models to GPU resource scheduling for deep learning workloads. By forecasting future resource demand, these models enable dynamic allocation that aims to improve system performance and reduce energy consumption.

  1. 'Real-Time GPU Resource Allocation for Deep Learning Applications' by H. Wang et al. This paper introduces a method for dynamically allocating GPU resources to deep learning applications using ARIMA- and LSTM-based prediction models. The authors report that their approach improves overall system performance by reducing resource contention and raising resource utilization.

  2. 'ARIMA-LSTM Based Resource Management for GPU Cloud Computing' by Y. Wang et al. This paper proposes an ARIMA-LSTM-based resource management system for GPU cloud computing environments. The authors show that the system can predict future resource demands and allocate resources accordingly, yielding improved performance and reduced energy consumption.

  3. 'GPU Resource Management Using ARIMA and LSTM Time Series Models' by S. Biswas et al. This paper presents a GPU resource management system that combines ARIMA and LSTM models to predict future resource demands. The authors demonstrate that allocating resources to applications based on these predictions improves performance and reduces waiting times. A simplified sketch of this hybrid forecasting scheme follows the list.
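
A pattern common to all three papers is the residual hybrid: an ARIMA model captures the linear, quasi-periodic component of a GPU utilization trace, and an LSTM learns the nonlinear residuals that ARIMA misses. The sketch below illustrates this on synthetic data using statsmodels and PyTorch; the trace, the ARIMA order, the window length, and the network size are illustrative assumptions, not values taken from any of the papers.

```python
# Minimal ARIMA + LSTM residual hybrid for one-step GPU utilization
# forecasting. All data and hyperparameters are illustrative.
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

# Synthetic per-minute GPU utilization trace (percent).
rng = np.random.default_rng(0)
t = np.arange(500)
series = 50 + 20 * np.sin(2 * np.pi * t / 60) + rng.normal(0, 5, t.size)

# 1) Fit ARIMA; its residuals carry the nonlinear structure left for the LSTM.
arima = ARIMA(series, order=(2, 0, 1)).fit()
residuals = arima.resid

# 2) Train a small LSTM to map a window of residuals to the next residual.
WINDOW = 24
X = np.stack([residuals[i:i + WINDOW] for i in range(len(residuals) - WINDOW)])
y = residuals[WINDOW:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)  # (N, WINDOW, 1)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)  # (N, 1)

class ResidualLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # predict from the last time step

model = ResidualLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(50):  # full-batch training, enough for a toy series
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

# 3) Combined one-step forecast = ARIMA forecast + predicted residual.
arima_next = arima.forecast(steps=1)[0]
window = torch.tensor(residuals[-WINDOW:], dtype=torch.float32).reshape(1, WINDOW, 1)
with torch.no_grad():
    resid_next = model(window).item()
print(f"predicted utilization: {arima_next + resid_next:.1f}%")
```

The division of labor is the point: ARIMA fits the cheap linear component, so the LSTM only has to learn the residual nonlinearity, which typically needs less data and a smaller network than forecasting the raw series end to end. In a scheduler, this combined forecast would be recomputed every interval and handed to the allocator.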

In conclusion, these papers showcase the potential of ARIMA-LSTM models for GPU resource management in deep learning applications. They emphasize that prediction-driven allocation lets the scheduler assign GPUs proactively rather than reactively, improving system performance and reducing energy consumption; a sketch of this allocation step follows below.
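
To make the allocation side concrete, here is a hypothetical sketch of the decision step such systems perform each scheduling interval: jobs are granted GPUs according to their predicted demand, and jobs that do not fit wait for the next interval. The greedy highest-demand-first policy, the `Job` type, and the job names are illustrative stand-ins, not any paper's actual algorithm.

```python
# Hypothetical prediction-driven GPU allocation: grant GPUs by forecast
# demand, queue whatever does not fit. Policy and names are illustrative.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    predicted_gpus: int  # demand forecast from the ARIMA-LSTM model

def allocate(jobs: list[Job], total_gpus: int) -> tuple[dict, list]:
    """Greedily grant GPUs by predicted demand; return grants and a waitlist."""
    grants, waiting, free = {}, [], total_gpus
    for job in sorted(jobs, key=lambda j: j.predicted_gpus, reverse=True):
        if job.predicted_gpus <= free:
            grants[job.name] = job.predicted_gpus
            free -= job.predicted_gpus
        else:
            waiting.append(job.name)
    return grants, waiting

jobs = [Job("resnet-train", 4), Job("bert-finetune", 2), Job("inference", 1)]
grants, waiting = allocate(jobs, total_gpus=6)
print(grants)   # {'resnet-train': 4, 'bert-finetune': 2}
print(waiting)  # ['inference']
```

A production scheduler would layer fairness, preemption, and placement constraints on top of this basic loop.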
