How to speed up Tensorflow-gpu while using CUDA code simultaneously
Question
I only have one GPU (GTX 1070, 8 GB VRAM), and I would like to run tensorflow-gpu and another piece of CUDA code simultaneously on the same GPU.
However, using the CUDA code and tensorflow-gpu at the same time slows tensorflow-gpu down by roughly a factor of two.
Is there any way to speed things up when tensorflow-gpu and the CUDA code are used together?
Answer 1
Score: 1
A slightly longer version of @talonmies' comment:
GPUs are awesome, but they still have finite resources. Any competently-built application that uses the GPU will do its best to saturate the device, leaving few resources for other applications. In fact, one of the goals and challenges of optimizing GPU code - whether it be a shader, CUDA or CL kernel - is making sure that all CUs are used as efficiently as possible.
Assuming that TF is already doing that: when you run another GPU-heavy application, you're sharing a resource that's already running full-tilt, so things slow down.
Some options are:

- Get a second, or faster, GPU.
- Optimize your CUDA kernels to reduce requirements and simplify your TF stuff (see the sketch after this list). While this is always important to keep in mind when developing for GPGPU, it's unlikely to help with your current problem.
- Don't run these things at the same time. This may turn out to be slightly faster than this quasi time-slicing situation that you currently have.
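One concrete TF-side tweak worth knowing about: by default tensorflow-gpu pre-allocates almost all VRAM, which can starve a concurrently running CUDA program of memory even before compute contention kicks in. Below is a minimal sketch, assuming a recent TensorFlow 2.x and its `tf.config` API; the 4096 MB cap is an arbitrary example value, and this only eases memory contention, it does not create extra compute capacity or fix the ~2x slowdown from time-slicing.

```python
# Minimal sketch (TensorFlow 2.x assumed): keep TF from grabbing all VRAM
# on the shared GTX 1070 so the separate CUDA process can still allocate.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    # Option A: let TF grow its allocation on demand instead of
    # reserving (nearly) all 8 GB up front.
    tf.config.experimental.set_memory_growth(gpus[0], True)

    # Option B (use instead of A, not together with it): hard-cap TF to a
    # fixed slice -- 4096 MB is an arbitrary example -- leaving the rest
    # of the VRAM for the other CUDA code.
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=4096)])
```

Either setting has to be applied before the first GPU operation initializes the device; once both processes fit in memory, the remaining slowdown is the compute sharing described above.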
Comments