Allowing GPU memory growth
By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to use the relatively precious GPU memory resources on the devices more efficiently by reducing memory fragmentation.
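If you want TensorFlow to see only a subset of the physical GPUs in the first place, you can restrict device visibility through the CUDA_VISIBLE_DEVICES environment variable mentioned above. A minimal sketch (the device index "0" is just an illustrative choice; the variable must be set before TensorFlow initializes its GPU devices):
import os
# Expose only GPU 0 to this process; set this before TensorFlow touches the GPUs.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import tensorflow as tf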
In some cases it is desirable for the process to only allocate a subset of the available memory, or to only grow the memory usage as is needed by the process. TensorFlow provides two Config options on the Session to control this.
The first is the allow_growth option, which attempts to allocate only as much GPU memory as is needed by runtime allocations: it starts out allocating very little memory, and as Sessions get run and more GPU memory is needed, the GPU memory region used by the TensorFlow process is extended. Note that memory is not released, since that can lead to even worse memory fragmentation. To turn this option on, set the option in the ConfigProto as follows:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config, ...)
The second method is the per_process_gpu_memory_fraction option, which determines the fraction of the overall amount of memory that each visible GPU should be allocated. For example, you can tell TensorFlow to allocate only 40% of the total memory of each GPU as follows:
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4
session = tf.Session(config=config, ...)
This is useful if you want to truly bound the amount of GPU memory available to the TensorFlow process.
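Note that ConfigProto and Session belong to the TensorFlow 1.x API. On TensorFlow 2.x, a rough equivalent (a sketch assuming the tf.config API of 2.x) is to request per-GPU memory growth before any GPU has been initialized:
import tensorflow as tf
# Request on-demand memory growth for every visible GPU; this must run
# before any operation has initialized the GPUs.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)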