Tested with:

Keras: 2.2.4
Python: 3.6.9
TensorFlow: 1.12.0

==================

Problem:

Using code from https://github.com/matterport/Mask_RCNN

When setting GPU_COUNT > 1, I encounter this error:

RuntimeError: It looks like you are subclassing `Model` and you forgot to call `super(YourClass, self).__init__()`. Always start with this line.
Traceback (most recent call last):
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 313, in __setattr__
is_graph_network = self._is_graph_network
File "parallel_model.py", line 46, in __getattribute__
return super(ParallelModel, self).__getattribute__(attrname)
AttributeError: 'ParallelModel' object has no attribute '_is_graph_network' During handling of the above exception, another exception occurred: Traceback (most recent call last):
File "parallel_model.py", line 159, in <module>
model = ParallelModel(model, GPU_COUNT)
File "parallel_model.py", line 35, in __init__
self.inner_model = keras_model
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 316, in __setattr__
'It looks like you are subclassing `Model` and you '
RuntimeError: It looks like you are subclassing `Model` and you forgot to call `super(YourClass, self).__init__()`. Always start with this line.
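
For context, GPU_COUNT lives in the training configuration (mrcnn/config.py), and model.build() wraps the Keras model in ParallelModel whenever it is greater than 1. A minimal sketch of such a config (the subclass name and NAME value here are hypothetical):

from mrcnn.config import Config

class TrainConfig(Config):
    NAME = "my_dataset"    # hypothetical dataset name
    GPU_COUNT = 2          # values > 1 trigger the ParallelModel wrapper
    IMAGES_PER_GPU = 1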

Solution 1:

Change the constructor in mrcnn/parallel_model.py as follows (call the parameterless super().__init__() first, then re-initialize with the inputs and outputs once the parallel graph has been built):

class ParallelModel(KM.Model):
    def __init__(self, keras_model, gpu_count):
        """Class constructor.
        keras_model: The Keras model to parallelize
        gpu_count: Number of GPUs. Must be > 1
        """
        # Initialize the base Model first so that attribute assignment works.
        super(ParallelModel, self).__init__()
        self.inner_model = keras_model
        self.gpu_count = gpu_count
        merged_outputs = self.make_parallel()
        # Re-initialize as a graph network with the merged multi-GPU outputs.
        super(ParallelModel, self).__init__(inputs=self.inner_model.inputs,
                                            outputs=merged_outputs)
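
With this change, the wrapper is constructed the same way model.py already does it when GPU_COUNT > 1 (matching the call shown in the traceback above):

model = ParallelModel(model, config.GPU_COUNT)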

If you then get an error complaining that the two arguments inputs and outputs are required, just upgrade Keras to 2.2.4.
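
For example, either of these should work (pinning the exact version is the important part):

pip install keras==2.2.4

or, in a conda environment:

conda install keras=2.2.4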

If you next hit this error:

No node-device colocations were active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation.
Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70>

No node-device colocations were active during op 'anchors/Variable' creation.
No device assignments were active during op 'anchors/Variable' creation.

Traceback (most recent call last):
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1334, in _do_call
return fn(*args)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1317, in _run_fn
self._extend_graph()
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1352, in _extend_graph
tf_session.ExtendSession(self._session)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot colocate nodes {{colocation_node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0}} and {{colocation_node anchors/Variable}}: Cannot merge devices with incompatible ids: '/device:GPU:0' and '/device:GPU:1'
[[{{node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0}} = Identity[T=DT_FLOAT, _class=["loc:@anchors/Variable"], _device="/device:GPU:1"](tower_1/mask_rcnn/anchors/Variable/cond/Merge)]] During handling of the above exception, another exception occurred: Traceback (most recent call last):
File "train_mul.py", line 448, in <module>
"mrcnn_bbox", "mrcnn_mask"])
File "M:\new\mrcnn\model.py", line 2132, in load_weights
saving.load_weights_from_hdf5_group_by_name(f, layers)
File "D:\Anaconda33\lib\site-packages\keras\engine\saving.py", line 1022, in load_weights_from_hdf5_group_by_name
K.batch_set_value(weight_value_tuples)
File "D:\Anaconda33\lib\site-packages\keras\backend\tensorflow_backend.py", line 2440, in batch_set_value
get_session().run(assign_ops, feed_dict=feed_dict)
File "D:\Anaconda33\lib\site-packages\keras\backend\tensorflow_backend.py", line 197, in get_session
[tf.is_variable_initialized(v) for v in candidate_vars])
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 929, in run
run_metadata_ptr)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1328, in _do_run
run_metadata)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\client\session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Cannot colocate nodes node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) having device Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> and node anchors/Variable (defined at M:\new\mrcnn\model.py:1936) having device No device assignments were active during op 'anchors/Variable' creation. : Cannot merge devices with incompatible ids: '/device:GPU:0' and '/device:GPU:1'
[[node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) = Identity[T=DT_FLOAT, _class=["loc:@anchors/Variable"], _device="/device:GPU:1"](tower_1/mask_rcnn/anchors/Variable/cond/Merge)]] No node-device colocations were active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation.
Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> No node-device colocations were active during op 'anchors/Variable' creation.
No device assignments were active during op 'anchors/Variable' creation. Caused by op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0', defined at:
File "train_mul.py", line 417, in <module>
model_dir=MODEL_DIR)
File "M:\new\mrcnn\model.py", line 1839, in __init__
self.keras_model = self.build(mode=mode, config=config)
File "M:\new\mrcnn\model.py", line 2064, in build
model = ParallelModel(model, config.GPU_COUNT)
File "M:\new\mrcnn\parallel_model.py", line 36, in __init__
merged_outputs = self.make_parallel()
File "M:\new\mrcnn\parallel_model.py", line 80, in make_parallel
outputs = self.inner_model(inputs)
File "D:\Anaconda33\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 570, in call
output_tensors, _, _ = self.run_internal_graph(inputs, masks)
File "D:\Anaconda33\lib\site-packages\keras\engine\network.py", line 724, in run_internal_graph
output_tensors = to_list(layer.call(computed_tensor, **kwargs))
File "D:\Anaconda33\lib\site-packages\keras\layers\core.py", line 682, in call
return self.function(inputs, **arguments)
File "M:\new\mrcnn\model.py", line 1936, in <lambda>
anchors = KL.Lambda(lambda x: tf.Variable(anchors), name="anchors")(input_image)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 183, in __call__
return cls._variable_v1_call(*args, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 146, in _variable_v1_call
aggregation=aggregation)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 125, in <lambda>
previous_getter = lambda **kwargs: default_variable_creator(None, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variable_scope.py", line 2444, in default_variable_creator
expected_shape=expected_shape, import_scope=import_scope)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 187, in __call__
return super(VariableMetaclass, cls).__call__(*args, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 1329, in __init__
constraint=constraint)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 1480, in _init_from_args
self._initial_value),
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 2177, in _try_guard_against_uninitialized_dependencies
return self._safe_initial_value_from_tensor(initial_value, op_cache={})
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 2195, in _safe_initial_value_from_tensor
new_op = self._safe_initial_value_from_op(op, op_cache)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\ops\variables.py", line 2241, in _safe_initial_value_from_op
name=new_op_name, attrs=op.node_def.attr)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
op_def=op_def)
File "D:\Anaconda33\lib\site-packages\tensorflow\python\framework\ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack() InvalidArgumentError (see above for traceback): Cannot colocate nodes node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) having device Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> and node anchors/Variable (defined at M:\new\mrcnn\model.py:1936) having device No device assignments were active during op 'anchors/Variable' creation. : Cannot merge devices with incompatible ids: '/device:GPU:0' and '/device:GPU:1'
[[node tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0 (defined at M:\new\mrcnn\model.py:1936) = Identity[T=DT_FLOAT, _class=["loc:@anchors/Variable"], _device="/device:GPU:1"](tower_1/mask_rcnn/anchors/Variable/cond/Merge)]] No node-device colocations were active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation.
Device assignments active during op 'tower_1/mask_rcnn/anchors/Variable/anchors/Variable/read_tower_1/mask_rcnn/anchors/Variable_0' creation:
with tf.device(/gpu:1): <M:\new\mrcnn\parallel_model.py:70> No node-device colocations were active during op 'anchors/Variable' creation.
No device assignments were active during op 'anchors/Variable' creation.

Add the following lines before building the model (note the added import of TensorFlow):

import tensorflow as tf
import keras.backend.tensorflow_backend as KTF

config = tf.ConfigProto()
config.allow_soft_placement = True
session = tf.Session(config=config)
KTF.set_session(session)
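
allow_soft_placement lets TensorFlow fall back to another device when it cannot honour a requested placement, which is exactly what the colocation error above complains about (anchors/Variable is pinned to /device:GPU:0 while the tower copy asks for /device:GPU:1). Make sure these lines run before the MaskRCNN model is built, so that Keras's get_session() picks up this soft-placement session.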

Solution 2 (not recommended):

Downgrade Keras to 2.1.3:

conda install keras=2.1.3

(This works for some people, but it did not work for me.)

Reference:

https://github.com/matterport/Mask_RCNN/issues/921

https://github.com/tensorflow/tensorflow/issues/2285
