Modify and combine two different frozen graphs generated using the TensorFlow Object Detection API for inference

Question


I am working with the TensorFlow Object Detection API and have trained two different models (SSD-mobilenet and FRCNN-inception-v2) for my use case. Currently, my workflow is as follows:

  1. Take an input image and detect one particular object using SSD-mobilenet.
  2. Crop the input image with the bounding box generated in step 1, then resize the crop to a fixed size (e.g. 200 x 300).
  3. Feed this cropped and resized image to FRCNN-inception-V2 to detect smaller objects inside the ROI.

Currently, at inference time, I get my desired results by loading the two frozen graphs separately and following these steps. However, my deployment requires a single frozen graph, so I want to combine both graphs, with the crop-and-resize step in between. I am new to TensorFlow, so any help is appreciated.
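The hand-off between the two models comes down to the coordinate math in step 2: the detection API returns boxes in normalized [ymin, xmin, ymax, xmax] form, which must be scaled to pixels before cropping. A minimal sketch in plain Python (the box values and image size are hypothetical):

```python
def box_to_pixels(box, img_h, img_w):
    """Convert a normalized [ymin, xmin, ymax, xmax] box
    (TF Object Detection API convention) to integer pixel coords."""
    ymin, xmin, ymax, xmax = box
    return (int(ymin * img_h), int(xmin * img_w),
            int(ymax * img_h), int(xmax * img_w))

# hypothetical detection on a 480 x 640 image
top, left, bottom, right = box_to_pixels([0.25, 0.5, 0.75, 1.0], 480, 640)
# crop = image[top:bottom, left:right], then resize the crop to e.g. 200 x 300
print(top, left, bottom, right)  # -> 120 320 360 640
```

Inside a single combined graph, tf.image.crop_and_resize does this for you: it consumes the normalized boxes directly and performs the crop and resize in one op.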

Answer 1

Score: 3

Thanks @matt and @Vedanshu for responding. Here is the updated code that works for my requirement. Please suggest improvements if anything can be done better, as I am still learning.

# Dependencies
import tensorflow as tf
import numpy as np


# loads a graph from a pb file path
def load_graph(pb_file):
    graph = tf.Graph()
    with graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_file, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='')
    return graph


# returns a dictionary of tensors from the graph
def get_inference(graph, count=0):
    with graph.as_default():
        ops = tf.get_default_graph().get_operations()
        all_tensor_names = {output.name for op in ops for output in op.outputs}
        tensor_dict = {}
        for key in ['num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks', 'image_tensor']:
            tensor_name = key + ':0' if count == 0 else key + '_{}:0'.format(count)
            if tensor_name in all_tensor_names:
                tensor_dict[key] = tf.get_default_graph().\
                                        get_tensor_by_name(tensor_name)
        return tensor_dict


# renames frame_name because there is one while function for every graph
# open issue at https://github.com/tensorflow/tensorflow/issues/22162
def rename_frame_name(graphdef, suffix):
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = str(n.attr["frame_name"]).replace("while_context",
                                                                           "while_context" + suffix).encode('utf-8')


if __name__ == '__main__':

    # your pb file paths
    frozenGraphPath1 = '...replace_with_your_path/some_frozen_graph.pb'
    frozenGraphPath2 = '...replace_with_your_path/some_frozen_graph.pb'

    # new file name to save the combined model under
    combinedFrozenGraph = 'combined_frozen_inference_graph.pb'

    # load both graphs
    graph1 = load_graph(frozenGraphPath1)
    graph2 = load_graph(frozenGraphPath2)

    # get tensor names from the first graph
    tensor_dict1 = get_inference(graph1)

    with graph1.as_default():

        # get the tensors needed to add the crop-and-resize step
        image_tensor = tensor_dict1['image_tensor']
        scores = tensor_dict1['detection_scores'][0]
        num_detections = tf.cast(tensor_dict1['num_detections'][0], tf.int32)
        detection_boxes = tensor_dict1['detection_boxes'][0]

        # I had to add NMS because my SSD model outputs 100 detections and
        # therefore runs out of memory due to the huge tensor shape
        selected_indices = tf.image.non_max_suppression(detection_boxes, scores, 5, iou_threshold=0.5)
        selected_boxes = tf.gather(detection_boxes, selected_indices)

        # intermediate crop-and-resize step, whose output feeds the second model (FRCNN)
        cropped_img = tf.image.crop_and_resize(image_tensor,
                                               selected_boxes,
                                               tf.zeros(tf.shape(selected_indices), dtype=tf.int32),
                                               [300, 60]  # resize to 300 x 60
                                               )
        cropped_img = tf.cast(cropped_img, tf.uint8, name='cropped_img')

    gdef1 = graph1.as_graph_def()
    gdef2 = graph2.as_graph_def()

    g1name = "graph1"
    g2name = "graph2"

    # rename while_context in both graphs
    rename_frame_name(gdef1, g1name)
    rename_frame_name(gdef2, g2name)

    # this combines both models and saves them as one
    with tf.Graph().as_default() as g_combined:

        x, y = tf.import_graph_def(gdef1, return_elements=['image_tensor:0', 'cropped_img:0'])

        z, = tf.import_graph_def(gdef2, input_map={"image_tensor:0": y}, return_elements=['detection_boxes:0'])

        tf.train.write_graph(g_combined, "./", combinedFrozenGraph, as_text=False)

Hope this helps! Feel free to ask if you have any further questions.
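The non_max_suppression step above is what caps the number of crops at 5. Conceptually it is a greedy filter: keep the highest-scoring box, drop any remaining box whose overlap with a kept box exceeds the IoU threshold, and repeat. A simplified plain-Python sketch of that logic (box values are hypothetical):

```python
def iou(a, b):
    # intersection-over-union of two [ymin, xmin, ymax, xmax] boxes
    iy = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ix = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iy * ix
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def nms(boxes, scores, max_out=5, iou_thresh=0.5):
    # greedily keep the highest-scoring boxes that do not
    # overlap an already-kept box beyond the threshold
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
            if len(keep) == max_out:
                break
    return keep

boxes = [[0, 0, 2, 2], [0, 0, 2.1, 2.1], [3, 3, 4, 4]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]: the near-duplicate second box is suppressed
```

tf.image.non_max_suppression implements this same greedy selection on the graph side; the sketch is only to show why at most 5 small boxes reach crop_and_resize.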

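A side note on the count parameter of get_inference: when a graph_def is imported more than once into the same default graph, TensorFlow de-duplicates op names by appending a numeric suffix before the output index (detection_boxes:0, then detection_boxes_1:0, and so on). The naming convention can be sketched without TensorFlow (tensor_name is an illustrative helper, not part of the answer's code):

```python
def tensor_name(key, count=0):
    # TensorFlow appends "_<count>" to the op name for the
    # count-th duplicate import; ":0" selects the first output
    return key + ':0' if count == 0 else '{}_{}:0'.format(key, count)

print(tensor_name('detection_boxes'))     # -> detection_boxes:0
print(tensor_name('detection_boxes', 1))  # -> detection_boxes_1:0
```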

Answer 2

Score: 1

You can load the output of one graph into another using input_map in import_graph_def. You also have to rename the while_context, because there is one while function for every graph. Something like this:

def get_frozen_graph(graph_file):
    """Read a frozen graph file from disk."""
    with tf.gfile.GFile(graph_file, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    return graph_def

def rename_frame_name(graphdef, suffix):
    # Bug reported at https://github.com/tensorflow/tensorflow/issues/22162#issuecomment-428091121
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = str(n.attr["frame_name"]).replace("while_context",
                                                                       "while_context" + suffix).encode('utf-8')
...

l1_graph = tf.Graph()
with l1_graph.as_default():
    trt_graph1 = get_frozen_graph(pb_fname1)
    [tf_input1, tf_scores1, tf_boxes1, tf_classes1, tf_num_detections1] = tf.import_graph_def(trt_graph1, 
            return_elements=['image_tensor:0', 'detection_scores:0', 'detection_boxes:0', 'detection_classes:0','num_detections:0'])

    input1 = tf.identity(tf_input1, name="l1_input")
    boxes1 = tf.identity(tf_boxes1[0], name="l1_boxes")  # index by 0 to remove the batch dimension
    scores1 = tf.identity(tf_scores1[0], name="l1_scores")
    classes1 = tf.identity(tf_classes1[0], name="l1_classes")
    num_detections1 = tf.identity(tf.dtypes.cast(tf_num_detections1[0], tf.int32), name="l1_num_detections")

...
# Make your output tensor
tf_out = # your output tensor (here, crop the input image with the bounding box generated in step 1 and then resize it to a fixed size (e.g. 200 x 300))
...

connected_graph = tf.Graph()

with connected_graph.as_default():
    l1_graph_def = l1_graph.as_graph_def()
    g1name = 'ved'
    rename_frame_name(l1_graph_def, g1name)
    tf.import_graph_def(l1_graph_def, name=g1name)

    ...

    trt_graph2 = get_frozen_graph(pb_fname2)
    g2name = 'level2'
    rename_frame_name(trt_graph2, g2name)
    [tf_scores, tf_boxes, tf_classes, tf_num_detections] = tf.import_graph_def(trt_graph2,
            input_map={'image_tensor': tf_out},
            return_elements=['detection_scores:0', 'detection_boxes:0', 'detection_classes:0','num_detections:0'])

#######
# Export the graph

with connected_graph.as_default():
    print('\nSaving...')
    cwd = os.getcwd()
    path = os.path.join(cwd, 'saved_model')
    shutil.rmtree(path, ignore_errors=True)
    inputs_dict = {
        "image_tensor": tf_input
    }
    outputs_dict = {
        "detection_boxes_l1": tf_boxes_l1,
        "detection_scores_l1": tf_scores_l1,
        "detection_classes_l1": tf_classes_l1,
        "max_num_detection": tf_max_num_detection,
        "detection_boxes_l2": tf_boxes_l2,
        "detection_scores_l2": tf_scores_l2,
        "detection_classes_l2": tf_classes_l2
    }
    tf.saved_model.simple_save(
        tf_sess_main, path, inputs_dict, outputs_dict
    )
    print('Ok')
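The rename_frame_name workaround from tensorflow#22162 is just a string rewrite on each while-loop node's frame_name attribute, so that the two merged graphs do not claim the same control-flow frame. Its effect can be illustrated without TensorFlow (the frame name below is a hypothetical example):

```python
def rename_frame(frame_name, suffix):
    # mirrors rename_frame_name: make a graph's while-loop frames
    # unique by appending a per-graph suffix to "while_context"
    return frame_name.replace("while_context", "while_context" + suffix)

print(rename_frame("Preprocessor/map/while/while_context", "_graph1"))
# -> Preprocessor/map/while/while_context_graph1
```

As long as each graph gets a distinct suffix ('ved' and 'level2' in the answer above), the merged graph's frames no longer collide.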

huangapple
  • Posted on January 3, 2020, 13:39:46
  • Please retain this link when reposting: https://go.coder-hub.com/59573686.html