Tensorflow-GNN model.fit() error while training a skeleton-based GNN (index error)
Question
I am using the TF-GNN library to build a skeleton-based graph neural network for action recognition, and while training a simple model I keep getting the following error. The model is simple and is adapted from the official colab.
The input GraphSchema is the following:
GraphTensorSpec({'context': ContextSpec({'features': {}, 'sizes': TensorSpec(shape=(1,), dtype=tf.int32, name=None)}, TensorShape([]), tf.int32, None), 'node_sets': {'body': NodeSetSpec({'features': {'x_dim': TensorSpec(shape=(None, 1), dtype=tf.float32, name=None), 'z_dim': TensorSpec(shape=(None, 1), dtype=tf.float32, name=None), 'y_dim': TensorSpec(shape=(None, 1), dtype=tf.float32, name=None)}, 'sizes': TensorSpec(shape=(1,), dtype=tf.int32, name=None)}, TensorShape([]), tf.int32, None)}, 'edge_sets': {'bones': EdgeSetSpec({'features': {}, 'sizes': TensorSpec(shape=(1,), dtype=tf.int32, name=None), 'adjacency': AdjacencySpec({'#index.0': TensorSpec(shape=(None,), dtype=tf.int32, name=None), '#index.1': TensorSpec(shape=(None,), dtype=tf.int32, name=None)}, TensorShape([]), tf.int32, {'#index.0': 'body', '#index.1': 'body'})}, TensorShape([]), tf.int32, None)}}, TensorShape([]), tf.int32, None)
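For reference, a single example matching this spec can be built eagerly as follows. This is only a minimal sketch: the zero-valued coordinates and the chain-shaped bone topology are placeholder assumptions, not the actual skeleton; the point is that with 25 body nodes, every adjacency index must stay inside [0, 25).

import tensorflow as tf
import tensorflow_gnn as tfgnn

# One skeleton with 25 "body" joints and 24 "bones" edges.
# Coordinates and topology are placeholders; indices must lie in [0, 25).
num_nodes, num_edges = 25, 24
example_graph = tfgnn.GraphTensor.from_pieces(
    node_sets={
        "body": tfgnn.NodeSet.from_fields(
            sizes=[num_nodes],
            features={
                "x_dim": tf.zeros([num_nodes, 1]),
                "y_dim": tf.zeros([num_nodes, 1]),
                "z_dim": tf.zeros([num_nodes, 1]),
            })},
    edge_sets={
        "bones": tfgnn.EdgeSet.from_fields(
            sizes=[num_edges],
            adjacency=tfgnn.Adjacency.from_indices(
                source=("body", tf.range(num_edges)),
                target=("body", tf.range(1, num_edges + 1))))})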
The model is the following:
import tensorflow as tf
import tensorflow_gnn as tfgnn

def _build_model(
    # To be called with the build_model_graph_tensor_spec from above.
    graph_tensor_spec,
    # Dimensions of initial states.
    node_dim=128,
    # Dimensions for message passing.
    message_dim=128,
    next_state_dim=128,
    # Dimension for the logits.
    num_classes=3,
    # Other hyperparameters.
    l2_regularization=6e-6,
    dropout_rate=0.2,
    use_layer_normalization=True,
):
  # Model building with Keras's Functional API starts with an input object
  # (a placeholder for future inputs). This works for composite tensors, too.
  graph = input_graph = tf.keras.layers.Input(type_spec=graph_tensor_spec)
  graph = graph.merge_batch_to_components()

  # Embed each coordinate feature of the "body" nodes and concatenate the
  # embeddings into the initial hidden state.
  def set_initial_node_state(node_set, node_set_name):
    if node_set_name == "body":
      feature_x_embedding = tf.keras.layers.Dense(node_dim, activation="relu")
      feature_y_embedding = tf.keras.layers.Dense(node_dim, activation="relu")
      feature_z_embedding = tf.keras.layers.Dense(node_dim, activation="relu")
      concatenated_features = tf.keras.layers.Concatenate()(
          [feature_x_embedding(node_set["x_dim"]),
           feature_y_embedding(node_set["y_dim"]),
           feature_z_embedding(node_set["z_dim"])])
      return concatenated_features

  graph = tfgnn.keras.layers.MapFeatures(
      node_sets_fn=set_initial_node_state, name="init_states")(graph)

  # Abbreviations for repeated building blocks in the GNN.
  def dense(units, *, use_layer_normalization=False):
    """A Dense layer with regularization (L2 and Dropout) and normalization."""
    regularizer = tf.keras.regularizers.l2(l2_regularization)
    result = tf.keras.Sequential([
        tf.keras.layers.Dense(
            units,
            activation="relu",
            use_bias=True,
            kernel_regularizer=regularizer,
            bias_regularizer=regularizer),
        tf.keras.layers.Dropout(dropout_rate)])
    if use_layer_normalization:
      result.add(tf.keras.layers.LayerNormalization())
    return result

  # Four rounds of message passing over the "bones" edges into the "body" nodes.
  for i in range(4):
    graph = tfgnn.keras.layers.GraphUpdate(
        node_sets={
            "body": tfgnn.keras.layers.NodeSetUpdate(
                {"bones": tfgnn.keras.layers.SimpleConv(
                    tf.keras.layers.Dense(128, "relu"),
                    "mean",
                    receiver_tag=tfgnn.TARGET)},
                tfgnn.keras.layers.NextStateFromConcat(
                    tf.keras.layers.Dense(128)))
        }
    )(graph)

  # Read out the state of the first "body" node of each graph and classify it.
  root_states = tfgnn.keras.layers.ReadoutFirstNode(node_set_name="body")(graph)
  logits = tf.keras.layers.Dense(num_classes)(root_states)
  return tf.keras.Model(input_graph, logits)
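For context, the model is built from the batched dataset's spec and trained with model.fit(). The sketch below continues from the code above; the train_ds dataset of (GraphTensor, label) pairs, the optimizer, and the loss are assumptions and are not part of the original post.

# Hypothetical training setup: train_ds is assumed to be a batched
# tf.data.Dataset yielding (GraphTensor, label) pairs.
graph_spec, label_spec = train_ds.element_spec
model = _build_model(graph_spec)
model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["sparse_categorical_accuracy"])
model.fit(train_ds, epochs=10)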
It returns the following error when running model.fit() on the dataset:
Node: 'while/model_1/graph_update_4/node_set_update_4/simple_conv_4/UnsortedSegmentMean/UnsortedSegmentSum'
segment_ids[44] = 25 is out of range [0, 25)
[[{{node while/model_1/graph_update_4/node_set_update_4/simple_conv_4/UnsortedSegmentMean/UnsortedSegmentSum}}]] [Op:__inference_train_function_10449]
The dimension of the node_set body is 25, while the dimension of the edge_set bones is 24.
I have tried re-modelling the graph structure and changing the layers of the graph update.
Answer 1
Score: 2
The failure is not model-specific: it is data-specific.
In particular, your edge set "bones" has indices (e.g., in graph.edge_sets['bones'].adjacency.source or in graph.edge_sets['bones'].adjacency.target) that are larger than the number of available ("body") nodes given in the same GraphTensor.
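One way to confirm this is to scan the input examples (one graph each, before batching) and compare every adjacency index against the node count of the same graph. A minimal sketch, assuming a dataset named dataset that yields (GraphTensor, label) pairs:

import tensorflow as tf

# Report every example whose "bones" edges point at a non-existent "body" node.
# `dataset` is a placeholder name for the unbatched input pipeline.
for i, (graph, _) in enumerate(dataset):
  num_nodes = int(tf.reduce_sum(graph.node_sets["body"].sizes))
  adj = graph.edge_sets["bones"].adjacency
  max_index = int(tf.maximum(tf.reduce_max(adj.source), tf.reduce_max(adj.target)))
  if max_index >= num_nodes:
    print(f"example {i}: adjacency index {max_index} out of range [0, {num_nodes})")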