Apply map to a TensorFlow dataset


Question
import numpy as np
import tensorflow as tf

def scale(X,  dtype='float32'):
    a=-1
    b=1
    xmin = tf.cast(tf.math.reduce_min(X), dtype=dtype)
    xmax = tf.cast(tf.math.reduce_max(X), dtype=dtype)
    X = (X - xmin) / (xmax - xmin)
    scaled = X * (b - a) + a
    return scaled, xmin, xmax

a = np.random.random((20, 4, 4, 2)).astype('float32')
b = np.random.random((20, 16, 16, 2)).astype('float32')

dataset_a = tf.data.Dataset.from_tensor_slices(a)
dataset_b = tf.data.Dataset.from_tensor_slices(b)

dataset_ones = tf.data.Dataset.from_tensor_slices(tf.ones((len(b), 4, 4, 1)))   

dataset = tf.data.Dataset.zip((dataset_a, (dataset_b, dataset_ones)))

dataset = dataset.map(scale)

Can I somehow apply map to the above dataset?

Answer 1

Score: 1

When zipping multiple datasets, the resulting dataset will have elements as tuples. However, the scale function is expecting a single tensor as input, not a tuple.
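To see concretely why the question's version fails: with the nested zip structure `(a, (b, ones))`, `Dataset.map` passes one argument per top-level component, so `scale` receives the `(b, ones)` tuple as its `dtype` argument. A minimal standalone sketch (with small made-up shapes, not the question's data) that shows the nested element structure:

```python
import numpy as np
import tensorflow as tf

a = np.random.random((4, 2, 2, 1)).astype('float32')
b = np.random.random((4, 3, 3, 1)).astype('float32')

# A zipped dataset with nested structure (a, (b, ones)) passes each
# top-level component as a separate argument to the map function.
ds = tf.data.Dataset.zip((
    tf.data.Dataset.from_tensor_slices(a),
    (tf.data.Dataset.from_tensor_slices(b),
     tf.data.Dataset.from_tensor_slices(tf.ones((4, 2, 2, 1)))),
))

# With this structure, ds.map(scale) calls scale(x, (y, z)) -- the
# (y, z) tuple lands in the dtype parameter and tf.cast raises an error.
print(ds.element_spec)
```

The printed `element_spec` is a two-element tuple whose second entry is itself a tuple, mirroring how arguments are handed to the mapped function.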

To fix the issue, you need to modify the code to handle the tuple elements correctly.

import numpy as np
import tensorflow as tf

def scale(X, dtype='float32'):
    # Min-max scale a single tensor to [-1, 1]; also return the
    # min/max so the scaling can be inverted later.
    a = -1
    b = 1
    xmin = tf.cast(tf.math.reduce_min(X), dtype=dtype)
    xmax = tf.cast(tf.math.reduce_max(X), dtype=dtype)
    X = (X - xmin) / (xmax - xmin)
    scaled = X * (b - a) + a
    return scaled, xmin, xmax

a = np.random.random((20, 4, 4, 2)).astype('float32')
b = np.random.random((20, 16, 16, 2)).astype('float32')

dataset_a = tf.data.Dataset.from_tensor_slices(a)
dataset_b = tf.data.Dataset.from_tensor_slices(b)
dataset_ones = tf.data.Dataset.from_tensor_slices(tf.ones((len(b), 4, 4, 1)))

# Zip as a flat 3-tuple so that map receives three separate tensor arguments.
dataset = tf.data.Dataset.zip((dataset_a, dataset_b, dataset_ones))
# scale() returns (scaled, xmin, xmax), so each mapped element is a
# nested tuple: ((scaled_x, xmin, xmax), (scaled_y, ymin, ymax), z).
dataset = dataset.map(lambda x, y, z: (scale(x), scale(y), z))

In the above code, the datasets dataset_a, dataset_b, and dataset_ones are zipped together using tf.data.Dataset.zip(). Then, the map() function is used with a lambda function to apply the scale() function to each element in the dataset. The lambda function unpacks the tuple elements (x, y, z), applies the scale() function to x and y, and keeps z unchanged.

Now, the dataset.map() operation should work correctly without raising the type conversion error.
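As a quick end-to-end check (a standalone sketch reusing the shapes from the question), one element of the mapped dataset can be unpacked to confirm the nested structure and that the scaled tensors lie in [-1, 1] while the ones tensor is untouched:

```python
import numpy as np
import tensorflow as tf

def scale(X, dtype='float32'):
    # Min-max scale a single tensor to [-1, 1], returning min/max too.
    a = -1
    b = 1
    xmin = tf.cast(tf.math.reduce_min(X), dtype=dtype)
    xmax = tf.cast(tf.math.reduce_max(X), dtype=dtype)
    X = (X - xmin) / (xmax - xmin)
    return X * (b - a) + a, xmin, xmax

a = np.random.random((20, 4, 4, 2)).astype('float32')
b = np.random.random((20, 16, 16, 2)).astype('float32')

dataset = tf.data.Dataset.zip((
    tf.data.Dataset.from_tensor_slices(a),
    tf.data.Dataset.from_tensor_slices(b),
    tf.data.Dataset.from_tensor_slices(tf.ones((len(b), 4, 4, 1))),
))
dataset = dataset.map(lambda x, y, z: (scale(x), scale(y), z))

# Each element is ((scaled_x, xmin, xmax), (scaled_y, ymin, ymax), z).
(sx, xmin, xmax), (sy, ymin, ymax), z = next(iter(dataset))
print(sx.shape, sy.shape, z.shape)  # (4, 4, 2) (16, 16, 2) (4, 4, 1)
```

If only the scaled tensors are needed downstream (without the min/max), the lambda can instead return `(scale(x)[0], scale(y)[0], z)`.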

huangapple
  • Published on 2023-05-21 02:43:50
  • Please retain this link when reposting: https://go.coder-hub.com/76296840.html