What is the difference between `tf.multiply` and `*`?


Question

In the code below, can `tf.multiply` be replaced with `*`? Can `K.pow(x, -1)` be replaced with `1/x`? (From the TensorFlow documentation I know the difference between `tf.pow` and `K.pow`: `tf.pow(x, y)` takes two tensors and computes x^y for corresponding elements, while `K.pow(x, a)` takes a tensor x and an integer a and computes x^a. But I don't know why `K.pow` in this code still works when it is given a float.)

In the code, you can replace `tf.multiply` with `*`, and you can replace `K.pow(x, -1)` with `1/x`. In that case `*` performs element-wise multiplication and `1/x` computes the element-wise reciprocal, which has the same effect as `K.pow(x, -1)`.

As for why `K.pow` in this code still works when it is given a float exponent (-1.0): this is probably because TensorFlow/Keras broadcasts operations automatically to some extent, so they work across different tensor shapes. Here the float is treated as a scalar and broadcast against the shape of x, so the exponentiation is applied element-wise. Note, however, that this behaviour can vary with the TensorFlow/Keras version and configuration, so it is best to test it in your actual setup to make sure it is compatible.
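
As a quick sanity check of both claims, here is a minimal sketch, assuming TensorFlow 2.x with eager execution (where `tf.keras.backend.pow` simply forwards to `tf.pow`); the tensor values are arbitrary:

import tensorflow as tf
import tensorflow.keras.backend as K

x = tf.constant([[1.0, 2.0], [4.0, 8.0]])
y = tf.constant([[3.0, 5.0], [7.0, 9.0]])

# * and tf.multiply produce the same element-wise product
print(tf.reduce_all(tf.multiply(x, y) == x * y).numpy())               # True

# K.pow with a scalar float exponent matches the element-wise reciprocal;
# the scalar -1.0 is broadcast against every element of x
print(tf.reduce_all(tf.abs(K.pow(x, -1.0) - 1.0 / x) < 1e-6).numpy())  # True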

The original question (in English):

After `import tensorflow.keras.backend as K`,

what is the difference between `tf.multiply` and `*`?

Similarly, what is the difference between `K.pow(x, -1)` and `1/x`?

I wrote the following code for a customized metric function, based on someone else's code.

import tensorflow as tf
from tensorflow.keras.layers import Lambda
import tensorflow.keras.backend as K  # as stated above

def dice_coef_weight_sub(y_true, y_pred):
    """
    Returns the Dice coefficient of each class, weighted by
    (1 - that class's share of the ground-truth volume) and summed over classes.
    """
    y_true_f = Lambda(lambda y_true: y_true[:, :, :, :, 0:])(y_true)
    y_pred_f = Lambda(lambda y_pred: y_pred[:, :, :, :, 0:])(y_pred)

    product = tf.multiply(y_true_f, y_pred_f)  # tf.multiply takes the two tensors as separate arguments, not a list

    red_y_true = K.sum(y_true_f, axis=[0, 1, 2, 3])  # shape [nb_class]
    red_y_pred = K.sum(y_pred_f, axis=[0, 1, 2, 3])
    red_product = K.sum(product, axis=[0, 1, 2, 3])

    smooth = 0.001
    dices = (2. * red_product + smooth) / (red_y_true + red_y_pred + smooth)

    ratio = red_y_true / (K.sum(red_y_true) + smooth)
    ratio = 1.0 - ratio
    # ratio =  K.pow(ratio + smooth, -1.0) # different method to get ratio

    return K.sum(tf.multiply(dices, ratio))  # weighted sum of per-class Dice scores

In this code, can I replace `tf.multiply` with `*`? Can I replace `K.pow(x, -1)` with `1/x`?

(From TensorFlow's documentation, I know the difference between `tf.pow` and `K.pow`: `tf.pow(x, y)` takes two tensors and computes x^y for corresponding elements of x and y, while `K.pow(x, a)` takes a tensor x and an integer a and computes x^a. But I do not know why, in the code above, `K.pow` receives a float exponent (-1.0) and still works normally.)

Answer 1

Score: 3


Assuming the two operands of * are both tf.Tensors and not tf.sparse.SparseTensors, the * operator is the same as tf.multiply, i.e., elementwise multiplication with broadcasting support.
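
As an illustration of that behaviour, a minimal sketch (assuming TensorFlow 2.x with eager execution); the shapes and values are arbitrary:

import tensorflow as tf

a = tf.constant([[1.0, 2.0, 3.0],
                 [4.0, 5.0, 6.0]])      # shape (2, 3)
b = tf.constant([10.0, 100.0, 1000.0])  # shape (3,), broadcast across the rows

# Both forms broadcast b against a and multiply element-wise
print((a * b).numpy())
print(tf.multiply(a, b).numpy())
# Both print:
# [[  10.  200. 3000.]
#  [  40.  500. 6000.]]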

If you are interested in studying the source code that performs the operator overloading, the key parts are:

  1. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L891
  2. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1225
  3. https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/math_ops.py#L1201

For tf.sparse.SparseTensors, * is overloaded with sparse tensor-specific multiplication ops.
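
For example (a sketch, assuming TensorFlow 2.x, where SparseTensor.__mul__ accepts a scalar or a dense tensor):

import tensorflow as tf

sp = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                            values=[1.0, 2.0],
                            dense_shape=[2, 3])

# Here * is not tf.multiply: it is a sparse-specific element-wise product
# that only touches the stored values and keeps the sparsity pattern.
scaled = sp * 10.0
print(tf.sparse.to_dense(scaled).numpy())
# [[10.  0.  0.]
#  [ 0.  0. 20.]]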

Assuming you're using Python 3, the / operator is overloaded to tf.math.truediv (i.e., floating-point division, which corresponds to TensorFlow's RealDiv op).

In Python 2, the / operator may perform integer division, in which case it is overloaded in a dtype-dependent way: for floating dtypes it is tf.math.truediv, and for integer dtypes it is tf.math.floordiv (integer floor division).

tf.pow() uses a different op (namely Pow). But assuming all your dtypes are floating-point, 1 / x and tf.pow(x, -1.0) should be equivalent.
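
A small sketch tying the last two points together (assuming TensorFlow 2.x with eager execution and, for the first part, float dtypes):

import tensorflow as tf

x = tf.constant([1.0, 2.0, 4.0, 8.0])

# On float tensors, / dispatches to tf.math.truediv (the RealDiv op) ...
recip_div = 1.0 / x
# ... while tf.pow goes through the separate Pow op; numerically they agree here
recip_pow = tf.pow(x, -1.0)
print(tf.reduce_all(tf.abs(recip_div - recip_pow) < 1e-6).numpy())  # True

# For integer dtypes, // is floor division (tf.math.floordiv)
i = tf.constant([7, 8, 9])
print((i // 2).numpy())  # [3 4 4]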
