Keras style lossless triplet loss


Question


I am trying to replicate the lossless triplet loss, but using the "K." syntax, as in my triplet loss code below:

My code

    def triplet_loss_01(y_true, y_pred, alpha=0.2):
        # y_pred concatenates the anchor, positive and negative embeddings along the last axis
        total_lenght = y_pred.shape.as_list()[-1]
        print("triplet_loss.total_lenght: ", total_lenght)
        anchor = y_pred[:, 0:int(total_lenght * 1 / 3)]
        positive = y_pred[:, int(total_lenght * 1 / 3):int(total_lenght * 2 / 3)]
        negative = y_pred[:, int(total_lenght * 2 / 3):int(total_lenght * 3 / 3)]
        pos_dist = K.sum(K.square(anchor - positive), axis=1)
        neg_dist = K.sum(K.square(anchor - negative), axis=1)
        basic_loss = pos_dist - neg_dist + alpha
        loss = K.maximum(basic_loss, 0.0)
        return loss
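
For reference, this implements the standard margin-based triplet loss. Reading it off the code above (with f denoting the embedding network, a symbol introduced here for notation only):

    \mathcal{L}(a, p, n) = \max\bigl( \lVert f(a) - f(p) \rVert_2^2 - \lVert f(a) - f(n) \rVert_2^2 + \alpha,\; 0 \bigr)

so the loss is zero once the negative is at least alpha farther from the anchor than the positive is.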

Code from the article

    def lossless_triplet_loss(y_true, y_pred, N=3, beta=N, epsilon=1e-8):
        # note: the default beta=N is evaluated at definition time, so it requires
        # a module-level N to exist; it does not refer to the N parameter
        anchor = tf.convert_to_tensor(y_pred[:, 0:N])
        positive = tf.convert_to_tensor(y_pred[:, N:N*2])
        negative = tf.convert_to_tensor(y_pred[:, N*2:N*3])
        # distance between the anchor and the positive
        pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
        # distance between the anchor and the negative
        neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
        # non-linear values
        # -ln(-x/N+1)
        pos_dist = -tf.log(-tf.divide((pos_dist), beta) + 1 + epsilon)
        neg_dist = -tf.log(-tf.divide((N - neg_dist), beta) + 1 + epsilon)
        # compute loss
        loss = neg_dist + pos_dist
        return loss
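
Reading off the article's code, with d_+ and d_- as shorthand (introduced here, not in the article) for the two squared distances, the loss it computes is:

    \mathcal{L} = -\ln\!\Bigl( -\frac{d_+}{\beta} + 1 + \epsilon \Bigr) \;-\; \ln\!\Bigl( -\frac{N - d_-}{\beta} + 1 + \epsilon \Bigr)

That is, each linear distance term of the ordinary triplet loss is passed through the -ln(-x/N + 1) nonlinearity: with beta = N, the positive term blows up as the positive distance approaches N, and the negative term blows up as the negative distance approaches 0, instead of the loss being clipped at a fixed margin.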

As I understand it, all I have to do is insert

    pos_dist = -tf.log(-tf.divide((pos_dist), beta) + 1 + epsilon)
    neg_dist = -tf.log(-tf.divide((N - neg_dist), beta) + 1 + epsilon)

into my code. Is there a "translation" of these lines from the "tf." style to the "K." style?

Thank you.


Answer 1

Score: 2

Here's a way you can do it:

    VECTOR_SIZE = 10  # set this to the embedding size your model outputs

    def lossless_triplet_loss(y_true, y_pred, N=VECTOR_SIZE, beta=VECTOR_SIZE, epsilon=1e-8):
        anchor = y_pred[:, 0:N]
        positive = y_pred[:, N:2*N]
        negative = y_pred[:, 2*N:]
        # distance between the anchor and the positive
        pos_dist = K.sum(K.square(anchor - positive), axis=1)
        # distance between the anchor and the negative
        neg_dist = K.sum(K.square(anchor - negative), axis=1)
        # some magic: the -ln(-x/N + 1) nonlinearity
        pos_dist = -K.log(-((pos_dist) / beta) + 1 + epsilon)
        neg_dist = -K.log(-((N - neg_dist) / beta) + 1 + epsilon)
        loss = neg_dist + pos_dist
        return loss
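
For completeness, here is a minimal sketch of how this loss might be wired into a model. The three-input siamese setup, the layer sizes, the input dimension of 128, and the build_model helper are assumptions for illustration, not part of the answer; the only constraints it tries to respect are that the model's output concatenates the three embeddings and that each embedding dimension is bounded (the sigmoid keeps every squared distance below N, which the log transform needs).

    import tensorflow as tf
    from tensorflow.keras import backend as K  # K.sum, K.square, K.log used by the loss
    from tensorflow.keras import layers, Model

    VECTOR_SIZE = 10  # embedding size; must match N and beta in lossless_triplet_loss

    def build_model(input_dim=128):
        # hypothetical shared embedding network
        base = tf.keras.Sequential([
            layers.Dense(64, activation="relu"),
            layers.Dense(VECTOR_SIZE, activation="sigmoid"),  # bounded outputs keep distances <= N
        ])
        anchor_in = layers.Input(shape=(input_dim,))
        positive_in = layers.Input(shape=(input_dim,))
        negative_in = layers.Input(shape=(input_dim,))
        # concatenate the three embeddings so the loss can slice them back apart
        merged = layers.Concatenate(axis=1)([base(anchor_in), base(positive_in), base(negative_in)])
        return Model(inputs=[anchor_in, positive_in, negative_in], outputs=merged)

    model = build_model()
    model.compile(optimizer="adam", loss=lossless_triplet_loss)
    # y_true is ignored by the loss, so dummy labels of the right batch size are enough during fit()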
