Keras style lossless triplet loss
Question
I am trying to replicate the lossless triplet loss, but using the "K." syntax, as in my triplet loss code below:
My code
from tensorflow.keras import backend as K

def triplet_loss_01(y_true, y_pred, alpha=0.2):
    # y_pred is the concatenation of the anchor, positive and negative
    # embeddings, so split it into three equal parts
    total_length = y_pred.shape.as_list()[-1]
    print("triplet_loss.total_length: ", total_length)
    anchor = y_pred[:, 0:int(total_length * 1 / 3)]
    positive = y_pred[:, int(total_length * 1 / 3):int(total_length * 2 / 3)]
    negative = y_pred[:, int(total_length * 2 / 3):int(total_length * 3 / 3)]
    # squared Euclidean distances anchor/positive and anchor/negative
    pos_dist = K.sum(K.square(anchor - positive), axis=1)
    neg_dist = K.sum(K.square(anchor - negative), axis=1)
    # standard triplet loss: max(d(a, p) - d(a, n) + alpha, 0)
    basic_loss = pos_dist - neg_dist + alpha
    loss = K.maximum(basic_loss, 0.0)
    return loss
Code from the article
def lossless_triplet_loss(y_true, y_pred, N=3, beta=N, epsilon=1e-8):
    anchor = tf.convert_to_tensor(y_pred[:, 0:N])
    positive = tf.convert_to_tensor(y_pred[:, N:N*2])
    negative = tf.convert_to_tensor(y_pred[:, N*2:N*3])
    # distance between the anchor and the positive
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), 1)
    # distance between the anchor and the negative
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), 1)
    # non-linear values
    # -ln(-x/N+1)
    pos_dist = -tf.log(-tf.divide((pos_dist), beta) + 1 + epsilon)
    neg_dist = -tf.log(-tf.divide((N - neg_dist), beta) + 1 + epsilon)
    # compute loss
    loss = neg_dist + pos_dist
    return loss
As I understand it, all I have to do is insert

    pos_dist = -tf.log(-tf.divide((pos_dist),beta)+1+epsilon)
    neg_dist = -tf.log(-tf.divide((N-neg_dist),beta)+1+epsilon)

into my code. Is there a "translation" from the "tf." style to the "K." style for these lines?
Thank you.
Answer 1
Score: 2
Here's one way to do it. The translation boils down to tf.reduce_sum → K.sum, tf.square → K.square, and tf.log → K.log, with plain Python operators in place of tf.subtract and tf.divide:
from tensorflow.keras import backend as K

VECTOR_SIZE = 10  # set this to the embedding size your model outputs

def lossless_triplet_loss(y_true, y_pred, N=VECTOR_SIZE, beta=VECTOR_SIZE, epsilon=1e-8):
    anchor = y_pred[:, 0:N]
    positive = y_pred[:, N:2*N]
    negative = y_pred[:, 2*N:]
    # distance between the anchor and the positive
    pos_dist = K.sum(K.square(anchor - positive), axis=1)
    # distance between the anchor and the negative
    neg_dist = K.sum(K.square(anchor - negative), axis=1)
    # non-linear rescaling, -ln(-x/beta + 1)
    pos_dist = -K.log(-(pos_dist / beta) + 1 + epsilon)
    neg_dist = -K.log(-((N - neg_dist) / beta) + 1 + epsilon)
    loss = neg_dist + pos_dist
    return loss
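
For completeness, here is a minimal sketch of how this loss might be plugged into a triplet network. Apart from lossless_triplet_loss and VECTOR_SIZE, which come from the snippet above, everything here (build_embedding_net, the layer sizes, the input dimension) is an illustrative assumption, not part of the original question. The sigmoid on the final layer keeps each embedding component in (0, 1), so the squared distance between two embeddings stays below N, which the -ln(-x/N + 1) rescaling relies on:

import numpy as np
from tensorflow.keras import layers, Model

def build_embedding_net(input_dim, vector_size=VECTOR_SIZE):
    # Hypothetical embedding network; the layer sizes are placeholders.
    # The sigmoid keeps every embedding component in (0, 1), so the
    # squared distance between two embeddings never reaches N.
    inp = layers.Input(shape=(input_dim,))
    x = layers.Dense(32, activation="relu")(inp)
    out = layers.Dense(vector_size, activation="sigmoid")(x)
    return Model(inp, out, name="embedding_net")

input_dim = 64  # assumed feature size, adjust to your data
embedding_net = build_embedding_net(input_dim)

# Three inputs share the same embedding network; their embeddings are
# concatenated so y_pred has shape (batch, 3 * VECTOR_SIZE), matching
# the slicing done inside the loss.
anchor_in = layers.Input(shape=(input_dim,))
positive_in = layers.Input(shape=(input_dim,))
negative_in = layers.Input(shape=(input_dim,))
merged = layers.Concatenate(axis=1)([
    embedding_net(anchor_in),
    embedding_net(positive_in),
    embedding_net(negative_in),
])
triplet_model = Model([anchor_in, positive_in, negative_in], merged)
triplet_model.compile(optimizer="adam", loss=lossless_triplet_loss)

# The loss ignores y_true, so a dummy target of the right shape is enough.
x = [np.random.rand(8, input_dim).astype("float32") for _ in range(3)]
y_dummy = np.zeros((8, 3 * VECTOR_SIZE), dtype="float32")
triplet_model.fit(x, y_dummy, epochs=1, verbose=0)

Note that the K. calls in the loss work unchanged under tf.keras 2.x, since tensorflow.keras.backend still provides sum, square, maximum and log.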