How can I measure semantic similarity between a sentence and its negative meaning?


Question

I have been developing a model for an Automatic Short Answer Grading system. In this model, the teacher's answer and the student's answer are matched to compute their similarity.

I calculated the semantic similarity between the two texts below, using the pre-trained BERT model bert-base-uncased. But it gives a cosine similarity of 0.92. Is that correct? In this case, the two texts have different meanings.

How can we handle this issue using an NLP model? I really appreciate any help you can provide.

# Teacher's reference answer
tecAnswer = """
Gradient descent is an optimization algorithm which is commonly used to train machine learning models and neural networks. Training data helps these models learn over time, and the cost function within gradient descent specifically acts as a barometer, gauging its accuracy with each iteration of parameter updates.
"""
# Student's answer; note the negation ("is not an iterative ...")
stdAnswer = """
Gradient descent (GD) is not an iterative first-order optimisation algorithm used to find a local minimum/maximum of a given function. This method is commonly used in machine learning (ML) and deep learning (DL) to minimise a cost/loss function (e.g. in a linear regression). Due to its importance and ease of implementation, this algorithm is usually taught at the beginning of almost all machine learning courses.
"""

predict_similarity([tecAnswer, stdAnswer])  # gives 0.92
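
predict_similarity is not shown in the question, but here is a minimal sketch of what such a function might look like, assuming mean-pooled bert-base-uncased token embeddings compared with cosine similarity (the embed helper and the pooling choice are illustrative assumptions, not the asker's actual code):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    # Encode the text and average the token embeddings, skipping padding.
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    mask = inputs["attention_mask"].unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

def predict_similarity(texts):
    a, b = embed(texts[0]), embed(texts[1])
    return torch.nn.functional.cosine_similarity(a, b).item()

With representations like these, surface overlap dominates: the two answers share most of their vocabulary, so a single inserted "not" barely moves the cosine score, and a value around 0.92 is unsurprising even though the meanings differ.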


Answer 1

Score: 2

If I understand correctly, you want to detect whether the two sentences have opposing meanings. One way to solve this is to use a cross-encoder from Sentence-BERT. You can check out its pre-trained cross-encoders, in particular the ones trained on NLI (Natural Language Inference). Given a pair of sentences, such a model can tell you whether they support each other ('entailment'), contradict each other ('contradiction'), or neither ('neutral').
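
A minimal sketch of that idea, assuming the sentence-transformers package and one of its published NLI cross-encoders (cross-encoder/nli-deberta-v3-base is used here as an example checkpoint; its model card lists the label order as contradiction, entailment, neutral):

from sentence_transformers import CrossEncoder

# An NLI cross-encoder scores a sentence pair jointly instead of
# embedding each sentence on its own.
model = CrossEncoder("cross-encoder/nli-deberta-v3-base")

# One logit per class, in this checkpoint's label order.
scores = model.predict([(tecAnswer, stdAnswer)])
labels = ["contradiction", "entailment", "neutral"]
# The negated student answer should come out as 'contradiction'.
print(labels[scores.argmax(axis=1)[0]])

Unlike a bi-encoder cosine score, the cross-encoder reads both answers together, so the negation can directly change its prediction.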
