What's the chance of a collision in Python's secrets.compare_digest function?

Question


The closest thing I can find to a constant-time comparison in Python's standard library is secrets.compare_digest.
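
For reference, a minimal sketch of how I'd use it to check a client-supplied token against a stored one (the names here are just placeholders):

```python
import secrets

# Token the server generated and stored earlier (placeholder value).
EXPECTED_TOKEN = secrets.token_hex(32)  # 32 random bytes -> 64 hex chars

def verify(submitted_token: str) -> bool:
    # compare_digest walks both strings in constant time, so the
    # timing doesn't reveal how many leading characters matched.
    return secrets.compare_digest(EXPECTED_TOKEN, submitted_token)
```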

But it makes me wonder, in the case of using it to verify a secret token:

  • What's the chance of a collision? That is, what's the chance that a submitted secret doesn't match the correct token, yet the function returns true? (Assuming both strings passed are the same length.)

  • At what secret-token length does it become pointless to make the token any longer, at least in terms of mitigating brute-force attacks?

Answer 1

Score: 1


secrets uses hmac, which uses hashopenssl. That uses the _tscmp function to compare things, which in turn uses OpenSSL's CRYPTO_memcmp. The docs don't mention anything about a collision probability. I'm not good at reading the assembly code commit, but it doesn't look like it is doing anything other than directly comparing the memory. So I don't see why there would be any probability of collision, since no hashing is involved.
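
For intuition, here's a rough Python rendering of what that kind of constant-time comparison does; this is just an illustration, the real work happens in OpenSSL's C/assembly code:

```python
def constant_time_eq(a: bytes, b: bytes) -> bool:
    # Inputs of different length can never be equal.
    if len(a) != len(b):
        return False
    diff = 0
    # OR together the XOR of every byte pair; the loop always walks the
    # whole buffer, so the timing doesn't depend on where (or whether)
    # a mismatch occurs.
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

The point being: the result is 0 only if every single byte matches, so two different inputs of the same length can never compare equal.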

As for the question of a brute-force attack: assume we did the comparison at the maximum bandwidth of a modern GPU, about 2 TB/s, which is 1.6e13 bits per second. Multiplying that by 1 year of time (~3.1e7 seconds), we get roughly 5e20 bit comparisons per year per GPU. log2(5e20) ≈ 69, which means we could theoretically break a key of roughly 70 bits with one GPU in a year. (This is likely not possible in practice, because GPUs are not designed for this, but an ASIC could probably be made with that kind of performance.) Going with the rough estimate of ~70 bits per device-year, you can simply double the number of devices or the number of years per extra bit. (Or you could add 1 to the number of years per bit, assuming efficiency doubles every year.) On that basis, 128 bits buys roughly 20 years of doubling efficiency plus a billion GPUs / 10,000 supercomputers.
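
Spelled out as a quick back-of-the-envelope calculation (the 2 TB/s bandwidth figure and the one-comparison-per-bit framing are the rough assumptions above):

```python
import math

bandwidth_bits_per_s = 2e12 * 8          # 2 TB/s expressed in bits/s -> 1.6e13
seconds_per_year = 3.15e7
bits_per_gpu_year = bandwidth_bits_per_s * seconds_per_year   # ~5e20

print(math.log2(bits_per_gpu_year))      # ~68.8 -> call it ~70 bits per GPU-year

# Every extra key bit doubles the required devices * years.
print(2 ** (128 - 70))                   # ~2.9e17 GPU-years for a 128-bit key
```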

Additional note: the top supercomputer is about 1e18 FLOPS, which is roughly 100,000 such GPUs (it actually has 8,730,112 cores, i.e. 136,408 AMD 64-core CPUs...).

On impractical scales: the universe contains maybe 1e82 atoms, and a nucleon could last for 1e200 years, which is ~1e207 seconds. Assuming each atom can do a bit comparison every 1e-44 seconds (the Planck time), we could theoretically perform about 1e333 comparisons. log2(1e333) ≈ 1106, i.e. a key size of about 1106 bits. I suspect a key size of 1024 bits should be enough, assuming the efficiency of turning the universe into a computer is 1e-24.
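
The same arithmetic at cosmological scale, working in log10 so the intermediate values don't overflow a float (all the magnitudes are the rough assumptions above):

```python
import math

log10_atoms = 82            # ~1e82 atoms in the universe
log10_lifetime_s = 207      # ~1e200 years ~= 1e207 seconds
log10_ops_per_atom_s = 44   # one comparison per Planck time (~1e-44 s)

log10_total = log10_atoms + log10_lifetime_s + log10_ops_per_atom_s
print(log10_total)                      # 333 -> ~1e333 comparisons
print(log10_total * math.log2(10))      # ~1106 -> ~1106-bit key size
```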
