Is there a risk here of code reordering and the lock failing?

Question


I have the following lock that I use to prevent multiple threads from accessing the same code:

struct SimpleAtomicThreadLock
{
    constexpr SimpleAtomicThreadLock() : bIsLocked(false) {}
    std::atomic<bool> bIsLocked;

    void lock()
    {
        while (bIsLocked.exchange(true));
    }
    void unlock()
    {
        bIsLocked.store(false);
    }
};

And I use it like this:

void Mesh::drawMesh()
{
    static SimpleAtomicThreadLock lock;

    lock.lock();
    /* BOTH MESH AND MATERIAL NEED TO BE MADE SHADER ACCESS READY */
    if (!mesh->isDeviceLocal() && !mesh->bIsBeingMadeDeviceLocal)
    {
        Engine::makeMeshDeviceLocal(mesh);
    }
    if (!this->material->isReadyForShaderUse() && !this->material->bIsBeingMadeReadyForShaderUse)
    {
        Engine::makeMaterialShaderAccessReady(material);
    }

    lock.unlock();
}

I've been reading about the C++ memory model with respect to atomics, and found out that (ordinarily) the compiler can reorder statements in a function. In this case, is it possible that the calls between lock and unlock are executed before the lock is even taken?

Answer 1

Score: 1


No, it is not possible in this case.

The reason is that the atomic functions you are calling, std::atomic<bool>::exchange and std::atomic<bool>::store,
both take a std::memory_order as their second parameter, and its default value is std::memory_order_seq_cst.

https://en.cppreference.com/w/cpp/atomic/atomic/exchange
https://en.cppreference.com/w/cpp/atomic/atomic/store

For info about what this means, you can see:
https://en.cppreference.com/w/cpp/atomic/memory_order

Seq Cst (sequential consistency) is the most conservative memory ordering, and is the most commonly recommended one to use, which is why it is the default.
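To make the defaults visible, here is a sketch of the same lock with the normally-defaulted memory-order arguments written out explicitly; the behavior is identical to the version in the question:

```cpp
#include <atomic>

struct SimpleAtomicThreadLock
{
    std::atomic<bool> bIsLocked{false};

    void lock()
    {
        // The defaulted second argument, written explicitly:
        // exchange(true) is exchange(true, std::memory_order_seq_cst)
        while (bIsLocked.exchange(true, std::memory_order_seq_cst))
            ;
    }
    void unlock()
    {
        // store(false) is store(false, std::memory_order_seq_cst)
        bIsLocked.store(false, std::memory_order_seq_cst);
    }
};
```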

The way the standard is written, inter-thread synchronization is defined in terms of a number of formal properties between different statements in your program, called "synchronizes with", "happens before", and "carries dependency". Informally, as long as other threads lock and unlock your lock appropriately, any side-effects that happen in the critical region "happen before" unlocking, and locking "happens before" those changes. And whenever someone locks or unlocks the lock, that has a "synchronizes with" relationship with other locking operations.
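Concretely, "happens before" means that a plain, non-atomic write made inside the critical section is guaranteed to be visible to the next thread that takes the lock. A minimal sketch (the names here are illustrative, not from the question):

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> locked{false};
int sharedValue = 0; // plain int, protected only by the lock

void lockIt()   { while (locked.exchange(true)) {} }
void unlockIt() { locked.store(false); }

int writeThenRead()
{
    std::thread writer([] {
        lockIt();
        sharedValue = 42; // side-effect inside the critical section
        unlockIt();       // "happens before" the next successful lock
    });
    writer.join();

    lockIt();
    int seen = sharedValue; // guaranteed to observe 42: the unlock
                            // synchronizes-with this lock acquisition
    unlockIt();
    return seen;
}
```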

(Usually, the standard doesn't talk about what the compiler is allowed to reorder, because that's too low-level, and they don't like to mandate implementation details. Instead they try to define what counts as "observable behavior" and what is undefined behavior, and the compiler is allowed to make any optimizations that don't change the "observable behavior" of programs that do not have undefined behavior.)

The upshot is that the compiler is not allowed to move operations with memory side-effects out of the region between your locking and unlocking statements (moving outside operations *into* the critical section is still permitted, but moving critical-section operations *out* is not), because doing so would change the observable behavior of your program as defined by the C++ standard.

Note that the fact that SeqCst is being used here is critical -- if you were using "relaxed" atomics, then this would not be the case, and the compiler would be allowed to move the statements outside the critical section, and you would have a bug.
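For comparison, the conventional spin-lock idiom uses acquire on the exchange and release on the store. These are weaker than seq_cst but still forbid moving critical-section accesses outside the lock; relaxed, by contrast, forbids nothing. A sketch:

```cpp
#include <atomic>

struct SpinLock
{
    std::atomic<bool> flag{false};

    void lock()
    {
        // acquire: reads/writes after this cannot be hoisted above it
        while (flag.exchange(true, std::memory_order_acquire))
            ;
    }
    void unlock()
    {
        // release: reads/writes before this cannot sink below it
        flag.store(false, std::memory_order_release);
    }
};
// With std::memory_order_relaxed in both places, the compiler (and CPU)
// could move critical-section accesses outside the lock -- a data race.
```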

If you would like to understand the memory-model ideas in more detail, I highly recommend Herb Sutter's "atomic<> Weapons" talk: https://www.youtube.com/watch?v=A8eCGOqgvH4

That talk goes into considerable detail with examples about what the memory model means and how it connects not only to the compiler, but to modern hardware architectures, and it also gives a lot of useful and practical advice about how to think about this and how to use it.

One of the main takeaways (IMO) is that you should almost always just use the default of sequential consistency, and your programs and locks will generally work the way you think they should, as long as you don't create races. You generally have to be doing something very low-level, where performance on the scale of nanoseconds is critical, before it becomes worth using a weaker memory ordering. It may also make sense to customize the std::memory_order parameter if you are working on the standard library or something similar, where your lock will be used by an enormous number of other projects with a very wide range of needs. Outside of these cases, the performance benefit of using something more complicated than seq_cst is usually not enough to justify the increased complexity of understanding and maintaining the code.
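As a practical aside: because the lock in the question exposes lock() and unlock(), it already satisfies the BasicLockable requirements, so it can be wrapped in std::lock_guard to get exception-safe unlocking. A sketch, reusing the struct from the question:

```cpp
#include <atomic>
#include <mutex> // std::lock_guard

struct SimpleAtomicThreadLock
{
    std::atomic<bool> bIsLocked{false};
    void lock()   { while (bIsLocked.exchange(true)) {} }
    void unlock() { bIsLocked.store(false); }
};

SimpleAtomicThreadLock gLock;
int gCounter = 0;

int guardedIncrement()
{
    // The guard unlocks automatically on scope exit, even if an
    // exception is thrown inside the critical section.
    std::lock_guard<SimpleAtomicThreadLock> guard(gLock);
    return ++gCounter;
}
```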

huangapple
  • Posted on 2023-02-27 00:08:25
  • Original link: https://go.coder-hub.com/75573266.html