Why does OpenGL's glDrawArrays() fail with GL_INVALID_OPERATION under Core Profile 3.2, but not 3.3 or 4.2?


Question


I have OpenGL rendering code calling glDrawArrays() that works flawlessly when the OpenGL context is an (automatically / implicitly obtained) 4.2 one, but fails consistently (GL_INVALID_OPERATION) with an explicitly requested OpenGL 3.2 core context. (Shaders are set to #version 150 in both cases, but I suspect that's beside the point here.)

According to the spec, there are only two cases in which glDrawArrays() fails with GL_INVALID_OPERATION:

  • "if a non-zero buffer object name is bound to an enabled array and the buffer object's data store is currently mapped" -- I'm not doing any buffer mapping at this point.

  • "if a geometry shader is active and mode is incompatible with [...]" -- nope, no geometry shaders as of now.

Furthermore:

  1. I have verified & double-checked that it's only the glDrawArrays() calls failing. Also double-checked that all arguments passed to glDrawArrays() are identical under both GL versions, buffer bindings too.

  2. This happens across 3 different NVIDIA GPUs and 2 different OSes (Win7 and OSX, both 64-bit -- of course, on OSX we only have the 3.2 context, no 4.2 anyway).

  3. It does not happen with an integrated "Intel HD" GPU but for that one, I only get an automatic implicit 3.3 context (trying to explicitly force a 3.2 core profile with this GPU via GLFW here fails the window creation but that's an entirely different issue...)

For what it's worth, here's the relevant routine excerpted from the render loop, in Golang:

func (me *TMesh) render () {
	curMesh = me
	curTechnique.OnRenderMesh()
	gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
	if me.glElemBuf > 0 {
		// indexed path: bind the element buffer and draw via DrawElements
		gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, me.glElemBuf)
		gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
		gl.DrawElements(me.glMode, me.glNumIndices, gl.UNSIGNED_INT, gl.Pointer(nil))
		gl.BindBuffer(gl.ELEMENT_ARRAY_BUFFER, 0)
	} else {
		// non-indexed path: this DrawArrays call is the one that fails
		gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
		/* BOOM! */
		gl.DrawArrays(me.glMode, 0, me.glNumVerts)
	}
	gl.BindBuffer(gl.ARRAY_BUFFER, 0)
}

So of course this is part of a bigger render loop, though for now there are just two *TMesh instances, one a simple cube and the other a simple pyramid. What matters is that the entire drawing loop runs flawlessly, with no errors reported when GL is queried for errors, under both 3.3 and 4.2 -- yet on 3 NVIDIA GPUs with an explicit 3.2 core profile it fails with an error code that, according to the spec, arises in only two specific situations, neither of which, as far as I can tell, applies here.

What could be wrong here? Have you ever run into this? Any ideas what I have been missing?

Answer 1

Score: 1


I have a wild guess.

As I understand it, all OpenGL calls must be made from the thread on which the context is current. This restriction does not mix well with goroutines, since the same goroutine can run on different OS threads at different points in its execution.

To get around this problem, you need to lock your main goroutine (or whatever goroutine's doing OpenGL calls) to its current thread as soon as it starts, before initializing OpenGL.

import "runtime"

func main() {
    // Must run before any OpenGL / window-system initialization.
    runtime.LockOSThread()

    ...
}

The reason you're seeing inconsistent results could be explained by implementation differences.
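The pattern can be made concrete with a minimal, self-contained sketch (no OpenGL involved; the helper name lockRenderThread is mine, not from the question). The point is simply that the lock is taken on the goroutine that will own the GL context, before that context is created:

```go
package main

import (
	"fmt"
	"runtime"
)

// lockRenderThread pins the calling goroutine to its current OS thread.
// Call it on the goroutine that will create and use the GL context,
// before the context exists; every later GL call from this goroutine is
// then guaranteed to land on the same OS thread.
func lockRenderThread() string {
	runtime.LockOSThread()
	return "render goroutine locked to its OS thread"
}

func main() {
	fmt.Println(lockRenderThread())
	// ... create the window/GL context and run the render loop here ...
}
```

Note that runtime.LockOSThread must be called from the goroutine itself; it cannot lock some other goroutine, which is why it goes at the very top of main (or of whatever goroutine does the rendering).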

Answer 2

Score: 1


It's not just DrawArrays; I was mistaken here. Somehow my way of calling glVertexAttribPointer is the problem: it fails in any strict core profile, whether 3.2 or 4.2. I will investigate further. In a 4.2 non-strict context, there is no problem.
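For anyone landing here later: a plausible explanation (my assumption, not confirmed in this thread) is that strict core profiles have no default vertex array object, so vertex-attribute setup and subsequent draw calls can raise GL_INVALID_OPERATION unless a VAO has been generated and bound; non-strict / compatibility contexts implicitly supply VAO 0, which would explain why 4.2 compatibility works while 3.2 core fails. A sketch of the one-time setup, written in the style of the gl binding used above (names like vao are illustrative, and this needs a live core-profile context, so it is not runnable standalone):

```go
// Sketch only -- assumes a current strict core-profile context and a
// gl binding like the one in the question; "vao" is an illustrative name.
var vao gl.Uint
gl.GenVertexArrays(1, &vao) // one-time setup, after context creation
gl.BindVertexArray(vao)     // core profile: keep a VAO bound before
                            // attribute pointers and draw calls

// With the VAO bound, the original render path works as written:
gl.BindBuffer(gl.ARRAY_BUFFER, me.glVertBuf)
gl.EnableVertexAttribArray(curProg.AttrLocs["aPos"])
gl.VertexAttribPointer(curProg.AttrLocs["aPos"], 3, gl.FLOAT, gl.FALSE, 0, gl.Pointer(nil))
gl.DrawArrays(me.glMode, 0, me.glNumVerts)
```

(The EnableVertexAttribArray call is shown for completeness; the question's code presumably enables the attribute array elsewhere, e.g. inside OnRenderMesh.)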

huangapple
  • Posted on 2012-10-24 22:42:59
  • Reposts must retain this link: https://go.coder-hub.com/13051587.html