Should I even pay attention to the native image format for OpenGL textures?
Question
For example, the glTexImage2D function has the parameter internalformat, which specifies the format the texture data will have internally. The GL driver will then convert the data I supply through the format, type and data parameters.
Many sources (even official documents from vendors) say that the data I supply should match the internal format. For example, page 33 in this document from Intel: https://web.archive.org/web/20200428134853/https://software.intel.com/sites/default/files/managed/49/2b/Graphics-API-Performance-Guide-2.5.pdf
> When uploading textures, provide textures in a format that is the same as the internal format, to avoid implicit conversions in the graphics driver.
But I see some issues with this approach:
- I simply do not know what the native formats of the graphics card are. It may be RGBA with 10-bit normalized integers or even something really exotic. So the driver has to do a conversion anyway. The OpenGL specification just defines some internal formats that implementations are required to support exactly. But of course, the driver may convert the internal format to some other "native format".
- In most cases, I will load my textures from external sources in a format over which I have little influence. So I have two choices: write a function that converts the image data in my own application, or let the driver do the work. In my opinion, the second option is the better one. The driver likely has highly optimized conversion algorithms implemented and will be much less error-prone than my own algorithm may be, because it is already very well tested.
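To illustrate the first option: converting, say, 8-bit BGRA data into the RGBA byte order that GL_RGBA with GL_UNSIGNED_BYTE expects would only take a few lines of my own code (a sketch; the function name and in-place approach are my own choices, not anything from a real API):

```c
#include <stddef.h>
#include <stdint.h>

/* Swap the B and R channels in place, turning 8-bit BGRA texels into
   the RGBA byte order expected by GL_RGBA + GL_UNSIGNED_BYTE. */
static void bgra_to_rgba(uint8_t *pixels, size_t texel_count)
{
    for (size_t i = 0; i < texel_count; ++i) {
        uint8_t b = pixels[4 * i + 0];
        pixels[4 * i + 0] = pixels[4 * i + 2]; /* R moves into byte 0 */
        pixels[4 * i + 2] = b;                 /* B moves into byte 2 */
    }
}
```

The question is whether such a loop can ever beat the conversion path the driver already ships.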
So, is there really a need to worry about these things, or is it perfectly fine to just feed OpenGL the data as it is?
Answer 1
Score: 1
Your biggest concern shouldn't be the time you need to upload a single image to the GPU.
The biggest bottleneck is not the thing you do only once but the thing you do repeatedly. If, for example, you exceed the limit of resident textures, the OpenGL implementation may swap textures out into main memory (which can then become a bottleneck).
But if you're able to save memory by using lower bit depths or compressed formats, you'll be able to keep more textures on the GPU, which leads to better (smoother) performance.
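To put rough numbers on the memory argument, a quick back-of-the-envelope sketch (the bytes-per-texel rates are the standard storage rates for these formats; the helper function itself is just illustrative):

```c
/* Approximate GPU memory for the base mip level of a square texture,
   given the storage rate of the internal format in bytes per texel. */
static unsigned long long texture_bytes(unsigned dim, double bytes_per_texel)
{
    return (unsigned long long)((double)dim * (double)dim * bytes_per_texel);
}

/* For a 4096x4096 texture:
   GL_RGBA8            (4   bytes/texel) -> 64 MiB
   GL_RGB565-style     (2   bytes/texel) -> 32 MiB
   DXT1/S3TC           (0.5 bytes/texel) ->  8 MiB
   Halving the bits or block-compressing directly multiplies how many
   textures stay resident before the driver has to start swapping. */
```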
One thing you should also keep in mind is that your application is not the only one acquiring GPU resources. For an overall smooth experience, use only what you need (the bare minimum); don't be greedy.
Sure, with gigabytes of GPU memory it takes a lot to push the GPU to its limits, but that doesn't mean there is no limit.
After your comment I've reread your question, and I think I slightly misinterpreted it.
I would say you're right in thinking that the driver has highly optimized conversion routines, so it does not make much sense to convert between compatible color formats yourself.
But in cases like YUV-to-RGB conversion, the only alternatives are to use a texture for each plane (and do the calculation in the shader) or to convert the YUV data into RGB triplets yourself. The same goes for HSV or CMYK color formats, where a conversion on the host side is unavoidable.
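Such a host-side conversion is only a few lines per texel. A minimal sketch, assuming full-range BT.601 YCbCr input (the coefficients are the standard BT.601 ones; the function itself and the use of floating point are illustrative choices):

```c
#include <stdint.h>

/* Clamp a floating-point channel value to the 0..255 byte range. */
static uint8_t clamp_u8(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (uint8_t)(v + 0.5);
}

/* Convert one full-range BT.601 YCbCr texel to an RGB triplet,
   e.g. when repacking planar YUV into an RGB texture before upload. */
static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    double cb = (double)u - 128.0;
    double cr = (double)v - 128.0;
    *r = clamp_u8((double)y + 1.402 * cr);
    *g = clamp_u8((double)y - 0.344136 * cb - 0.714136 * cr);
    *b = clamp_u8((double)y + 1.772 * cb);
}
```

Whether this beats the per-plane shader approach depends on how often the texture content changes; for video, the shader route avoids redoing the work on the CPU every frame.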
Again, within the compatible formats, let the driver do the work; otherwise, convert yourself.
As a side note:
Depending on the OpenGL stack you're using, EGL for example lets you choose a frame buffer configuration with specific attributes (see: https://registry.khronos.org/EGL/sdk/docs/man/html/eglChooseConfig.xhtml - e.g. EGL_CONFIG_CAVEAT). Based on the configuration you choose, you know your frame buffer's properties (bit depth, color sizes, etc.).