Rendering Accuracy Problem with Large Zoom in/Out Range in OpenGL Graphics

Question

I have a simple camera-based 'modern' OpenGL 3D graphics display, for rendering relatively simple objects that are constructed from collections of specified points, lines and curves (e.g. a cube, cylinder, etc). Three differently colored fixed-length lines are drawn that intersect each other at the center of the world space, to visually represent the XYZ Cartesian axes. The user can, via mouse control, pan around the view, rotate the view AND zoom in/out (via mouse wheel movement).

If I try to do a large amount of zooming in on the origin of the axes, to a level that allows me to visually separate some very close-together rendered points that lie near the origin (say, 0.000001 length units apart), I get problems with rendering accuracy:

(1) The three axis lines start to fail to ALL intersect each other at the same point (the origin). Two of the axes intersect, and the third axis line crosses each of those two lines separately, a small distance away from the origin. The amount of separation of the third axis varies slightly with viewing rotation.

AND

(2) Points that, for example, are intended to lie exactly on one of the axes are no longer rendered as such, and instead appear to be located slightly off the axis line (again, the amount of separation of the points from the axis varies a little with viewing rotation).

To increase accuracy I have changed from using the default 'GLfloat' to 'GLdouble' and modified all model-geometry-related code, such as specifying vertex positions, distances, etc., to be in double precision (i.e. use of 'dvec3' instead of the default 'vec3', etc.). But this makes no difference. [NOTE: The only items that continue to use GLfloat instead of GLdouble are things like specifying RGB values for the colors of points or lines that are rendered]

How do I maintain accuracy of rendering with extreme zooming in to very small scales?

Answer 1

Score: 1

You must be rendering your lines with just two points that are far from the origin, like (-1,0,0) and (1,0,0).

When those are projected onto the zoomed-in viewport (let's say at scale S), their floating point coordinates get very large (on the order of S). The rasterizer then needs to clip those when rendering them onto the screen, where the coordinates within the viewport are comparatively small (<1). This results in the loss of precision: since the endpoints of the lines are imprecise (rounding error of ε·S), the interpolated result within the viewport is going to be just as imprecise (i.e. up to ε·S off the theoretical value).

I expect the precision of the rasterizer to be about ε = 2^(-23) ≈ 0.0000001 if it uses 32-bit floating point numbers internally, which is in agreement with the scale at which you start observing the effect. Note that it is not affected by the precision of the attributes, but is rather internal to the rasterizer, and could very well be hardwired in the circuitry.

The solution is actually rather simple. All you need to do is split your lines so they go through the origin explicitly, i.e. render each axis in two segments: (-1,0,0) to (0,0,0), and (0,0,0) to (1,0,0). This way the origin will be projected to its exact location, and the rounding errors of the endpoints outside the viewport will have only a minimal influence on the rest of the drawn line.
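
For illustration, here is a minimal sketch of the split axis geometry, assuming the axes are drawn with GL_LINES from a plain float vertex buffer (the array name and exact layout are hypothetical, not taken from the question):

    // Each axis is two GL_LINES segments that share the origin vertex,
    // so (0,0,0) is projected exactly instead of being interpolated
    // between two far-away endpoints.
    static const float axisVertices[] = {
        // X axis: (-1,0,0)->(0,0,0) and (0,0,0)->(1,0,0)
        -1.0f, 0.0f, 0.0f,   0.0f, 0.0f, 0.0f,
         0.0f, 0.0f, 0.0f,   1.0f, 0.0f, 0.0f,
        // Y axis
         0.0f,-1.0f, 0.0f,   0.0f, 0.0f, 0.0f,
         0.0f, 0.0f, 0.0f,   0.0f, 1.0f, 0.0f,
        // Z axis
         0.0f, 0.0f,-1.0f,   0.0f, 0.0f, 0.0f,
         0.0f, 0.0f, 0.0f,   0.0f, 0.0f, 1.0f,
    };
    // upload with glBufferData(GL_ARRAY_BUFFER, sizeof(axisVertices), axisVertices, GL_STATIC_DRAW);
    // draw with   glDrawArrays(GL_LINES, 0, 12);   // 12 vertices = 6 segments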

Answer 2

Score: 0

There is no way (that I know of) to directly pass 64-bit doubles to the interpolators, as the vertex stage truncates gl_Position to 32-bit floats (or less).

Still, there are ways to improve precision:

  1. separate values into more floats based on magnitude

    this can be done if you know the value ranges involved; see:

  2. emulate passing a double with 3 floats

    so you dissect the double's mantissa (1+52 bits) and store it in 3 floats (1+23 bits each) on the CPU side, then pass those to the vertex shader:

    double x;       //    64 bit input
    float x0,x1,x2; // 3x 32 bit output
    x0=float(x); x-=x0;
    x1=float(x); x-=x1;
    x2=float(x);
    

    then on the fragment shader side you reconstruct it back:

    double x;       //    64 bit output
    float x0,x1,x2; // 3x 32 bit input
    x =x0;
    x+=x1;
    x+=x2;
    
  3. use relative coordinates

    so you know your view is centered around some point p0, so you just subtract that value from all coordinates (and from the camera position) before you apply the zoom (or render). This will significantly lower the number of mantissa bits needed and overcome the precision problems (up to a degree); a short sketch of this is shown below, after the list.

    this helped me, for example, with this high-zoom problem:

    Also, this way does not require shaders (in case you're stuck with old GL)
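
As a concrete illustration of #3, here is a minimal CPU-side sketch; it assumes the model geometry is kept in double precision and that the view is currently centered on some point p0 (the struct and function names are made up for the example):

    #include <vector>

    struct dvec3 { double x, y, z; };   // CPU-side, double-precision position
    struct fvec3 { float  x, y, z; };   // what actually goes into the VBO

    // Re-express every vertex relative to the view center p0 BEFORE
    // truncating to float. Near p0 the values are tiny, so the 23-bit
    // float mantissa is spent on the digits that actually matter.
    std::vector<fvec3> makeRelativeVertices(const std::vector<dvec3>& verts,
                                            const dvec3& p0)
    {
        std::vector<fvec3> out;
        out.reserve(verts.size());
        for (const dvec3& v : verts)
            out.push_back({ float(v.x - p0.x),   // subtract in double,
                            float(v.y - p0.y),   // then convert to float
                            float(v.z - p0.z) });
        return out;
    }
    // The camera position / view matrix must be built relative to p0 as well,
    // also doing the subtraction in double precision.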

Note that both #1 and #2, and using the built-in geometry rendering, never really helped me (probably the shader compiler's doing on my long-outdated gfx cards); in such a case you would need to bypass the primitive rasterizers completely and render them on your own, like this:

so render the BBOX QUAD of your primitive and, inside the shader, use an SDF to 'discard;' the points outside it. Simple primitives like lines, triangles, etc. can be decided in O(1) with a simple equation; the above link is not such a case, however (that is why it is so complex).
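
To make the BBOX QUAD + SDF idea concrete for the simplest case (a line segment in 2D), here is a minimal sketch with the GLSL embedded as a C++ string; the uniform names, the fragPos input and the surrounding setup are illustrative assumptions, not the author's original code:

    // Fragment shader for a line segment drawn as its bounding quad:
    // every fragment farther than halfWidth from the segment A-B is
    // discarded, using the standard point-to-segment distance.
    static const char* lineSdfFragmentShader = R"(
    #version 330 core
    in  vec2  fragPos;        // interpolated position across the bounding quad
    out vec4  outColor;
    uniform vec2  A, B;       // segment endpoints, same space as fragPos
    uniform float halfWidth;  // half of the desired line thickness
    uniform vec4  lineColor;

    float segmentDistance(vec2 p, vec2 a, vec2 b)
    {
        vec2  pa = p - a, ba = b - a;
        float h  = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
        return length(pa - ba * h);   // distance from p to the segment a-b
    }

    void main()
    {
        if (segmentDistance(fragPos, A, B) > halfWidth)
            discard;                  // outside the line -> reject the fragment
        outColor = lineColor;
    }
    )";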
