Understanding OpenGL perspective projection matrix: setting the near plane below 1.0

Question
I am rendering a simple cube with 8 vertices, but I am having trouble understanding how to get the camera closer to the cube. I expected that setting the near plane below 1.0 would let me get closer to the objects so they would appear bigger; however, setting the near plane below 1.0 does not do it.
f32 r = 1;
f32 l = -1;
f32 t = 1;
f32 b = -1;
f32 n = 1.0f;
f32 f = 5.0f;
mat4 Projection =
{{
    2.0f*n/(r-l), 0,            (l+r)/(r-l), 0,
    0,            2.0f*n/(t-b), (t+b)/(t-b), 0,
    0,            0,            (f+n)/(n-f), (2*n*f)/(n-f),
    0,            0,            -1,          0,
}};
I based my projection matrix on this.
The moment you set n < 1.0, objects initially look farther away (because the projection matrix multiplies X and Y by n), so we achieved nothing: the maximum size an object can appear on screen seems to be fixed! I do understand that I can effectively zoom in by scaling the entire scene, but I believe I introduced a bug somewhere that caused this (or maybe my affine transformations are incomplete? I have not added FOV and aspect ratio, but that is because I want these to be fixed to begin with). I think it is a bug because I have not read anywhere that an object has a maximum size on screen regardless of the camera distance.
So to summarize, my questions are: How do I effectively make my camera get closer (so objects look bigger)? Or is it true that objects have a maximum size at which they can appear on screen, namely their scaled coordinates multiplied by 1/z?
Answer 1
Score: 1
> ...how to get the camera closer to the cube.
As @Rabbid76 has pointed out, you move the camera closer by applying a translation to your model-view matrix. Changing the projection matrix can make the model bigger or smaller, but that's equivalent to zooming in/out (i.e. changing the FOV) while the camera remains stationary.
> The moment you set n < 1.0 then objects initially look further ... thus, we achieved nothing because the maximum size an object can appear on the camera seems to be fixed!
Firstly, there's nothing special about n=1.0. The near and far planes are measured in your scene distance units, and those can be arbitrary. Just saying.
Secondly, the near and far planes are usually set according to your scene depth complexity, independent of the camera position. You should choose the near plane to be the closest your camera would ever get to any object, and then enforce that through collision detection and response (i.e. by preventing the camera from getting too close to the objects).
As an aside: in some circumstances one may want to change the near/far planes to optimize the use of a fixed-precision depth buffer; however, nowadays there are better options to deal with that using a floating-point depth buffer.
> I do understand, that I can effectively zoom in by scaling the entire scene
Actually no. If you scale everything uniformly, its projection on the screen will stay the same. If you scale only the XY coordinates in the projective space then you get the zoom in/out effect, but that's what the projection matrix is already doing for you, and that's not the same as getting the camera closer to the object!
> I have not added FOV and Aspect Ratio but that is because I want these to be fixed, to begin with.
I don't understand your reasoning. If you want these to be fixed, you isolate them as parameters and then calculate everything else based on those. In fact I never use that formula for the projection matrix with the top/left/bottom/right planes, precisely because it's counter-intuitive. Instead, use some math to calculate what you need from what you have. I prefer using physical camera properties as used in photography. In your case, use FOV and viewport size to set the projection matrix:
// parameters:
f32 n = 0.1f;            // the closest your camera will ever be
f32 f = 10.0f;           // the farthest object you care about
f32 fovx = 1.5f;         // horizontal FOV in radians
f32 w = 1920, h = 1080;  // viewport size (for the aspect ratio)
// temporaries:
f32 A = 1/tan(fovx/2);
f32 B = A*w/h;
mat4 Projection =
{{
    A, 0, 0,           0,
    0, B, 0,           0,
    0, 0, (f+n)/(n-f), (2*n*f)/(n-f),
    0, 0, -1,          0,
}};
After this those are fixed, and you move your camera by translating the model-view matrix as was said earlier.