How to cast a CMRotationMatrix from CoreMotion to be used by a SceneKit camera (or any SCNNode)
Question
I'm trying to use the attitude given by CMHeadphoneMotionManager to guide a camera inside an SCNView. If I'm not mistaken, they use different reference systems, so the direct initialisation of a float4x4 matrix like the one sketched below would not work without any permutation or change (the axes do not match between CoreMotion and SceneKit).
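For reference, a minimal sketch of that direct conversion, assuming the CMRotationMatrix comes from deviceMotion.attitude.rotationMatrix and that mIJ denotes row I, column J (this is the naive version that ignores the axis mismatch):

import CoreMotion
import simd

// Copy the 3x3 CMRotationMatrix into a 4x4 transform, column by column,
// without any axis permutation.
func makeTransform(from m: CMRotationMatrix) -> simd_float4x4 {
    return simd_float4x4([
        simd_float4(Float(m.m11), Float(m.m21), Float(m.m31), 0.0),
        simd_float4(Float(m.m12), Float(m.m22), Float(m.m32), 0.0),
        simd_float4(Float(m.m13), Float(m.m23), Float(m.m33), 0.0),
        simd_float4(0.0, 0.0, 0.0, 1.0)
    ])
}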
To add some context (and I might well be wrong here, because I couldn't find the exact reference system documented and had to run some tests to find out): the coordinate system used as reference by the attitude given by your AirPods (or motion-enabled headphones) goes like this, with positive Y pointing forward from your nose, positive Z pointing up against gravity through your head, and positive X pointing right (in a random direction picked when you start capturing motion).
However, the reference coordinate system for SceneKit has positive Y pointing upwards and positive Z pointing backwards (assuming you are the camera, which looks towards negative Z). The X axis seems to be the same.
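If both schemes are right, the same physical directions line up like this: right is CoreMotion +X and SceneKit +X, up is CoreMotion +Z and SceneKit +Y, and forward is CoreMotion +Y and SceneKit -Z. That mismatch is what the conversion has to account for.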
My linear algebra knowledge at this level is somewhat limited, and even though I've been trying for a few days, I don't know how to convert the given rotation matrix from HeadphoneMotion to be used by the transform of a SceneKit camera. That would be the question. (Ideally, paired with the concepts behind the required permutation of columns, to learn how it's done.)
Also, I would like to avoid using eulerAngles or quaternions at this point.
Answer 1
Score: 0
After digging a bit more into matrices, I came up with a solution. Not sure if it's the right way, and not sure of the reasoning behind it (if someone knows and wants to elaborate, please feel free).
Basically, assuming the schemes I drew of both coordinate systems are correct (especially the CoreMotion one), we can calculate a rotation matrix T that separates both systems; that is, one matrix that would turn one system into the other.
By looking at those schemes, we know the SceneKit coordinate system is 90° apart (about the X axis) from the CoreMotion coordinate system, and the rotation matrix T would be as follows (in Swift, column by column):
let matrixT = simd_float4x4([
    simd_float4( 1.0,  0.0, 0.0, 0.0),
    simd_float4( 0.0,  0.0, 1.0, 0.0),
    simd_float4( 0.0, -1.0, 0.0, 0.0),
    simd_float4( 0.0,  0.0, 0.0, 1.0)
])
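Assuming the axis schemes above are right, a quick way to sanity-check T is to see where it sends SceneKit's up and forward directions; both should come out as the corresponding CoreMotion directions:

// T applied to SceneKit "up" (+Y) should give CoreMotion "up" (+Z),
// and T applied to SceneKit "forward" (-Z) should give CoreMotion "forward" (+Y).
let up      = matrixT * simd_float4(0.0, 1.0, 0.0, 0.0)   // (0, 0, 1, 0)
let forward = matrixT * simd_float4(0.0, 0.0, -1.0, 0.0)  // (0, 1, 0, 0)

In other words, T re-expresses a vector given in SceneKit coordinates in CoreMotion coordinates, and since it is a pure rotation, matrixT.inverse is simply its transpose.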
Knowing this, we can obtain R', which is just the initial rotation matrix R but in the target coordinate system, by doing this (and this is the key):

R' = T^-1 * R * T
Which, in Swift, would be:
let newRotation = matrixT.inverse * originalRotationMatrix * matrixT;
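One way to read that conjugation: for a vector expressed in SceneKit coordinates, matrixT first re-expresses it in CoreMotion coordinates, the headphone rotation R is applied there, and matrixT.inverse brings the result back into SceneKit coordinates. That is the usual change-of-basis formula for a linear map, which is why T has to appear on both sides of R rather than as a single multiplication.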
Just like that, newRotation can be used as the transform of the SceneKit camera, and it works.
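For completeness, a sketch of how the whole pipeline might be wired up, reusing matrixT from above (cameraNode, startHeadTracking and the motion-manager setup are illustrative assumptions, not part of the original answer):

import CoreMotion
import SceneKit
import simd

let motionManager = CMHeadphoneMotionManager()

func startHeadTracking(for cameraNode: SCNNode) {
    guard motionManager.isDeviceMotionAvailable else { return }

    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let m = motion?.attitude.rotationMatrix else { return }

        // CMRotationMatrix copied into a 4x4 transform, column by column.
        let r = simd_float4x4([
            simd_float4(Float(m.m11), Float(m.m21), Float(m.m31), 0.0),
            simd_float4(Float(m.m12), Float(m.m22), Float(m.m32), 0.0),
            simd_float4(Float(m.m13), Float(m.m23), Float(m.m33), 0.0),
            simd_float4(0.0, 0.0, 0.0, 1.0)
        ])

        // Re-express the rotation in SceneKit's coordinate system and apply it.
        // Note this overwrites the node's whole transform, so it also resets
        // its position to the origin.
        cameraNode.simdTransform = matrixT.inverse * r * matrixT
    }
}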