How do I rotate a sprite in C without pixel gaps
Question
I'm making a game engine in C and I have a rotation function that rotates an image to a given angle in degrees. But there are a few problems with it. I previously took coordinates x and y, transformed them into nx, ny using a rotation transform, and then assigned dest[x, y] = source[nx, ny], which worked nicely and had no dot effect. But because this method reads from the transformed coordinates, it sometimes reads outside the width and height of the source image, bringing in garbage memory. I can easily remedy this by checking whether nx or ny exceeds the width or height, or is less than zero. But then parts of the sprite get cut off during rotation, because sometimes the height of the image is larger than the width or vice versa, diagonals are longer, and so on. I know I could resize the image to accommodate this, but that seems inefficient.
Example image
That's the image without bounds checking.
My new method was to read from the normal source coordinates and transform them before writing to the dest, so size is no longer a limitation: dest[nx, ny] = source[x, y]. This works in that it can render an image of any size and rotate it without clipping, but instead there's a dotted dithering effect that gets more common near 45-degree angles and goes away at the cardinal angles. Here's what that looks like.
And here is my code, with the first method commented out:
void drawRotation(Sprite src, int xloc, int yloc, int degrees)
{
    float radians = Deg2Rad(degrees);
    float cosr = cos(radians);
    float sinr = sin(radians);
    uint32_t *dest = (uint32_t *)Screen.memory + (xloc - src.ox) + (yloc - src.oy) * Screen.width;
    for (int y = 0; y < src.height; y++)
    {
        for (int x = 0; x < src.width; x++)
        {
            /* Rotate the source offset relative to the pivot (ox, oy). */
            int nx = (cosr * (x - src.ox) + sinr * (y - src.oy));
            int ny = (-sinr * (x - src.ox) + cosr * (y - src.oy));
            /* Second method: write each source pixel to its rotated destination. */
            dest[(nx + src.ox) + (ny + src.oy) * Screen.width] = ((uint32_t *)src.memory)[y * src.width + x];
            /* First method: read each destination pixel from its rotated source position. */
            // dest[x + y * Screen.width] = ((uint32_t *)src.memory)[(ny + src.oy) * src.width + (nx + src.ox)];
        }
    }
}
I tried both methods and prefer the dotted one, if only it had no dots. But I'm open to other approaches as long as they're efficient. I'd rather not use interpolation for something seemingly this small, but if there's no other way...
I'm using the wingdi32 graphics library. Images are stored with malloc as void pointers. When cast to integers, the colour format is ARGB.
Answer 1
Score: 0
The problem is that you loop over every pixel in the source image, but the pixel mapping isn't 1:1, meaning there can be more pixels in the displayed image than in the source image, or the other way around. This means you have to expect gaps, as well as pixels that are drawn multiple times.
One solution is to loop over every display pixel, rotate it back into the source image (i.e. calculate its position on the source image), check whether that position lies inside the source image, and if it does, read the colour from there. That works, but it isn't the most efficient way, especially when multiple source images contribute to one displayed image.
The more efficient way when you have more than a single image: split your source images into triangles (for a rectangular image, you would make 2 triangles). Rotate each vertex (corner) and store the new position. Then loop over the display image and check whether each pixel is inside a triangle; if so, determine (in case there is more than one) which source image/triangle it should use, and you can calculate the corresponding coordinates on the source image from the distances of the current display pixel to the vertices.
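A minimal sketch of the per-pixel triangle test, using edge functions (signed areas) — one common choice, not something the answer prescribes. The corner positions would come from rotating the sprite's four corners once per frame; here they are plain structs with hypothetical names.

```c
typedef struct { float x, y; } Vec2;

/* Signed twice-area of triangle (a, b, c): positive when c lies to the
   left of the directed edge a -> b. */
static float edgeFn(Vec2 a, Vec2 b, Vec2 c)
{
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

/* A point is inside the triangle when all three edge functions agree in
   sign (either winding order is accepted). */
static int insideTriangle(Vec2 p, Vec2 a, Vec2 b, Vec2 c)
{
    float e0 = edgeFn(a, b, p);
    float e1 = edgeFn(b, c, p);
    float e2 = edgeFn(c, a, p);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) ||
           (e0 <= 0 && e1 <= 0 && e2 <= 0);
}
```

The same three values, divided by the triangle's total signed area, are the pixel's barycentric coordinates — the vertex-distance weighting the answer mentions. Interpolating the corners' source-image coordinates with those weights yields the source pixel to sample.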
A similar thing is done when using a low-level graphics API such as OpenGL (I assume it's the same for Vulkan and DirectX) to display 2D and 3D graphics. If you're not familiar with any of them, I suggest learning how to use such an API or a similar library. This will improve your understanding of how computer graphics works and how to use matrices (since you didn't use them in your code)...