Stitch images together in WebGL using a framebuffer

Question

This is something of a follow-up to my question Draw textures to canvas async / in sequence deletes old textures, but with a different approach recommended to me by a friend. I am just learning WebGL, so bear with me.

My goal

  • Load images asynchronously
  • Render those images to a single WebGL canvas in a side-by-side "tiled" fashion. Each image will contain coordinates to dictate where in the canvas it should be rendered
  • On each async image load, treat the whole canvas as a single texture, then apply some image processing in the shader to the texture as a whole

Using a framebuffer

My understanding is that you can create a framebuffer, render textures to it, and then render the framebuffer to a target texture, and then render the target texture to the screen.

// First I create the frame buffer, and a target texture to render to,
// and attach the texture to the framebuffer
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
const targetTexture = gl.createTexture();
gl.framebufferTexture2D(
	gl.FRAMEBUFFER,
	gl.COLOR_ATTACHMENT0,
	gl.TEXTURE_2D,
	targetTexture,
	0
);

My idea is that on every image load, you can create a texture from the image. After enabling the vertex attributes on each texture, you can then call drawArrays, which would then draw to the framebuffer. After doing that, you should be able to unbind the framebuffer, then call drawArrays again, which should...draw the framebuffer to the screen? This is where I am getting confused:

// Let's pretend we have a few tile urls in an array for now:
tiles.forEach((tile) => {
  const image = new Image();
  image.onload = () => render(image, tile);
  image.src = tile.path;
});
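
(For clarity, the Tile objects here are assumed to have at least the fields the code below actually reads; a minimal sketch of that shape, with names inferred from usage rather than from any real definition:)

// Hypothetical Tile shape, inferred from how it is used in render() below
interface Tile {
	path: string;                        // URL assigned to image.src
	position: { x: number; y: number };  // pixel offset of the tile on the canvas
}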

function render(tileImage: HTMLImageElement, tile: Tile) {
	// look up where the vertex data needs to go.
	var positionLocation = gl.getAttribLocation(program, 'a_position');
	var texcoordLocation = gl.getAttribLocation(program, 'a_texCoord');

	// Create a buffer to put three 2d clip space points in
	var positionBuffer = gl.createBuffer();

	gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
	// Set a rectangle the same size as the image.
	// see the Appendix of the question for details
	setRectangle(
		gl,
		tile.position.x,
		tile.position.y,
		tileImage.width,
		tileImage.height
	);

	// provide texture coordinates for the rectangle.
	var texcoordBuffer = gl.createBuffer();
	gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);

	gl.bufferData(
		gl.ARRAY_BUFFER,
		new Float32Array([
			0.0, 0.0,
			1.0, 0.0,
			0.0, 1.0,
			0.0, 1.0,
			1.0, 0.0,
			1.0, 1.0,
		]),
		gl.STATIC_DRAW
	);

	// Create a texture and bind it to the gl context
	const texture = gl.createTexture();
	gl.bindTexture(gl.TEXTURE_2D, texture);

	// Set the parameters so we can render any size image.
	gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
	gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
	gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
	gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

	// Upload the tile image to the texture
	gl.texImage2D(
		gl.TEXTURE_2D,
		0,
		gl.RGBA,
		gl.RGBA,
		gl.UNSIGNED_BYTE,
		tileImage
	);

	// lookup uniforms
	var resolutionLocation = gl.getUniformLocation(program, 'u_resolution');
	var textureSizeLocation = gl.getUniformLocation(program, 'u_textureSize');

	// Tell WebGL how to convert from clip space to pixels
	gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);

	// Tell it to use our program (pair of shaders)
	gl.useProgram(program);

	// Turn on the position attribute
	gl.enableVertexAttribArray(positionLocation);
	gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
	gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

	// Turn on the texcoord attribute
	gl.enableVertexAttribArray(texcoordLocation);
	gl.bindBuffer(gl.ARRAY_BUFFER, texcoordBuffer);
	gl.vertexAttribPointer(texcoordLocation, 2, gl.FLOAT, false, 0, 0);

	// set the resolution and size of image
	gl.uniform2f(resolutionLocation, gl.canvas.width, gl.canvas.height);
	gl.uniform2f(textureSizeLocation, 256, 256);

    // bind frame buffer and draw arrays - draw TO the framebuffer?
	gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
	gl.drawArrays(gl.TRIANGLES, 0, 6);

    // Unbind framebuffer and draw...to the canvas?
	gl.bindFramebuffer(gl.FRAMEBUFFER, null);
	gl.drawArrays(gl.TRIANGLES, 0, 6);
}
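
(For context, the vertex and fragment shaders are omitted from this question. Given the attribute and uniform names above (a_position, a_texCoord, u_resolution, u_textureSize), a minimal pair in the usual webglfundamentals image-processing style would look roughly like the sketch below; the u_image sampler name is just a placeholder, since the code only ever binds the texture to the default texture unit.)

const vertexShaderSource = `
	attribute vec2 a_position;
	attribute vec2 a_texCoord;
	uniform vec2 u_resolution;
	varying vec2 v_texCoord;
	void main() {
		// convert the pixel-space rectangle from setRectangle into clip space
		vec2 clipSpace = (a_position / u_resolution) * 2.0 - 1.0;
		gl_Position = vec4(clipSpace * vec2(1, -1), 0, 1);
		v_texCoord = a_texCoord;
	}
`;

const fragmentShaderSource = `
	precision mediump float;
	uniform sampler2D u_image;
	uniform vec2 u_textureSize;
	varying vec2 v_texCoord;
	void main() {
		// whatever per-pixel processing is wanted goes here; pass-through for the sketch
		gl_FragColor = texture2D(u_image, v_texCoord);
	}
`;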

It is in the last few lines that I get confused. The reason I know this is not working is that if I put an artificial delay on each image load, you can see that each image is drawn to the canvas, but when the next one is drawn, the previous one disappears.

Codesandbox demonstrating the issue

I have read many discussions on this. In WebGL display framebuffer?, gman shows how to render to a framebuffer, then to the screen, for a single image. The question How to work with framebuffers in webgl? is very similar as well. Most of the questions I've found have been either like this, rendering a simple single image to a framebuffer and then to the screen, or far beyond my level at this point, i.e. using a framebuffer to render to the faces of a spinning cube. I can't seem to find any information on how to take simple 2D images and render them to a WebGL canvas asynchronously.

I have also seen several recommendations to draw the images to a 2D canvas and use that as the source of a single 2D texture. For example, in the question Can I create big texture from other small textures in webgl?, gman recommends:

> If you have to do it at runtime for some reason then the easiest way to combine images into a single texture is to first load all your images, then use the canvas 2D api to draw them into a 2D canvas, then use that canvas as a source for texImage2D in WebGL

I don't understand why this is preferable.
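
(If I understand that suggestion correctly, a minimal sketch of it, assuming all the tile images have already been loaded, would be something like the code below; the buildAtlas name and argument shape are made up purely for illustration.)

function buildAtlas(
	tiles: { img: HTMLImageElement; x: number; y: number }[],
	width: number,
	height: number
): HTMLCanvasElement {
	// Draw every loaded tile at its destination offset on a plain 2D canvas
	const canvas2d = document.createElement('canvas');
	canvas2d.width = width;
	canvas2d.height = height;
	const ctx = canvas2d.getContext('2d')!;
	tiles.forEach(({ img, x, y }) => ctx.drawImage(img, x, y));
	// The finished canvas can then be used once as the source of a single texture:
	// gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas2d);
	return canvas2d;
}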

How can I load these images asynchronously and stitch them together within a single WebGL canvas?

Appendix:

export function setRectangle(
	gl: WebGLRenderingContext,
	x: number,
	y: number,
	width: number,
	height: number
) {
	const x1 = x,
		x2 = x + width,
		y1 = y,
		y2 = y + height;

	gl.bufferData(
		gl.ARRAY_BUFFER,
		// prettier-ignore
		new Float32Array([
			x1, y1,
			x2, y1,
			x1, y2,
			x1, y2,
			x2, y1,
			x2, y2,
		]),
		gl.STATIC_DRAW
	);
}

Answer 1

Score: 2

For a short answer, there is a simple fix. Use:

canvas.getContext("webgl", { preserveDrawingBuffer: true })

instead of

canvas.getContext("webgl")
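
(In practice this just means passing the flag wherever the GL context is first created, for example in a sketch like this; the canvas lookup is only illustrative.)

const canvas = document.querySelector('canvas') as HTMLCanvasElement;
// Keep the drawing buffer contents between frames instead of letting WebGL clear it
const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true })!;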

You can also get rid of all the framebuffer stuff; it is not actually doing anything in your program. You are rendering to the framebuffer, then you are rendering to the canvas, but you are not rendering from the framebuffer to the canvas. This is mostly explained in this answer, but it might not be clear that with the framebufferTexture2D call you are saying that the framebuffer will render into the given texture, which you then need to use as a source later. In your code there is no separation between the texture you are rendering to the framebuffer and the texture you want to "store" all the tiles in.

So why does the fix above work? Ignoring all the framebuffer stuff (which, again, does nothing in your program), you are rendering to the canvas across multiple frames. WebGL by default clears the canvas between frames unless the preserveDrawingBuffer flag is set, explaining why the previous tiles you rendered disappear every time you draw another one. See this answer.


Edit: how to actually make use of the framebuffer?

To make this work with the framebuffer as intended, a couple of changes need to be made.

  1. You created targetTexture, i.e. the texture into which you want to render the composed tiles. What is its size and format? This is never specified, and GL does not infer it; these parameters can in general differ from those of the main canvas you are drawing to.

    const targetTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, targetTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
        512, 512, 0,
        gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    

    You can refer to the documentation of texImage2D for the parameters. They are similar to your existing texImage2D call, but the width and height must be provided, because we are passing null for the texture "source"—there is no texture yet, we just want GL to create an empty one for us.

  2. In the render function, you want to perform two draw calls, corresponding to the two drawArrays calls you already have. These draw calls should have the uniforms and framebuffers set differently from one another:

    1. The framebuffer for the first call must be bound to fb to draw to it, then for the second call to null to draw to the canvas. (Already done in your code.)
    2. The texture for the first call must be texture (the new tile to be rendered), then for the second call targetTexture to make use of the framebuffer.
    // before the second drawArrays call
    gl.bindTexture(gl.TEXTURE_2D, targetTexture);
    
    3. To make things scale correctly, the textureSizeLocation uniform should be set to the framebuffer resolution for the second draw call:
    // before the second drawArrays call
    gl.uniform2f(textureSizeLocation, 512, 512);
    
    4. The position buffer must specify a tile/quarter of the framebuffer in the first call, but the full canvas in the second call.
    // before the second drawArrays call
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    setRectangle(
      gl,
      0,
      0,
      gl.canvas.width,
      gl.canvas.height
    );
    
    5. (This inverts the image vertically; somewhere a sign needs to be flipped, for example in the previous point.)

This works, but let me also note that many of the operations done in render do not need to be done every time: for example, the program only needs to be attached once, the uniform and buffer locations only need to be looked up once, and so on. Data you put into attribute buffers also stays there across calls. (Unless there are other, unrelated GL calls taking place in between the render calls, in which case at least re-binding the program might be a good idea.)
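
Putting the numbered points together, the per-tile work ends up as two draw calls per loaded image, roughly like the sketch below. This is only a sketch under stated assumptions: the one-time setup (program, buffers, attribute/uniform lookups and pointers, fb, and the 512x512 targetTexture from point 1) is assumed to have happened elsewhere; the canvas is assumed to also be 512x512 so the viewport and u_resolution uniform can stay as they already are; and the vertical flip from point 5 is only noted in a comment, not implemented.

// Sketch only: assumes gl, program, fb, targetTexture, positionBuffer,
// texcoordBuffer and the attribute/uniform locations were created, looked up,
// enabled and pointed at their buffers once during setup.
function drawTile(tileTexture: WebGLTexture, tile: Tile, tileImage: HTMLImageElement) {
	gl.useProgram(program);

	// First call: draw the newly loaded tile into the framebuffer,
	// i.e. into targetTexture, at its tile position.
	gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
	gl.bindTexture(gl.TEXTURE_2D, tileTexture);
	gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
	setRectangle(gl, tile.position.x, tile.position.y, tileImage.width, tileImage.height);
	gl.uniform2f(textureSizeLocation, 256, 256); // size of one tile image
	gl.drawArrays(gl.TRIANGLES, 0, 6);

	// Second call: draw the accumulated targetTexture to the canvas,
	// covering the full canvas.
	gl.bindFramebuffer(gl.FRAMEBUFFER, null);
	gl.bindTexture(gl.TEXTURE_2D, targetTexture);
	gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
	setRectangle(gl, 0, 0, gl.canvas.width, gl.canvas.height);
	gl.uniform2f(textureSizeLocation, 512, 512); // framebuffer resolution
	gl.drawArrays(gl.TRIANGLES, 0, 6);

	// Per point 5, the result still ends up vertically flipped; a sign needs to be
	// flipped somewhere, e.g. in the y coordinates passed to setRectangle here.
}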
