Display image from requests via framebuf

Question

I've built an HTTP API returning an image (`HTTP GET http://myRaspi/api/latestImage` → returns image with content-type `image/jpeg`).

On my Raspberry Pi Pico, I'm loading the image with the [`urequests` library](https://docs.python-requests.org/en/latest/user/quickstart/#binary-response-content):

```python
import network
import urequests

wifi = network.WLAN(network.STA_IF)
wifi.active(True)
wifi.connect('<<SSID>>', '<<Password>>')

response = urequests.get("http://myRaspi/api/latestImage")
imageBytes = response.content
```

I've attached a Waveshare ePaper display to the Raspi Pico which can be controlled via this API based on framebuf.

Drawing primitives works like a charm. To display the retrieved image, though, I somehow have to translate between the image's byte array and `framebuf`, and I'm struggling to find the easiest way. I came up with the following code:

```python
from PIL import Image
from io import BytesIO
import network
import urequests

wifi = network.WLAN(network.STA_IF)
wifi.active(True)
wifi.connect('<<SSID>>', '<<Password>>')

response = urequests.get("http://myRaspi/api/latestImage")
imageBytes = response.content

i = Image.open(BytesIO(imageBytes))

for x in range(i.width):
    for y in range(i.height):
        pixelValue = i.getpixel((x, y))
        myFrameBuffer.pixel(x, y, pixelValue)
```

...where `myFrameBuffer` would be Waveshare's `self.imageblack` FrameBuffer.

But so far, this does nothing but fail, because Pillow is not supported on MicroPython 😅


Answer 1

Score: 1

For this to work, you would need to find (or probably write) a JPEG decoder that runs on MicroPython. That's probably a non-starter (unless you're feeling especially motivated).

A much simpler solution is to have the HTTP API running on your Raspberry Pi return an image that doesn't require further processing. That is, run the PIL code (or whatever) on the Pi, and have the `/latestImage` endpoint return a block of bytes (and a width and height) that you can write directly to the framebuffer.

Maybe your API would return something like:

```
+--------------------------------------------------------------------------+
| width (2 bytes) | height (2 bytes) | image content (width * height bits) |
+--------------------------------------------------------------------------+
```
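On the Pi side, producing that payload could look roughly like this. This is only a sketch: `pack_image` is a hypothetical helper (not part of any library), and it assumes the image has already been reduced to rows of 0/1 pixel values, e.g. via Pillow's `img.convert("1")`.

```python
import struct

def pack_image(pixels):
    """Pack rows of 0/1 pixel values into the payload sketched above:
    a 2-byte width, a 2-byte height, then one bit per pixel, row by
    row, most significant bit first, last byte zero-padded."""
    height = len(pixels)
    width = len(pixels[0])
    payload = bytearray(struct.pack('hh', width, height))
    byte, nbits = 0, 0
    for row in pixels:
        for pixel in row:
            byte = (byte << 1) | (1 if pixel else 0)
            nbits += 1
            if nbits == 8:
                payload.append(byte)
                byte, nbits = 0, 0
    if nbits:  # pad the final partial byte with zeros
        payload.append(byte << (8 - nbits))
    return bytes(payload)
```

The endpoint would then return these bytes as e.g. `application/octet-stream`. Note that `'h'` uses the machine's native endianness; for a protocol between two different devices you may want an explicit `'<h'` on both ends.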

You could read the width and height like this:

```python
import struct
import urequests

...

res = urequests.get("http://myRaspi/api/latestImage")

# struct.unpack always returns a tuple, hence the [0]
width = struct.unpack('h', res.raw.read(2))[0]
height = struct.unpack('h', res.raw.read(2))[0]
```

The remainder is just reading bytes from the response and writing the corresponding pixel values to your frame buffer.
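That remaining step could be sketched as follows, assuming the payload format above with bits packed most significant bit first. `draw_payload` is a hypothetical helper; on the Pico, `set_pixel` would be `myFrameBuffer.pixel`.

```python
import struct

def draw_payload(payload, set_pixel):
    """Read the 2-byte width and height from the payload header, then
    walk the packed bits and call set_pixel(x, y, value) per pixel."""
    width = struct.unpack('h', payload[0:2])[0]
    height = struct.unpack('h', payload[2:4])[0]
    bits = payload[4:]
    idx = 0
    for y in range(height):
        for x in range(width):
            bit = (bits[idx // 8] >> (7 - idx % 8)) & 1
            set_pixel(x, y, bit)
            idx += 1
```

This version decodes the whole payload at once, so on the Pico you would read the full response body first rather than streaming from `res.raw`.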

huangapple
  • Published 2023-06-25 22:50:58
  • Please retain this link when reposting: https://go.coder-hub.com/76550999.html