Track colour change of individual LEDs on RGB LED strip using OpenCV
Question
I am trying to make a program in OpenCV that will allow me to track the colour of individual LEDs on an RGB LED strip and, specifically, to log exactly when they change colour. I have managed to segment the individual LEDs, but I am stuck at this point.
When I extract the RGB value from the centre of each LED, it is always white (255, 255, 255), but extracting from one of the corners of the bounding rectangle causes the pixel values to vary too much. I have attached a picture of my camera output as reference.
LED strip after masking and applying segmentation
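To make the two approaches concrete, here is a minimal sketch of the sampling I am describing (the helper name and the exact corner position are only for illustration; my full code is at the end of the question):

def sampleLedColour(frame, rect):
    # rect is (x, y, w, h) as returned by cv2.boundingRect for one LED contour
    x, y, w, h = rect
    centre_px = frame[y + h // 2, x + w // 2]  # always reads roughly (255, 255, 255)
    corner_px = frame[y, x]                    # varies far too much from frame to frame
    return centre_px, corner_px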
Is there any way by which I can reliably extract the LED colours and more importantly, determine if they have changed colour?
Edit: Here is the raw video feed from my camera that can be used to reproduce the issue:
Raw Feed
Raw Feed 2
Edit 2: Screenshots of various possible frames that need to be processed:
Image 1
Image 2
Image 3
Image 4
Image 5
For reference, this is the source code:
import cv2
import numpy as np
import csv
import datetime
import os

# Default HSV threshold range (only very bright, low-saturation pixels pass)
lower = np.array([0, 0, 230])
upper = np.array([179, 120, 255])

# One slot per LED on the strip (11 LEDs across the 640 px wide frame)
pixels = [[], [], [], [], [], [], [], [], [], [], []]


def sliderCallback(x):
    # Update the HSV bounds whenever any trackbar is moved
    global lower
    global upper
    lower[0] = cv2.getTrackbarPos('Hue Min', 'Trackbars')
    upper[0] = cv2.getTrackbarPos('Hue Max', 'Trackbars')
    lower[1] = cv2.getTrackbarPos('Saturation Min', 'Trackbars')
    upper[1] = cv2.getTrackbarPos('Saturation Max', 'Trackbars')
    lower[2] = cv2.getTrackbarPos('Value Min', 'Trackbars')
    upper[2] = cv2.getTrackbarPos('Value Max', 'Trackbars')


def getPos(event, x, y, flags, param):
    # Print the coordinates of any left click on the image window
    if event == cv2.EVENT_LBUTTONDOWN:
        print(event, x, y)


def getPixelNumber(x):
    # Map an x coordinate to an LED index (1-11) by splitting the frame into 11 bins
    n = 1
    temp = 640 / 11
    while True:
        if (temp * n) > x:
            return n
        else:
            n += 1


def storeToCSV(pixel_list):
    # Append one timestamped row of BGR values per frame to pixels.csv
    try:
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        filename = "pixels.csv"
        write_header = not os.path.exists(filename)
        with open(filename, mode='a', newline='') as file:
            writer = csv.writer(file)
            if write_header:
                header = ['Timestamp']
                for i, pixel in enumerate(pixel_list, 1):
                    header.extend([f'Pixel_{i}_B', f'Pixel_{i}_G', f'Pixel_{i}_R'])
                writer.writerow(header)
            row = [timestamp]
            for pixel in pixel_list:
                b, g, r = pixel
                row.extend([b, g, r])
            writer.writerow(row)
    except:
        pass


cv2.namedWindow('Trackbars')
cv2.createTrackbar('Hue Min', 'Trackbars', 0, 179, sliderCallback)
cv2.createTrackbar('Hue Max', 'Trackbars', 0, 179, sliderCallback)
cv2.createTrackbar('Saturation Min', 'Trackbars', 0, 255, sliderCallback)
cv2.createTrackbar('Saturation Max', 'Trackbars', 0, 255, sliderCallback)
cv2.createTrackbar('Value Min', 'Trackbars', 0, 255, sliderCallback)
cv2.createTrackbar('Value Max', 'Trackbars', 0, 255, sliderCallback)

cv2.namedWindow('image')
cv2.setMouseCallback('image', getPos)

cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if ret:
        frame = frame[180:260, :]  # crop down to the strip
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)
        # mask = cv2.bitwise_not(mask)
        adjusted = cv2.bitwise_and(frame, frame, mask=mask)
        (h, s, v) = cv2.split(adjusted)
        contours, hierarchy = cv2.findContours(v, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        contourCount = 0
        for c in contours:
            area = cv2.contourArea(c)
            if area > 100:
                contourCount += 1
                x, y, h, w = cv2.boundingRect(c)
                # sample a single pixel just outside the top-left corner of the bounding rect
                pixel = frame[y - int(h / 10), x - int(w / 10)]
                pixels[getPixelNumber(x) - 1] = pixel
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
                cv2.drawContours(frame, c, contourIdx=-1, color=(255, 0, 0), thickness=3)
        cv2.imshow('image', frame)
        cv2.imshow('mask', mask)
        cv2.imshow('adjusted', adjusted)
        storeToCSV(pixels)
    key = cv2.waitKey(10) & 0xff
    if key == 27:  # ESC
        break

cv2.destroyAllWindows()
cap.release()
Answer 1
Score: 1
As Mark Setchell explained in the comments, the images in my camera feed were overexposed, which was causing the issues. The solution was to find a way to reduce the exposure.
To do so, I replaced my original webcam (which had no controls other than focus) with my smartphone camera and used an application called Camo Studio to adjust the images before they were loaded into my program.
The resulting image looks like this: Image processed in Camo Studio
I reduced the shutter speed and ISO to the minimum allowed values, made sure that the white balance was kept constant, and set the saturation to 0.5.
Due to the nature of the LEDs, there are still some excessively bright spots that change location depending on the colour, so my solution is to get the average colour inside each contour.
for c in contours:
    area = cv2.contourArea(c)
    if area > 50:
        x, y, h, w = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # build a filled mask for this contour and average the colour inside it
        contourMask = np.zeros(frame.shape[:2], np.uint8)
        cv2.fillPoly(contourMask, pts=[c], color=(255, 255, 255))
        pixels[getPixelNumber(x) - 1] = cv2.mean(frame, mask=contourMask)[:3]
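As a rough follow-up sketch (this part is not in the code above; the previous buffer and the per-channel threshold of 40 are assumptions for illustration), the per-contour means could then be compared across frames to log exactly when an LED changes colour, which was the original goal:

import datetime

previous = [None] * 11       # last logged mean BGR colour for each of the 11 LED slots
CHANGE_THRESHOLD = 40        # assumed per-channel difference that counts as a colour change

def logColourChanges(pixel_list):
    # Compare the current per-LED mean colours against the last logged ones
    for i, colour in enumerate(pixel_list):
        if len(colour) == 0:  # slot not filled by any contour yet
            continue
        if previous[i] is None or any(abs(c - p) > CHANGE_THRESHOLD
                                      for c, p in zip(colour, previous[i])):
            timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            print(f"{timestamp}: LED {i + 1} changed to BGR "
                  f"{tuple(round(c) for c in colour)}")
            previous[i] = colour

It could be called right after storeToCSV(pixels) in the main loop so that a line is printed only when a colour actually changes.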
Comments