Running a Python script in Android Studio using Chaquopy: can't open the camera


Question


My Python script uses the camera to detect fingers and runs successfully in PyCharm, but when I try to run it in Android Studio using Chaquopy it fails with the error "camera is not defined".
I am new to Chaquopy and can't find similar problems or answers from other people.

Java Code:

public class MainActivity extends PythonConsoleActivity {

    @Override protected Class<? extends Task> getTaskClass() {
        return Task.class;
    }


    public static class Task extends PythonConsoleActivity.Task {
        public Task(Application app) {
            super(app);
        }

        @Override public void run() {
            py.getModule("enders_keyboard_vision").callAttr("test");

        }
    }
}

Python Code:

import cv2 as cv
import numpy as np
import imutils
import time
import math
import enders_keyboard
"""import enders_keyboard_output"""

bg = None


def test():
    print("Yarraaabb")


def run_avg(image, aWeight):
    global bg
    if bg is None:
        bg = image.copy().astype("float")
        return
    cv.accumulateWeighted(image, bg, aWeight)


def segment(image, threshold = 10):
    kernel = np.ones((5,5),np.uint8)
    global bg
    diff = cv.absdiff(bg.astype("uint8"), image)
    thresholded = cv.threshold(diff, threshold, 255, cv.THRESH_BINARY)[1]
    cv.GaussianBlur(thresholded, (11, 11), 0)
    thresholded = cv.dilate(thresholded, kernel, 10)
    thresh = cv.threshold(thresholded, 100, 255, cv.THRESH_BINARY)[1]
    thresh = cv.erode(thresh, kernel, 20)
    conts,hierarchy = cv.findContours(thresholded.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    if len(conts) == 0:
        return
    else:
        segmented = max(conts, key=cv.contourArea)
        return (thresh, segmented)


def rough_hull(hull_ids, cont, max_dist):
    if len(hull_ids) > 0 and len(cont) > 0:
        res = []
        current_pos = cont[hull_ids[0]][0][0]
        points = []
        for point in hull_ids:
            dist = np.linalg.norm(cont[point][0][0] - current_pos)
            if dist > max_dist:
                res.append(point)
                current_pos = cont[point][0][0]
        return res
    else:
        return []


def get_mouse_pos(event, x, y, flags, param):
    if event == cv.EVENT_LBUTTONDOWN:
        print(x, y)


def vector_proj(v1, v2):
    return np.multiply((np.dot(v1, v2) / np.dot(v1, v1)), v1)


def simulate_key(key):
    """enders_keyboard_output.type_key(key)"""
    print("ESHTAA")


if __name__ == "__main__":
    aWeight = 0.5
    key_dict = {
        0 : "a",
        1 : "b",
        2 : "c",
        3 : "d",
        4 : "e",
        5 : "f",
        6 : "g",
        7 : "h",
        8 : "i",
        9 : "j",
        10 : "k",
        11 : "l",
        12 : "m",
        13 : "n",
        14 : "o",
        15 : "p",
        16 : "q",
        17 : "r",
        18 : "s",
        19 : "t",
        20 : "u",
        21 : "v",
        22 : "w",
        23 : "x",
        24 : "y",
        25 : "z"
    }
    camera = cv.VideoCapture(0)
    time.sleep(1)
    top, right, bottom, left = 10, 350, 350, 750
    num_fingers = 0
    num_frames = 0
    start_points = [
        (385, 235), #thumb
        (425, 125), #index
        (500, 105), #middle
        (560, 130), #ring
        (615, 210) #pinky
    ]
    start_center = (0, 0)
    current_points = start_points.copy()
    act = False
    last_found = [True, True, True, True, True]
    while(True):
        (grabbed, frame) = camera.read()
        frame = imutils.resize(frame, width = 700)
        frame = cv.flip(frame, 1)
        (height, width) = frame.shape[:2]
        roi = frame[top:bottom, right:left]
        gray = cv.cvtColor(roi, cv.COLOR_BGR2GRAY)
        gray = cv.GaussianBlur(gray, (5, 5), 0)
        inner = [False, False, False, False, False]
        outer = [False, False, False, False, False]
        if num_frames < 10:
            run_avg(gray, aWeight)
            cv.circle(frame, (int(height / 2), int(width / 2)), 30, (0, 0, 255))
        else:
            cv.imshow("background", bg/255)
            hand = segment(gray)
            if hand is not None:
                (thresholded, segmented) = hand
                #if cv.countNonZero(thresholded) > ((top - bottom) * (left - right) * 0.95):
                #    time.sleep(0.5)
                #    bg = None
                #    num_frames = 0
                #cv.drawContours(frame, [segmented + (right, top)], -1, (0, 0, 255))
                convex_hull = cv.convexHull(segmented + (right, top), returnPoints = False)
                hull = rough_hull(convex_hull, segmented, 40)
                #remove bottom two points
                #del hull[hull[0][:, :, 1].argmin()[0]]
                #del hull[hull[0][:, :, 1].argmin()[1]]
                if len(segmented) > 0:
                    hull_sorted = sorted(hull, key = lambda a : segmented[a[0]][0][1])
                    hull_sorted = hull_sorted[:min(len(hull_sorted), 5)]
                    activated = []
                    if act is False:
                        for point in range(5):
                            activated.append(False)
                            for pt in hull_sorted:
                                if math.hypot(start_points[point][0] - segmented[pt][0][0][0] - right, start_points[point][1] - segmented[pt][0][0][1]) < 25:
                                    activated[point] = True
                            cv.circle(frame, (start_points[point][0], start_points[point][1]), 30, (255, 0, 0), thickness = -1 if activated[point] else 1)
                        num_fingers = 0
                        for active in activated:
                            if active is True:
                                num_fingers += 1
                        if num_fingers >= 5:
                            print("act")
                            act = True
                            m = cv.moments(segmented)
                            start_center = (int(m["m10"] / m["m00"]) + right, int(m["m01"] / m["m00"]) + top)
                    else:
                        m = cv.moments(segmented)
                        current_center = (int(m["m10"] / m["m00"]) + right, int(m["m01"] / m["m00"]) + top)
                        center_diff = np.subtract(current_center, start_center)
                        for point in range(len(start_points)):
                            start_points[point] = np.add(start_points[point], center_diff)
                        start_center = current_center
                        thumb_inner = False
                        thumb_outer = False
                        found = [False, False, False, False, False]
                        for point in range(5):
                            vect = [start_points[point][0] - current_center[0], start_points[point][1] - current_center[1]]
                            mag = math.sqrt(vect[0]**2 + vect[1]**2)
                            inner[point] = False
                            outer[point] = False
                            for pt in hull_sorted:
                                if math.hypot(current_points[point][0] - segmented[pt][0][0][0] - right, current_points[point][1] - segmented[pt][0][0][1]) < 20 and math.hypot(start_points[point][0] - segmented[pt][0][0][0] - right, start_points[point][1] - segmented[pt][0][0][1]) < 40:
                                    current_points[point] = (segmented[pt][0][0][0] + right, segmented[pt][0][0][1])
                                    diff = np.subtract((current_points[point][0], current_points[point][1]), start_points[point])
                                    adjusted_pt = np.add(start_points[point], vector_proj(vect, diff))
                                    cv.circle(frame, (int(adjusted_pt[0]), int(adjusted_pt[1])), 30, (255, 0, 0), thickness = -1)
                                    found[point] = True
                            if (not found[point]) and found[point] is not last_found[point]:
                                d = math.hypot(current_points[point][0] - current_center[0], current_points[point][1] - current_center[1])
                                current_points[point] = start_points[point]
                                if d < mag:
                                    inner[point] = True
                                    outer[point] = False
                                else:
                                    inner[point] = False
                                    outer[point] = True
                            cv.circle(frame, (current_points[point][0], current_points[point][1]), 25, (255, 0, 255))
                            last_found[point] = found[point]
                            cv.circle(frame, (start_points[point][0], start_points[point][1]), 30, (0, 255, 0), thickness = 1)
                            cv.line(frame, (start_points[point][0], start_points[point][1]), (int(start_points[point][0] + vect[0] * 15 / mag), int(start_points[point][1] + vect[1] * 15 / mag)), (0, 255, 0), thickness = 1)
                        cv.circle(frame, current_center, 25, (0, 0, 255))
                        thumb_dist = math.hypot(current_points[0][0] - current_center[0], current_points[0][1] - current_center[1])
                        thumb_vect = [start_points[0][0] - start_center[0], start_points[0][1] - start_center[1]]
                        thumb_mag = math.sqrt(vect[0]**2 + vect[1]**2)
                        if thumb_dist - thumb_mag < -15:
                            thumb_inner = True
                            thumb_outer = False
                        elif thumb_dist - thumb_mag > 15:
                            thumb_outer = True
                            thumb_inner = False
                        for finger in range(len(inner)):
                            if inner[finger] is True:
                                simulate_key(key_dict[(8 if thumb_inner else (16 if thumb_outer else 0)) + finger * 2])
                            elif outer[finger] is True:
                                simulate_key(key_dict[(8 if thumb_inner else (16 if thumb_outer else 0)) + finger * 2 + 1])
                #cv.drawContours(frame, [cv.convexHull(segmented + (right, top), segmented, 5)], -1, (0, 0, 255))
                cv.imshow("Thresholded", thresholded)
        cv.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        num_frames += 1
        cv.setMouseCallback("Video Feed", get_mouse_pos)
        cv.imshow("Video Feed", frame)
        keypress = cv.waitKey(10) & 0xFF
        if keypress == ord("q"):
            break
        if keypress == ord("r"):
            num_frames = 0
            start_points = [
                (385, 235), #thumb
                (425, 125), #index
                (500, 105), #middle
                (560, 130), #ring
                (615, 210) #pinky
            ]
            act = False
            bg = None
            time.sleep(0.1)
        if keypress == ord("s"):
            enders_keyboard.search(num_fingers)
camera.release()
cv.destroyAllWindows()

Error:

com.chaquo.python.PyException: NameError: name 'camera' is not defined
at <python>.enders_keyboard_vision.<module>(enders_keyboard_vision.py:262)
at <python>.importlib._bootstrap._call_with_frames_removed(<frozen importlib._bootstrap>:219)
at <python>.importlib._bootstrap_external.exec_module(<frozen importlib._bootstrap_external>:783)
at <python>.importlib._bootstrap._load_unlocked(<frozen importlib._bootstrap>:671)
at <python>.importlib._bootstrap._find_and_load_unlocked(<frozen importlib._bootstrap>:975)
at <python>.importlib._bootstrap._find_and_load(<frozen importlib._bootstrap>:991)
at <python>.importlib._bootstrap._gcd_import(<frozen importlib._bootstrap>:1014)
at <python>.importlib.import_module(__init__.py:127)
at <python>.chaquopy_java.Java_com_chaquo_python_Python_getModule(chaquopy_java.pyx:153)
at com.chaquo.python.Python.getModule(Native Method)
at com.chaquo.python.console.MainActivity$Task.run(MainActivity.java:21)
at com.chaquo.python.utils.ConsoleActivity$Task$1.run(ConsoleActivity.java:359)

Answer 1

Score: 1


Your module is not being loaded as __main__, so that entire block of code will not be run. Instead of putting it in a __main__ block, put it in a function and call it using callAttr.
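
For example, here is a minimal sketch of that restructuring on the Java side, keeping the Task class from the question. "main" is a hypothetical name for a function you would add to enders_keyboard_vision.py to hold the code that currently sits under if __name__ == "__main__":

    public static class Task extends PythonConsoleActivity.Task {
        public Task(Application app) {
            super(app);
        }

        @Override public void run() {
            // "main" is a hypothetical function added to enders_keyboard_vision.py,
            // wrapping the code currently under "if __name__ == '__main__':"
            // (camera setup, the while loop, and the final camera.release()).
            // Nothing that references the camera should stay at module level --
            // the traceback shows a module-level NameError at
            // enders_keyboard_vision.py:262, raised as soon as getModule()
            // imports the file.
            py.getModule("enders_keyboard_vision").callAttr("main");
        }
    }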

Separately, the Chaquopy build of OpenCV doesn't currently support accessing the camera directly. The easiest workaround is to capture the image using Java, save it to a temporary file, and load that file in Python.
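
A rough sketch of that hand-off, assuming you already have the captured frame as an android.graphics.Bitmap (the capture itself, e.g. via CameraX or an ACTION_IMAGE_CAPTURE intent, is not shown). FrameBridge and process_frame are hypothetical names; on the Python side, process_frame could load the file with cv.imread(path) and run the existing detection code on it:

    import android.content.Context;
    import android.graphics.Bitmap;
    import com.chaquo.python.Python;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public final class FrameBridge {

        // Writes the captured frame to a temporary JPEG in the app's cache
        // directory and hands the file path to the Python module.
        public static void sendFrameToPython(Context context, Bitmap frame)
                throws IOException {
            File tmp = new File(context.getCacheDir(), "frame.jpg");
            try (FileOutputStream out = new FileOutputStream(tmp)) {
                frame.compress(Bitmap.CompressFormat.JPEG, 90, out);
            }
            // Hypothetical Python entry point: def process_frame(path) could do
            //     img = cv.imread(path)
            // and then run the finger detection on "img".
            Python.getInstance()
                  .getModule("enders_keyboard_vision")
                  .callAttr("process_frame", tmp.getAbsolutePath());
        }
    }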

Answer 2

Score: 0


Add explicit permission to use the camera in your Android project configuration (the CAMERA permission in AndroidManifest.xml).
All external resources (camera, microphone, internet requests, GPS requests) accessed through Chaquopy need explicit permission; otherwise they come back as "not defined" / "not found" errors.
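
For reference, a sketch of what that typically involves, assuming the AndroidX libraries are available; CameraPermissionHelper and ensureCameraPermission are illustrative names, and the manifest entry is shown as a comment:

    import android.Manifest;
    import android.app.Activity;
    import android.content.pm.PackageManager;
    import androidx.core.app.ActivityCompat;
    import androidx.core.content.ContextCompat;

    // AndroidManifest.xml also needs:
    //     <uses-permission android:name="android.permission.CAMERA" />

    public final class CameraPermissionHelper {

        private static final int REQUEST_CAMERA = 1;

        // Requests the CAMERA permission at runtime (Android 6.0+) if it has
        // not been granted yet; call this from the activity's onCreate().
        public static void ensureCameraPermission(Activity activity) {
            if (ContextCompat.checkSelfPermission(activity, Manifest.permission.CAMERA)
                    != PackageManager.PERMISSION_GRANTED) {
                ActivityCompat.requestPermissions(
                        activity, new String[] {Manifest.permission.CAMERA}, REQUEST_CAMERA);
            }
        }
    }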
