How to use react-native-vision-camera instead of expo-camera?


Question

How can I use react-native-vision-camera instead of expo-camera in a project built with the React Native CLI? I originally developed the project in Expo, but I am now developing it with the React Native CLI. Can anyone tell me how to use the react-native-vision-camera package so that it behaves the same as my Expo code, without errors? I tried it myself, but it shows a black screen. In my case this is on Android.

Expo code:

import { Camera } from 'expo-camera';
import { TouchableOpacity, Text, View, ActivityIndicator } from 'react-native';
import { useState, useEffect, useRef } from "react";
import * as FaceDetector from 'expo-face-detector';

export default function CameraPage({ navigation }) {
  
  const [hasPermission, setHasPermission] = useState(null);
  const cameraRef = useRef(null);
  const [faceData, setFaceData] = useState([]);

  useEffect(() => {
    (async () => {
      const { status } = await Camera.requestCameraPermissionsAsync();
      setHasPermission(status === 'granted');
    })();
  }, []);

  if (hasPermission === null) {
    return <View />;
  }

  if (hasPermission === false) {
    return <Text>No access to camera</Text>;
  }

  const handleTakePicture = async () => {
    if (faceData.length === 0) {
      alert('No Face');
    } else if (cameraRef.current) {
      const photo = await cameraRef.current.takePictureAsync();
      console.log(photo.uri);
      if (!photo.cancelled) {
        navigation.navigate('addphoto', { Image: photo.uri });
      }
    }
  }

  const handleFacesDetected = ({ faces }) => {
    setFaceData(faces);
  }

  return (
    <View style={{ flex: 1, backgroundColor: 'black' }}>
      <Camera
        onFacesDetected={handleFacesDetected}
        faceDetectorSettings={{
          mode: FaceDetector.FaceDetectorMode.fast,
          detectLandmarks: FaceDetector.FaceDetectorLandmarks.none,
          runClassifications: FaceDetector.FaceDetectorClassifications.none,
          minDetectionInterval: 100,
          tracking: true,
        }}
        style={{
          borderTopLeftRadius: 30,
          borderTopRightRadius: 30,
          borderBottomLeftRadius: 30,
          borderBottomRightRadius: 30,
          overflow: 'hidden',
          width: '130%',
          aspectRatio: 1,
        }}
        type={Camera.Constants.Type.front}
        ref={cameraRef}
      >
        <View style={{ flex: 1, backgroundColor: 'transparent', flexDirection: 'row' }}>
        </View>
      </Camera>
      <TouchableOpacity
        style={{
          alignSelf: 'center',
          alignItems: 'center',
          width: 90,
          height: 90,
          borderRadius: 500,
          marginTop: '30%',
          marginLeft: '5%',
          borderColor: '#5A5A5A',
          borderWidth: 6,
        }}
        onPress={handleTakePicture}
      >
        <View style={{ opacity: 0.5 }} />
      </TouchableOpacity>
    </View>
  );
}

Answer 1

Score: 3

I'm the author of react-native-vision-camera.

To get the camera preview running, you first have to request camera permission:

const cameraPermission = await Camera.requestCameraPermission()

If that's granted, you can render the camera preview:

function App() {
  const devices = useCameraDevices()
  const device = devices.back

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
    />
  )
}
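Since the question's Expo code stores the permission result in component state via useEffect, here is a minimal sketch (not part of the original answer) wiring the vision-camera permission request into the same pattern. It assumes the v2 API used in this answer, where requestCameraPermission() resolves to 'authorized' or 'denied'; the component and route names are illustrative.

```javascript
// Sketch: request permission in useEffect and keep it in state,
// mirroring the structure of the Expo code from the question.
import React, { useState, useEffect } from 'react';
import { Text, StyleSheet } from 'react-native';
import { Camera, useCameraDevices } from 'react-native-vision-camera';

function CameraScreen() {
  const [hasPermission, setHasPermission] = useState(false);
  const devices = useCameraDevices();
  const device = devices.front; // front camera, like the Expo code

  useEffect(() => {
    (async () => {
      // v2 resolves to 'authorized' | 'denied'
      const status = await Camera.requestCameraPermission();
      setHasPermission(status === 'authorized');
    })();
  }, []);

  if (!hasPermission) return <Text>No access to camera</Text>;
  if (device == null) return <Text>Loading camera...</Text>;
  return <Camera style={StyleSheet.absoluteFill} device={device} isActive={true} />;
}
```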

Depending on what you need the camera for, you can enable these features:

<Camera
  style={StyleSheet.absoluteFill}
  device={device}
  isActive={true}
  video={true} // <-- optional
  audio={true} // <-- optional (requires audio permission)
  photo={true} // <-- optional
/>

If you want to detect faces in a frame, you can use Frame Processors.

As stated in the docs, you need to install Reanimated v2 to use them.

Then you can simply use one of the community frame processor plugins, for example the rodgomesc/vision-camera-face-detector plugin:

import { scanFaces } from 'vision-camera-face-detector';

function App() {
  const devices = useCameraDevices()
  const device = devices.back

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet'
    const faces = scanFaces(frame)
    console.log("Faces:", faces)
  }, [])

  if (device == null) return <LoadingView />
  return (
    <Camera
      style={StyleSheet.absoluteFill}
      device={device}
      isActive={true}
      frameProcessor={frameProcessor}
    />
  )
}
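The question's Expo code gates the shutter on faceData.length, which lives in React state. Because a Frame Processor runs as a worklet, the scan results have to be sent back to the JS thread before state can be updated; a sketch (not from the original answer) using Reanimated's runOnJS with the same face-detector plugin:

```javascript
// Sketch: forwarding faces from the frame-processor worklet to React state,
// so the capture handler can check faceData.length like the Expo code did.
import { useState } from 'react';
import { runOnJS } from 'react-native-reanimated';
import { useFrameProcessor } from 'react-native-vision-camera';
import { scanFaces } from 'vision-camera-face-detector';

function useFaceData() {
  const [faceData, setFaceData] = useState([]);

  const frameProcessor = useFrameProcessor((frame) => {
    'worklet';
    const faces = scanFaces(frame);
    runOnJS(setFaceData)(faces); // hop back to the JS thread to update state
  }, []);

  return { faceData, frameProcessor };
}
```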

Or you can build your own plugin if you want full control over the scan.

Browse the issues if something doesn't work for you, and make sure you understand what Worklets are, as a Frame Processor is a worklet (see these docs by Reanimated).


huangapple
  • Published 2023-02-14 00:24:47
  • When republishing, please keep this link: https://go.coder-hub.com/75438600.html