Android CameraX | Color detection


Question

I'm working with the new CameraX on Android.

I did a basic application (similar to the "Get Started") in which I have a camera preview and a luminosity analyzer. Every second I display my luminosity in a TextView.

Now, following the CameraX guidelines, I would like to do color detection. Every second or so, I want to have the color from the pixel in the center of my screen.

The fact is that I don't know how to do color detection following the same structure as the luminosity analyzer.

Luminosity Analyzer Class:

class LuminosityAnalyzer : ImageAnalysis.Analyzer {

    private var lastTimeStamp = 0L
    private val TAG = this.javaClass.simpleName
    var luma = BehaviorSubject.create<Double>()

    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        val currentTimeStamp = System.currentTimeMillis()
        val intervalInMillis = TimeUnit.SECONDS.toMillis(1)
        val deltaTime = currentTimeStamp - lastTimeStamp
        if (deltaTime >= intervalInMillis) {
            val buffer = image.planes[0].buffer
            val data = buffer.toByteArray()
            val pixels = data.map { it.toInt() and 0xFF }
            luma.onNext(pixels.average())
            lastTimeStamp = currentTimeStamp
            Log.d(TAG, "Average luminosity: ${luma.value}")
        }
    }

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()
        val data = ByteArray(remaining())
        get(data)
        return data
    }
}

Main Activity:

/* display the luminosity */
private fun createLuminosityAnalyzer(): ImageAnalysis {
    val analyzerConfig = ImageAnalysisConfig.Builder().apply {
        setLensFacing(lensFacing)
        setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    }.build()

    val analyzer = ImageAnalysis(analyzerConfig).apply {
        val luminosityAnalyzer = LuminosityAnalyzer()
        luminosityAnalyzer.luma
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe({
                // success
                luminosity.text = it.toString()
            }, {
                // error
                Log.d(TAG, "Can not get luminosity :(")
            })
        setAnalyzer(executor, luminosityAnalyzer)
    }
    return analyzer
}
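
For context, the ImageAnalysis use case returned above still has to be bound to the camera lifecycle together with the preview. Below is a minimal sketch of what that looks like with the alpha-era CameraX API used here; the preview helper name is a placeholder, and the exact binding call may differ in your CameraX version:

private fun startCamera() {
    // Placeholder helper that builds the Preview use case (not shown in this question)
    val preview = createPreviewUseCase()
    val analyzer = createLuminosityAnalyzer()
    // Alpha-era CameraX: bind both use cases to this activity's lifecycle
    CameraX.bindToLifecycle(this, preview, analyzer)
}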

How can I do something equivalent, but as a color analyzer?

Answer 1

Score: 5

So I figured out how to do it by myself.

**Color Analyzer Class:**

class ColorAnalyzer : ImageAnalysis.Analyzer {

    private var lastTimeStamp = 0L
    private val TAG = this.javaClass.simpleName
    var hexColor = BehaviorSubject.create<String>()

    /* every 100ms, analyze the image we receive from the camera */
    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        val currentTimeStamp = System.currentTimeMillis()
        val intervalInMilliSeconds = TimeUnit.MILLISECONDS.toMillis(100)
        val deltaTime = currentTimeStamp - lastTimeStamp
        if (deltaTime >= intervalInMilliSeconds) {

            val imageBitmap = image.image?.toBitmap()
            val pixel = imageBitmap!!.getPixel((imageBitmap.width / 2), (imageBitmap.height / 2))
            val red = Color.red(pixel)
            val blue = Color.blue(pixel)
            val green = Color.green(pixel)
            hexColor.onNext(String.format("#%02x%02x%02x", red, green, blue))
            Log.d(TAG, "Color: ${hexColor.value}")

            lastTimeStamp = currentTimeStamp
        }
    }

    // convert the image into a bitmap
    private fun Image.toBitmap(): Bitmap {
        val yBuffer = planes[0].buffer // Y
        val uBuffer = planes[1].buffer // U
        val vBuffer = planes[2].buffer // V

        val ySize = yBuffer.remaining()
        val uSize = uBuffer.remaining()
        val vSize = vBuffer.remaining()

        val nv21 = ByteArray(ySize + uSize + vSize)

        yBuffer.get(nv21, 0, ySize)
        vBuffer.get(nv21, ySize, vSize)
        uBuffer.get(nv21, ySize + vSize, uSize)

        val yuvImage = YuvImage(nv21, ImageFormat.NV21, this.width, this.height, null)
        val out = ByteArrayOutputStream()
        yuvImage.compressToJpeg(Rect(0, 0, yuvImage.width, yuvImage.height), 50, out)
        val imageBytes = out.toByteArray()
        return BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.size)
    }
}

**Main Activity:**

/* Get the color from Color Analyzer Class */
private fun createColorAnalyzer(): ImageAnalysis {
    val analyzerConfig = ImageAnalysisConfig.Builder().apply {
        setLensFacing(lensFacing)
        setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    }.build()

    val analyzer = ImageAnalysis(analyzerConfig).apply {
        val colorAnalyzer = ColorAnalyzer()
        colorAnalyzer.hexColor
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe({
                // success
                colorName.text = it.toString() // hex code in the TextView
                colorName.setBackgroundColor(Color.parseColor(it.toString())) // background color of the textView
                (sight.drawable as GradientDrawable).setStroke(10, Color.parseColor(it.toString())) // border color of the sight in the middle of the screen
            }, {
                // error
                Log.d(TAG, "Can not get color :(")
            })
        setAnalyzer(executor, colorAnalyzer)
    }
    return analyzer
}
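
A practical note (my addition, not part of the original answer): toBitmap() compresses the whole frame to JPEG for every analyzed image, so it helps to keep the analysis frames small. Something like the following should work with the same alpha-era builder; treat setTargetResolution() as an assumption to verify against your CameraX version:

// Sketch: request a small analysis resolution so the per-frame
// YUV -> JPEG -> Bitmap conversion stays cheap.
val analyzerConfig = ImageAnalysisConfig.Builder().apply {
    setLensFacing(lensFacing)
    setTargetResolution(Size(640, 480)) // android.util.Size; assumed available on this builder
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()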

Hope it will be useful for someone ;)

**EDIT:**

As @Minhaz points out in his answer, getting the color by going image -> bitmap -> getPixel() is not very efficient. It is much more efficient to go directly from the YUV image to RGB.

So here is Minhaz's answer ported to Kotlin.

**Color Analyzer Class:**

class ColorAnalyzer : ImageAnalysis.Analyzer {

    private var lastAnalyzedTimestamp = 0L

    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()    // Rewind the buffer to zero
        val data = ByteArray(remaining())
        get(data)   // Copy the buffer into a byte array
        return data // Return the byte array
    }

    private fun getRGBfromYUV(image: ImageProxy): Triple<Double, Double, Double> {
        val planes = image.planes

        val height = image.height
        val width = image.width

        // Y
        val yArr = planes[0].buffer
        val yArrByteArray = yArr.toByteArray()
        val yPixelStride = planes[0].pixelStride
        val yRowStride = planes[0].rowStride

        // U
        val uArr = planes[1].buffer
        val uArrByteArray = uArr.toByteArray()
        val uPixelStride = planes[1].pixelStride
        val uRowStride = planes[1].rowStride

        // V
        val vArr = planes[2].buffer
        val vArrByteArray = vArr.toByteArray()
        val vPixelStride = planes[2].pixelStride
        val vRowStride = planes[2].rowStride

        val y = yArrByteArray[(height * yRowStride + width * yPixelStride) / 2].toInt() and 255
        val u = (uArrByteArray[(height * uRowStride + width * uPixelStride) / 4].toInt() and 255) - 128
        val v = (vArrByteArray[(height * vRowStride + width * vPixelStride) / 4].toInt() and 255) - 128

        val r = y + (1.370705 * v)
        val g = y - (0.698001 * v) - (0.337633 * u)
        val b = y + (1.732446 * u)

        return Triple(r, g, b)
    }

    // analyze the color
    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        val currentTimestamp = System.currentTimeMillis()
        if (currentTimestamp - lastAnalyzedTimestamp >= TimeUnit.MILLISECONDS.toMillis(100)) {

            val colors = getRGBfromYUV(image)
            val hexColor = String.format("#%02x%02x%02x", colors.first.toInt(), colors.second.toInt(), colors.third.toInt())
            Log.d("test", "hexColor: $hexColor")

            lastAnalyzedTimestamp = currentTimestamp
        }
    }
}
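
One caveat worth adding (my note, not in the original code): the YUV to RGB formulas above can produce values slightly below 0 or above 255, and String.format("%02x", ...) gives malformed hex strings for such values. A small helper along these lines guards against that; the function name is mine:

// Hypothetical helper: clamp each converted channel to 0..255
// before formatting it as a two-digit hex value.
private fun rgbToHex(r: Double, g: Double, b: Double): String {
    val ri = r.toInt().coerceIn(0, 255)
    val gi = g.toInt().coerceIn(0, 255)
    val bi = b.toInt().coerceIn(0, 255)
    return String.format("#%02x%02x%02x", ri, gi, bi)
}

With that in place, analyze() can call rgbToHex(colors.first, colors.second, colors.third) instead of formatting the raw values directly.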

Answer 2

Score: 5

As mentioned in the comments, if your goal is only to get the center pixel's color, converting the whole YUV image to a Bitmap and then reading the center value may be very inefficient. You can look up the color directly in the YUV image by targeting the right pixel. A YUV image has three planes: one for Y (1 byte per pixel), plus the U and V planes (0.5 byte per pixel, interleaved). Rotation can be ignored here, since the center pixel is the same regardless of rotation (setting aside odd heights or widths). Efficient logic for getting the center pixel's RGB values would look like this:

val planes = imageProxy.planes

val height = imageProxy.height
val width = imageProxy.width

// You may have to find the logic to get an array from the ByteBuffer
// Y
val yArr = planes[0].buffer.array()
val yPixelStride = planes[0].pixelStride
val yRowStride = planes[0].rowStride

// U
val uArr = planes[1].buffer.array()
val uPixelStride = planes[1].pixelStride
val uRowStride = planes[1].rowStride

// V
val vArr = planes[2].buffer.array()
val vPixelStride = planes[2].pixelStride
val vRowStride = planes[2].rowStride

val y = yArr[(height * yRowStride + width * yPixelStride) / 2].toInt() and 255
val u = (uArr[(height * uRowStride + width * uPixelStride) / 4].toInt() and 255) - 128
val v = (vArr[(height * vRowStride + width * vPixelStride) / 4].toInt() and 255) - 128

val r = y + (1.370705 * v)
val g = y - (0.698001 * v) - (0.337633 * u)
val b = y + (1.732446 * u)

Reference for the magic values: https://en.wikipedia.org/wiki/YUV#Y%E2%80%B2UV420sp_(NV21)_to_RGB_conversion_(Android)

Try using this logic in your Kotlin code to see whether it works and is fast enough for realtime operation. This should reduce an O(height * width) operation to constant time.
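
One refinement worth noting (my adaptation, not part of either answer): the Kotlin port above still copies each plane into a ByteArray before indexing, which keeps the per-frame cost proportional to the image size. Reading only the three bytes needed, via ByteBuffer's absolute get(), makes the lookup genuinely constant-time; this sketch assumes the usual YUV_420_888 frames delivered to the analyzer:

// Sketch: read just the center-pixel bytes instead of copying whole planes.
val yPlane = image.planes[0]
val uPlane = image.planes[1]
val vPlane = image.planes[2]

val cx = image.width / 2
val cy = image.height / 2

// U and V are subsampled by 2 in both dimensions, hence cx / 2 and cy / 2.
val y = yPlane.buffer.get(cy * yPlane.rowStride + cx * yPlane.pixelStride).toInt() and 255
val u = (uPlane.buffer.get((cy / 2) * uPlane.rowStride + (cx / 2) * uPlane.pixelStride).toInt() and 255) - 128
val v = (vPlane.buffer.get((cy / 2) * vPlane.rowStride + (cx / 2) * vPlane.pixelStride).toInt() and 255) - 128

The same r/g/b conversion (and the clamping caveat from the edit above) then applies to these values.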

