Hello everyone. I'm building a hybrid streaming app that needs to broadcast via RTMP (using the device camera) and simultaneously maintain a WebRTC P2P connection. Since Android doesn't allow two clients to open the camera at the same time, I'm trying to create a custom VideoCapturer that captures the RTMP SurfaceView with PixelCopy and feeds those frames into WebRTC.
I'm using: implementation("io.getstream:stream-webrtc-android:1.3.1")
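For context, the capturer shell I'm working toward looks roughly like this (heavily simplified; the class and member names are placeholders I made up, and the actual frame production is the PixelCopy code further down):
Kotlin
import android.content.Context
import android.view.SurfaceView
import org.webrtc.CapturerObserver
import org.webrtc.SurfaceTextureHelper
import org.webrtc.VideoCapturer

// Placeholder skeleton of the capturer I'm trying to build around the SurfaceView.
class SurfaceViewCapturer(private val surfaceView: SurfaceView) : VideoCapturer {

    private var observer: CapturerObserver? = null

    override fun initialize(
        textureHelper: SurfaceTextureHelper,
        applicationContext: Context,
        capturerObserver: CapturerObserver
    ) {
        observer = capturerObserver
    }

    override fun startCapture(width: Int, height: Int, framerate: Int) {
        observer?.onCapturerStarted(true)
        // start the PixelCopy loop (shown below) and push frames via observer?.onFrameCaptured(...)
    }

    override fun stopCapture() {
        // stop the PixelCopy loop
        observer?.onCapturerStopped()
    }

    override fun changeCaptureFormat(width: Int, height: Int, framerate: Int) {
        // not needed for my use case yet
    }

    override fun dispose() {}

    override fun isScreencast(): Boolean = false
}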
The Issue:
The P2P connection establishes perfectly (IceConnectionState reaches CONNECTED). However, the moment the WebRTC engine starts processing my injected frames to send them over the network, the app dies with a silent native crash (SIGSEGV). I suspect either a memory violation in how I'm allocating the byte array, or a thread-context issue with SurfaceTextureHelper.
My current approach (Simplified):
Kotlin
// I have an active SurfaceView rendering the local camera.
// I run this at 15 FPS using a Handler.
PixelCopy.request(surfaceView, bitmapOriginal, { result ->
    if (result == PixelCopy.SUCCESS) {
        // 1. Scale down to avoid massive memory usage
        val scaledBitmap = Bitmap.createScaledBitmap(bitmapOriginal, 320, 240, true)
        // 2. Convert Bitmap to an NV21 ByteArray (custom math method)
        val nv21Bytes = convertBitmapToNV21(scaledBitmap)
        // 3. Inject into WebRTC
        val timestampNs = System.nanoTime()
        val buffer = org.webrtc.NV21Buffer(nv21Bytes, 320, 240, null)
        val videoFrame = org.webrtc.VideoFrame(buffer, 0, timestampNs)
        // videoSource is my org.webrtc.VideoSource
        videoSource?.capturerObserver?.onFrameCaptured(videoFrame)
        videoFrame.release()
    }
}, myHandler)
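For completeness, the 15 FPS loop that triggers the snippet above is roughly this (simplified; captureFrame() is just a placeholder wrapping the PixelCopy.request call shown above, and myHandler is the same Handler I pass to PixelCopy):
Kotlin
import android.os.Handler
import android.os.Looper

// PixelCopy.request needs a Handler backed by a Looper; I currently use the main thread.
private val myHandler = Handler(Looper.getMainLooper())
private val frameIntervalMs = 1000L / 15  // ~66 ms between captures

private val captureRunnable = object : Runnable {
    override fun run() {
        captureFrame()  // wraps the PixelCopy.request(...) call above
        myHandler.postDelayed(this, frameIntervalMs)
    }
}

fun startCapturing() = myHandler.post(captureRunnable)
fun stopCapturing() = myHandler.removeCallbacks(captureRunnable)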
My Questions:
What is the thread-safe and memory-safe (JNI-compliant) way to convert an Android Bitmap into a WebRTC VideoFrame using the GetStream library?
Should I be using JavaI420Buffer.allocate() or a direct ByteBuffer instead of a standard Kotlin ByteArray for the NV21Buffer to prevent the C++ segfault? (I've sketched what I think the I420 route would look like below these questions.)
Do I strictly need to wrap this PixelCopy loop inside a SurfaceTextureHelper thread, or is a standard Android HandlerThread sufficient if I'm just pushing byte arrays?
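Regarding question 2, this is the direction I'm considering instead of my custom NV21 math. Treat it as a sketch rather than working code: it assumes YuvHelper.ABGRToI420 is available in this build (I haven't verified) and that an ARGB_8888 Bitmap's pixel buffer matches the byte order libyuv calls ABGR.
Kotlin
import android.graphics.Bitmap
import java.nio.ByteBuffer
import org.webrtc.JavaI420Buffer
import org.webrtc.VideoFrame
import org.webrtc.YuvHelper

// Sketch: convert an ARGB_8888 Bitmap into an I420 VideoFrame using WebRTC-managed buffers.
// Width and height should be even so the chroma planes subsample cleanly.
fun bitmapToVideoFrame(bitmap: Bitmap, timestampNs: Long): VideoFrame {
    val width = bitmap.width
    val height = bitmap.height

    // Copy the Bitmap pixels into a direct ByteBuffer (4 bytes per pixel).
    val rgbaBuffer = ByteBuffer.allocateDirect(width * height * 4)
    bitmap.copyPixelsToBuffer(rgbaBuffer)
    rgbaBuffer.rewind()

    // Let WebRTC allocate the I420 planes, then fill them.
    // Assumes YuvHelper.ABGRToI420 exists in this build of the library.
    val i420 = JavaI420Buffer.allocate(width, height)
    YuvHelper.ABGRToI420(
        rgbaBuffer, width * 4,
        i420.dataY, i420.strideY,
        i420.dataU, i420.strideU,
        i420.dataV, i420.strideV,
        width, height
    )

    // The caller hands the frame to capturerObserver.onFrameCaptured(...) and then releases it.
    return VideoFrame(i420, 0, timestampNs)
}
And regarding question 3, my assumption would be to post the onFrameCaptured() call onto the SurfaceTextureHelper's handler instead of my own Handler if that thread turns out to be mandatory, but I'm not sure whether that's actually required when I'm only pushing byte buffers rather than textures.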
Any guidance or snippet on how to properly implement a custom Bitmap to VideoFrame pipeline without crashing the native lib would be hugely appreciated!