Hi there! Welcome back to our series on CameraX and Jetpack Compose. In the previous posts, we covered the fundamentals of setting up a camera preview and added tap-to-focus functionality.
- 🧱 Part 1: Building a basic camera preview using the new camera-compose artifact. We covered permission handling and basic integration.
- 👆 Part 2: Using the Compose gesture system, graphics, and coroutines to implement a visual tap-to-focus.
- 🔦 Part 3 (this post): Exploring how to overlay Compose UI elements on top of your camera preview for a richer user experience.
- 📂 Part 4: Using adaptive APIs and the Compose animation framework to smoothly animate to and from tabletop mode on foldable phones.
In this post, we'll dive into something a bit more visually engaging: implementing a spotlight effect on top of our camera preview, using face detection as the basis for the effect. Why, you ask? I'm not sure. But it sure looks cool 🙂. And, more importantly, it demonstrates how we can easily translate sensor coordinates into UI coordinates, allowing us to use them in Compose!
First, let's modify the CameraPreviewViewModel to enable face detection. We'll use the Camera2Interop API, which allows us to interact with the underlying Camera2 API from CameraX. This gives us access to camera features that aren't exposed by CameraX directly. We need to make the following changes:
- Create a StateFlow that contains the face bounds as a list of Rects.
- Set the STATISTICS_FACE_DETECT_MODE capture request option to FULL, which enables face detection.
- Set a CaptureCallback to get the face information from the capture result.
class CameraPreviewViewModel : ViewModel() {
    ...
    private val _sensorFaceRects = MutableStateFlow(listOf<Rect>())
    val sensorFaceRects: StateFlow<List<Rect>> = _sensorFaceRects.asStateFlow()

    private val cameraPreviewUseCase = Preview.Builder()
        .apply {
            Camera2Interop.Extender(this)
                .setCaptureRequestOption(
                    CaptureRequest.STATISTICS_FACE_DETECT_MODE,
                    CaptureRequest.STATISTICS_FACE_DETECT_MODE_FULL
                )
                .setSessionCaptureCallback(object : CameraCaptureSession.CaptureCallback() {
                    override fun onCaptureCompleted(
                        session: CameraCaptureSession,
                        request: CaptureRequest,
                        result: TotalCaptureResult
                    ) {
                        super.onCaptureCompleted(session, request, result)
                        // Map the detected faces from the capture result into
                        // Compose Rects and publish them on the StateFlow.
                        result.get(CaptureResult.STATISTICS_FACES)
                            ?.map { face -> face.bounds.toComposeRect() }
                            ?.toList()
                            ?.let { faces -> _sensorFaceRects.update { faces } }
                    }
                })
        }
        .build().apply {
            ...
        }
}
With these changes in place, our view model now emits a list of Rect objects representing the bounding boxes of detected faces in sensor coordinates.
The bounding boxes of detected faces that we stored in the last section use coordinates in the sensor coordinate system. To draw the bounding boxes in our UI, we need to transform these coordinates so that they are correct in the Compose coordinate system. We need to:
- Transform the sensor coordinates into preview buffer coordinates
- Transform the preview buffer coordinates into Compose UI coordinates
These transformations are done using transformation matrices, and each transformation has its own matrix. We can create a helper method that does the transformation for us:
private fun List<Rect>.transformToUiCoords(
    transformationInfo: SurfaceRequest.TransformationInfo?,
    uiToBufferCoordinateTransformer: MutableCoordinateTransformer
): List<Rect> = this.map { sensorRect ->
    // The CoordinateTransformer maps UI coordinates to buffer coordinates;
    // invert it so that it maps buffer coordinates to UI coordinates instead.
    val bufferToUiTransformMatrix = Matrix().apply {
        setFrom(uiToBufferCoordinateTransformer.transformMatrix)
        invert()
    }

    // The matrix that maps sensor coordinates to preview buffer coordinates.
    val sensorToBufferTransformMatrix = Matrix().apply {
        transformationInfo?.let {
            setFrom(it.sensorToBufferTransform)
        }
    }

    val bufferRect = sensorToBufferTransformMatrix.map(sensorRect)
    val uiRect = bufferToUiTransformMatrix.map(bufferRect)
    uiRect
}
- We iterate through the list of detected faces, and for each face we execute the transformation.
- The CoordinateTransformer.transformMatrix that we get from our CameraXViewfinder transforms coordinates from UI to buffer coordinates by default. In our case, we want the matrix to work the other way around, transforming buffer coordinates into UI coordinates. Therefore, we use the invert() method to invert the matrix (see the short sketch after this list).
- We first transform the face from sensor coordinates to buffer coordinates using the sensorToBufferTransformMatrix, and then transform those buffer coordinates to UI coordinates using the bufferToUiTransformMatrix.
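To make the inversion step concrete, here is a minimal, self-contained sketch using Compose's Matrix type directly. The scale factor and rectangle values are made up purely for illustration; they are not taken from the camera pipeline.

```kotlin
import androidx.compose.ui.geometry.Rect
import androidx.compose.ui.graphics.Matrix

fun main() {
    // Hypothetical UI-to-buffer transform: the buffer is half the size of the UI.
    val uiToBuffer = Matrix().apply { scale(x = 0.5f, y = 0.5f) }

    // Invert it to get a buffer-to-UI transform, as in transformToUiCoords.
    val bufferToUi = Matrix().apply {
        setFrom(uiToBuffer)
        invert()
    }

    val uiRect = Rect(0f, 0f, 100f, 200f)
    val bufferRect = uiToBuffer.map(uiRect)       // Rect(0, 0, 50, 100)
    val roundTripped = bufferToUi.map(bufferRect) // back to Rect(0, 0, 100, 200)
    println("buffer: $bufferRect, ui: $roundTripped")
}
```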
Now, let's update the CameraPreviewContent composable to draw the spotlight effect. We'll use a Canvas composable to draw a gradient mask over the preview, making the detected faces visible:
@Composable
fun CameraPreviewContent(
    viewModel: CameraPreviewViewModel,
    modifier: Modifier = Modifier,
    lifecycleOwner: LifecycleOwner = LocalLifecycleOwner.current
) {
    val surfaceRequest by viewModel.surfaceRequest.collectAsStateWithLifecycle()
    val sensorFaceRects by viewModel.sensorFaceRects.collectAsStateWithLifecycle()
    // Listen for transformation info from the SurfaceRequest, and clear the
    // listener when this composable leaves the composition.
    val transformationInfo by
        produceState<SurfaceRequest.TransformationInfo?>(null, surfaceRequest) {
            try {
                surfaceRequest?.setTransformationInfoListener(Runnable::run) { transformationInfo ->
                    value = transformationInfo
                }
                awaitCancellation()
            } finally {
                surfaceRequest?.clearTransformationInfoListener()
            }
        }
    val shouldSpotlightFaces by remember {
        derivedStateOf { sensorFaceRects.isNotEmpty() && transformationInfo != null }
    }
    val spotlightColor = Color(0xDDE60991)
    ...
    surfaceRequest?.let { request ->
        val coordinateTransformer = remember { MutableCoordinateTransformer() }
        CameraXViewfinder(
            surfaceRequest = request,
            coordinateTransformer = coordinateTransformer,
            modifier = ...
        )
        AnimatedVisibility(shouldSpotlightFaces, enter = fadeIn(), exit = fadeOut()) {
            Canvas(Modifier.fillMaxSize()) {
                val uiFaceRects = sensorFaceRects.transformToUiCoords(
                    transformationInfo = transformationInfo,
                    uiToBufferCoordinateTransformer = coordinateTransformer
                )
                // Fill the whole area with the spotlight color
                drawRect(spotlightColor)
                // Then cut out each face by making it transparent
                uiFaceRects.forEach { faceRect ->
                    drawRect(
                        Brush.radialGradient(
                            0.4f to Color.Black, 1f to Color.Transparent,
                            center = faceRect.center,
                            radius = faceRect.minDimension * 2f,
                        ),
                        blendMode = BlendMode.DstOut
                    )
                }
            }
        }
    }
}
Here's how it works:
- We collect the list of faces from the view model.
- To make sure we're not recomposing the whole screen every time the list of detected faces changes, we use derivedStateOf to keep track of whether any faces are detected at all. This can then be used with AnimatedVisibility to animate the colored overlay in and out.
- The surfaceRequest contains the information we need to transform sensor coordinates to buffer coordinates in the SurfaceRequest.TransformationInfo. We use the produceState function to set up a listener on the surface request, and clear this listener when the composable leaves the composition tree.
- We use a Canvas to draw a translucent pink rectangle that covers the entire screen.
- We defer the reading of the sensorFaceRects variable until we're inside the Canvas draw block. Then we transform the coordinates into UI coordinates.
- We iterate over the detected faces, and for each face, we draw a radial gradient that will make the inside of the face rectangle transparent.
- We use BlendMode.DstOut to make sure we're cutting the gradient out of the pink rectangle, creating the spotlight effect (a standalone sketch of this cut-out technique follows this list).
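If you want to experiment with the cut-out technique in isolation, here is a minimal sketch that punches a soft-edged transparent hole into a colored overlay. It is not part of the camera code above; the hard-coded color, radius, and the CompositingStrategy.Offscreen layer (added here so the blend mode only erases from this composable's own contents) are assumptions for the standalone demo.

```kotlin
import androidx.compose.foundation.Canvas
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.BlendMode
import androidx.compose.ui.graphics.Brush
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.CompositingStrategy
import androidx.compose.ui.graphics.graphicsLayer

@Composable
fun CutOutDemo(modifier: Modifier = Modifier) {
    Canvas(
        modifier = modifier
            .fillMaxSize()
            // Render into an offscreen layer so BlendMode.DstOut only
            // erases from the rectangle drawn below, not from the window.
            .graphicsLayer { compositingStrategy = CompositingStrategy.Offscreen }
    ) {
        // 1. Cover everything with a translucent overlay.
        drawRect(Color(0xDDE60991))
        // 2. Erase a soft-edged circle from the overlay in the center.
        drawCircle(
            brush = Brush.radialGradient(
                0.4f to Color.Black, 1f to Color.Transparent,
                center = center,
                radius = size.minDimension / 3f
            ),
            radius = size.minDimension / 3f,
            center = center,
            blendMode = BlendMode.DstOut
        )
    }
}
```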
Note: When you change the camera to DEFAULT_FRONT_CAMERA, you'll notice that the spotlight is mirrored! This is a known issue, tracked in the Google Issue Tracker.
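Until that issue is resolved, one possible stopgap (my own sketch, not from the post or the CameraX team) is to mirror the transformed UI rects horizontally inside the draw block when the front camera is active. The isFrontCamera flag below is a hypothetical piece of state you would need to track yourself.

```kotlin
import androidx.compose.ui.geometry.Rect

// Hypothetical helper: flip a UI-space rect across the vertical center line
// of the canvas. Only the x-coordinates change.
private fun Rect.mirrorHorizontally(canvasWidth: Float): Rect =
    Rect(
        left = canvasWidth - right,
        top = top,
        right = canvasWidth - left,
        bottom = bottom
    )

// Inside the Canvas draw block, assuming a hypothetical isFrontCamera flag:
// val drawnRects =
//     if (isFrontCamera) uiFaceRects.map { it.mirrorHorizontally(size.width) }
//     else uiFaceRects
```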
With this code, we have a fully functional spotlight effect that highlights detected faces. You can find the full code snippet here.
This effect is just the beginning. By using the power of Compose, you can create a myriad of visually stunning camera experiences. Being able to transform sensor and buffer coordinates into Compose UI coordinates and back means we can take advantage of all Compose UI features and integrate them seamlessly with the underlying camera system. With animations, advanced UI graphics, straightforward UI state management, and full gesture control, your imagination is the limit!
In the final post of the series, we'll dive into how to use adaptive APIs and the Compose animation framework to seamlessly transition between different camera UIs on foldable devices. Stay tuned!