
Build adaptive Android apps that shine across form factors

Posted by Fahd Imtiaz – Product Manager, Android Developer

If your app isn't built to adapt, you're missing out on the opportunity to reach a huge swath of users across 500 million devices! At Google I/O this year, we're exploring how adaptive development isn't just a good idea, but essential to building apps that shine across the expanding Android device ecosystem. This is your guide to meeting users wherever they are, with experiences that are perfectly tailored to their needs.

The advantage of building adaptive

In today's multi-device world, users expect their favorite applications to work flawlessly and intuitively, whether they're on a smartphone, tablet, or Chromebook. This expectation for seamless experiences isn't just about convenience; it's an essential factor for user engagement and retention.

For example, users of entertainment apps (including Prime Video, Netflix, and Hulu) who use both phone and tablet spend almost 200% more time in-app (nearly 3x engagement) than phone-only users in the US*.

Peacock, NBCUniversal's streaming service, has seen a trend of users shifting between mobile and large screens, and building adaptively enables a single build to work across the different form factors.

"This allows Peacock to have more time to innovate faster and deliver more value to its customers."

– Diego Valente, Head of Mobile, Peacock and Global Streaming

Adaptive Android development offers the strategic solution, enabling apps to perform effectively across an expanding array of devices and contexts through intelligent design choices that emphasize code reuse and scalability. With Android's continuous growth into new form factors and upcoming enhancements such as desktop windowing and connected displays in Android 16, an app's ability to seamlessly adapt to different screen sizes is becoming increasingly critical for retaining users and staying competitive.

Beyond direct user benefits, designing adaptively also translates to increased visibility. The Google Play Store actively helps promote developers whose apps excel on different form factors. If your application delivers a great experience on tablets or works well on ChromeOS, users on those devices will have an easier time discovering your app. This creates a win-win scenario: higher quality apps for users and a broader audience for you.

examples of form factors across small phones, tablets, laptops, and auto

Latest in adaptive Android development from Google I/O

To help you more effectively build compelling adaptive experiences, we shared several key updates at I/O this year.

Build for the expanding Android device ecosystem

Your mobile apps can now reach users beyond phones on over 500 million active devices, including foldables, tablets, Chromebooks, and even compatible cars, with minimal changes. Android 16 introduces significant advancements in desktop windowing for a true desktop-like experience on large screens and when devices are connected to external displays. And Android XR is opening a new dimension, allowing your existing mobile apps to be accessible in immersive virtual environments.

The mindset shift to adaptive

With the expanding Android device ecosystem, adaptive app development is a fundamental strategy. It's about how the same mobile app runs well across phones, foldables, tablets, Chromebooks, connected displays, XR, and cars, laying a strong foundation for future devices and differentiating for specific form factors. You don't need to rebuild your app for each form factor; rather, make small, iterative changes as needed, when needed. Embracing this adaptive mindset today isn't just about keeping pace; it's about leading the charge in delivering exceptional user experiences across the entire Android ecosystem.

Build adaptive Android apps that shine across form factors

Leverage powerful tools and libraries to build adaptive apps:

    • Compose Adaptive Layouts library: This library makes adaptive development easier by allowing your app code to fit into canonical layout patterns like list-detail and supporting pane, which automatically reflow as your app is resized, flipped, or folded (see the first sketch after this list). In the 1.1 release, we introduced pane expansion, allowing users to resize panes. The SociaLite demo app showcased how one codebase using this library can adapt across six form factors. New adaptation strategies like "Levitate" (elevating a pane, e.g., into a dialog or bottom sheet) and "Reflow" (reorganizing panes on the same level) were also announced in 1.2 (alpha). For XR, component overrides can automatically spatialize UI elements.

    • Jetpack Navigation 3 (alpha): This new navigation library simplifies defining user journeys across screens with less boilerplate code, especially for multi-pane layouts in Compose. It helps handle scenarios where list and detail panes might be separate destinations on smaller screens but shown together on larger ones. Try out the new Jetpack Navigation library in alpha.

    • Jetpack Compose input enhancements: Compose's layered architecture, robust input support, and single location for layout logic simplify creating adaptive UIs. Coming in Compose 1.9 are right-click context menus and enhanced trackpad/mouse functionality.

    • Window Size Classes: Use window size classes for top-level layout decisions (see the second sketch after this list). AndroidX.window 1.5 introduces two new width size classes – "large" (1200dp to 1600dp) and "extra-large" (1600dp and larger) – providing more granular breakpoints for large screens. This helps in deciding when to expand navigation rails or show three panes of content. Support for these new breakpoints was also announced in the Compose Adaptive Layouts library 1.2 alpha, along with design guidance.

    • Compose previews: Get fast feedback by visualizing your layouts across a wide variety of screen sizes and aspect ratios. You can also specify different devices by name to preview your UI at their respective sizes and with their inset values.

    • Testing adaptive layouts: Validating your adaptive layouts is essential, and Android Studio offers various tools for testing, including previews for different sizes and aspect ratios, a resizable emulator to test across different screen sizes with a single AVD, screenshot tests, and instrumented behavior tests. And with Journeys with Gemini in Android Studio, you can define tests using natural language for even more robust testing across different window sizes.
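To make the list-detail pattern concrete, here's a minimal sketch using the Compose Adaptive Layouts library. Treat it as a sketch rather than the library's one true shape: the navigator API differs slightly between releases (for example, navigateTo is suspending in newer versions), and ItemList/ItemDetail are hypothetical placeholders.

import androidx.compose.material3.adaptive.layout.AnimatedPane
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffold
import androidx.compose.material3.adaptive.layout.ListDetailPaneScaffoldRole
import androidx.compose.material3.adaptive.navigation.rememberListDetailPaneScaffoldNavigator
import androidx.compose.runtime.Composable

@Composable fun ItemList(onItemClick: (Int) -> Unit) { /* hypothetical list UI */ }
@Composable fun ItemDetail(id: Int?) { /* hypothetical detail UI */ }

@Composable
fun ArticleBrowser() {
    // The navigator decides which panes fit the current window size.
    val navigator = rememberListDetailPaneScaffoldNavigator<Int>()
    ListDetailPaneScaffold(
        directive = navigator.scaffoldDirective,
        value = navigator.scaffoldValue,
        listPane = {
            AnimatedPane {
                ItemList(onItemClick = { id ->
                    // On small windows this replaces the list; on large ones
                    // both panes stay visible side by side.
                    navigator.navigateTo(ListDetailPaneScaffoldRole.Detail, id)
                })
            }
        },
        detailPane = {
            AnimatedPane {
                ItemDetail(id = navigator.currentDestination?.content)
            }
        },
    )
}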
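And here's a minimal sketch of the window-size-class approach, assuming the androidx.compose.material3.adaptive and androidx.window artifacts are on the classpath (the exact WindowSizeClass accessors have shifted across androidx.window releases); SinglePaneContent and TwoPaneContent are placeholders:

import androidx.compose.material3.adaptive.currentWindowAdaptiveInfo
import androidx.compose.runtime.Composable
import androidx.window.core.layout.WindowWidthSizeClass

@Composable fun SinglePaneContent() { /* placeholder layout */ }
@Composable fun TwoPaneContent() { /* placeholder layout */ }

@Composable
fun HomeScreen() {
    // Size classes reflect the current window, not the physical screen, so this
    // adapts automatically in split-screen and desktop windowing modes.
    val widthClass = currentWindowAdaptiveInfo().windowSizeClass.windowWidthSizeClass
    when (widthClass) {
        WindowWidthSizeClass.COMPACT -> SinglePaneContent() // most phones in portrait
        WindowWidthSizeClass.MEDIUM -> SinglePaneContent()  // foldables, small tablets
        else -> TwoPaneContent()                            // expanded and larger windows
    }
}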

Ensuring app availability across devices

Avoid unnecessarily declaring required features (like specific cameras or GPS) in your manifest, as this can prevent your app from appearing in the Play Store on devices that lack those specific hardware components but could otherwise run your app perfectly well.
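For illustration, here's a minimal AndroidManifest.xml sketch; the feature names are standard Android ones, but which features your own app should mark optional depends on how it uses them:

<!-- Declare hardware as used-but-optional so devices without it (tablets,
     Chromebooks, cars) can still install the app from the Play Store. -->
<uses-feature android:name="android.hardware.camera" android:required="false" />
<uses-feature android:name="android.hardware.location.gps" android:required="false" />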

Handling different input methods

Remember to handle various input methods like touch, keyboard, and mouse, especially with Chromebook detachables and connected displays.
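As a small sketch of what that can look like in Compose (our own example, not from the post; the key-event APIs shown are standard Compose ones), one node can respond to touch, mouse clicks, and a hardware Enter key:

import androidx.compose.foundation.clickable
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.ui.Modifier
import androidx.compose.ui.input.key.Key
import androidx.compose.ui.input.key.KeyEventType
import androidx.compose.ui.input.key.key
import androidx.compose.ui.input.key.onKeyEvent
import androidx.compose.ui.input.key.type

@Composable
fun SubmitAction(onSubmit: () -> Unit, modifier: Modifier = Modifier) {
    Text(
        text = "Submit",
        modifier = modifier
            // Fires for hardware keyboards when this node has focus.
            .onKeyEvent { event ->
                if (event.type == KeyEventType.KeyUp && event.key == Key.Enter) {
                    onSubmit()
                    true // consume the event
                } else {
                    false
                }
            }
            // Handles touch taps and mouse clicks, and makes the node focusable.
            .clickable { onSubmit() },
    )
}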

Prepare for orientation and resizability API changes in Android 16

Beginning in Android 16, for apps targeting SDK 36, manifest and runtime restrictions on orientation, resizability, and aspect ratio will be ignored on displays that are at least 600dp in both dimensions. To meet user expectations, your apps will need layouts that work for both portrait and landscape windows, and they will need to support resizing at runtime. There's a temporary opt-out manifest flag at both the application and activity level to delay these changes until targetSdk 37, and these changes currently don't apply to apps categorized as "Games". Learn more about these API changes.

Adaptive considerations for games

Games need to be adaptive too, and Unity 6 will add enhanced support for configuration handling, including APIs for screenshots, aspect ratio, and density. Success stories like Asphalt Legends Unite show significant user retention increases on foldables after implementing adaptive features.

examples of form factors including a VR headset

Start building adaptive today

Now is the time to elevate your Android apps, making them intuitively responsive across form factors. With the latest tools and updates we're introducing, you have the power to build experiences that seamlessly flow across all devices, from foldables to cars and beyond. Implementing these strategies will allow you to expand your reach and delight users across the Android ecosystem.

Get inspired by the "Adaptive Android development makes your app shine across devices" talk, and explore all the resources you'll need to start your journey at developer.android.com/adaptive-apps!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

*Source: internal Google data

Android Design at Google I/O 2025

Posted by Ivy Knight – Senior Design Advocate

Here's your guide to the essential Android Design sessions, resources, and announcements for I/O '25:

Check out the latest Android updates

The Android Show: I/O Edition

The Android Show had a special I/O edition this year with some exciting announcements like Material 3 Expressive!

Learn more about the new Live Update Notification templates in Android Notifications & Live Updates for an in-depth look at what they are, when to use them, and why. You can also get the Live Update design template in the Android UI Kit, read more in the updated Notification guidance, and get hands-on with the Jetsnack Live Updates and Widget case study.

Make your apps more expressive

Get a jump on the future of Google's UX design: Material 3 Expressive. Learn how to use new emotional design patterns to boost engagement, usability, and desire for your product in the Build Next-Level UX with Material 3 Expressive session, and check out the expressive update on Material.io.

Stay up to date with Android Accessibility Updates, highlighting accessibility features launching with Android 16: enhanced dark themes, options for those prone to motion sickness, a new way to increase text contrast, and more.

Catch the Mastering text input in Compose session to learn more about how engaging, robust text experiences are built with Jetpack Compose. It covers Autofill integration, dynamic text resizing, and custom input transformations. It's a great session to watch to see what's possible when designing text inputs.

Thinking across form factors

These design resources and sessions can help you design across more Android form factors or update your existing experiences.

Preview Gemini in-car, imagining seamless navigation and personalized entertainment, in the New In-Car App Experiences session. Then explore the new Car UI Design Kit to bring your app to Android Automotive platforms and speed up your process with the latest Android form factor kit.

The Engaging with users on Google TV with excellent TV apps session discusses new ways the Google TV experience is making it easier for users to find and engage with content, including improvements to out-of-box features and updates to Android TV OS.

Want a peek at how to bring immersive content, like 3D models, to Android XR? Check out the Building differentiated apps for Android XR with 3D Content session.

Plus, Wear OS is releasing an updated design kit on @AndroidDesign Figma and a learning Pathway.

Tip top apps

We've also released the following new Android design guidance to help you design the best Android experiences:

In-app Settings

Read up on the latest suggested patterns to build out your app's settings.

Help and Feedback

Along with settings, learn about adding help and feedback to your app.

Widget Configuration

Does your app need setup? New guidance to help guide you in adding configuration to your app's widgets.

Edge-to-edge design

Allow your apps to take full advantage of the entire screen with the latest guidance on designing for edge-to-edge.

Check out figma.com/@androiddesign for even more new and updated resources.

Visit the I/O 2025 website, build your schedule, and engage with the community. If you are at the Shoreline, come say hello to us in the Android tent at our booths.

We can't wait to see what you create with these new tools and insights. Happy I/O!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

How Androidify leverages Gemini, Firebase and ML Kit

Posted by Thomas Ezan – Developer Relations Engineer, Rebecca Franks – Developer Relations Engineer, and Avneet Singh – Product Manager

We're bringing back Androidify later this year, this time powered by Google AI, so you can customize your very own Android bot and share your creativity with the world. Today, we're releasing a new open source demo app for Androidify as a great example of how Google is using its Gemini AI models to enhance app experiences.

In this post, we'll dive into how the Androidify app uses Gemini models and Imagen via the Firebase AI Logic SDK, and we'll share some insights learned along the way to help you incorporate Gemini and AI into your own projects. Read more about the Androidify demo app.

App flow

The overall app functions as follows, with various parts of it using Gemini and Firebase along the way:

flow chart demonstrating Androidify app flow

Gemini and image validation

To get started with Androidify, take a photo or choose an image on your device. The app needs to make sure that the image you upload is suitable for creating an avatar.

Gemini 2.5 Flash via Firebase helps with this by verifying that the image contains a person and that the person is in focus, and by assessing image safety, including whether the image contains abusive content.

val jsonSchema = Schema.obj(
    properties = mapOf("success" to Schema.boolean(), "error" to Schema.string()),
    optionalProperties = listOf("error"),
)

val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI())
    .generativeModel(
        modelName = "gemini-2.5-flash-preview-04-17",
        generationConfig = generationConfig {
            responseMimeType = "application/json"
            responseSchema = jsonSchema
        },
        safetySettings = listOf(
            SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.SEXUALLY_EXPLICIT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.DANGEROUS_CONTENT, HarmBlockThreshold.LOW_AND_ABOVE),
            SafetySetting(HarmCategory.CIVIC_INTEGRITY, HarmBlockThreshold.LOW_AND_ABOVE),
        ),
    )

val response = generativeModel.generateContent(
    content {
        text("You are to analyze the provided image and determine if it is acceptable and appropriate based on specific criteria.... (for more details see the full sample)")
        image(image)
    },
)

val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true
val error = jsonResponse.jsonObject["error"]?.jsonPrimitive?.content

In the snippet above, we're leveraging the structured output capabilities of the model by defining the schema of the response. We're passing a Schema object via the responseSchema param in the generationConfig.

We want to validate that the image has enough information to generate a nice Android avatar. So we ask the model to return a JSON object with success = true/false and an optional error message explaining why the image doesn't have enough information.

Structured output is a powerful feature enabling a smoother integration of LLMs into your app by controlling the format of their output, similar to an API response.

Image captioning with Gemini Flash

Once it's established that the image contains sufficient information to generate an Android avatar, it is captioned using Gemini 2.5 Flash with structured output.

val jsonSchema = Schema.obj(
    properties = mapOf(
        "success" to Schema.boolean(),
        "user_description" to Schema.string(),
    ),
    optionalProperties = listOf("user_description"),
)
val generativeModel = createGenerativeTextModel(jsonSchema)

val prompt = "You are to create a VERY detailed description of the main person in the given image. This description will be translated into a prompt for a generative image model..."

val response = generativeModel.generateContent(
    content {
        text(prompt)
        image(image)
    },
)

val jsonResponse = Json.parseToJsonElement(response.text!!)
val isSuccess = jsonResponse.jsonObject["success"]?.jsonPrimitive?.booleanOrNull == true

val userDescription = jsonResponse.jsonObject["user_description"]?.jsonPrimitive?.content

The other option in the app is to start with a text prompt. You can enter details about your accessories, hairstyle, and clothing, and let Imagen be a bit more creative.

Android generation via Imagen

We'll use this detailed description of your image to enrich the prompt used for image generation. We'll add extra details around what we want to generate and include the bot color selection as part of this too, along with the skin tone selected by the user.

val imagenPrompt = "A 3D rendered cartoonish Android mascot in a photorealistic style, the pose is relaxed and simple, facing directly forward [...] The bot looks as follows $userDescription [...]"

We then call the Imagen model to create the bot. Using this new prompt, we create a model and call generateImages:

// we supply our own fine-tuned model here but you can use "imagen-3.0-generate-002"
val generativeModel = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
    "imagen-3.0-generate-002",
    safetySettings =
        ImagenSafetySettings(
            ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
            personFilterLevel = ImagenPersonFilterLevel.ALLOW_ALL,
        ),
)

val response = generativeModel.generateImages(imagenPrompt)

val image = response.images.first().asBitmap()

And that's it! The Imagen model generates a bitmap that we can display on the user's screen.
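For completeness, here's a minimal sketch (our own addition, not part of the sample code shown above) of rendering that bitmap in Compose:

import android.graphics.Bitmap
import androidx.compose.foundation.Image
import androidx.compose.runtime.Composable
import androidx.compose.ui.graphics.asImageBitmap

@Composable
fun BotResult(bot: Bitmap) {
    // Convert the Android Bitmap into an ImageBitmap that Compose can draw.
    Image(bitmap = bot.asImageBitmap(), contentDescription = "Generated Android bot")
}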

Fine-tuning the Imagen model

The Imagen 3 model was fine-tuned using Low-Rank Adaptation (LoRA). LoRA is a fine-tuning technique designed to reduce the computational burden of training large models. Instead of updating the entire model, LoRA adds smaller, trainable "adapters" that make small changes to the model's behavior. We ran a fine-tuning pipeline on the generally available Imagen 3 model with Android bot assets of different color combinations and different assets for enhanced cuteness and fun. We generated text captions for the training images, and the image-text pairs were used to fine-tune the model effectively.
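For intuition, this is the standard LoRA formulation (general background, not Imagen-specific): the pretrained weight matrix W stays frozen, and only a low-rank update is learned.

\[
  h = Wx + \Delta W x = Wx + BAx, \qquad
  B \in \mathbb{R}^{d \times r},\;
  A \in \mathbb{R}^{r \times k},\;
  r \ll \min(d, k)
\]

Only B and A are trained, so the number of trainable parameters drops from d·k to r·(d + k).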

The current sample app uses a standard Imagen model, so the results may look a bit different from the visuals in this post. However, the app using the fine-tuned model and a custom version of the Firebase AI Logic SDK was demoed at Google I/O. That app will be released later this year, and we are also planning on adding support for fine-tuned models to the Firebase AI Logic SDK later in the year.

moving image of Androidify app demo turning a selfie image of a bearded man wearing a black tshirt and sunglasses, with a blue back pack into a green 3D bearded droid wearing a black tshirt and sunglasses with a blue backpack

The original image… and the Androidifi-ed image

ML Kit

The app also uses the ML Kit Pose Detection SDK to detect a person in the camera view, which triggers the capture button and adds visual indicators.

To do this, we add the SDK to the app and use PoseDetection.getClient(). Then, using the poseDetector, we look at the detectedLandmarks that are in the streaming image coming from the Camera, and we set _uiState.detectedPose to true if a nose and shoulders are visible:

private suspend fun runPoseDetection() {
    PoseDetection.getClient(
        PoseDetectorOptions.Builder()
            .setDetectorMode(PoseDetectorOptions.STREAM_MODE)
            .build(),
    ).use { poseDetector ->
        // Since image analysis is processed by ML Kit asynchronously in its own thread pool,
        // we can run this directly from the calling coroutine scope instead of pushing this
        // work to a background dispatcher.
        cameraImageAnalysisUseCase.analyze { imageProxy ->
            imageProxy.image?.let { image ->
                val poseDetected = poseDetector.detectPersonInFrame(image, imageProxy.imageInfo)
                _uiState.update { it.copy(detectedPose = poseDetected) }
            }
        }
    }
}

private suspend fun PoseDetector.detectPersonInFrame(
    image: Image,
    imageInfo: ImageInfo,
): Boolean {
    val results = process(InputImage.fromMediaImage(image, imageInfo.rotationDegrees)).await()
    val landmarkResults = results.allPoseLandmarks
    // Keep only landmarks the model is reasonably confident are inside the frame.
    val detectedLandmarks = mutableListOf<Int>()
    for (landmark in landmarkResults) {
        if (landmark.inFrameLikelihood > 0.7) {
            detectedLandmarks.add(landmark.landmarkType)
        }
    }

    return detectedLandmarks.containsAll(
        listOf(PoseLandmark.NOSE, PoseLandmark.LEFT_SHOULDER, PoseLandmark.RIGHT_SHOULDER),
    )
}

moving image showing the camera shutter button activating when an orange droid figurine is held in the camera frame

The camera shutter button is activated when a person (or a bot!) enters the frame.

Get started with AI on Android

The Androidify app makes extensive use of Gemini 2.5 Flash to validate the image and generate a detailed description used to generate the image. It also leverages the specifically fine-tuned Imagen 3 model to generate images of Android bots. Gemini and Imagen models are easily integrated into the app via the Firebase AI Logic SDK. In addition, the ML Kit Pose Detection SDK controls the capture button, enabling it only when a person is present in front of the camera.

To get started with AI on Android, go to the Gemini and Imagen documentation for Android.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

Showcasing the Power of Cisco Partners to Deliver Real Outcomes


The needs of customers are evolving faster than ever, and Cisco is evolving with them.

As you know, we're re-architecting the way we design solutions and deliver them to market, with a One Cisco approach across our portfolio and AI strategy, because yesterday's approaches no longer fit today's reality.

Customers today expect faster innovation, seamless experiences, and greater impact from their technology investments.

That's why we're building the Cisco 360 Partner Program: designed to drive real customer outcomes, recognize diverse partner business models, and reward value creation through capability building, go-to-market strength, and deeper engagement.

Together with you, our partners, we're creating something fundamentally new to meet the challenges and opportunities ahead.

 

Bringing Customers Along

You've told us loud and clear: we need to start bringing customers along now.

And we couldn't agree more.

We're excited to share that we've officially launched a customer awareness campaign to help customers understand the evolution underway, and how Cisco and our partners are better positioned than ever to help them achieve their business goals.

What This Means for You

We know that customers' expectations are changing. They need trusted guides who can help them modernize infrastructure, deploy AI solutions, secure their operations, and deliver measurable business outcomes. The Cisco 360 Partner Program, and this customer campaign, are designed to put you at the center of that opportunity.

  • Recognized for value creation: Not just transactions, but the outcomes you help customers achieve.
  • Rewarded for capability building and engagement: Focused on the skills and expertise customers are looking for.
  • Aligned for growth: With a framework built for today's needs and tomorrow's possibilities.

We're committed to supporting you every step of the way as we move toward the official launch on February 1, 2026, giving you the tools to engage customers confidently and show the unique value you bring.

Thank You

Thank you for your continued partnership, innovation, and dedication to customer success.

Together, we're leading a transformation that will drive mutual growth and deliver better outcomes for customers around the world.

 


We'd love to hear what you think. Ask a question, comment below, and stay connected with #CiscoPartners on social!

Cisco Partners Facebook  |  @CiscoPartners X/Twitter  |  Cisco Partners LinkedIn




Building delightful UIs with Compose

Posted by Rebecca Franks – Developer Relations Engineer

Androidify is a new sample app we built using the latest best practices for mobile apps. Previously, we covered all the different features of the app, from Gemini integration and CameraX functionality to adaptive layouts. In this post, we dive into the Jetpack Compose usage throughout the app, building upon our base knowledge of Compose to add delightful and expressive touches along the way!

Material 3 Expressive

Material 3 Expressive is an expansion of the Material 3 design system. It's a set of new features, updated components, and design tactics for creating emotionally impactful UX.

It's been released as part of the alpha version of the Material 3 artifact (androidx.compose.material3:material3:1.4.0-alpha10) and contains a wide range of new components you can use within your apps to build more personalized and delightful experiences. Learn more about Material 3 Expressive's component and theme updates for more engaging and user-friendly products.
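If you want to try it, here's a minimal sketch of the dependency declaration in a module's build.gradle.kts, using the alpha artifact named above:

dependencies {
    // Material 3 alpha with the Expressive APIs
    implementation("androidx.compose.material3:material3:1.4.0-alpha10")
}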

Material Expressive component updates

In addition to the new component updates, Material 3 Expressive introduces a new motion physics system that is encompassed in the Material theme.

In Androidify, we've applied Material 3 Expressive in a few different ways across the app. For example, we've explicitly opted in to the new MaterialExpressiveTheme and chosen MotionScheme.expressive() (this is the default when using expressive) to add a bit of playfulness to the app:

@Composable
fun AndroidifyTheme(
    content: @Composable () -> Unit,
) {
    val colorScheme = LightColorScheme

    MaterialExpressiveTheme(
        colorScheme = colorScheme,
        typography = Typography,
        shapes = shapes,
        motionScheme = MotionScheme.expressive(),
        content = {
            SharedTransitionLayout {
                CompositionLocalProvider(LocalSharedTransitionScope provides this) {
                    content()
                }
            }
        },
    )
}

Some of the new componentry is used throughout the app, including the HorizontalFloatingToolbar for the prompt type selection:

moving example of expressive button shapes in slow motion

The app also uses MaterialShapes in various places; these are a preset list of shapes that allow for easy morphing between each other. For example, check out the cute cookie shape for the camera capture button:

Camera button with a MaterialShapes.Cookie9Sided shape

Animations

Wherever possible, the app leverages the Material 3 Expressive MotionScheme to obtain a themed motion token, creating a consistent motion feeling throughout the app. For example, the scale animation on the camera button press is powered by defaultSpatialSpec(), a specification used for animations that move something across a screen (such as x,y, rotation, or scale animations):

val interactionSource = remember { MutableInteractionSource() }
val animationSpec = MaterialTheme.motionScheme.defaultSpatialSpec()
Spacer(
    modifier
        .indication(interactionSource, ScaleIndicationNodeFactory(animationSpec))
        .clip(MaterialShapes.Cookie9Sided.toShape())
        .size(size)
        .drawWithCache {
            // .. etc
        },
)

Camera button scale interaction

Shared element animations

The app uses shared element transitions between different screen states. Last year, we showcased how you can create shared elements in Jetpack Compose, and we've extended this in the Androidify sample to create a fun example. It combines the new Material 3 Expressive MaterialShapes, and performs a transition with a morphing shape animation:

moving example of expressive button shapes in slow motion

To do this, we created a custom Modifier that takes in the target and resting shapes for the sharedBounds transition:

@Composable
fun Modifier.sharedBoundsRevealWithShapeMorph(
    sharedContentState: SharedTransitionScope.SharedContentState,
    sharedTransitionScope: SharedTransitionScope = LocalSharedTransitionScope.current,
    animatedVisibilityScope: AnimatedVisibilityScope = LocalNavAnimatedContentScope.current,
    boundsTransform: BoundsTransform = MaterialTheme.motionScheme.sharedElementTransitionSpec,
    resizeMode: SharedTransitionScope.ResizeMode = SharedTransitionScope.ResizeMode.RemeasureToBounds,
    restingShape: RoundedPolygon = RoundedPolygon.rectangle().normalized(),
    targetShape: RoundedPolygon = RoundedPolygon.circle().normalized(),
): Modifier

Then, we apply a custom OverlayClip to provide the morphing shape, by tying into the AnimatedVisibilityScope provided by the LocalNavAnimatedContentScope:

val animatedProgress =
    animatedVisibilityScope.transition.animateFloat(targetValueByState = targetValueByState)

val morph = remember {
    Morph(restingShape, targetShape)
}
val morphClip = MorphOverlayClip(morph, { animatedProgress.value })

return this@sharedBoundsRevealWithShapeMorph
    .sharedBounds(
        sharedContentState = sharedContentState,
        animatedVisibilityScope = animatedVisibilityScope,
        boundsTransform = boundsTransform,
        resizeMode = resizeMode,
        clipInOverlayDuringTransition = morphClip,
        renderInOverlayDuringTransition = renderInOverlayDuringTransition,
    )

View the full code snippet for this Modifier on GitHub.

Autosize text

With the latest release of Jetpack Compose 1.8, we added the ability to create text composables that automatically adjust the font size to fit the container's available size with the new autoSize parameter:

BasicText(
    text,
    style = MaterialTheme.typography.titleLarge,
    autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
)

This is used front and center for the "Customize your own Android Bot" text:

Text reads Customize your own Android Bot with an inline moving image

"Customize your own Android Bot" text with inline GIF

This text composable is interesting because it needed to have the fun dancing Android bot in the middle of the text. To do this, we use InlineContent, which allows us to append a composable in the middle of the text composable itself:

@Composable
private fun DancingBotHeadlineText(modifier: Modifier = Modifier) {
    Box(modifier = modifier) {
        val animatedBot = "animatedBot"
        val text = buildAnnotatedString {
            append(stringResource(R.string.customize))
            // Attach the "animatedBot" annotation on the placeholder
            appendInlineContent(animatedBot)
            append(stringResource(R.string.android_bot))
        }
        var placeHolderSize by remember {
            mutableStateOf(220.sp)
        }
        val inlineContent = mapOf(
            Pair(
                animatedBot,
                InlineTextContent(
                    Placeholder(
                        width = placeHolderSize,
                        height = placeHolderSize,
                        placeholderVerticalAlign = PlaceholderVerticalAlign.TextCenter,
                    ),
                ) {
                    DancingBot(
                        modifier = Modifier
                            .padding(top = 32.dp)
                            .fillMaxSize(),
                    )
                },
            ),
        )
        BasicText(
            text,
            modifier = Modifier
                .align(Alignment.Center)
                .padding(bottom = 64.dp, start = 16.dp, end = 16.dp),
            style = MaterialTheme.typography.titleLarge,
            autoSize = TextAutoSize.StepBased(maxFontSize = 220.sp),
            maxLines = 6,
            onTextLayout = { result ->
                // Scale the inline placeholder with the resolved font size.
                placeHolderSize = result.layoutInput.style.fontSize * 3.5f
            },
            inlineContent = inlineContent,
        )
    }
}

Composable visibility with onLayoutRectChanged

With Compose 1.8, a new modifier, Modifier.onLayoutRectChanged, was added. This modifier is a more performant version of onGloballyPositioned, and includes features such as debouncing and throttling to make it performant inside lazy layouts.

In Androidify, we've used this modifier for the color splash animation. It determines the position where the transition should start from, as we attach it to the "Let's Go" button:

var buttonBounds by remember {
    mutableStateOf<RelativeLayoutBounds?>(null)
}
var showColorSplash by remember {
    mutableStateOf(false)
}
Box(modifier = Modifier.fillMaxSize()) {
    PrimaryButton(
        buttonText = "Let's Go",
        modifier = Modifier
            .align(Alignment.BottomCenter)
            .onLayoutRectChanged(
                callback = { bounds ->
                    buttonBounds = bounds
                },
            ),
        onClick = {
            showColorSplash = true
        },
    )
}

We use these bounds as an indication of where to start the color splash animation from.
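As a rough sketch of how such a splash can be driven (our own simplified example, not the app's actual implementation; deriving the center Offset from the stored buttonBounds is omitted here, since the RelativeLayoutBounds accessors vary across Compose versions):

import androidx.compose.animation.core.Animatable
import androidx.compose.animation.core.tween
import androidx.compose.foundation.Canvas
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.remember
import androidx.compose.ui.Modifier
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.graphics.Color

@Composable
fun ColorSplash(center: Offset, visible: Boolean, modifier: Modifier = Modifier) {
    // Animate a circle outward from the button's position until it covers the screen.
    val progress = remember { Animatable(0f) }
    LaunchedEffect(visible) {
        if (visible) progress.animateTo(1f, animationSpec = tween(durationMillis = 600))
    }
    Canvas(modifier.fillMaxSize()) {
        val maxRadius = size.maxDimension // enough to reach the farthest corner
        drawCircle(
            color = Color(0xFF4285F4), // assumed splash color
            radius = maxRadius * progress.value,
            center = center,
        )
    }
}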

moving image of a blue color splash transition between Androidify demo screens

Learn more delightful details

From fun marquee animations on the results screen, to animated gradient buttons for the AI-powered actions, to the path drawing animation for the loading screen, this app has many delightful touches for you to experience and learn from.

animated marquee example

animated gradient button for AI powered actions example

animated loading screen example

Check out the full codebase at github.com/android/androidify and learn more about the latest in Compose, from using Material 3 Expressive, to the new modifiers, auto-sizing text, and of course a few delightful interactions!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.