
Connecting a New Generation of Moving Assets with Cisco Ultra-Reliable Wireless Backhaul


What characteristics does a wireless network need to connect moving assets? Answer: it depends on the stakes. Occasional lapses in connectivity and a bit of latency might be okay when you’re catching up on email while riding a bus, but not when you’re remotely controlling a 100,000-pound bulldozer on a steep dam slope. That’s why Aterpa, a Brazilian engineering firm hired to decommission Brazil’s tailings dams, uses Cisco Ultra-Reliable Wireless Backhaul (URWB) to remotely control heavy equipment like bulldozers and excavators.

Remote control of unmanned systems can’t tolerate network latency

Following dam collapses in 2015 and 2019, large mining companies committed to preventing future catastrophes by launching programs to decommission tailings dams (tailings are mud-like byproducts of ore mining) built using the same construction methods as the dams that failed. It’s dangerous work because heavy machinery needs to move across unstable ground, and missteps can lead to accidents. “Dam decharacterization, or decommissioning, is a relatively new field,” explains Rodrigo Campos, contract manager at Aterpa. “You need to remove the tailings in the dam, but you can’t allow people to enter because of the safety issues.”

To safely decommission dams in the Brazilian state of Minas Gerais, Aterpa decided to use unmanned systems. The plan: operators working in a central command center would view real-time camera feeds and sensor data from heavy equipment like excavators and remotely control those systems over the network. The challenging part was setting up a wireless outdoor network with the required reliability, low latency, and seamless handoffs. Existing mesh solutions and even the latest Wi-Fi versions aren’t designed for these demands, as even minimal jitter and latency can impact operations. If a video feed is delayed by even half a second, for example, an operator might not see an obstacle in time to steer around it or brake, possibly sending the equipment toppling down the dam slope.

Ultra-reliable wireless for moving equipment

Aterpa found its answer in Cisco URWB, which provides the ultra-low latency (<10ms), seamless handoffs, and uninterrupted connectivity needed to remotely control moving assets in high-stakes environments. “In our operations center we’ve recreated the experience of a traditional vehicle cab, complete with a steering wheel and controls,” says Campos. Operators see what they would see if they were sitting in the physical cab on the slope, allowing them to safely steer equipment and remove tailings. “With Cisco URWB, when the remote operator issues a command, the equipment responds instantly,” Campos adds.

Between September 2022 and April 2025, Aterpa removed roughly 1.7 million cubic meters (2.22 million cubic yards) of waste with teleremote operations over Cisco URWB. The communities near the dams are no longer threatened. Operators are safer and the environment is cleaner. “After almost three years of unmanned operation over Cisco URWB, we’ve seen that when the network is reliable, the operation is reliable,” says Campos. “Our operating model has become a benchmark for the industry.”

Three ways Cisco URWB stands apart

What set Cisco URWB apart from other wireless technologies for high-stakes operations like this one? First, URWB delivers near-zero (<10ms) latency. Second, when a connected system – say, an excavator, robot, or train – roams between access points, the connection doesn’t break until the next connection is established. We call that “make-before-break” connectivity. Finally, URWB sends duplicate copies of packets (like sensor data from moving assets and commands from operators) over up to eight redundant paths.

Learn more

Find us in the World of Solutions at Cisco Live to see URWB in action, and register for the PSOIOT-1020 session.

Read about Cisco Ultra-Reliable Wireless Backhaul.


Android Developers Blog: Updates to the Android XR SDK: Introducing Developer Preview 2




Posted by Matthew McCullough – VP of Product Management, Android Developer

Since launching the Android XR SDK Developer Preview alongside Samsung, Qualcomm, and Unity last year, we’ve been blown away by all of the excitement we’ve been hearing from the broader Android community. Whether it’s through coding live-streams or local Google Developer Group talks, it has been an amazing experience participating in the community to build the future of XR together, and we’re just getting started.

Today we’re excited to share an update to the Android XR SDK: Developer Preview 2, packed with new features and improvements to help you develop helpful and delightful immersive experiences with familiar Android APIs, tools and open standards created for XR.

At Google I/O, we have two technical sessions related to Android XR. The first is Building differentiated apps for Android XR with 3D content, which covers many features present in Jetpack SceneCore and ARCore for Jetpack XR. The second, The future is now, with Compose and AI on Android XR, covers building XR-differentiated UI and our vision for the intersection of XR with cutting-edge AI capabilities.

Android XR sessions at Google I/O 2025

Building differentiated apps for Android XR with 3D content and The future is now, with Compose and AI on Android XR

What’s new in Developer Preview 2

Since the launch of Developer Preview 1, we’ve been focused on making the APIs easier to use and adding new immersive Android XR features. Your feedback has helped us shape the development of the tools, SDKs, and the platform itself.

With the Jetpack XR SDK, you can now play back 180° and 360° videos, which can be stereoscopic by encoding with the MV-HEVC specification or by encoding view-frames adjacently. The MV-HEVC standard is optimized and designed for stereoscopic video, allowing your app to efficiently play back immersive videos at great quality. Apps built with Jetpack Compose for XR can use the SpatialExternalSurface composable to render media, including stereoscopic videos.
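As a rough illustration, here’s a minimal sketch of rendering a side-by-side stereoscopic video, assuming the preview shapes of Subspace, SpatialExternalSurface, and StereoMode (exoPlayer stands in for a Media3 player you’ve already set up); these APIs may change before a stable release:

Subspace {
    SpatialExternalSurface(
        modifier = SubspaceModifier.width(1280.dp).height(720.dp),
        // Assumes view-frames are encoded adjacently (side by side)
        stereoMode = StereoMode.SideBySide
    ) {
        onSurfaceCreated { surface ->
            // Route the Media3 player's video output to the spatial surface
            exoPlayer.setVideoSurface(surface)
        }
    }
}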

Using Jetpack Compose for XR, you can now also define layouts that adapt to different XR display configurations. For example, use a SubspaceModifier to specify the size of a Subspace as a percentage of the device’s recommended viewing size, so a panel effortlessly fills the space it’s placed in.

Material Design for XR now supports more component overrides for TopAppBar, AlertDialog, and ListDetailPaneScaffold, helping your large-screen enabled apps that use Material Design effortlessly adapt to the new world of XR.

An app adapts to XR using Material Design for XR with the new component overrides

In ARCore for Jetpack XR, you can now track hands after requesting the appropriate permissions. Hands are a set of 26 posed hand joints that can be used to detect hand gestures and bring a whole new level of interaction to your Android XR apps:

Hands bring a natural input method to your Android XR experience.
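To give a flavor of the API, here’s a hypothetical sketch of reading joint poses for the left hand; Hand, HandJointType, and the state shape shown are assumptions based on the preview documentation and may not match your SDK version:

// Hypothetical sketch: collect left-hand tracking state and read two of
// the 26 posed joints; exact names may differ in your SDK version.
suspend fun observeLeftHand(session: Session) {
    Hand.left(session)?.state?.collect { handState ->
        val thumbTip = handState.handJoints[HandJointType.THUMB_TIP]
        val indexTip = handState.handJoints[HandJointType.INDEX_TIP]
        if (thumbTip != null && indexTip != null) {
            // Compare the two joint poses to detect, say, a pinch gesture.
        }
    }
}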

For more guidance on developing apps for Android XR, check out our Android XR Fundamentals codelab, the updates to our Hello Android XR sample project, and a new version of JetStream with Android XR support.

The Android XR Emulator has also received stability updates, support for AMD GPUs, and is now fully integrated within the Android Studio UI.

The Android XR Emulator is now integrated in Android Studio

Developers using Unity have already successfully created and ported existing games and apps to Android XR. Today, you can upgrade to the Pre-Release version 2 of the Unity OpenXR: Android XR package! This update adds many performance improvements such as support for Dynamic Refresh Rate, which optimizes your app’s performance and power consumption. Shaders made with Shader Graph now support SpaceWarp, making it easier to use SpaceWarp to reduce compute load on the device. Hand meshes are now exposed with occlusion, which enables realistic hand visualization.

Check out Unity’s improved Mixed Reality template for Android XR, which now includes support for occlusion and persistent anchors.

We recently launched Android XR Samples for Unity, which demonstrate capabilities on the Android XR platform such as hand tracking, plane tracking, face tracking, and passthrough.

Google’s open-source Unity samples demonstrate platform features and show how they’re implemented

Firebase AI Logic for Unity is now in public preview! This makes it easy for you to integrate gen AI into your apps, enabling the creation of AI-powered experiences with Gemini and Android XR. Firebase AI Logic fully supports Gemini’s capabilities, including multimodal input and output, and bi-directional streaming for immersive conversational interfaces. Built with production readiness in mind, Firebase AI Logic is integrated with core Firebase services like App Check, Remote Config, and Cloud Storage for enhanced security, configurability, and data management. Learn more about this on the Firebase blog or go straight to the Gemini API using the Vertex AI in Firebase SDK documentation to get started.

Continuing to build the future together

Our commitment to open standards continues with the glTF Interactivity specification, in collaboration with the Khronos Group, which will be supported in glTF models rendered by Jetpack XR later this year. Models using the glTF Interactivity specification are self-contained interactive assets that can have many pre-programmed behaviors, like rotating objects on a button press or changing the color of a material over time.

Android XR will be available first on Samsung’s Project Moohan, launching later this year. Soon after, our partners at XREAL will release the next Android XR device. Codenamed Project Aura, it’s a portable and tethered device that gives users access to their favorite Android apps, including those that have been built for XR. It will launch as a developer edition, specifically for you to begin creating and experimenting. The best news? With the familiar tools you use to build Android apps today, you can build for these devices too.

XREAL’s Project Aura

The Google Play Store is also getting ready for Android XR. It will list supported 2D Android apps on the Android XR Play Store when it launches later this year. If you are working on an Android XR differentiated app, you can get it ready for the big launch and be one of the first differentiated apps on the Android XR Play Store.

And we know many of you are excited for the future of Android XR on glasses. We are shaping the developer experience now and will share more details on how you can participate later this year.

To get started creating and developing for Android XR, check out developer.android.com/develop/xr where you will find all of the tools, libraries, and resources you need to work with the Android XR SDK. In particular, take a look at our samples and codelabs.

We welcome your feedback, suggestions, and ideas as you help shape Android XR. Your passion, expertise, and bold ideas are vital as we continue to develop Android XR together. We look forward to seeing your XR-differentiated apps when Android XR devices launch later this year!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

What’s New in Jetpack Compose




Posted by Nick Butcher – Product Manager

At Google I/O 2025, we announced a host of features, performance, stability, libraries, and tools updates for Jetpack Compose, our recommended Android UI toolkit. With Compose you can build excellent apps that work across devices. Compose has matured a lot since it was first announced (at Google I/O 2019!) and we’re now seeing 60% of the top 1,000 apps in the Play Store, such as MAX and Google Drive, use and love it.

New Features

Since I/O last year, Compose Bill of Materials (BOM) version 2025.05.01 adds new features such as:

    • Autofill support that lets users automatically insert previously entered personal information into text fields.
    • Auto-sizing text to smoothly adapt text size to a parent container size.
    • Visibility tracking for when you need high-performance information on a composable’s position in its root container, screen, or window.
    • Animate bounds modifier for beautiful automatic animations of a Composable’s position and size within a LookaheadScope.
    • Accessibility checks in tests that let you build a more accessible app UI through automated a11y testing.

LookaheadScope {
    Box(
        Modifier
            .animateBounds(this@LookaheadScope)
            .width(if (inRow) 100.dp else 150.dp)
            .background(..)
            .border(..)
    )
}

The animate bounds modifier in action
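The auto-sizing text support from the list above can be tried with BasicText; here’s a minimal sketch, assuming the TextAutoSize API shape in this BOM (parameter names may differ in your version):

BasicText(
    text = "Auto-sizing text",
    maxLines = 1,
    // Text scales between 12.sp and 48.sp to fit its container
    autoSize = TextAutoSize.StepBased(
        minFontSize = 12.sp,
        maxFontSize = 48.sp
    )
)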

For more details on these features, read What’s new in the Jetpack Compose April ’25 release and check out these talks from Google I/O.

If you’re looking to try out new Compose functionality, the alpha BOM offers new features that we’re working on, including:

    • Pausable Composition (see below)
    • Updates to LazyLayout prefetch
    • Context Menus
    • New modifiers: onFirstVisible, onVisibilityChanged, contentType
    • New lint checks for frequently changing values and for elements that should be remembered in composition

Please try out the alpha features and provide feedback to help shape the future of Compose.

Material Expressive

At Google I/O, we unveiled Material Expressive, Material Design’s latest evolution that helps you make your products even more engaging and easier to use. It’s a comprehensive addition of new components, styles, motion and customization options that help you build beautiful, rich UIs. The Material3 library in the latest alpha BOM contains many of the new expressive components for you to try out.

An example of Material Expressive design

Learn more to start building with Material Expressive.

Adaptive layouts library

Creating adaptive apps across form factors including phones, foldables, tablets, desktop, cars and Android XR is now easier with the latest enhancements to the Compose adaptive layouts library. The stable 1.1 release adds support for predictive back gestures for smoother transitions and pane expansion for more flexible two-pane layouts on larger screens. Additionally, the 1.2 (alpha) release adds more flexibility for how panes are displayed, adding strategies for reflowing and levitating.

Compose Adaptive Layouts updates in the Google Play app

Learn more about building adaptive Android apps with Compose.

Performance

With every release of Jetpack Compose, we continue to prioritize performance improvements. The latest stable release includes significant rewrites and improvements to multiple sub-systems including semantics, focus and text optimizations. Best of all, these are available to you simply by upgrading your Compose dependency; no code changes required.

Internal benchmark of jank rate, run on a Pixel 3a
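Picking up these improvements is just a dependency bump; for example, in the Gradle Kotlin DSL (material3 shown as one representative artifact):

dependencies {
    // Upgrading the BOM picks up the performance improvements; individual
    // Compose artifacts then resolve to matching versions automatically.
    implementation(platform("androidx.compose:compose-bom:2025.05.01"))
    implementation("androidx.compose.material3:material3")
}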

We continue to work on further performance improvements; notable changes in the latest alpha BOM include:

    • Pausable Composition allows compositions to be paused, and their work split up over multiple frames.
    • Background text prefetch enables text layout caches to be pre-warmed on a background thread, enabling faster text layout.
    • LazyLayout prefetch improvements enable lazy layouts to be smarter about how much content to prefetch, taking advantage of pausable composition.

Together these improvements eliminate nearly all jank in an internal benchmark.

Stability

We’ve heard from you that upgrading your Compose dependency can be challenging, with bugs or behaviour changes that prevent you from staying on the latest version. We’ve invested significantly in improving the stability of Compose, working closely with the many Google app teams building with Compose to detect and prevent issues before they even make it to a release.

Google apps develop against and release with snapshot builds of Compose; as such, Compose is tested against the hundreds of thousands of Google app tests and any Compose issues are immediately actioned by our team. We have recently invested in increasing the cadence of updating these snapshots and now update them daily from Compose tip-of-tree, which means we’re receiving feedback faster and are able to resolve issues long before they reach a public release of the library.

Jetpack Compose also relies on @Experimental annotations to mark APIs that are subject to change. We heard your feedback that some APIs have remained experimental for a long time, reducing your confidence in the stability of Compose. We have invested in stabilizing experimental APIs to provide you with a more solid API surface, and reduced the number of experimental APIs by 32% in the last year.

We have also heard that it can be hard to debug Compose crashes when your own code doesn’t appear in the stack trace. In the latest alpha BOM, we have added a new opt-in feature to provide more diagnostic information. Note that this doesn’t currently work with minified builds and comes at a performance cost, so we recommend only using this feature in debug builds.

class App : Application() {
   override fun onCreate() {
        super.onCreate()
        // Enable only for debug flavor to avoid perf impact in release
        Composer.setDiagnosticStackTraceEnabled(BuildConfig.DEBUG)
   }
}

Libraries

We know that to build great apps, you need Compose integration in the libraries that interact with your app’s UI.

A core library that powers any Compose app is Navigation. You told us that you often encountered limitations when managing state hoisting and directly manipulating the back stack with the current Compose Navigation solution. We went back to the drawing board and completely reimagined how a navigation library should integrate with the Compose mental model. We’re excited to introduce Navigation 3, a new artifact designed to empower you with greater control and simplify complex navigation flows.
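To give a sense of the model, here’s a hypothetical minimal sketch in which the app owns its back stack as plain snapshot state; NavDisplay, entryProvider, and entry follow the Navigation 3 preview naming and may differ from the final API, and HomeScreen/DetailScreen are placeholder composables:

// Hypothetical sketch: the back stack is state you own and mutate directly.
data object Home
data class Detail(val id: String)

@Composable
fun App() {
    val backStack = remember { mutableStateListOf<Any>(Home) }
    NavDisplay(
        backStack = backStack,
        onBack = { backStack.removeLastOrNull() },
        entryProvider = entryProvider {
            entry<Home> { HomeScreen(onOpen = { id -> backStack.add(Detail(id)) }) }
            entry<Detail> { key -> DetailScreen(key.id) }
        }
    )
}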

We’re also investing in Compose support for CameraX and Media3, making it easier to integrate camera capture and video playback into your UI with Compose idiomatic components.

@Composable
private fun VideoPlayer(
    player: Player?, // from media3
    modifier: Modifier = Modifier
) {
    Box(modifier) {
        PlayerSurface(player) // from media3-ui-compose
        player?.let {
            // custom play-pause button UI
            val playPauseButtonState = rememberPlayPauseButtonState(it) // from media3-ui-compose
            MyPlayPauseButton(playPauseButtonState, Modifier.align(BottomEnd).padding(16.dp))
        }
    }
}

To learn more, see the Media3 Compose documentation and the CameraX samples.

Tools

We continue to improve the Android Studio tools for creating Compose UIs. The latest Narwhal canary includes:

    • Resizable Previews instantly show you how your Compose UI adapts to different window sizes
    • Preview navigation improvements using clickable names and components
    • Studio Labs 🧪: Compose preview generation with Gemini to quickly generate a preview
    • Studio Labs 🧪: Transform UI with Gemini to change your UI with natural language, directly from preview
    • Studio Labs 🧪: Image attachment in Gemini to generate Compose code from images

For more information, read What’s new in Android development tools.

Resizable Preview

New Compose Lint checks

The Compose alpha BOM introduces two new annotations and associated lint checks to help you write correct and performant Compose code. The @FrequentlyChangingValue annotation and FrequentlyChangedStateReadInComposition lint check warn in situations where function calls or property reads in composition might cause frequent recompositions, for example when reading scroll position values or animating values. The @RememberInComposition annotation and RememberInCompositionDetector lint check warn in situations where constructors, functions, and property getters are called directly inside composition (e.g. the TextFieldState constructor) without being remembered.
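For instance, here’s a minimal sketch of the pattern the remember-in-composition check encourages (SearchField is a placeholder composable for illustration):

@Composable
fun SearchField() {
    // Flagged by the lint check: constructing TextFieldState directly in
    // composition recreates it (and loses user input) on every recomposition.
    // val state = TextFieldState()

    // Remembered instead: the same state instance survives recomposition.
    val state = remember { TextFieldState() }
    BasicTextField(state = state)
}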

Happy Composing

We continue to invest in providing the features, performance, stability, libraries and tools that you need to build excellent apps. We value your input, so please share feedback on our latest updates or what you’d like to see next.

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

What’s new in Watch Faces




Posted by Garan Jenkin – Developer Relations Engineer

Wear OS has a thriving watch face ecosystem featuring a variety of designs that also aims to minimize battery impact. Developers have embraced the simplicity of creating watch faces using Watch Face Format – in the last year, the number of published watch faces using Watch Face Format has grown by over 180%*.

Today, we’re continuing our investment and announcing version 4 of the Watch Face Format, available as part of Wear OS 6. These updates allow developers to express even greater levels of creativity through the new features we’ve added. And we’re supporting marketplaces, which gives flexibility and control to developers and more choice for users.

In this blog post we’ll cover the key new features; check out the documentation for more details of changes introduced in recent versions.

Supporting marketplaces with Watch Face Push

We’re also announcing a brand new API, the Watch Face Push API, aimed at developers who want to create their own watch face marketplaces.

Watch Face Push, available on devices running Wear OS 6 and above, works exclusively with watch faces that use the Watch Face Format.

We’ve partnered with well-known watch face developers – including Facer, TIMEFLIK, WatchMaker, Pujie, and Recreative – in designing this new API. We’re excited that all of these developers will be bringing their unique watch face experiences to Wear OS 6 using Watch Face Push.

From left to right: marketplace apps from Facer, Recreative and TIMEFLIK, working with watches running Wear OS 6.

Watch faces managed and deployed using Watch Face Push are all written using Watch Face Format. Developers publish these watch faces in the same way as publishing through Google Play, though there are some additional checks the developer must make, which are described in the Watch Face Push guidance.

A typical marketplace system: watch faces flow from cloud storage to the user’s phone app, and are then installed on the watch via the Wear OS app using the Watch Face Push API.

The Watch Face Push API covers only the watch part of this typical marketplace system diagram – as the app developer, you have control of and responsibility for the phone app and cloud components, as well as for building the Wear OS app using Watch Face Push. You’re also in charge of the phone-watch communications, for which we recommend using the Data Layer APIs.

Adding Watch Face Push to your project

To start using Watch Face Push on Wear OS 6, include the following dependency in your Wear OS app:

// Ensure the latest version is used by checking the repository
implementation("androidx.wear.watchface:watchface-push:1.3.0-alpha07")

Declare the required permission in your AndroidManifest.xml:

<uses-permission android:name="com.google.wear.permission.PUSH_WATCH_FACES" />

Obtain a Watch Face Push client:

val manager = WatchFacePushManagerFactory.createWatchFacePushManager(context)

You’re now ready to start using the Watch Face Push API, for example to list the watch faces you have already installed, or to add a new watch face:

// List existing watch faces, installed by this app
val listResponse = manager.listWatchFaces()

// Add a watch face
manager.addWatchFace(watchFaceFileDescriptor, validationToken)

Understanding Watch Face Push

While the basics of the Watch Face Push API are easy to understand and access through the WatchFacePushManager interface, it’s important to consider several other aspects when working with the API in practice to build an effective marketplace app, including:

    • Setting active watch faces – Through an additional permission, the app can set the active watch face. Learn how to integrate this feature, as well as how to handle the different permission scenarios.

To learn more about using Watch Face Push, see the guidance and reference documentation.

Updates to Watch Face Format

Photos

Available from Watch Face Format v4

The new Photos element allows the watch face to feature user-selectable photos. The element supports both individual photos and a gallery of photos. For a gallery of photos, developers can choose whether the photos advance automatically or when the user taps the watch face.

Configuring photos through the companion app

The user is able to select the photos of their choice through the companion app, making this a great way to include true personalization in your watch face. To use this feature, first add the required configuration:

<UserConfigurations>
  <PhotoConfiguration id="myPhoto" configType="SINGLE"/>
</UserConfigurations>

Then use the Photos element within any PartImage, in the same way as you would for an Image element:

<Photos source="[CONFIGURATION.myPhoto]"
        defaultImageResource="placeholder_photo"/>

For details on how to support multiple photos, and how to configure the different change behaviors, refer to the Photos section of the guidance and reference, as well as the GitHub samples.

Transitions

Available from Watch Face Format v4

Watch Face Format now supports transitions when exiting and entering ambient mode.

State transition animation: an example using an overshoot effect in revealing the seconds digits

This is achieved through the existing Variant tag. For example, the hours and minutes in the watch face above are animated as follows:

<Variant mode="AMBIENT" target="x" value="100" interpolation="OVERSHOOT" />

By default, the animation takes the full extent of allowed time for the transition. The new interpolation attribute controls the animation effect – in this case the use of OVERSHOOT adds a playful experience.

The seconds are implemented in a separate DigitalClock element, which shows the use of the new duration attribute:

<Variant mode="AMBIENT" target="alpha" value="0" duration="0.5"/>

The duration attribute takes a value between 0.0 and 1.0, with 1.0 representing the full extent of the allowed time. In this example, by using a value of 0.5, the seconds animation is faster – taking half the allowed time, in comparison to the hours and minutes, which take the entire transition period.

For more details on using transitions, see the guidance documentation, as well as the reference documentation for Variant.

Color Transforms

Available from Watch Face Format v4

We’ve extended the usefulness of the Transform element by allowing color to be transformed on the majority of elements where it’s an attribute, and also allowing tintColor to be transformed on Group and Part* elements such as PartDraw and PartText.

The main exceptions to this addition are the clock elements, DigitalClock and AnalogClock, and also ComplicationSlot, which don’t currently support Transform.

In addition to extending the list of transformable attributes to include colors, we’ve also added a handful of useful functions for manipulating color, including extractColorFromColors and extractColorFromWeightedColors.

To see these in action, let’s consider an example.

The Weather data source provides the current UV index through [WEATHER.UV_INDEX]. When representing the UV index, these values are typically also assigned a color:

The UV index scale: green (0–2), yellow (3–5), orange (6–7), red (8–10) and violet (11+)

We want to represent this information as an Arc, not only showing the value, but also using the appropriate color. We can achieve this as follows:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

Let’s break this down:

    • The first Transform restricts the UV index to the range 0.0 to 11.0 and adjusts the sweep of the Arc according to that value.
    • The second Transform uses the new extractColorFromWeightedColors function.
        • The first argument is our list of colors.
        • The second argument is a list of weights – you can see from the chart above that green covers 3 values, while orange only covers 2, so we use weights to represent this.
        • The third argument is whether or not to interpolate the color values. In this case we want to stick strictly to the color convention for UV index, so this is false.
        • Finally, in the fourth argument we coerce the UV value into the range 0.0 to 1.0, which is used as an index into our weighted colors.
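To make this concrete: a UV index of 5 yields clamp(5 + 0.5, 0.0, 12.0) / 12.0 ≈ 0.46 as the final argument. Assuming the weights partition the 0.0–1.0 range proportionally, green occupies 0.0–0.25, yellow 0.25–0.5, orange 0.5–0.67, red 0.67–0.92 and violet the rest, so 0.46 selects yellow – the conventional color for a moderate UV index.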

The result looks like this:

Using the new color functions in applying color transforms to a Stroke in an Arc.

As well as being able to provide raw colors and weights to these functions, they can also be used with values from complications, such as heart rate, temperature or step goals. For example, to use the color range specified in a goal complication:

<Transform target="color"
    value="extractColorFromColors(
        [COMPLICATION.GOAL_PROGRESS_COLORS],
        [COMPLICATION.GOAL_PROGRESS_COLOR_INTERPOLATE],
        [COMPLICATION.GOAL_PROGRESS_VALUE] /
            [COMPLICATION.GOAL_PROGRESS_TARGET_VALUE]
    )"/>

Introducing the Reference element

Available from Watch Face Format v4

The new Reference element allows you to refer to any transformable attribute from one part of your watch face scene in other parts of the scene tree.

In our UV index example above, we’d also like the text labels to use the same color scheme.

We could perform the same color transform calculation as on our Arc, using [WEATHER.UV_INDEX], but this is duplicative work which could lead to inconsistencies, for example if we change the exact color hues in one place but not the other.

Returning to the Arc definition, let’s create a Reference to the color:

<Arc centerX="0" centerY="0" height="420" width="420"
  startAngle="165" endAngle="165" direction="COUNTER_CLOCKWISE">
  <Transform target="endAngle"
    value="165 - 40 * (clamp([WEATHER.UV_INDEX], 0.0, 11.0) / 11.0)" />
  <Stroke thickness="20" color="#ffffff" cap="ROUND">
    <Reference source="color" name="uv_color" defaultValue="#ffffff" />
    <Transform target="color"
      value="extractColorFromWeightedColors(#97d700 #FCE300 #ff8200 #f65058 #9461c9, 3 3 2 3 1, false, clamp([WEATHER.UV_INDEX] + 0.5, 0.0, 12.0) / 12.0)" />
  </Stroke>
</Arc>

The color of the Arc is calculated from the relatively complex extractColorFromWeightedColors function. To avoid repeating this elsewhere in our watch face, we have added a Reference element, which takes as its source the Stroke color.

Let’s now take a look at how we can consume this value in a PartText elsewhere in the watch face. We gave the Reference the name uv_color, so we can simply refer to this in any expression:

<PartText x="0" y="225" width="450" height="225">
  <TextCircular centerX="225" centerY="0" width="420" height="420"
    startAngle="120" endAngle="90"
    align="START" direction="COUNTER_CLOCKWISE">
    <Font family="SYNC_TO_DEVICE" size="24">
      <Transform target="color" value="[REFERENCE.uv_color]" />
      <!-- remaining text definition omitted -->
    </Font>
  </TextCircular>
</PartText>
As a result, the color of the Arc and the UV numeric value are now coordinated:

Coordinating colors across elements using the Reference element

For more details on how to use the Reference element, refer to the Reference guidance.

Text autosizing

Available from Watch Face Format v3

Sometimes the exact length of the text to be shown on the watch face can vary, and as a developer you want to balance displaying text that is both legible and complete.

Auto-sizing text can help solve this problem, and can be enabled through the isAutoSize attribute introduced to the Text element:

<Text align="CENTER" isAutoSize="true">

Having set this attribute, text will automatically fit the available space, starting at the maximum size specified in your Font element, and with a minimum size of 12.

As an example, step count could range from tens or hundreds through to many thousands, and the new isAutoSize attribute enables the best use of the available space for every possible value:

Making the best use of the available text space through isAutoSize

For more details on isAutoSize, see the Text reference.

Android Studio support

For developers working in Android Studio, we’ve added support to make working with Watch Face Format easier, including:

    • Run configuration support
    • Auto-complete and resource references
    • Lint checking

This is available from Android Studio Canary version 2025.1.1 Canary 10.

Learn More

To learn more about building watch faces, please take a look at the following resources.

We’ve also recently launched a codelab for Watch Face Format and have updated the samples on GitHub to showcase the new features. The issue tracker is available for providing feedback.

We’re excited to see the watch face experiences that you create and share!

Explore this announcement and all Google I/O 2025 updates on io.google starting May 22.

* Google Play data for the period 2024-03-24 to 2025-03-23

The emperors of AI coding tools have no clothes – and it’s creating a productivity delusion


For three years, I’ve watched the AI coding revolution unfold with a mixture of fascination and frustration. As someone who’s built and led AI engineering and product teams at Google, SwiftKey, Yahoo, and beyond, I’m calling it: we’re collectively falling for a productivity mirage.

We’re celebrating typing speed while ignoring the actual bottlenecks that cripple software development and most annoy dev teams.

Developers only spend a couple of hours each day writing code, and they hate this. They spend most of their time “not making their beer taste better”, doing soul-sucking boring work: scaffolding projects, managing tickets, refining stories, fixing bugs, handling tech debt, writing tests, fixing build issues… you only need to look at the Stack Overflow blog to hear the outcry. Yet, as a society, we’re pouring millions into making those 1-2 hours marginally faster – the hours where devs actually have the most fun.

The problem with dopamine

We’re witnessing what I call “the 80% problem” across the industry – a dopamine hit that hides the real pain. What do I mean by this? I used to joke that, with bad management, “the first 80% takes 20% of the time, and the remaining 20% takes… 80% of the time”. Well, we’re managing AI badly. Current AI tools get you 80% of the way quickly, creating a dopamine hit of productivity. It’s the last 20% that wastes all your time savings. It’s like sprinting the first 19 miles of a marathon and feeling great, but then having absolutely no legs to finish off the remaining 7 miles you’ve left.

It’s that 20% that contains most of the subtle bugs, because the AI has made a litany of minute errors that have piled up without oversight. It’s this final 20% that represents the actual design challenge requiring human expertise – and honestly, that should have been done in collaboration with the AI, not after the AI. The result is a psychological mirage where developers feel productive initially, but team velocity stays stubbornly unchanged. Even worse, you’re building up technical debt in the form of poor quality that, over time, means your product and technology start to slowly crumble under the AI code-slop.

This tunnel vision is baffling if you’re a competent manager. To draw on another analogy – imagine Toyota revolutionizing manufacturing by only optimizing how quickly workers insert screws, while ignoring the entire production line. We’d laugh at such limited thinking. Yet this is precisely what’s happening with AI coding tools.

A recent Wired survey of 730 developers captured this perfectly, with many seeing AI as “helpful, but clueless” – essentially a hyperefficient intern that still can’t handle context, edge cases, or real problem-solving. This matches exactly what I’ve observed in enterprise environments.

Speaking with CTOs across the industry, I’ve yet to find one who can demonstrate consistent, measurable improvement in delivery metrics from their AI investments. Where are the SEC filings showing reliable 20+% speedups? They don’t exist, because we’re optimizing the wrong things.

Instead, we’re turning developers into glorified secretaries for AI – manually ferrying context between systems, copy-pasting specs, and cleaning up hallucinated code. The bitter irony is that tools meant to eliminate tedious work have created a new kind of drudgery, removed the fun, and even created new messes. You’re not using the AI – you’re serving it. It’s easy to feel productive initially, but it’s not sustainable unless you bring full context to the AI… at which point you’re essentially working for the AI, not the other way around.

Suffering from buyer’s remorse?

Given the frenzy of spending over the past few years, I can’t just complain, so here’s some remedial advice for those CTOs who didn’t keep the receipt on their AI coding tools.

First, demand measurement beyond vanity metrics. Focus squarely on time from a well-written ticket to deployment – that’s the only throughput that matters. Don’t ask devs whether they “feel more productive” because they’ve outsourced thinking to AIs.

Second, prioritize quality alongside speed. You can’t accept that writing subtle defects into your code faster is a good trade-off. Context is everything in engineering, and tools that can’t access and understand your full development context will always deliver subpar results. Tools should discover the context for you; why are you chasing down context for the AI? Are the tools reading your tests and architecture docs automatically? Are they running what they write against your tests automatically and fixing the issues? Are they running your linters or following your most basic coding requirements?

Third, widen your scope of optimization, don’t narrow it. This feels counterintuitive – we’re taught to ship in thin slices. But the greatest system improvements come from global optimizations, not local ones. It’s like my experience building IoT devices: rather than squeezing 15% better performance out of a power-hungry GPS chip, we solved the local problem globally: we added a 2-cent motion sensor and a 5-cent low-power processor that triggered the GPS only when needed, transforming battery life entirely.

The truly transformative opportunity lies in removing entire steps from your process, not optimizing individual ones. Why are we paying senior engineers £150k to manually create branches and scaffold boilerplate code? Why do we care at all about typing speed now?

Consider Stripe and Netflix – their competitive advantage comes not from typing code faster but from ruthlessly eliminating handoffs and bottlenecks between teams. Stripe invested heavily in streamlining code reviews, testing automation, and deployments between engineering, product, QA, and operations teams. Netflix focused on automated delivery pipelines and chaos engineering practices that minimized bottlenecks between dev, ops, and deployment teams, enabling rapid global deployments.

This isn’t just about efficiency – it’s a strategic advantage. While others celebrate marginal coding gains, companies addressing the full development lifecycle are entering markets faster and responding to customer needs before competitors even understand the requirements. It’s the difference between market leadership and irrelevance. And you have a bit of time before your competitors wake up and move on this before you do – but time is quickly running out.

The path forward is clear: treat AI as a system that completes entire tasks, not as a glorified autocomplete. Measure success through meaningful DORA metrics, not lines of code generated. And demand that AI adapts to your team’s established processes, not vice versa.

The question isn’t whether AI will transform software development. It absolutely will. The question is whether we’ll optimize what actually matters. And whether you’re leading or following.