
3 fun experiments to try for your next Android app, using Google AI Studio




Posted by Paris Hsu – Product Manager, Android Studio

We shared an exciting live demo from the Developer Keynote at Google I/O 2024 where Gemini transformed a wireframe sketch of an app’s UI into Jetpack Compose code, directly within Android Studio. While we’re still refining this feature to make sure you get a great experience inside Android Studio, it’s built on top of foundational Gemini capabilities which you can experiment with today in Google AI Studio.

Specifically, we’ll delve into:

    • Turning designs into UI code: Convert a simple image of your app’s UI into working code.
    • Smart UI fixes with Gemini: Receive suggestions on how to improve or fix your UI.
    • Integrating Gemini prompts in your app: Simplify complex tasks and streamline user experiences with tailored prompts.

Note: Google AI Studio offers a number of general-purpose Gemini models, whereas Android Studio uses a custom version of Gemini that has been specifically optimized for developer tasks. While this means these general-purpose models may not offer the same depth of Android knowledge as Gemini in Android Studio, they provide a fun and engaging playground to experiment and gain insight into the potential of AI in Android development.

Experiment 1: Turning designs into UI code

First, to turn designs into Compose UI code: open the chat prompt section of Google AI Studio, upload an image of your app’s UI screen (see example below), and enter the following prompt:

“Act as an Android app developer. For the image provided, use Jetpack Compose to build the screen so that the Compose Preview is as close to this image as possible. Also make sure to include imports and use Material3.”

Then, click “run” to execute your query and see the generated code. You can copy the generated output directly into a new file in Android Studio.

Image uploaded: Designer mockup of an application’s detail screen

Google AI Studio custom chat prompt: Image → Compose

Running the generated code (with minor fixes) in Android Studio

With this experiment, Gemini was able to infer details from the image and generate corresponding code components. For example, the original image of the plant detail screen featured a “Care Instructions” section with an expandable icon; Gemini’s generated code included an expandable card specifically for plant care instructions, showcasing its contextual understanding and code generation capabilities.
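
For reference, the expandable card Gemini produced looked roughly like the minimal sketch below. This is not the exact generated output; the composable name, strings, and styling are illustrative, and results will vary from run to run.

    import androidx.compose.animation.animateContentSize
    import androidx.compose.foundation.clickable
    import androidx.compose.foundation.layout.Column
    import androidx.compose.foundation.layout.Row
    import androidx.compose.foundation.layout.fillMaxWidth
    import androidx.compose.foundation.layout.padding
    import androidx.compose.material.icons.Icons
    import androidx.compose.material.icons.filled.KeyboardArrowDown
    import androidx.compose.material.icons.filled.KeyboardArrowUp
    import androidx.compose.material3.Card
    import androidx.compose.material3.Icon
    import androidx.compose.material3.MaterialTheme
    import androidx.compose.material3.Text
    import androidx.compose.runtime.Composable
    import androidx.compose.runtime.getValue
    import androidx.compose.runtime.mutableStateOf
    import androidx.compose.runtime.remember
    import androidx.compose.runtime.setValue
    import androidx.compose.ui.Alignment
    import androidx.compose.ui.Modifier
    import androidx.compose.ui.unit.dp

    @Composable
    fun CareInstructionsCard(instructions: String, modifier: Modifier = Modifier) {
        // Whether the card is expanded; toggled when the user taps the card.
        var expanded by remember { mutableStateOf(false) }

        Card(
            modifier = modifier
                .fillMaxWidth()
                .animateContentSize()
                .clickable { expanded = !expanded }
        ) {
            Column(Modifier.padding(16.dp)) {
                Row(verticalAlignment = Alignment.CenterVertically) {
                    Text(
                        text = "Care Instructions",
                        style = MaterialTheme.typography.titleMedium,
                        modifier = Modifier.weight(1f)
                    )
                    // Chevron icon flips direction to reflect the expanded state.
                    Icon(
                        imageVector = if (expanded) Icons.Filled.KeyboardArrowUp
                        else Icons.Filled.KeyboardArrowDown,
                        contentDescription = if (expanded) "Collapse" else "Expand"
                    )
                }
                if (expanded) {
                    Text(text = instructions, modifier = Modifier.padding(top = 8.dp))
                }
            }
        }
    }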

Experiment 2: Smart UI fixes with Gemini in AI Studio

Inspired by “Circle to Search”, another fun experiment you can try is to “circle” problem areas on a screenshot, along with relevant Compose code context, and ask Gemini to suggest appropriate code fixes.

You can explore this concept in Google AI Studio:

    1. Upload Compose code and screenshot: Upload the Compose code file for a UI screen and a screenshot of its Compose Preview, with a red outline highlighting the issue; in this case, items in the Bottom Navigation Bar that should be evenly spaced. (A minimal sketch of this kind of fix follows the screenshots below.)

Example: Preview with problem area highlighted

Google AI Studio: Smart UI Fixes with Gemini

Example: Generated code fixed by Gemini

Example: Preview with fixes applied
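
To make the kind of fix concrete, here is a minimal sketch (not the actual code from the screenshots above) of the change Gemini might suggest for a hand-rolled bottom bar whose items cluster together: setting the Row’s horizontalArrangement to Arrangement.SpaceEvenly. The composable name and icons are illustrative.

    import androidx.compose.foundation.layout.Arrangement
    import androidx.compose.foundation.layout.Row
    import androidx.compose.foundation.layout.fillMaxWidth
    import androidx.compose.material.icons.Icons
    import androidx.compose.material.icons.filled.Home
    import androidx.compose.material.icons.filled.Person
    import androidx.compose.material.icons.filled.Search
    import androidx.compose.material3.Icon
    import androidx.compose.material3.IconButton
    import androidx.compose.runtime.Composable
    import androidx.compose.ui.Modifier

    @Composable
    fun BottomNavBar(modifier: Modifier = Modifier) {
        Row(
            modifier = modifier.fillMaxWidth(),
            // The suggested fix: distribute the items evenly across the bar
            // instead of letting them cluster at the start of the Row.
            horizontalArrangement = Arrangement.SpaceEvenly
        ) {
            IconButton(onClick = { /* navigate to home */ }) {
                Icon(Icons.Filled.Home, contentDescription = "Home")
            }
            IconButton(onClick = { /* navigate to search */ }) {
                Icon(Icons.Filled.Search, contentDescription = "Search")
            }
            IconButton(onClick = { /* navigate to profile */ }) {
                Icon(Icons.Filled.Person, contentDescription = "Profile")
            }
        }
    }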

Experiment 3: Integrating Gemini prompts in your app

Gemini can streamline experimentation and development of custom app features. Imagine you want to build a feature that gives users recipe ideas based on an image of the ingredients they have on hand. In the past, this would have involved complex tasks like hosting an image recognition library, training your own ingredient-to-recipe model, and managing the infrastructure to support it all.

Now, with Gemini, you can achieve this with a simple, tailored prompt. Let’s walk through how to add this “Cook Helper” feature into your Android app as an example:

    1. Explore the Gemini prompt gallery: Discover example prompts or craft your own. We’ll use the “Cook Helper” prompt.

Google AI for Developers: Prompt Gallery

    2. Open and experiment in Google AI Studio: Test the prompt with different images, settings, and models to ensure the model responds as expected and the prompt aligns with your goals.

Google AI Studio: Cook Helper prompt

    3. Generate the integration code: Once you’re satisfied with the prompt’s performance, click “Get code” and select “Android (Kotlin)”. Copy the generated code snippet.

Google AI Studio: Get code – Android (Kotlin)

    4. Integrate the Gemini API into Android Studio: Open your Android Studio project. You can either use the new Gemini API app template provided within Android Studio or follow this tutorial. Paste the copied generated prompt code into your project (the sketch below shows roughly what that snippet looks like).
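
For orientation, the generated snippet typically has the shape of the sketch below, which assumes the Google AI client SDK for Android (the com.google.ai.client.generativeai package). The function name, model name, and prompt text here are placeholders; use the exact code that “Get code” produces for your own prompt.

    import android.graphics.Bitmap
    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.content

    // Hypothetical Cook Helper call: the model name, prompt wording, and the way
    // the API key is supplied are placeholders for the values in the snippet
    // generated by Google AI Studio.
    suspend fun suggestRecipes(ingredientsPhoto: Bitmap, apiKey: String): String? {
        val model = GenerativeModel(
            modelName = "gemini-1.5-flash",
            apiKey = apiKey
        )

        // Multimodal request: the ingredients photo plus a text instruction.
        val response = model.generateContent(
            content {
                image(ingredientsPhoto)
                text("Suggest recipes I can cook with the ingredients shown in this photo.")
            }
        )
        return response.text
    }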

That’s it – your app now has a functioning Cook Helper feature powered by Gemini. We encourage you to experiment with different example prompts or even create your own custom prompts to enhance your Android app with powerful Gemini features.

Our approach to bringing AI to Android Studio

While these experiments are promising, it’s important to remember that large language model (LLM) technology is still evolving, and we’re learning along the way. LLMs can be non-deterministic, meaning they can sometimes produce unexpected results. That’s why we’re taking a cautious and thoughtful approach to integrating AI features into Android Studio.

Our philosophy towards AI in Android Studio is to augment the developer and ensure they remain “in the loop.” In particular, when the AI is making suggestions or writing code, we want developers to be able to carefully audit the code before checking it into production. That’s why, for example, the new Code Suggestions feature in Canary automatically brings up a diff view for developers to preview how Gemini proposes to modify your code, rather than blindly applying the changes directly.

We want to make sure these features, like Gemini in Android Studio itself, are thoroughly tested, reliable, and truly useful to developers before we bring them into the IDE.

What’s next?

We invite you to try these experiments and share your favorite prompts and examples with us using the #AndroidGeminiEra tag on X and LinkedIn as we continue to explore this exciting frontier together. Also, make sure to follow Android Developers on LinkedIn, Medium, YouTube, or X for more updates! AI has the potential to revolutionize the way we build Android apps, and we can’t wait to see what we can create together.


