
Navigating COPPA Compliance: A Security-Focused Guide for K-12 Schools and Libraries


The Children’s Online Privacy Protection Act (COPPA) is a federal law designed to protect the online privacy of children under 13. For schools and libraries, complying with COPPA is essential—not only to meet legal obligations but also to safeguard the privacy and security of young users in today’s digital world.

Understanding COPPA’s Fundamentals

COPPA requires websites and online services to secure verifiable parental consent before collecting, using, or sharing personal information from children under 13. This includes names, addresses, email addresses, and persistent identifiers (e.g., cookies). The law applies to any service targeting children or knowingly collecting their data.

Key Security Practices for COPPA Compliance

Educational institutions and libraries must prioritize strong security measures to protect children’s personal data. Critical steps include:

  • Data Minimization: Collect only what’s necessary for the stated purpose.
  • Access Controls: Restrict data access to authorized personnel.
  • Encryption: Protect data in transit and at rest with strong encryption methods.
  • Regular Security Assessments: Identify and address vulnerabilities.
  • Vendor Management: Ensure third-party vendors comply with COPPA standards.
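
To make the first practice concrete, here is a minimal Python sketch (illustrative only; the field names are hypothetical, not drawn from any specific product) showing data minimization enforced as an allowlist filter applied before a record is stored:

```python
# Hedged sketch: keep only an explicit allowlist of fields so that
# nothing beyond the stated purpose is ever persisted.
ALLOWED_FIELDS = {"display_name", "grade_level"}  # hypothetical allowlist

def minimize_record(raw_record: dict) -> dict:
    """Return a copy of the record containing only allowlisted fields."""
    return {k: v for k, v in raw_record.items() if k in ALLOWED_FIELDS}

submitted = {
    "display_name": "reader42",
    "grade_level": "5",
    "home_address": "123 Main St",   # dropped: not needed for the stated purpose
    "tracking_cookie": "abc123",     # dropped: a persistent identifier
}
print(minimize_record(submitted))  # → {'display_name': 'reader42', 'grade_level': '5'}
```

Centralizing the filter in one function also gives auditors a single place to verify exactly which fields a service can ever collect.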

Practical Steps for Compliance

To meet COPPA requirements, follow these actionable steps:

  1. Review Privacy Policies: Clearly outline how children’s data is collected, used, and protected. Make your policies accessible and easy to understand.
  2. Implement Parental Consent Mechanisms: Use FTC-approved methods such as email verification or physical mail to obtain verifiable parental consent.
  3. Provide Staff Training: Train employees on COPPA requirements and best practices for securing sensitive data.
  4. Prepare an Incident Response Plan: Develop a structured plan for handling data breaches or unauthorized access.
  5. Monitor Systems Continuously: Regularly check for security threats to maintain compliance and protect user data.
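
As an illustration of step 2, the sketch below shows one way an email-verification flow might sign and later check a consent token using Python's standard library. This is a hypothetical simplification: a production system would also need token expiry, secure key storage, and an FTC-approved verification process around it.

```python
import hashlib
import hmac

# Assumption: in practice this key would come from a secrets manager,
# never be hard-coded.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def make_consent_token(parent_email: str, child_account_id: str) -> str:
    """Create the token embedded in the verification link emailed to the parent."""
    message = f"{parent_email}:{child_account_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_consent_token(parent_email: str, child_account_id: str, token: str) -> bool:
    """Check the token returned when the parent clicks the link."""
    expected = make_consent_token(parent_email, child_account_id)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, token)

token = make_consent_token("parent@example.com", "child-001")
assert verify_consent_token("parent@example.com", "child-001", token)
assert not verify_consent_token("parent@example.com", "child-002", token)
```

Because the token is an HMAC over both the parent's email and the child's account, a link for one account cannot be replayed to grant consent for another.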

Cisco Solutions for COPPA Compliance

Cisco’s security portfolio offers tools to help K-12 schools and libraries meet COPPA requirements. Technologies like Secure Firewall, Umbrella, and Secure Endpoint provide advanced threat protection, content filtering, and data loss prevention. These solutions enable institutions to protect children’s online privacy while maintaining compliance.

  • Encryption: Protects sensitive data during transmission.
  • Access Controls: Limits data access to authorized users.
  • Threat Detection: Identifies and mitigates potential breaches.
  • Parental Controls: Ensures compliance with privacy standards.

Strengthen Your Security Posture Today

By adopting proactive security measures and leveraging Cisco’s solutions, schools and libraries can protect children’s privacy, maintain compliance, and foster trust in their communities.

Get started today!


Akshita Iyer, Founder and CEO of Ome – Interview Series


Akshita Iyer, Founder and CEO of Ome, is a neuroscience graduate from Duke University whose entrepreneurial path was sparked by a personal experience with a kitchen fire. This pivotal moment led to the creation of Ome, a company focused on reimagining the cooking experience through smart home innovation. Under her leadership, Ome launched the patented Smart Knob, a solution designed to make kitchens safer and more automated. Iyer also serves on the UL 858 Technical Committee, where she works alongside industry experts to uphold safety standards for household appliances.

Ome is the world’s first Smart Knob that replaces your stove knobs to add real-time remote control, automatic shut-off, and voice integration—making cooking safer and stress-free. Compatible with most gas and electric stoves, Ome helps prevent kitchen fires and offers hands-free control without requiring a subscription.

Can you tell us a bit about your personal journey from neuroscience at Duke to becoming the founder of a smart home tech company?

Honestly, I never expected to end up in the kitchen appliance space. I studied neuroscience at Duke with plans to go to med school. After graduation, I worked at a hospital to gain experience, but during that time, I got hooked on Shark Tank. I was fascinated — not just by the products, but by the people. So many of them weren’t traditional entrepreneurs. They were just regular people solving personal problems. That really stuck with me.

Then something happened in my own home: my mom, who was diagnosed with Parkinson’s, accidentally left the stove on and started a kitchen fire. Thankfully no one was hurt, but it was a wake-up call. I started searching for a solution and assumed something simple already existed to solve this — but I quickly realized how antiquated and underserved the kitchen space is.

Every other part of the home had gotten smarter — smart thermostats, locks, lights — but the stove, arguably the most dangerous appliance in the house, had been left behind.

That’s when everything clicked. If there were retrofit solutions for doorbells and thermostats, why not the stove? I didn’t have a tech background, but I couldn’t shake the idea and thought, “how hard could it be?” (Spoiler: pretty hard.) But I dove in anyway — and that’s how Ome was born.

What were the biggest challenges you faced in the early days of building Ome, and how did you overcome them?

Switching from neuroscience and a med school track to founding a tech company was a huge leap. I was a first-time entrepreneur, figuring everything out as I went. One of the biggest early hurdles was learning how to raise capital. I had no experience in the startup world, and initially I thought having a strong idea and a compelling product would be enough. I quickly learned that wasn’t the case. I had to learn how to pitch, who to pitch to, and how to tailor the message.

For example, I remember pitching to an investor whose thesis was to invest in B2B SaaS startups — in retrospect, completely the wrong fit — and his confusion was palpable. That taught me early on that I needed to do my homework and understand not just how to tell our story, but who I was telling it to. Fundraising became as much about psychology and strategy as it was about the product.

And then the pandemic hit. Our supply chain came to a standstill. We couldn’t get components, manufacturing slowed, and we had early supporters waiting for a product we couldn’t ship on time. That was a defining moment for us. We got really honest with our customers — transparency became our north star. Internally, we set up a war room, re-evaluated suppliers, and found alternative solutions to get manufacturing back on track, even at limited capacity.

Looking back, those challenges shaped the resilience of our company. We learned to adapt fast, communicate clearly, and solve one problem at a time. And that mindset still drives how we operate today.

Can you give us a quick breakdown of how the Ome Smart Knob works and what makes it unique in the smart kitchen space?

Ome is the world’s first Smart Stove Knob designed to make cooking safer, simpler, and more intuitive. It’s the only retrofit device that replaces your existing stove knobs and can be installed in just minutes on virtually all gas or electric stovetops and ranges. Once installed, it turns your stove into a smart appliance that you can monitor and control in real time, from anywhere.

What sets Ome apart is that it tackles one of the most overlooked problems in the smart home: unattended, distracted cooking, which is the leading cause of house fires. Our technology offers features like automatic shut-off in case of absence, safety locks, built-in timers, and hands-free voice control via Amazon Alexa — giving users peace of mind every time they cook.

But we’re not just building a safer stove. Our vision is to create a fully connected kitchen — integrating auxiliary sensors, smart recipe content, and even grocery delivery to streamline and simplify the entire cooking experience.

What also makes Ome unique is our impact beyond individual households. Our platform includes a centralized dashboard that allows property managers in multi-unit buildings — like dorms, condos, and senior living communities — to monitor stove activity remotely and proactively prevent hazards. It’s not just about consumer convenience; it’s about public safety, especially in environments where a single incident can affect many lives.

While many smart kitchen gadgets add to counter clutter or only offer niche features, Ome delivers something that’s both essential and seamless — a smarter, safer, and more intuitive kitchen, starting with the most used cooking appliance.

You’ve described your view of AI in the home as “ambient intelligence” rather than app-centric. What does that mean in practice, and why is it important?

To me, ambient intelligence means tech that blends into your life — quietly working in the background, learning from your behavior, and stepping in only when needed. Most smart home products today are very app-driven. You have to open an app, press a button, adjust settings. That’s not really smart — it’s just remote control with extra steps.

With Ome, we want to flip that script and pave the path toward a future where your home anticipates risks and takes action before you even notice a problem. Imagine a stove that shuts off if left on too long, or detects patterns that might indicate a problem, without you lifting a finger. The best technology doesn’t interrupt your life. It enhances it quietly, and keeps you safe without being intrusive.

The smart home market is full of flashy gadgets — how does Ome push back against that novelty bias in favor of simplicity and trust?

We intentionally built something simple. The Ome Smart Knob doesn’t try to be flashy. It looks and feels like the thing it replaces, and that’s the point. It installs easily, works quietly, and adds real value.

In safety, less is more. People don’t want another app to manage or a complicated interface. They want solutions that feel natural, that just work. They want peace of mind. We’ve heard so many users say, “This just makes sense.” That’s the kind of feedback we strive for.

In your view, what does the industry get wrong about what people actually want or need from smart home technology?

I think there’s a tendency to over-engineer — more features, more control, more complexity. But most people just want less: less friction, less risk, less stuff to manage.

Smart homes should enable independence, not require constant attention. Especially for older adults or busy families, technology should be supportive, not stressful. That’s what we focus on: building tech that serves real needs in intuitive ways.

You’re part of the UL 858 Technical Committee, helping define safety standards. How does that work influence your decisions at Ome?

Being on the UL 858 Technical Committee has been incredibly eye-opening. It’s one thing to develop new technology. It’s another to sit at the table where safety standards are actually being written. UL 858 focuses on preventing hazards like unintentional activation and overheating in electric ranges, and a big part of that work is figuring out how safety evolves as appliances get smarter.

That perspective directly shapes our approach at Ome. We made a very deliberate decision that our Smart Knob requires a manual push-and-turn to activate — not because it’s trendy, but because it aligns with a core safety principle: intentional use. We don’t allow remote start, because when you’re dealing with fire and gas, there’s no room for ambiguity about user intent.

Being in those discussions — where the consequences of a design decision are measured in lives and property — reinforces our commitment to real-world safety over flashy features. It’s helped us stay clear on our priorities: building tech that’s not just smart, but safe by design, trustworthy by default, and always grounded in the realities of the home.

Why was it important for you to create a retrofit solution instead of building a new smart appliance from scratch?

From day one, accessibility and sustainability have been core to our mission, and that’s exactly why we chose to build a retrofit product.

Replacing an entire stove just to make it smart isn’t realistic for most people. It’s expensive, a major purchasing decision, and usually isn’t necessary unless your appliance breaks or you’re building a new house. We saw an opportunity to do something more thoughtful: create a solution that works with what people already have. With Ome, upgrading your stove takes minutes — no remodeling, no steep learning curve, and no need to replace something that still works.

There’s also a deeper sustainability angle here that’s often missed. So many modern appliances are designed to be replaced — short product cycles, software that ages out quickly, components that fail early. We’re pushing against that trend. By extending the life of existing appliances, we’re reducing waste and offering a smarter alternative.

To us, innovation isn’t about starting from scratch. It’s about rethinking what’s possible with what’s already in place to make homes safer, smarter, and more sustainable in the process.

How do you think the broader smart home landscape will evolve in the next five years, especially in terms of AI integration?

Over the next five years, I think we’ll see a fundamental shift from “smart” homes to truly intelligent homes, and AI will be the driver of that change. But it won’t look like what we see today. It won’t be about more screens or more control panels. The future is ambient — quiet, contextual, and deeply intuitive.

AI will become better at learning how we live from our routines, preferences, and patterns of behavior. Instead of constant notifications and manual inputs, we’ll see systems that step in automatically — adjusting, assisting, and protecting us in ways that feel invisible but meaningful. Think of a home that senses risk and responds instantly, without waiting for a command.

This shift will also redefine what “smart” actually means. Right now, there’s a flood of connected devices that don’t add real value. Over time, I believe we’ll see the market correct itself, moving from gimmicks to purpose-driven innovation. Safety, aging in place, accessibility, and sustainability will become the benchmarks that matter.

The companies that thrive in this next phase won’t be the ones chasing trends. They’ll be the ones quietly building trust, solving real pain points, and creating tech that blends into the background but makes a real difference in everyday life.

Thank you for the great interview; readers who wish to learn more should visit Ome.

Freebeat Review: The Easiest Way to Make Viral Music Videos


Did you know that a large part of human communication (55%) is nonverbal? This includes what we see in body language and facial expressions.

That means when you share a post, a song, or a message online, those watching are connecting more to what they see than what they hear. But here’s the thing: not everyone has the time, tools, or knowledge to turn their ideas into scroll-stopping videos.

That’s where Freebeat comes in.

Freebeat is your shortcut to transforming any song or text into dynamic, beat-matching videos. The best part? No editing experience is required.

In this Freebeat review, I’ll discuss the pros and cons, what it is, who it’s best for, and its key features. Then, I’ll show you how I used Freebeat to generate clips for the song “Flowers” by Miley Cyrus.

I’ll finish the article by comparing Freebeat with my top three alternatives (Riffusion, Udio, and Soundraw). By the end, you’ll know if Freebeat is right for you!

Verdict

Freebeat is a fun and easy tool for turning songs or text into beat-matching videos. It’s perfect for all skill levels. However, slow free-plan speeds, extra clip costs, and limited customization may frustrate those needing more control.

Pros and Cons

Pros:

  • Works for all skill levels with a simple interface.
  • The AI automatically turns music, text, or images into videos.
  • Easily use music from Spotify, YouTube, TikTok, and more.
  • Visuals automatically match the song’s beat perfectly.
  • Edit individual clip prompts and regenerate.

Cons:

  • Using Freebeat on smartphones or tablets can be difficult.
  • Limited customization may make advanced users feel restricted.
  • There’s no sandbox to test before publishing.
  • Video generation takes a long time on the free plan.
  • Regenerating individual clips costs extra credits.
  • You can tell the clips are generated with AI.

What is Freebeat?

The Freebeat homepage.

Freebeat is an AI tool that turns music into visuals, including lyric videos, dance videos, and more. It automatically syncs music with animations and scenes to create engaging social media or streaming content.

With Freebeat’s latest upgrade (Freebeat.ai 3.0), the AI tool has expanded beyond just music-to-video tools. It now offers text, image, and effects-driven video generation. This means you can easily turn almost anything (scripts, photos, and music tracks) into engaging videos.

The platform supports music from just about everywhere: Spotify, SoundCloud, YouTube, Suno, TikTok, you name it. Otherwise, you can upload your own files.

If you’re a content creator trying to build a social media presence, you can create everything from TikToks to YouTube videos. But I’m not saying Freebeat is perfect. The clips it generates can look distorted, so you might need to regenerate if the first result isn’t quite right.

Who is Freebeat Best For?

Here’s who Freebeat AI is best for:

  • Musicians and music producers can use Freebeat to quickly and easily generate lyric videos, dance videos, or music videos perfectly synced to their tracks.
  • Content creators and influencers can use Freebeat to make engaging videos for TikTok, YouTube, Instagram, and other platforms in seconds.
  • Social media managers and digital marketers can use Freebeat to produce branded video content for campaigns.
  • Storytellers and entrepreneurs can use Freebeat to turn their scripts, photos, and ideas into dynamic videos with minimal effort.
  • Beginners in video editing can use Freebeat’s intuitive workflow to create professional videos without any prior editing experience.

Freebeat Key Features

Here are Freebeat’s key features:

  • One-Click AI Video Creation: Instantly turn music into engaging videos.
  • Multiple Video Types: Generate dance videos, lyric videos, and music videos.
  • Automatic Beat Synchronization: The AI syncs clips perfectly to the rhythm of your song.
  • Support for Major Music Platforms: Compatible with Spotify, YouTube, TikTok, SoundCloud, Suno, Udio, Stable Audio, Riffusion, and local files.
  • Music to Dance Video: Auto-generate dance videos with AI-choreographed avatars and dance styles.
  • Music to Lyrics Video: Instantly create animated lyric videos synced to the song’s rhythm.
  • AI Video Generator: Produce videos from text or images with effects and transitions.
  • AI Video Effect: Add various AI effects (funny, festival, artistic, etc.)
  • Free Stock Footage: Access 10,000+ free HD video clips for backgrounds and B-roll.
  • Auto Caption: Automatically generate captions for your uploaded videos.
  • Caption Translation: Translate captions into 31+ languages for global reach.
  • Library: An organized repository for all your generated videos.

How to Use Freebeat

Here’s how I used Freebeat to generate a music video:

  1. Start Making Free Videos
  2. Drop a Music Link
  3. Choose a Section of the Song
  4. Choose What You Want to Create
  5. Add a Background Idea & Generate
  6. View the Music Video
  7. Regenerate Clips & Download the Video

Step 1: Start Making Free Videos

Selecting "Make Free Videos" on the Freebeat homepage.

I started by going to freebeat.ai and selecting “Make Free Videos” at the top right.

Step 2: Drop a Music Link

Pasting a music link into Freebeat.

The first step was pasting a music link. The link could be from any of the following platforms:

  • YouTube
  • Spotify
  • Udio
  • TikTok
  • SoundCloud
  • Stable Audio
  • Riffusion
  • Suno

Step 3: Choose a Section of the Song

Selecting a part of the song uploaded to Freebeat and sampling it.

I pasted a YouTube link of “Flowers” by Miley Cyrus. From there, I dragged my selection to the part of the song I wanted (between 15 and 30 seconds), and I sampled it by hitting the play button. I selected the first part of the chorus.

Step 4: Choose What You Want to Create

Creating a music video with Freebeat.

Next, I had to choose what I wanted to create:

  • Music Video: Turn my song into art
  • Dance Video: Dancify my song with characters
  • Lyric Video: Bring song lyrics to life

I decided to choose the “Music Video” option.

Step 5: Add a Background Idea & Generate

Adding a background idea and generating a video with Freebeat.

Selecting “Music Video” opened a new window, where I could:

  • Choose my model (fast or custom)
  • Describe what I wanted to see in the background
  • Choose the aspect ratio (16:9, 4:3, 1:1, 9:16)
  • Select the resolution (720p)
  • Remove the watermark (only on paid plans)

I kept everything on default and added a background idea that I felt fit the theme of my song. I hit “Generate,” which consumed 30 credits.

Using Freebeat to generate a video.

Immediately, Freebeat began generating my video. But I’ll be honest, the generation took longer than expected.

Freebeat put me in a queue with 12 projects ahead of me. The only way to get faster generations was by upgrading my account.

Step 6: View the Music Video

A while later, I received an email from Freebeat telling me my video had been successfully generated! I selected it from my music videos.

Step 7: Regenerate Clips & Download the Video

Regenerating a clip on Freebeat.

After selecting the video, I went through each of the clips. I had two options:

  1. If I was happy with the video, I could download it.
  2. If I was unhappy with certain clips, I could regenerate them. Regenerating an individual clip consumed three credits.

Waiting in line to regenerate a clip with Freebeat.

One of the clips in my video was inappropriate, so I hit “Regenerate.” I was disappointed to see that regenerating my clip put me back in line for generation, which took a while.

Here’s how my music video generated with Freebeat turned out:

Freebeat did a great job producing clips that aligned with the song’s essence and synchronized well with the song’s beat.

However, the clips are somewhat distorted and clearly AI-generated. If you’re unhappy with how individual clips turned out, you can always edit the prompt and regenerate the clip.

Overall, Freebeat made it fun and easy to turn a song into a custom music video. However, the long generation wait and the occasional need to fix clips made the process slower than expected.

Top 3 Freebeat Alternatives

Here are the best Freebeat alternatives I’d recommend.

Riffusion

Riffusion homepage.

The first Freebeat alternative I’d recommend is Riffusion. Riffusion is an AI music generator that lets you create full songs directly from text prompts. Just describe the sound, mood, or genre you want, and Riffusion generates original music in real time. You can even customize the instruments.

On one hand, Riffusion stands out with its real-time text-to-music generation, advanced customization, and the ability to remix or modify outputs on the fly. Its AI learns your musical preferences over time, and it even features stem separation for isolating instruments in professional workflows.

Meanwhile, Freebeat specializes in turning your music, scripts, or text prompts into beat-synced music videos, lyric visuals, and dance clips. You can even create branded video content for social media and marketing.

For customizable AI music generation and experimentation, choose Riffusion. For fast, professional-quality music video creation and branded content, choose Freebeat!

Read my Riffusion review or visit Riffusion!

Udio

The next Freebeat alternative I’d recommend is Udio. Udio is an AI music generator that creates custom studio-quality songs from simple text prompts. It covers everything from instrumentals to full tracks with vocals and even custom cover art.

Udio stands out with its editing features. You can extend, remix, and inpaint tracks. That means you can tweak specific sections of a song without altering the rest. Udio also lets you export individual stems for use in professional DAWs, and its AI can generate matching cover art for your song.

Meanwhile, Freebeat focuses on turning music into engaging visual, lyric, and dance videos with branded content.

For precise control over music generation, advanced editing, and professional audio exports, choose Udio. For fast video content synced to music, choose Freebeat!

Read my Udio review or visit Udio!

Soundraw

The final Freebeat alternative I’d recommend is Soundraw. Soundraw is an AI music generator that creates unique, royalty-free tracks based on the mood, genre, and length you choose. Its user-friendly interface allows even newcomers to generate music quickly. You can further customize your tracks by adjusting song structure, intensity, and instrumentation.

Soundraw stands out with its customization features and wide range of genres and moods. You can fine-tune elements like tempo and instrumentation, rearrange song sections, and personalize the music to fit your project perfectly. Soundraw also ensures all tracks are royalty-free and original, addressing copyright concerns for commercial use.

Meanwhile, Freebeat is valued for its ability to efficiently generate engaging branded music videos.

For generating customizable royalty-free music tracks, choose Soundraw. For quickly producing music and synchronized video content for social media or marketing, choose Freebeat!

Freebeat Review: The Right Tool For You?

Freebeat surprised me with how fun and easy it was to turn songs into dynamic, beat-synced videos. It handled everything automatically, from generating clips that fit the theme to syncing them up to the beat of the song.

However, the long wait times on the free plan and the need to spend extra credits to fix bad clips slowed me down. Also, the clips that were generated were sometimes distorted, and you could tell they were made with AI.

While the results overall were impressive, they sometimes felt rough around the edges. This makes Freebeat best for fast, light projects rather than polished, high-end productions.

For a simple, beginner-friendly way to bring your music or scripts to life visually, Freebeat is worth trying. But if you need deep customization, faster speeds, or professional controls, you might feel a bit boxed in.

If you’re curious about the best Freebeat alternatives, here’s what I’d recommend:

  • Riffusion is best for musicians and creators who want instant, customizable AI-generated music with remixing and stem separation.
  • Udio is best for those wanting precise music editing, quality exports, and AI-generated cover art.
  • Soundraw is best for creators needing royalty-free music with control over mood, intensity, and instrumentation.

Thanks for reading my Freebeat review! I hope you found it helpful.

Freebeat offers 50 free credits to try its tools just for signing up. Try Freebeat for yourself and see how you like it!

Pulumi IDP provides developers with faster access to self-service cloud infrastructure provisioning


Pulumi, a provider of infrastructure as code (IaC) solutions, announced a new internal developer platform (IDP) for delivering cloud infrastructure to developers.

Pulumi IDP allows platform teams to publish building blocks in the form of Components, Templates, and Policies. These building blocks incorporate best practices with standard configurations and enforcement of security, compliance, cost, and operational rules.

Developers can then access these building blocks to provision and manage cloud applications and infrastructure. Pulumi meets developers where they are by enabling them to access the IDP through a no-code interface, low-code YAML-based CI/CD pipelines, IaC in their preferred language, or a REST API.
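
For a feel of what the YAML-based path looks like, here is a minimal sketch in ordinary Pulumi YAML (an AWS S3 bucket with illustrative names); this shows Pulumi's public YAML program format, not the IDP's own Component or Template schema, which the announcement does not detail:

```yaml
# Hypothetical minimal Pulumi YAML program (names are illustrative).
name: web-service
runtime: yaml
description: A minimal sketch of a Pulumi YAML program
resources:
  site-bucket:
    type: aws:s3:Bucket
outputs:
  bucketName: ${site-bucket.id}
```

Running `pulumi up` against such a program provisions the declared resources; the IDP layers curated Components, Templates, and Policies on top of this model.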

Developers can organize projects into Services, which Pulumi describes as logical containers of cloud infrastructure, configuration, secrets, documentation, and observability dashboards, such as a web application, microservice, Jupyter notebook, or data pipeline.

Pulumi IDP also features safeguards for day 2 operations and beyond, such as drift and policy detection and remediation, auditing of outdated components and templates, and change management during updates.

Other capabilities of the IDP include approval workflows, a visual import tool for bringing unmanaged cloud infrastructure into Pulumi, and an advanced IAM system.

The new offering is available as a managed SaaS solution, or it can be self-hosted. It is currently available as a public preview, free for Pulumi customers, and will be generally available later this year.

“CTOs, CIOs, and engineering leaders tell us that the pace of innovation is faster than ever,” said Joe Duffy, co-founder and CEO of Pulumi. “To succeed, developers must move fast – without breaking things. Pulumi IDP is the cloud infrastructure platform modern teams have been asking for: infrastructure-first, multi-cloud, immensely powerful and flexible, with built-in security and full visibility and controls. It turns the cloud into a competitive advantage.”

Building delightful Android camera and media experiences

Posted by Donovan McMurray, Mayuri Khinvasara Khabya, Mozart Louis, and Nevin Mital – Developer Relations Engineers

Hello Android Developers!

We’re the Android Developer Relations Camera & Media team, and we’re excited to bring you something a little different today. Over the past several months, we’ve been hard at work writing sample code and building demos that showcase how to take advantage of all the great potential Android offers for building delightful user experiences.

Some of these efforts are available for you to explore now, and some you’ll see later throughout the year, but for this blog post we thought we’d share some of the learnings we gathered while going through this exercise.

Grab your favorite Android plush or rubber duck, and read on to see what we’ve been up to!

Future-proof your app with Jetpack

Nevin Mital

One of our focuses for the past several years has been improving the developer tools available for video editing on Android. This led to the creation of the Jetpack Media3 Transformer APIs, which offer solutions for both single-asset and multi-asset video editing preview and export. Today, I’d like to focus on the Composition demo app, a sample app that showcases some of the multi-asset editing experiences that Transformer enables.

I started by adding a custom video compositor to demonstrate how you can arrange input video sequences into different layouts for your final composition, such as a 2x2 grid or a picture-in-picture overlay. You can customize this by implementing a VideoCompositorSettings and overriding the getOverlaySettings method. This object can then be set when building your Composition with setVideoCompositorSettings.

Here is an example for the 2x2 grid layout:

object : VideoCompositorSettings {
  ...

  override fun getOverlaySettings(inputId: Int, presentationTimeUs: Long): OverlaySettings {
    return when (inputId) {
      0 -> { // First sequence is placed in the top left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, 0.5f) // Top-left section of background
          .build()
      }

      1 -> { // Second sequence is placed in the top right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, 0.5f) // Top-right section of background
          .build()
      }

      2 -> { // Third sequence is placed in the bottom left
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(-0.5f, -0.5f) // Bottom-left section of background
          .build()
      }

      3 -> { // Fourth sequence is placed in the bottom right
        StaticOverlaySettings.Builder()
          .setScale(0.5f, 0.5f)
          .setOverlayFrameAnchor(0f, 0f) // Middle of overlay
          .setBackgroundFrameAnchor(0.5f, -0.5f) // Bottom-right section of background
          .build()
      }

      else -> {
        StaticOverlaySettings.Builder().build()
      }
    }
  }
}

Since getOverlaySettings also provides a presentation time, we can even animate the layout, such as in this picture-in-picture example:
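To illustrate the idea, here is a small, hypothetical helper (not from the demo app) that maps the presentation time to a background frame anchor, sliding a picture-in-picture tile from the bottom-left toward the bottom-right over a 2-second loop. The resulting pair could be fed into a StaticOverlaySettings.Builder via setBackgroundFrameAnchor; the anchor coordinate convention follows the grid example above.

```kotlin
// Hypothetical anchor animation: presentationTimeUs drives the x coordinate.
fun pipBackgroundAnchor(presentationTimeUs: Long): Pair<Float, Float> {
    val periodUs = 2_000_000L                                     // one sweep every 2 seconds
    val t = (presentationTimeUs % periodUs).toFloat() / periodUs  // normalized 0.0 .. 1.0
    val x = -0.7f + 1.4f * t                                      // interpolate -0.7 .. +0.7
    return x to -0.7f                                             // pinned near the bottom edge
}

fun main() {
    println(pipBackgroundAnchor(0L))          // start of sweep: bottom-left
    println(pipBackgroundAnchor(1_000_000L))  // halfway: bottom-center
}
```

Because getOverlaySettings is called per frame, returning overlay settings built from such a function is enough to animate the layout; no separate animation framework is involved.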

[Video: a picture-in-picture overlay animating on a mobile device]

Next, I spent some time migrating the Composition demo app to Jetpack Compose. With complicated editing flows, it can help to take advantage of as much screen space as is available, so I decided to use the supporting pane adaptive layout. This way, the user can fine-tune their video creation on the preview screen, and export options are only shown at the same time on a larger display. Below, you can see how the UI dynamically adapts to the screen size on a foldable device, when switching from the outer screen to the inner screen and vice versa.

[Video: the supporting pane adaptive layout adapting to screen size]

What’s great is that by using Jetpack Media3 and Jetpack Compose, these features also carry over seamlessly to other devices and form factors, such as the new Android XR platform. Right out-of-the-box, I was able to run the demo app in Home Space with the 2D UI I already had. And with some small updates, I was even able to adapt the UI specifically for XR with features such as multiple panels, and, to take further advantage of the extra space, an Orbiter with playback controls for the editing preview.

[Video: sequential composition preview in Android XR]

Orbiter(
  position = OrbiterEdge.Bottom,
  offset = EdgeOffset.inner(offset = MaterialTheme.spacing.standard),
  alignment = Alignment.CenterHorizontally,
  shape = SpatialRoundedCornerShape(CornerSize(28.dp))
) {
  Row(horizontalArrangement = Arrangement.spacedBy(MaterialTheme.spacing.mini)) {
    // Playback control for rewinding by 10 seconds
    FilledTonalIconButton({ viewModel.seekBack(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.rewind_10),
        contentDescription = "Rewind by 10 seconds"
      )
    }
    // Playback control for play/pause
    FilledTonalIconButton({ viewModel.togglePlay() }) {
      Icon(
        painter = painterResource(id = R.drawable.rounded_play_pause_24),
        contentDescription =
            if (viewModel.compositionPlayer.isPlaying) {
                "Pause preview playback"
            } else {
                "Resume preview playback"
            }
      )
    }
    // Playback control for forwarding by 10 seconds
    FilledTonalIconButton({ viewModel.seekForward(10_000L) }) {
      Icon(
        painter = painterResource(id = R.drawable.forward_10),
        contentDescription = "Forward by 10 seconds"
      )
    }
  }
}

Jetpack libraries unlock premium functionality incrementally

Donovan McMurray

Not only do our Jetpack libraries have you covered by working consistently across existing and future devices, but they also open the doors to advanced functionality and custom behaviors to support all types of app experiences. In a nutshell, our Jetpack libraries aim to make the common case very accessible and simple, with hooks for adding more custom features later.

We’ve worked with many apps that have switched to a Jetpack library, built the basics, added their critical custom features, and actually saved developer time over their estimates. Let’s take a look at CameraX and how this incremental development can supercharge your process.

// Set up CameraX app with preview and image capture.
// Note: setting the resolution selector is optional, and if not set,
// then a default 4:3 ratio will be used.
val aspectRatioStrategy = AspectRatioStrategy(
  AspectRatio.RATIO_16_9, AspectRatioStrategy.FALLBACK_RULE_NONE)
var resolutionSelector = ResolutionSelector.Builder()
  .setAspectRatioStrategy(aspectRatioStrategy)
  .build()

private val previewUseCase = Preview.Builder()
  .setResolutionSelector(resolutionSelector)
  .build()
private val imageCaptureUseCase = ImageCapture.Builder()
  .setResolutionSelector(resolutionSelector)
  .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
  .build()

val useCaseGroupBuilder = UseCaseGroup.Builder()
  .addUseCase(previewUseCase)
  .addUseCase(imageCaptureUseCase)

cameraProvider.unbindAll()

camera = cameraProvider.bindToLifecycle(
  this,  // lifecycleOwner
  CameraSelector.DEFAULT_BACK_CAMERA,
  useCaseGroupBuilder.build(),
)

After setting up the basic structure for CameraX, you can set up a simple UI with a camera preview and a shutter button. You can use the CameraX Viewfinder composable, which displays a Preview stream from a CameraX SurfaceRequest.

// Create preview
Box(
  Modifier
    .background(Color.Black)
    .fillMaxSize(),
  contentAlignment = Alignment.Center,
) {
  surfaceRequest?.let {
    CameraXViewfinder(
      modifier = Modifier.fillMaxSize(),
      implementationMode = ImplementationMode.EXTERNAL,
      surfaceRequest = it,
    )
  }
  Button(
    onClick = onPhotoCapture,
    shape = CircleShape,
    colors = ButtonDefaults.buttonColors(containerColor = Color.White),
    modifier = Modifier
      .height(75.dp)
      .width(75.dp),
  )
}

fun onPhotoCapture() {
  // Not shown: defining the ImageCapture.OutputFileOptions for
  // your saved images
  imageCaptureUseCase.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(context),
    object : ImageCapture.OnImageSavedCallback {
      override fun onError(exc: ImageCaptureException) {
        val msg = "Photo capture failed."
        Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
      }

      override fun onImageSaved(output: ImageCapture.OutputFileResults) {
        val savedUri = output.savedUri
        if (savedUri != null) {
          // Do something with the savedUri if needed
        } else {
          val msg = "Photo capture failed."
          Toast.makeText(context, msg, Toast.LENGTH_SHORT).show()
        }
      }
    },
  )
}

You’re already on track for a solid camera experience, but what if you wanted to add some extra features for your users? Adding filters and effects is easy with CameraX’s Media3 effect integration, which is one of the new features introduced in CameraX 1.4.0.

Here’s how simple it is to add a black-and-white filter from Media3’s built-in effects.

val media3Effect = Media3Effect(
  application,
  PREVIEW or IMAGE_CAPTURE,
  ContextCompat.getMainExecutor(application),
  {},
)
media3Effect.setEffects(listOf(RgbFilter.createGrayscaleFilter()))
useCaseGroupBuilder.addEffect(media3Effect)

The Media3Effect object takes a Context, a bitwise representation of the use case constants for the targeted UseCases, an Executor, and an error listener. Then you set the list of effects you want to apply. Finally, you add the effect to the useCaseGroupBuilder we defined earlier.
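To make the “bitwise representation” concrete, here is a standalone sketch of how such target flags combine (the constant values here are illustrative, not taken from the CameraX source): each use case owns one bit, `or` merges them into a single mask, and `and` tests membership.

```kotlin
// Illustrative one-bit-per-use-case flags and a helper that decodes a mask.
val PREVIEW = 1
val IMAGE_CAPTURE = 1 shl 1
val VIDEO_CAPTURE = 1 shl 2

fun targets(mask: Int): List<String> = buildList {
    if (mask and PREVIEW != 0) add("PREVIEW")
    if (mask and IMAGE_CAPTURE != 0) add("IMAGE_CAPTURE")
    if (mask and VIDEO_CAPTURE != 0) add("VIDEO_CAPTURE")
}

fun main() {
    val mask = PREVIEW or IMAGE_CAPTURE  // the combination passed to Media3Effect above
    println(targets(mask))               // [PREVIEW, IMAGE_CAPTURE]
}
```

This is why a single Int parameter is enough to apply one effect to the preview stream, captured photos, or both.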


(Left) Our camera app with no filter applied.
(Right) Our camera app after the createGrayscaleFilter was added.

There are many other built-in effects you can add, too! See the Media3 Effect documentation for more options, like brightness, color lookup tables (LUTs), contrast, blur, and many other effects.

To take your effects to yet another level, it’s also possible to define your own effects by implementing the GlEffect interface, which acts as a factory of GlShaderPrograms. You can override a BaseGlShaderProgram’s drawFrame() method to implement a custom effect of your own. A minimal implementation should tell your graphics library to use its shader program, bind the shader program’s vertex attributes and uniforms, and issue a drawing command.
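As a rough sketch of those steps (assuming Media3’s GlEffect/BaseGlShaderProgram contracts; the class names, shader asset paths, and uniform name here are hypothetical placeholders, not from the sample app):

```kotlin
import android.content.Context
import android.opengl.GLES20
import android.util.Size
import androidx.media3.common.util.GlProgram
import androidx.media3.effect.BaseGlShaderProgram
import androidx.media3.effect.GlEffect
import androidx.media3.effect.GlShaderProgram

// Hypothetical custom effect acting as a factory of shader programs.
class TintEffect : GlEffect {
  override fun toGlShaderProgram(context: Context, useHdr: Boolean): GlShaderProgram =
      TintShaderProgram(context)
}

class TintShaderProgram(context: Context) :
    BaseGlShaderProgram(/* useHighPrecisionColorComponents= */ false, /* texturePoolCapacity= */ 1) {
  // Placeholder shader assets bundled with the app.
  private val glProgram =
      GlProgram(context, "shaders/vertex.glsl", "shaders/tint_fragment.glsl")

  override fun configure(inputWidth: Int, inputHeight: Int): Size =
      Size(inputWidth, inputHeight) // output frames keep the input dimensions

  override fun drawFrame(inputTexId: Int, presentationTimeUs: Long) {
    glProgram.use()                                           // use the shader program
    glProgram.setSamplerTexIdUniform("uTexSampler", inputTexId, /* texUnitIndex= */ 0)
    glProgram.bindAttributesAndUniforms()                     // bind attributes and uniforms
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, /* first= */ 0, /* count= */ 4) // draw
  }
}
```

An instance of such an effect could then be passed to setEffects alongside the built-in ones.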

Jetpack libraries meet you where you are and where your app’s needs are. Whether that be a simple, fast-to-implement, and reliable implementation, or custom functionality that helps the critical user journeys in your app stand out from the rest, Jetpack has you covered!

Jetpack offers a foundation for innovative AI features

Mayuri Khinvasara Khabya

Just as Donovan demonstrated with CameraX for capture, Jetpack Media3 provides a reliable, customizable, and feature-rich solution for playback with ExoPlayer. The AI Samples app builds on this foundation to delight users with helpful and enriching AI-driven additions.

In today’s rapidly evolving digital landscape, users expect more from their media applications. Simply playing videos is no longer enough. Developers are constantly searching for ways to enhance user experiences and provide deeper engagement. Leveraging the power of Artificial Intelligence (AI), particularly when built upon robust media frameworks like Media3, offers exciting opportunities. Let’s look at some of the ways we can transform the way users interact with video content:

    • Empowering Video Understanding: The core idea is to use AI, specifically multimodal models like the Gemini Flash and Pro models, to analyze video content and extract meaningful information. This goes beyond simply playing a video; it’s about understanding what’s in the video and making that information readily accessible to the user.
    • Actionable Insights: The goal is to transform raw video into summaries, insights, and interactive experiences. This allows users to quickly grasp the content of a video and find the specific information they need, or learn something new!
    • Accessibility and Engagement: AI helps make videos more accessible by providing features like summaries, translations, and descriptions. It also aims to increase user engagement through interactive features.

A Glimpse into AI-Powered Video Journeys

The following example demonstrates potential video journeys enhanced by artificial intelligence. This sample integrates several components, such as ExoPlayer and Transformer from Media3; the Firebase SDK (leveraging Vertex AI on Android); and Jetpack Compose, ViewModel, and StateFlow. The code will be available soon on GitHub.

[Videos: examples of AI-powered video journeys]

(Left) Video summarization
(Right) Thumbnail timestamps and HDR frame extraction

There are two experiences in particular that I’d like to highlight:

    • HDR Thumbnails: AI can help identify key moments in the video that could make for good thumbnails. With these timestamps, you can use the new ExperimentalFrameExtractor API from Media3 to extract HDR thumbnails from videos, providing richer visual previews.
    • Text-to-Speech: AI can be used to convert textual information derived from the video into spoken audio, enhancing accessibility. On Android you can also choose to play audio in different languages and dialects, thus enhancing personalization for a wider audience.

Using the right AI solution

Currently, only cloud models support video inputs, so we went ahead with a cloud-based solution. Integrating Firebase in our sample empowers the app to:

    • Generate real-time, concise video summaries automatically.
    • Produce comprehensive content metadata, including chapter markers and relevant hashtags.
    • Facilitate seamless multilingual content translation.

So how do you actually interact with a video and work with Gemini to process it? First, send your video as an input parameter to your prompt:

val prompt =
    "Summarize this video in the form of top 3-4 takeaways only. Write in the form of bullet points. Don't assume if you don't know."

val generativeModel = Firebase.vertexAI.generativeModel("gemini-2.0-flash")
_outputText.value = OutputTextState.Loading

viewModelScope.launch(Dispatchers.IO) {
    try {
        val requestContent = content {
            fileData(videoSource.toString(), "video/mp4")
            text(prompt)
        }
        val outputStringBuilder = StringBuilder()

        generativeModel.generateContentStream(requestContent).collect { response ->
            outputStringBuilder.append(response.text)
            _outputText.value = OutputTextState.Success(outputStringBuilder.toString())
        }

        _outputText.value = OutputTextState.Success(outputStringBuilder.toString())

    } catch (error: Exception) {
        _outputText.value = error.localizedMessage?.let { OutputTextState.Error(it) }
    }
}

Notice there are two key components here:

    • FileData: This component integrates a video into the query.
    • Prompt: This asks for the specific assistance the user needs from the AI in relation to the provided video.

Of course, you can fine-tune your prompt as per your requirements and get the responses accordingly.

In conclusion, by harnessing the capabilities of Jetpack Media3 and integrating AI features like Gemini through Firebase, you can significantly elevate video experiences on Android. This combination enables advanced features like video summaries, enriched metadata, and seamless multilingual translations, ultimately enhancing accessibility and engagement for users. As these technologies continue to evolve, the potential for creating even more dynamic and intelligent video applications is vast.

Go above and beyond with specialized APIs

Mozart Louis

Android 16 introduces the new audio PCM Offload mode, which can reduce the power consumption of audio playback in your app, leading to longer playback time and increased user engagement. Eliminating this power anxiety greatly enhances the user experience.

Oboe is Android’s premiere audio API that developers can use to create high-performance, low-latency audio apps. A new feature called Native PCM Offload playback is being added to the Android NDK and Android 16.

Offload playback helps save battery life when playing audio. It works by sending a large chunk of audio to a special part of the device’s hardware (a DSP). This allows the CPU of the device to enter a low-power state while the DSP handles playing the sound. This works with uncompressed audio (like PCM) and compressed audio (like MP3 or AAC), where the DSP also takes care of decoding.

This can result in significant power savings while playing back audio and is perfect for applications that play audio in the background or while the screen is off (think audiobooks, podcasts, music, etc.).
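To get an intuition for the buffering involved, here is a back-of-the-envelope sketch (pure Kotlin, not an Android API) sizing the chunk of stereo float PCM handed to the DSP: the longer the buffered chunk, the longer the CPU can sleep between refills.

```kotlin
// Bytes needed to buffer `seconds` of PCM audio.
// Defaults match the stream configured later: stereo float samples at 48 kHz.
fun offloadBufferBytes(
    seconds: Int,
    sampleRateHz: Int = 48_000,
    channels: Int = 2,
    bytesPerSample: Int = 4,  // 32-bit float PCM
): Long = seconds.toLong() * sampleRateHz * channels * bytesPerSample

fun main() {
    // A 30-second chunk: 30 * 48000 * 2 * 4 = 11,520,000 bytes (~11 MB)
    println(offloadBufferBytes(30))
}
```

The exact chunk size is chosen by the platform, but this arithmetic shows why offload trades a modest amount of DSP-side memory for long CPU idle periods.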

We created the sample app PowerPlay to demonstrate how to implement these features using the latest NDK version, C++, and Jetpack Compose.

Here are the most important parts!

The first order of business is to ensure the device supports audio offload of the file attributes you need. In the example below, we’re checking whether the device supports audio offload of a stereo, float PCM file with a sample rate of 48,000 Hz.

val format = AudioFormat.Builder()
    .setEncoding(AudioFormat.ENCODING_PCM_FLOAT)
    .setSampleRate(48000)
    .setChannelMask(AudioFormat.CHANNEL_OUT_STEREO)
    .build()

val attributes =
    AudioAttributes.Builder()
        .setContentType(AudioAttributes.CONTENT_TYPE_MUSIC)
        .setUsage(AudioAttributes.USAGE_MEDIA)
        .build()

val isOffloadSupported =
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q) {
        AudioManager.isOffloadedPlaybackSupported(format, attributes)
    } else {
        false
    }

if (isOffloadSupported) {
    player.initializeAudio(PerformanceMode::POWER_SAVING_OFFLOADED)
}

Once we know the device supports audio offload, we can confidently set the Oboe audio stream’s performance mode to the new performance mode option, PerformanceMode::POWER_SAVING_OFFLOADED.

// Create an audio stream
AudioStreamBuilder builder;
builder.setChannelCount(mChannelCount);
builder.setDataCallback(mDataCallback);
builder.setFormat(AudioFormat::Float);
builder.setSampleRate(48000);

builder.setErrorCallback(mErrorCallback);
builder.setPresentationCallback(mPresentationCallback);
builder.setPerformanceMode(PerformanceMode::POWER_SAVING_OFFLOADED);
builder.setFramesPerDataCallback(128);
builder.setSharingMode(SharingMode::Exclusive);
builder.setSampleRateConversionQuality(SampleRateConversionQuality::Medium);
Result result = builder.openStream(mAudioStream);

Now when audio is played back, it will be offloaded to the DSP, helping save power during playback.

There’s more to this feature that will be covered in a future blog post, fully detailing all of the new available APIs that will help you optimize your audio playback experience!

What’s next

Of course, we were only able to share the tip of the iceberg with you here, so to dive deeper into the samples, check out the following links:

Hopefully these examples have inspired you to explore what new and fascinating experiences you can build on Android. Tune in to our session at Google I/O in a couple of weeks to learn even more about the use cases supported by features like Jetpack CameraX and Jetpack Media3!