
Netflix App Testing At Scale. Learn how Netflix handled the… | by Jose Alcérreca | Android Developers | Apr, 2025


This is part of the Testing at scale series of articles where we asked industry experts to share their testing strategies. In this article, Ken Yee, Senior Engineer at Netflix, tells us about the challenges of testing a playback app at a massive scale and how they have evolved the testing strategy since the app was created 14 years ago!

Testing at Netflix continuously evolves. In order to fully understand where it is going and why it is in its current state, it is also important to know the historical context of where it has been.

The Android app was started 14 years ago. It was initially a hybrid application (native+webview), but it was converted to a fully native app because of performance issues and the difficulty of creating a UI that felt/acted truly native. As with most older applications, it is in the process of being converted to Jetpack Compose. The current codebase is roughly 1M lines of Java/Kotlin code spread across 400+ modules and, like most older apps, there is also a monolith module because the original app was one big module. The app is handled by a team of roughly 50 people.

At one point, there was a dedicated mobile SDET (Software Development Engineer in Test) team that handled writing all device tests by following the usual flow of working with developers and product managers to understand the features they were testing in order to create test plans for all their automation tests. At Netflix, SDETs were developers with a focus on testing; they wrote automation tests with Espresso or UIAutomator; they also built frameworks for testing and integrated third-party testing frameworks. Feature developers wrote unit tests and Robolectric tests for their own code. The dedicated SDET team was disbanded a few years ago and the automation tests are now owned by each of the feature subteams; there are still 2 supporting SDETs who help out the various teams as needed. QA (Quality Assurance) manually tests releases before they are uploaded as a final "smoke test".

In the media streaming world, one interesting challenge is the massive ecosystem of playback devices running the app. We like to support a good experience on low-memory/slow devices (e.g. Android Go devices) while providing a premium experience on higher-end devices. For foldables, some don't report a hinge sensor. We support devices back to Android 7.0 (API 24), but we are setting our minimum to Android 9 soon. Some manufacturer-specific versions of Android also have quirks. As a result, physical devices are a huge part of our testing.

As mentioned, feature developers now handle all aspects of testing their features. Our testing layers look like this:

Test pyramid showing layers from bottom to top: unit tests, screenshot tests, E2E automation tests, smoke tests

However, because of our heavy usage of physical device testing and the legacy parts of the codebase, our testing pyramid looks more like an hourglass or an inverted pyramid depending on which part of the code you are in. New features do have the more typical testing pyramid shape.

Our screenshot testing is also carried out at multiple levels: UI component, UI screen layout, and device integration screen layout. The first two are really unit tests because they don't make any network calls. The last is a replacement for most manual QA testing.

Unit tests are used to test business logic that isn't dependent on any specific device/UI behavior. In older parts of the app, we use RxJava for asynchronous code and streams are tested. Newer parts of the app use Kotlin Flows and Composables for state flows, which are much easier to reason about and test compared to RxJava.

Frameworks we use for unit testing are:

  • Strikt: for assertions, because it has a fluent API like AssertJ but is written for Kotlin
  • Turbine: for the missing pieces in testing Kotlin Flows
  • Mockito: for mocking any complex classes not relevant to the current unit of code being tested
  • Hilt: for substituting test dependencies in our Dependency Injection graph
  • Robolectric: for testing business logic that has to interact in some way with Android services/classes (e.g., parcelables or Services)
  • A/B test/feature flag framework: allows overriding an automation test for a specific A/B test or enabling/disabling a specific feature

Developers are encouraged to use plain unit tests before switching to Hilt or Robolectric because execution time goes up 10x with each step when going from plain unit tests -> Hilt -> Robolectric. Mockito also slows down builds when using inline mocks, so inline mocks are discouraged. Device tests are several orders of magnitude slower than any of these kinds of unit tests. Speed of testing is important in large codebases.
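To illustrate why plain unit tests stay fast, here is a minimal sketch of the pattern in plain Java: business logic that takes its dependencies through the constructor can be tested on the bare JVM with no Hilt graph or Robolectric shadow. The class and method names (`PlaybackTitleFormatter`, `remainingLabel`) are illustrative, not Netflix APIs.

```java
// A hypothetical piece of business logic: the clock is injected so tests
// can pin time without Robolectric or any Android classes.
import java.util.function.Supplier;

class PlaybackTitleFormatter {
    private final Supplier<Long> clock; // injected; tests pass a fixed clock

    PlaybackTitleFormatter(Supplier<Long> clock) { this.clock = clock; }

    String remainingLabel(long expiresAtMillis) {
        long days = (expiresAtMillis - clock.get()) / 86_400_000L;
        return days <= 7 ? "Leaving soon" : days + " days left";
    }
}

public class PlaybackTitleFormatterTest {
    public static void main(String[] args) {
        // Fixed clock at t=0 makes the test deterministic
        PlaybackTitleFormatter formatter = new PlaybackTitleFormatter(() -> 0L);
        check("Leaving soon", formatter.remainingLabel(7L * 86_400_000L));
        check("30 days left", formatter.remainingLabel(30L * 86_400_000L));
        System.out.println("ok");
    }

    static void check(String expected, String actual) {
        if (!expected.equals(actual)) throw new AssertionError(expected + " != " + actual);
    }
}
```

The same logic under Hilt or Robolectric would test identically but pay the framework startup cost on every run.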

Because unit tests are blocking in our CI pipeline, minimizing flakiness is extremely important. There are generally two causes of flakiness: leaving some state behind for the next test, and testing asynchronous code.

JVM (Java Virtual Machine) unit test classes are created once and then the test methods in each class are called sequentially; instrumented tests, in comparison, are run from the start and the only time you can save is APK installation. Because of this, if a test method leaves some modified global state behind in dependent classes, the next test method can fail. Global state can take many forms, including files on disk, databases on disk, and shared classes. Using dependency injection or recreating anything that is modified solves this issue.
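A compressed sketch of that failure mode, with illustrative names: a shared instance leaks one "test method's" writes into the next, while a per-test fresh instance (what dependency injection gives you) does not.

```java
// Illustrative sketch of test pollution through shared global state.
import java.util.ArrayList;
import java.util.List;

class WatchHistory {
    private final List<String> titles = new ArrayList<>();
    void add(String title) { titles.add(title); }
    int size() { return titles.size(); }
}

public class FreshStateExample {
    // BAD: shared across test methods in the same JVM test class
    static final WatchHistory SHARED = new WatchHistory();

    public static void main(String[] args) {
        SHARED.add("some title");        // "test method 1" mutates shared state
        // "test method 2" would now observe size() == 1, not 0 -> order-dependent failure

        // GOOD: recreate the dependency per test method
        WatchHistory fresh = new WatchHistory();
        if (fresh.size() != 0) throw new AssertionError("state leaked");
        System.out.println("fresh instance starts empty");
    }
}
```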

With asynchronous code, flakiness can always happen as multiple threads change various things. Test Dispatchers (Kotlin Coroutines) or Test Schedulers (RxJava) can be used to control time in each thread to make things deterministic when testing a specific race condition. This makes the code less realistic and may miss some test scenarios, but it will prevent flakiness in the tests.
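The core idea behind those test dispatchers/schedulers can be sketched in a few lines of plain Java (this is a simplified stand-in, not the real coroutines or RxJava API): "async" work is only enqueued, and the test decides exactly when it runs, so there is no race with the assertion.

```java
// Minimal stand-in for a TestDispatcher/TestScheduler: work is queued, and
// the test drains the queue deterministically instead of racing real threads.
import java.util.ArrayDeque;
import java.util.Queue;

class TestScheduler {
    private final Queue<Runnable> tasks = new ArrayDeque<>();
    void schedule(Runnable task) { tasks.add(task); }  // "async" work is just enqueued
    void advanceUntilIdle() {                          // the test controls execution
        Runnable task;
        while ((task = tasks.poll()) != null) task.run();
    }
}

public class DeterministicAsyncTest {
    public static void main(String[] args) {
        TestScheduler scheduler = new TestScheduler();
        StringBuilder log = new StringBuilder();

        scheduler.schedule(() -> log.append("load;"));
        scheduler.schedule(() -> log.append("render;"));
        // Nothing has run yet, so the ordering below is guaranteed
        scheduler.advanceUntilIdle();

        if (!log.toString().equals("load;render;")) throw new AssertionError(log);
        System.out.println(log);
    }
}
```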

Screenshot testing frameworks are important because they test what is visible vs. testing behavior. As a result, they are the best replacement for manual QA testing of any screens that are static (animations are still difficult to test with most screenshot testing frameworks unless the framework can control time).

We use a number of frameworks for screenshot testing:

  • Paparazzi: for Compose UI components and screen layouts; network calls can't be made to download images, so you have to use static image assets or an image loader that draws a pattern for the requested images (we do both)
  • Localization screenshot testing: captures screenshots of screens in the running app in all locales for our UX teams to verify manually
  • Device screenshot testing: device testing used to test visual behavior of the running app

Espresso accessibility testing: this is also a form of screenshot testing where the sizes/colors of various elements are checked for accessibility; this has also been somewhat of a pain point for us because our UX team has adopted the WCAG 44dp standard for minimum touch size instead of Android's 48dp.

Finally, we have device tests. As mentioned, these are orders of magnitude slower than tests that can run on the JVM. They are a replacement for manual QA and are used to smoke test the overall functionality of the app.

However, since running a fully working app in a test has external dependencies (backend, network infra, lab infra), the device tests will always be flaky in some way. This can't be emphasized enough: despite having retries, device automation tests will always be flaky over an extended period of time. Further below, we cover what we do to handle some of this flakiness.

We use these frameworks for device testing:

  • Espresso: the majority of device tests use Espresso, which is Android's main instrumentation testing framework for user interfaces
  • PageObject test framework: internal screens are written as PageObjects that tests can control, to ease migration from XML layouts to Compose (see below for more details)
  • UIAutomator: a small "smoke test" set of tests uses UIAutomator to test the fully obfuscated binary that will get uploaded to the app store (a.k.a. Release Candidate tests)
  • Performance testing framework: measures load times of various screens to check for any regressions
  • Network capture/playback framework: allows playback of recorded API calls to reduce instability of device tests
  • Backend mocking framework: tests can ask the backend to return specific results; for example, our home page has content that is completely driven by recommendation algorithms, so a test can't deterministically look for specific titles unless the test asks the backend to return specific videos in specific states (e.g. "leaving soon") and specific rows filled with specific titles (e.g. a Coming Soon row with specific videos)
  • A/B test/feature flag framework: allows overriding an automation test for a specific A/B test or enabling/disabling a specific feature
  • Analytics testing framework: used to verify a sequence of analytics events from a set of screen actions; analytics are the most prone to breakage when screens are modified, so this is an important thing to test.

The PageObject design pattern started as a web pattern but has been applied to mobile testing. It separates test code (e.g. click on the Play button) from screen-specific code (e.g. the mechanics of clicking on a button using Espresso). Because of this, it lets you abstract the test from the implementation (think interfaces vs. implementations when writing code). You can easily swap the implementation as needed when migrating from XML layouts to Jetpack Compose layouts, but the test itself (e.g. testing login) stays the same.
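A minimal sketch of that separation, with invented names (this is not Netflix's actual framework): the test only sees the `PlaybackScreen` interface, so the XML and Compose implementations can be swapped without touching the test.

```java
// The test depends only on this abstraction.
interface PlaybackScreen {
    void tapPlay();
    boolean isPlaying();
}

// Legacy implementation; real Espresso calls would live here,
// e.g. onView(withId(R.id.play)).perform(click())
class XmlPlaybackScreen implements PlaybackScreen {
    private boolean playing;
    public void tapPlay() { playing = true; }
    public boolean isPlaying() { return playing; }
}

// Swapped in during the Compose migration; real calls would be
// e.g. onNodeWithTag("play").performClick()
class ComposePlaybackScreen implements PlaybackScreen {
    private boolean playing;
    public void tapPlay() { playing = true; }
    public boolean isPlaying() { return playing; }
}

public class PageObjectExample {
    // The "test": unchanged regardless of which implementation is used
    static void testPlayback(PlaybackScreen screen) {
        screen.tapPlay();
        if (!screen.isPlaying()) throw new AssertionError("playback did not start");
    }

    public static void main(String[] args) {
        testPlayback(new XmlPlaybackScreen());
        testPlayback(new ComposePlaybackScreen());
        System.out.println("both implementations pass");
    }
}
```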

In addition to using PageObjects to define an abstraction over screens, we have a concept of "test steps". A test consists of test steps. At the end of each step, our device lab infra automatically creates a screenshot. This gives developers a storyboard of screenshots that shows the progress of the test. When a test step fails, it is also clearly indicated (e.g., "could not click on Play button") because a test step has a "summary" and an "error description" field.
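The test step idea can be sketched as follows (a hypothetical shape, not the real framework): each step carries a summary and an error description, and the runner reports one or the other so a failed run reads like a storyboard.

```java
// Hypothetical sketch of a "test step" with a summary and error description.
import java.util.ArrayList;
import java.util.List;

class TestStep {
    final String summary;
    final String errorDescription;
    final Runnable action;
    TestStep(String summary, String errorDescription, Runnable action) {
        this.summary = summary;
        this.errorDescription = errorDescription;
        this.action = action;
    }
}

public class TestStepRunner {
    public static void main(String[] args) {
        List<TestStep> steps = new ArrayList<>();
        steps.add(new TestStep("Open title page", "could not open title page", () -> {}));
        steps.add(new TestStep("Tap Play", "could not click on Play button", () -> {}));

        for (TestStep step : steps) {
            try {
                step.action.run();
                // lab infra would capture a screenshot here, after each step
                System.out.println("PASS: " + step.summary);
            } catch (RuntimeException e) {
                System.out.println("FAIL: " + step.errorDescription);
                break; // later steps are meaningless after a failure
            }
        }
    }
}
```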

Inside a device lab cage

Netflix was probably one of the first companies to have a dedicated device testing lab; this was before third-party services like Firebase Test Lab were available. Our lab infrastructure has a number of features you'd expect:

  • Target specific types of devices
  • Capture video from running a test
  • Capture screenshots while running a test
  • Capture all logs

Interesting device tooling features that are uniquely Netflix:

  • Cellular tower so we can test wifi vs. cellular connections; Netflix has its own physical cellular tower in the lab that the devices are configured to connect to.
  • Network conditioning so slow networks can be simulated
  • Automated disabling of system updates so devices can be locked at a specific OS level
  • Only uses raw adb commands to install/run tests (all this infrastructure predates frameworks like Gradle Managed Devices or Flank)
  • Running a set of automated tests against an A/B test
  • Test hardware/software for verifying that a device doesn't drop frames, so our partners can verify their devices support Netflix playback properly; we also have a qualification program for devices to make sure they support HDR and other codecs properly.

If you're curious about more details, take a look at Netflix's tech blog.

As mentioned above, test flakiness is one of the hardest problems with inherently unstable device tests. Tooling has to be built to:

  • Minimize flakiness
  • Identify causes of flakes
  • Notify teams that own the flaky tests

Tooling that we've built to address the flakiness:

  • Automatically identifies the PR (Pull Request) batch that a test started to fail in and notifies PR authors that they caused a test failure
  • Tests can be marked stable/unstable/disabled instead of using @Ignore annotations; this is used to temporarily disable a subset of tests if there is a backend issue, so that false positives are not reported on PRs
  • Automation that figures out whether a test can be promoted to Stable by using spare device cycles to automatically evaluate test stability
  • Automated IfTTT (If This Then That) rules for retrying tests, ignoring temporary failures, or repairing a device
  • Failure reports let us easily filter failures according to which device maker, OS, or cage the device is in, e.g. this shows how often a test fails over a period of time for these environmental factors:
Test failures over time grouped by environmental factors like staging/prod backend, OS version, phone/tablet
  • Failure reports let us triage error history to identify the most common failure causes for a test, along with screenshots:
  • Tests can be manually set up to run multiple times across devices, OS versions, or device types (phone/tablet) to reproduce flaky tests

We have a typical PR (Pull Request) CI pipeline that runs unit tests (including Paparazzi and Robolectric tests), lint, ktlint, and Detekt. Running roughly 1000 device tests is part of the PR process. In a PR, a subset of smoke tests is also run against the fully obfuscated app that can be shipped to the app store (the previous device tests run against a partially obfuscated app).

Additional device automation tests are run as part of our post-merge suite. Whenever batches of PRs are merged, there is additional coverage provided by automation tests that can't be run on PRs, because we try to keep the PR device automation suite under 30 minutes.

In addition, there are Daily and Weekly suites. These are for much longer-running automation tests, because we try to keep our post-merge suite under 120 minutes. Automation tests that go into these are often long-running stress tests (e.g., can you watch a season of a series without the app running out of memory and crashing?).

In a perfect world, you have infinite resources to do all your testing. If you had infinite devices, you could run all your device tests in parallel. If you had infinite servers, you could run all your unit tests in parallel. If you had both, you could run everything on every PR. But in the real world, you have a balanced approach that runs "enough" tests on PRs, post-merge, etc. to prevent issues from getting out into the field, so your customers have a better experience while your teams stay productive.

Coverage on devices is a set of tradeoffs. On PRs, you want to maximize coverage but minimize time. On post-merge/Daily/Weekly, time is less important.

When testing on devices, we have a two-dimensional matrix of OS version vs. device type (phone/tablet). Layout issues are fairly common, so we always run tests on phone+tablet. We are still adding automation for foldables, but they have their own challenges, like being able to test layouts before/after/during the folding process.

On PRs, we typically run what we call a "narrow grid", which means a test can run on any OS version. On post-merge/Daily/Weekly, we run what we call a "full grid", which means a test runs on every OS version. The tradeoff is that if there is an OS-specific failure, it may look like a flaky test on a PR and won't be detected until later.
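The narrow-grid vs. full-grid tradeoff can be sketched as simple selection logic (OS versions and the selection rule here are illustrative assumptions, not the actual scheduler):

```java
// Sketch: narrow grid picks one OS version per test; full grid runs them all.
import java.util.List;

public class GridSelection {
    static final List<String> OS_VERSIONS = List.of("9", "11", "13", "15");

    // Narrow grid: fast, but an OS-specific failure can look like a flake
    // because reruns may land on a different OS version.
    static List<String> narrowGrid(int testId) {
        return List.of(OS_VERSIONS.get(testId % OS_VERSIONS.size()));
    }

    // Full grid: slower, but catches OS-specific failures deterministically.
    static List<String> fullGrid() {
        return OS_VERSIONS;
    }

    public static void main(String[] args) {
        System.out.println("PR run of test 7 targets:  " + narrowGrid(7));
        System.out.println("Post-merge run targets:    " + fullGrid());
    }
}
```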

Testing continuously evolves as you learn what works and as new technologies and frameworks become available. We are currently evaluating using emulators to speed up our PRs. We are also evaluating using Roborazzi to reduce device-based screenshot testing; Roborazzi allows testing of interactions while Paparazzi doesn't. We are building up a modular "demo app" system that allows for feature-level testing instead of app-level testing. Improving app testing never ends…

CVE Program rescued at the last minute after concerns over losing its government funding


The fate of the CVE Program, a database that catalogs publicly disclosed security vulnerabilities, was uncertain over the past 24 hours.

Yesterday, it was leaked that the maintainer of the CVE Program, MITRE, had sent a letter to CVE board members saying that funding for the CVE program was set to expire today, April 16.

"If a break in service were to occur, we anticipate multiple impacts to CVE, including deterioration of national vulnerability databases and advisories, tool vendors, incident response operations, and all manner of critical infrastructure," the letter said.

Most of the funding comes from the U.S. Cybersecurity and Infrastructure Security Agency (CISA), which at the time the letter was published had not renewed the contract. Fortunately, this morning CISA did renew its contract with MITRE, ensuring the continuation of the CVE program.

Ariadne Conill, co-founder and distinguished engineer at Edera, commented that the loss of the program would be catastrophic. "Every vulnerability management strategy around the world today is heavily dependent on and structured around the CVE system and its identifiers," she said.

In addition, a new foundation has been formed to further ensure the "long-term viability, stability, and independence of the program."

The CVE Foundation was founded by active CVE board members, who have been working on this for the past year because they were concerned about the program being reliant on a single government sponsor.

"CVE, as a cornerstone of the global cybersecurity ecosystem, is too important to be vulnerable itself," said Kent Landfield, an officer of the Foundation. "Cybersecurity professionals around the globe rely on CVE identifiers and data as part of their daily work, from security tools and advisories to threat intelligence and response. Without CVE, defenders are at a massive disadvantage against global cyber threats."

The CVE Foundation plans to release more information over the next several days about its structure, transition planning, and opportunities for involvement.

Cosmic Industries gets funding to automate, accelerate solar installation



Cosmic Industries has secured $4 million in funding to develop artificial intelligence-driven robots aimed at expediting the construction of critical infrastructure, with an initial focus on large-scale solar energy installations. The company said it intends to address labor shortages and streamline the development of U.S. energy infrastructure, beginning with solar power projects.

The U.S. faces a $9.1 trillion infrastructure funding deficit, with $2 trillion needed for energy, said Cosmic Robotics. At the same time, solar projects, which are crucial for powering energy-intensive industries like AI data centers, are stalled by labor shortages and reliance on manual installation. The company said this is hindering the nation's ability to meet soaring energy demands despite supply chain stability.

Cosmic Industries has unveiled its first job-site robot, the Cosmic-1A, specifically engineered for the demands of large-scale solar installations, where efficiency and accuracy are key. Designed to augment the existing workforce, the robot handles the physically intensive tasks of solar installation, promising to halve labor expenses and more than double daily production rates.


The Cosmic Robotics gripper can lift and place a single solar panel, positioning it with the necessary precision. | Credit: Cosmic Industries

Particle processes data in real time

At the heart of the robot's capabilities is Particle, Cosmic's proprietary AI platform. This real-time decision-making system processes sensor data to provide actionable insights, automating quality assurance, asset tracking, and workflow management to maintain project momentum. Particle aims to enable on-site adaptability, mitigate costly delays, and pave the way for fully autonomous construction.

Departing from typical automation strategies that seek to impose rigid factory-like processes on construction sites, Cosmic is developing AI-driven tools equipped with multi-modal sensors and advanced AI-perception software. These tools are designed to navigate the complexities of real-world construction environments, ensuring reliable operation in challenging conditions such as dust, heat, rain, and mud.

Founded in 2023, Cosmic Industries is led by CEO James Emerick, who brings deep experience in field-deployed construction automation from his work at Built Robotics and Autodesk Research. Chief Technology Officer Lewis Jones helped launch the world's first 3D-printed rocket at Relativity Space.

They are joined by a growing team of engineering leaders from Google, Amazon, SpaceX, NASA, and other frontier tech companies, bringing together aerospace-grade reliability, mission-critical systems thinking, and hard-won field deployment experience.

"Construction is the foundation of society. Every road, power plant, and data center is built by skilled hands in the field. I grew up on construction sites with my grandfather, and most of the tools he used 50 years ago are still in use today," noted Emerick. "At Cosmic, we're building the next generation of construction tools, designed to make exceptional crews even more productive, unlocking efficiency gains the industry desperately needs."




Funding to build Cosmic Industries' momentum

Giant Ventures led the funding round, with participation from MaC Venture Capital and HCVC, along with notable angel investors such as Azeem Azhar, Aarthi Ramamurthy, and Nate Williams.

"Cosmic's team has the rare combination of deep construction expertise and aerospace-grade engineering, exactly the kind of team that builds generational companies," said Madelene Larsson, principal at Giant Ventures. "James brings the hard-won experience of engineering on the job site, and Lewis brings the kind of technical excellence that can redefine how we build critical infrastructure. First, they will accelerate solar deployment, with a long-term vision to expedite how we build all infrastructure at scale."

Cosmic Robotics said its momentum has been recognized by industry leaders, with an award from the U.S. Department of Energy's American-Made Solar Prize and funding from the JLL Foundation. The company said it is now gearing up to bring Particle and the Cosmic-1A system to some of the most ambitious infrastructure projects in the country.

As pressure mounts to deliver clean energy at speed and scale, the company said it is building the tools, and the team, that will define how it gets done.

Planning permission secured for UK's first carbon capture enabled cement works



Aerial image of Padeswood Cement Works (image credit: RSK).

Cement maker Heidelberg Materials UK has secured planning permission for its Padeswood carbon capture and storage (CCS) project in North Wales, said to be the first carbon capture enabled cement works in the UK, in what looks like a landmark project for the cement industry. Environmental solutions firm RSK, which supported the bid, provides further detail.

Planning permission was obtained from the Welsh Government on 4 April 2025, five months ahead of schedule.

The Padeswood carbon capture and storage project, which will connect to the HyNet North West project, aims to be the first net zero cement facility in the UK and a global exemplar for the deployment of carbon capture technology at an existing cement works site. Once operational, it will capture and store approximately 800,000 tonnes of CO2 each year, capturing up to 95% of CO2 emissions from the existing Padeswood cement kiln.

RSK Environment Principal Environmental Consultant Harry Cross said the project is by far the most advanced carbon capture project at an operational cement works in the UK.

He said: "We are proud to have achieved planning permission for this important and rewarding project that demonstrates the UK leading the way on deploying CCS in the cement industry.

"Our work here saw RSK Environment acting as environment, consents and permitting lead, including coordinating the environmental impact assessment (EIA) and project managing the development of national significance (DNS) application. We were also able to draw on the skills and experience of 12 more RSK Group businesses, including Joanna Berlyn from Stephenson Halliday as planning lead and Copper Consultancy as communications lead for the programme of community engagement and consultation."

It is estimated that the project will create up to 500 additional jobs during the construction phase and will also create around 50 direct, long-term operational employment opportunities. The project also proposes the creation of four new ponds, nine hibernacula and 17 refugia (places where amphibians can rest during the day and escape from predators and the sun and where, in winter, they may hibernate). It is envisaged that the planting of mixed deciduous woodland and the enhancement of grassland will cover an area of around 10.13 ha and will improve its value for great crested newt (Triturus cristatus) foraging and provide wider biodiversity benefits for other protected species.

Cross said the project is a strong example of how RSK can draw on its broad range of multidisciplinary expertise to achieve a major infrastructure planning application. The combined RSK team compiled and submitted more than 100 documents as part of its contribution to the planning submission. The 13 businesses involved in the DNS application were:

  1. RSK Environment (EIA coordination, geographic information system and permitting)
  2. Stephenson Halliday (planning and landscape & visual)
  3. ADAS Land (land referencing)
  4. Copper Consultancy (communications and public affairs)
  5. Nature Positive (climate)
  6. RSK Acoustics (noise and vibration)
  7. RSK Biocensus (biodiversity)
  8. RSK Air Quality (air quality)
  9. RSK ADAS (arboriculture and soils)
  10. RSK Geosciences (land and soils, material assets and waste)
  11. RSK Land and Development Engineering (water)
  12. Headland Archaeology (cultural heritage)
  13. SCP Transport (traffic and transport)

He said that throughout the programme, RSK Environment worked closely with the project's front end engineering design (FEED) team, advising on design requirements for planning.

A large industrial structure set against a blue sky, presumably a building at Padeswood Cement Works

RSK Environment collaborated with other RSK Group businesses to prepare and submit the EIA scoping report in late 2022. A scoping direction from Planning and Environment Decisions Wales (PEDW) was obtained in April 2023.

He said: "Our collaboration continued as the EIA developed and work on the environmental statement began in 2023 through to 2024. Nine environmental factor assessments were undertaken, including landscape and visual, biodiversity, climate, and noise and vibration, to understand the impact of the project on the environment and propose mitigation and enhancements to offset the impacts. Findings were reported in the environmental statement.

"Alongside this, Copper Consultancy coordinated a programme of community engagement and consultation, organising two pre-consultation events in 2022, and a further three online and six non-statutory and statutory pre-application consultation events at local venues in 2023 and 2024, respectively. The planning application was submitted on 27 September 2024."

Heidelberg Materials UK Chief Executive Officer Simon Willis said: "This is fantastic news and brings our plans to create the UK's first net zero cement works a step closer.

"Cement is essential to the UK's transition to net zero. It is fundamental to the development of everything from new offshore wind farms to nuclear power stations, to low carbon infrastructure, and the thousands of green jobs these projects will create.

"Our Padeswood CCS project will bring significant inward investment and opportunity to the region, boosting the North Wales economy and securing the future of hundreds of skilled jobs.

"Once operational, it will also provide net zero building materials for major projects across the country and will act as an exemplar for sustainable cement production in the UK and across the globe."

ios – What is the correct syntax to continue in app for a custom intent?


I have a custom intent. When my app is unable to complete the resolution of a parameter within the app extension, I need to be able to continue within the app. I am unable to figure out what the correct Objective-C syntax is to enable execution to continue within the app. Here is what I have tried:

completion([[PickWoodIntentResponse init] initWithCode:PickWoodIntentResponseCodeContinueInApp userActivity:nil]);

This results in the following error:

Implicit conversion from enumeration type 'enum PickWoodIntentResponseCode' to different enumeration type 'INAnswerCallIntentResponseCode' (aka 'enum INAnswerCallIntentResponseCode')

I have no idea why it is referring to the enum type 'INAnswerCallIntentResponseCode', which is unrelated to my app.

I have also tried:

PickWoodIntentResponse *response = [[PickWoodIntentResponse init] initWithCode:PickWoodIntentResponseCodeContinueInApp userActivity:nil];
completion(response);

but that results in 2 errors:

Implicit conversion from enumeration type 'enum PickWoodIntentResponseCode' to different enumeration type 'INAnswerCallIntentResponseCode' (aka 'enum INAnswerCallIntentResponseCode')

and

Incompatible pointer types passing 'PickWoodIntentResponse *' to parameter of type 'INStringResolutionResult *'

The relevant autogenerated code provided to me when my intent was created is as follows:

@class PickWoodIntentResponse;
@protocol PickWoodIntentHandling 
- (void)resolveVarietyForPickWood:(PickWoodIntent *)intent withCompletion:(void (^)(INStringResolutionResult *resolutionResult))completion NS_SWIFT_NAME(resolveVariety(for:with:)) API_AVAILABLE(ios(13.0), macos(11.0), watchos(6.0));
@end

typedef NS_ENUM(NSInteger, PickWoodIntentResponseCode) {
  PickWoodIntentResponseCodeUnspecified = 0,
  PickWoodIntentResponseCodeReady,
  PickWoodIntentResponseCodeContinueInApp,
  PickWoodIntentResponseCodeInProgress,
  PickWoodIntentResponseCodeSuccess,
  PickWoodIntentResponseCodeFailure,
  PickWoodIntentResponseCodeFailureRequiringAppLaunch
};

@interface PickWoodIntentResponse : INIntentResponse
- (instancetype)init NS_UNAVAILABLE;
- (instancetype)initWithCode:(PickWoodIntentResponseCode)code userActivity:(nullable NSUserActivity *)userActivity NS_DESIGNATED_INITIALIZER;
@property (readonly, NS_NONATOMIC_IOSONLY) PickWoodIntentResponseCode code;
@end

Am I overlooking something? What would be the proper syntax to use within the completion block to satisfy the compiler?