Most data leaks occur when data is in a vulnerable state — while in use or during processing. This vulnerability can compromise industries like finance, healthcare, government, and defense, where confidentiality is essential. Innovation and collaboration in the software industry can also be impaired. However, sustainable solutions, such as confidential computing, encrypt and protect sensitive data during processing, reducing the risk of unauthorized access.
As the head of emerging technologies, I began working with confidential computing a few years ago. Through my research and hands-on projects, it became clear to me that confidential computing has immense potential to significantly improve security for vulnerable industries, because it secures data in use.
Three Reasons Why Businesses Should Consider Confidential Computing
1. Complying with regulations and avoiding penalties
Several compliance requirements and regulations, such as the General Data Protection Regulation (GDPR), mandate strong data protection throughout its lifecycle. This ensures organizations implement security measures appropriate to data-processing risks. Newly proposed compliance standards explicitly insist on securing data in use as well. In January 2023, the Digital Operational Resilience Act (DORA) introduced Article 6, emphasizing data protection for financial institutions by mandating encryption for data at rest, in transit, and in use, where relevant.
The same holds true for the healthcare industry. The Health Insurance Portability and Accountability Act (HIPAA) mandates administrative, physical, and technical safeguards to protect the confidentiality, integrity, and availability of protected health information (PHI), including securing data during processing.
Confidential computing’s ability to secure customer and transaction data is a boon for industries like finance and healthcare that are constantly under scrutiny for data protection. It ensures adherence to these regulations by using hardware-based secure enclaves to isolate sensitive data and computations and protect data in use. This prevents unauthorized access during data processing and enables organizations to avoid hefty fines and penalties by meeting regulatory guidelines like GDPR and the California Consumer Privacy Act (CCPA). The technology can also help healthcare and retail platforms comply with standards like HIPAA and PCI-DSS.
In addition, confidential computing helps maintain credibility and fosters innovation and collaboration. This potential for innovation can be a powerful differentiator for any business.
2. Securing public cloud-based infrastructure
Public clouds are vulnerable to malicious attacks. In 2023, the pharmaceutical industry lost an average of $4.82 million to cyberattacks, highlighting the demand for better privacy and data protection. Infrastructure-level multitenancy segregates computing instances and introduces problems such as noisy neighbors and hypervisor vulnerabilities, potentially leading to unauthorized data access and advanced malware attacks.
To secure public cloud environments, organizations must trust the cloud provider’s host OS, hypervisor, hardware, firmware, and orchestration system. Confidential computing uses trusted execution environments (TEEs) to address these security concerns and establish protected memory regions, or secure enclaves. Its remote attestation ensures workload integrity by making private data invisible to cloud providers and preventing unauthorized access by system administrators, infrastructure owners, service providers, the host OS and hypervisor, or other applications on the host.
Scalability and elasticity are other key benefits of cloud computing. Most applications and workloads run on virtual machines or containers, with modern architectures favoring containers. Confidential computing offerings allow existing VM- or container-based applications to be migrated without code changes by lifting and shifting the workload. I successfully piloted this approach to improve GitOps security for a client, migrating the CI/CD pipeline from public runners to confidential computing environments.
Confidential computing enhances security and is also likely to reduce the barrier to cloud adoption for security-demanding workloads.
3. Adopting AI/ML and GenAI securely
In a recent Code42 survey, 89% of respondents said that new AI tools are making their data more vulnerable. AI models require a continuous influx of data, making them susceptible to attacks and data leaks. Confidential computing addresses this by protecting training data and securing sensitive datasets during both model training and inferencing. This technology ensures that AI models learn only from authorized data, giving enterprises full control over their data and enhancing security.
There are still gaps in concepts and use cases in generative AI (GenAI), but they are not deterring companies from adopting a measured, incremental approach to rolling out GenAI. GenAI models learn from various inputs, including prompts and training data. While interacting with other GenAI tools — observability and monitoring, packaging, DevOps and GitOps tools, and so on — they can unintentionally expose or transmit unauthorized information.
Such possibilities have prompted countries to introduce regulations for better privacy. Confidential virtual machines (VMs) and containers are effective solutions. We experienced this firsthand when a client opted to deploy retrieval-augmented generation (RAG) GenAI while ensuring adherence to data locality and confidentiality requirements. The solution was implemented using a local LLM and operational tools, including locally set up data stores and tools. The approach ensured the confidentiality of the prompts and LLM responses.
Conclusion
Cloud environments are quite successful, as they provide greater agility and wider access to computing resources at a reduced cost. Despite these advantages, both private and public clouds are susceptible to data breaches. Confidential computing addresses this challenge by safeguarding data in use, making it an essential component of cloud security. Moreover, it helps companies comply with regulatory requirements. As 5G and AI technologies advance, confidential computing will become even more accessible and effective.
Well, of course I can’t say “no” – all the more so because, here, you have an abbreviated and condensed version of the chapter on this topic in the forthcoming book from CRC Press, Deep Learning and Scientific Computing with R torch. Compared with the previous post that used torch, written by the creator and maintainer of torchaudio, Athos Damiani, significant developments have taken place in the torch ecosystem, the end result being that the code got a lot simpler (especially in the model training part). That said, let’s end the preamble already, and plunge into the topic!
Inspecting the data
We use the speech commands dataset (Warden (2018)) that comes with torchaudio. The dataset holds recordings of thirty different one- or two-syllable words, uttered by different speakers. There are about 65,000 audio files overall. Our task will be to predict, from the audio alone, which of thirty possible words was pronounced.
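In code, obtaining the dataset and a single sample to inspect could look like this (the sample index is an arbitrary, illustrative choice; the recording examined below turns out to be the word “bird”):

library(torch)
library(torchaudio)

ds <- speechcommand_dataset(
  root = "~/.torch-datasets",
  url = "speech_commands_v0.01",
  download = TRUE
)

# pick one recording to look at (index chosen for illustration)
sample <- ds[2000]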
Individual tensor values are centered at zero, and range between -1 and 1. There are 16,000 of them, reflecting the fact that the recording lasted for one second, and was registered at (or has been converted to, by the dataset creators) a rate of 16,000 samples per second. The latter information is stored in sample$sample_rate:
[1] 16000
All recordings have been sampled at the same rate. Their length almost always equals one second; the – very – few sounds that are minimally longer we can safely truncate.
Finally, the target is stored, in integer form, in sample$label_index, the corresponding word being available from sample$label:
sample$label

[1] "bird"

sample$label_index

torch_tensor
2
[ CPULongType{} ]
How does this audio signal “look?”
library(ggplot2)

df <- data.frame(
  x = 1:length(sample$waveform[1]),
  y = as.numeric(sample$waveform[1])
)

ggplot(df, aes(x = x, y = y)) +
  geom_line(size = 0.3) +
  ggtitle(
    paste0("The spoken word \"", sample$label, "\": Sound wave")
  ) +
  xlab("time") +
  ylab("amplitude") +
  theme_minimal()
The spoken word “bird,” in time-domain representation.
What we see is a sequence of amplitudes, reflecting the sound wave produced by someone saying “bird.” Put differently, what we have here is a time series of “loudness values.” Even for experts, guessing which word resulted in these amplitudes is an impossible task. This is where domain knowledge comes in. The expert may not be able to make much of the signal in this representation; but they may know a way to represent it more meaningfully.
Two equivalent representations
Imagine that instead of as a sequence of amplitudes over time, the above wave were represented in a way that had no information about time at all. Next, imagine we took that representation and tried to recover the original signal. For that to be possible, the new representation would somehow have to contain “just as much” information as the wave we started from. That “just as much” is obtained from the Fourier Transform, and it consists of the magnitudes and phase shifts of the different frequencies that make up the signal.
How, then, does the Fourier-transformed version of the “bird” sound wave look? We obtain it by calling torch_fft_fft() (where fft stands for Fast Fourier Transform):
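Concretely (keeping dft as the name for the transformed signal, since that is what the plotting snippet below refers to):

dft <- torch_fft_fft(sample$waveform)

dim(sample$waveform)
dim(dft)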
The length of this tensor is the same; however, its values are not in chronological order. Instead, they represent the Fourier coefficients, corresponding to the frequencies contained in the signal. The higher their magnitude, the more they contribute to the signal:
mag <- torch_abs(dft[1, ])

df <- data.frame(
  x = 1:(length(sample$waveform[1]) / 2),
  y = as.numeric(mag[1:8000])
)

ggplot(df, aes(x = x, y = y)) +
  geom_line(size = 0.3) +
  ggtitle(
    paste0("The spoken word \"", sample$label, "\": Discrete Fourier Transform")
  ) +
  xlab("frequency") +
  ylab("magnitude") +
  theme_minimal()
The spoken word “bird,” in frequency-domain representation.
From this alternate representation, we could go back to the original sound wave by taking the frequencies present in the signal, weighting them according to their coefficients, and adding them up. But in sound classification, timing information surely must matter; we don’t really want to throw it away.
Combining representations: The spectrogram
In fact, what would really help us is a synthesis of both representations; some sort of “have your cake and eat it, too.” What if we could divide the signal into small chunks, and run the Fourier Transform on each of them? As you may have guessed from this lead-up, this indeed is something we can do; and the representation it creates is called the spectrogram.
With a spectrogram, we still keep some time-domain information – some, since there is an unavoidable loss in granularity. But for each of the time segments, we learn about their spectral composition. There is an important point to be made, though. The resolutions we get in time versus in frequency, respectively, are inversely related. If we split up the signal into many chunks (called “windows”), the frequency representation per window will not be very fine-grained. Conversely, if we want better resolution in the frequency domain, we have to choose longer windows, thus losing information about how spectral composition varies over time. What sounds like a big problem – and in many cases, will be – won’t be one for us, though, as you’ll see very soon.
First, though, let’s create and inspect such a spectrogram for our example signal. In the following code snippet, the size of the – overlapping – windows is chosen so as to allow for reasonable granularity in both the time and the frequency domain. We’re left with sixty-three windows, and, for each window, obtain two hundred fifty-seven coefficients:
fft_size <- 512
window_size <- 512
power <- 0.5

spectrogram <- transform_spectrogram(
  n_fft = fft_size,
  win_length = window_size,
  normalized = TRUE,
  power = power
)

spec <- spectrogram(sample$waveform)$squeeze()
dim(spec)
[1] 257 63
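Those dimensions can be checked directly, assuming transform_spectrogram() follows the usual default of a hop length equal to half the window length:

# 257 frequency bins: n_fft / 2 + 1
fft_size / 2 + 1
# 63 windows: a one-second signal has 16,000 samples, and the hop is 512 / 2 = 256 samples
16000 %/% (window_size / 2) + 1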
We can display the spectrogram visually:
bins <- 1:dim(spec)[1]
freqs <- bins / (fft_size / 2 + 1) * sample$sample_rate
log_freqs <- log10(freqs)

frames <- 1:(dim(spec)[2])
seconds <- (frames / dim(spec)[2]) *
  (dim(sample$waveform$squeeze())[1] / sample$sample_rate)

image(
  x = as.numeric(seconds),
  y = log_freqs,
  z = t(as.matrix(spec)),
  ylab = 'log frequency [Hz]',
  xlab = 'time [s]',
  col = hcl.colors(12, palette = "viridis")
)

main <- paste0("Spectrogram, window size = ", window_size)
sub <- "Magnitude (square root)"
mtext(side = 3, line = 2, at = 0, adj = 0, cex = 1.3, main)
mtext(side = 3, line = 1, at = 0, adj = 0, cex = 1, sub)
The spoken word “bird”: Spectrogram.
We know that we’ve lost some resolution in both time and frequency. By displaying the square root of the coefficients’ magnitudes, though – and thus, enhancing sensitivity – we were still able to obtain a reasonable result. (With the viridis color scheme, long-wave shades indicate higher-valued coefficients; short-wave ones, the opposite.)
Finally, let’s get back to the crucial question. If this representation, by necessity, is a compromise – why, then, would we want to employ it? This is where we take the deep-learning perspective. The spectrogram is a two-dimensional representation: an image. With images, we have access to a rich reservoir of techniques and architectures: among all areas deep learning has been successful in, image recognition still stands out. Soon, you’ll see that for this task, fancy architectures are not even needed; a straightforward convnet will do a very good job.
Training a neural network on spectrograms
We start by creating a torch::dataset() that, starting from the original speechcommand_dataset(), computes a spectrogram for every sample.
spectrogram_dataset <- dataset(
  inherit = speechcommand_dataset,
  initialize = function(...,
                        pad_to = 16000,
                        sampling_rate = 16000,
                        n_fft = 512,
                        window_size_seconds = 0.03,
                        window_stride_seconds = 0.01,
                        power = 2) {
    self$pad_to <- pad_to
    self$window_size_samples <- sampling_rate * window_size_seconds
    self$window_stride_samples <- sampling_rate * window_stride_seconds
    self$power <- power
    self$spectrogram <- transform_spectrogram(
      n_fft = n_fft,
      win_length = self$window_size_samples,
      hop_length = self$window_stride_samples,
      normalized = TRUE,
      power = self$power
    )
    super$initialize(...)
  },
  .getitem = function(i) {
    item <- super$.getitem(i)

    x <- item$waveform
    # make sure all samples have the same length (16000)
    # shorter ones will be padded,
    # longer ones will be truncated
    x <- nnf_pad(x, pad = c(0, self$pad_to - dim(x)[2]))
    x <- x %>% self$spectrogram()

    if (is.null(self$power)) {
      # in this case, there is an additional dimension, in position 4,
      # that we want to appear in front
      # (as a second channel)
      x <- x$squeeze()$permute(c(3, 1, 2))
    }

    y <- item$label_index
    list(x = x, y = y)
  }
)
In the parameter list to spectrogram_dataset(), note power, with a default value of 2. This is the value that, unless told otherwise, torch’s transform_spectrogram() will assume power should have. Under these circumstances, the values that make up the spectrogram are the squared magnitudes of the Fourier coefficients. Using power, you can change the default, and specify, for example, that you’d like absolute values (power = 1), any other positive value (such as 0.5, the one we used above to display a concrete example) – or both the real and imaginary parts of the coefficients (power = NULL).
Display-wise, of course, the full complex representation is inconvenient; the spectrogram plot would need an additional dimension. But we may well wonder whether a neural network could profit from the additional information contained in the “whole” complex number. After all, when reducing to magnitudes we lose the phase shifts for the individual coefficients, which might contain usable information. In fact, my tests showed that it did; use of the complex values resulted in enhanced classification accuracy.
Let’s see what we get from spectrogram_dataset():
ds <- spectrogram_dataset(
  root = "~/.torch-datasets",
  url = "speech_commands_v0.01",
  download = TRUE,
  power = NULL
)

dim(ds[1]$x)
[1] 2 257 101
We have 257 coefficients for 101 windows; and each coefficient is represented by both its real and imaginary parts.
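Again, these dimensions follow from the chosen parameters (a quick check; the hop length is the stride expressed in samples):

# 30 ms windows at 16 kHz are 480 samples long; with n_fft = 512, we still get 512 / 2 + 1 = 257 bins
0.03 * 16000
# a 10 ms stride is 160 samples, giving 101 windows
16000 %/% 160 + 1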
Next, we split up the data, and instantiate the dataset() and dataloader() objects.
train_ids <- sample(1:length(ds), size = 0.6 * length(ds))
valid_ids <- sample(setdiff(1:length(ds), train_ids), size = 0.2 * length(ds))
test_ids <- setdiff(1:length(ds), union(train_ids, valid_ids))

batch_size <- 128

train_ds <- dataset_subset(ds, indices = train_ids)
train_dl <- dataloader(train_ds, batch_size = batch_size, shuffle = TRUE)

valid_ds <- dataset_subset(ds, indices = valid_ids)
valid_dl <- dataloader(valid_ds, batch_size = batch_size)

test_ds <- dataset_subset(ds, indices = test_ids)
test_dl <- dataloader(test_ds, batch_size = 64)

b <- train_dl %>%
  dataloader_make_iter() %>%
  dataloader_next()

dim(b$x)
[1] 128 2 257 101
The model is a straightforward convnet, with dropout and batch normalization. The real and imaginary parts of the Fourier coefficients are passed to the model’s initial nn_conv2d() as two separate channels.
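The full model and training code is in the book chapter; as a rough sketch of what such a setup could look like with luz, with layer sizes, dropout rates, and the epoch count being illustrative assumptions (not the exact values that produced the results reported below):

library(luz)

net <- nn_module(
  initialize = function(num_classes = 30) {
    self$features <- nn_sequential(
      # two input channels: real and imaginary parts of the coefficients
      nn_conv2d(2, 32, kernel_size = 3),
      nn_batch_norm2d(32),
      nn_relu(),
      nn_max_pool2d(kernel_size = 2),
      nn_dropout2d(p = 0.2),
      nn_conv2d(32, 64, kernel_size = 3),
      nn_batch_norm2d(64),
      nn_relu(),
      nn_max_pool2d(kernel_size = 2),
      nn_dropout2d(p = 0.2),
      # collapse the remaining frequency/time grid into one value per channel
      nn_adaptive_avg_pool2d(c(1, 1))
    )
    self$classifier <- nn_sequential(
      nn_flatten(),
      nn_linear(64, 256),
      nn_relu(),
      nn_dropout(p = 0.5),
      nn_linear(256, num_classes)
    )
  },
  forward = function(x) {
    x <- self$features(x)
    self$classifier(x)
  }
)

fitted <- net %>%
  setup(
    loss = nn_cross_entropy_loss(),
    optimizer = optim_adam,
    metrics = list(luz_metric_accuracy())
  ) %>%
  fit(train_dl, epochs = 40, valid_data = valid_dl)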
With thirty classes to distinguish between, a final validation-set accuracy of ~0.94 looks like a very decent result!
We can check this on the test set:
evaluate(fitted, test_dl)
loss: 0.2373
acc: 0.9324
An interesting question is which words get confused most often. (Of course, even more interesting is how error probabilities are related to features of the spectrograms – but this, we have to leave to the real domain experts.) A nice way of displaying the confusion matrix is to create an alluvial plot. We see the predictions, on the left, “flow into” the target slots. (Target-prediction pairs less frequent than a thousandth of test set cardinality are hidden.)
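The post does not reproduce the plotting code; one way to build such a plot, assuming the ggalluvial package and keeping numeric class indices rather than mapping them back to the words, might be:

library(dplyr)
library(ggplot2)
library(ggalluvial)

# predicted class indices on the test set
preds <- predict(fitted, test_dl)
pred_class <- as.numeric(torch_argmax(preds, dim = 2))

# true class indices, in the same order as test_dl (the test loader does not shuffle);
# this recomputes spectrograms, which is fine for a one-off
true_class <- sapply(test_ids, function(i) as.numeric(ds[i]$y))

flows <- data.frame(target = true_class, prediction = pred_class) %>%
  count(target, prediction) %>%
  filter(n > length(test_ids) / 1000)   # hide very rare target-prediction pairs

ggplot(flows, aes(y = n, axis1 = prediction, axis2 = target)) +
  geom_alluvium(aes(fill = factor(target))) +
  geom_stratum() +
  geom_text(stat = "stratum", aes(label = after_stat(stratum)), size = 2.5) +
  theme_minimal() +
  theme(legend.position = "none")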
Alluvial plot for the complex-spectrogram setup.
Wrapup
That’s it for today! In the upcoming weeks, expect more posts drawing on content from the soon-to-appear CRC book, Deep Learning and Scientific Computing with R torch. Thanks for reading!
The new Solix docking station supports the Solix robot by autonomously refilling chemicals in the field. | Credit: Solinftec
Brazilian robotics company Solinftec launched its autonomous Solix agricultural spraying robot a little over a year ago. As its name might imply, Solix uses onboard solar power. It is designed for cultivation missions including plant scouting and targeted spraying.
The robot uses artificial intelligence with vision cameras and other sensors to traverse fields. Solinftec said it can effectively control weeds and insects while optimizing chemical pesticides and herbicides. The company aims to reduce chemical usage by as much as 95%.
At the Farm Progress Show today in Boone, Iowa, Solinftec announced its latest agricultural robotics development: the Solix docking station. It is designed to extend the robot’s utility by allowing it to refill chemicals, fertilizers, and biological products in the field.
The docking station is autonomous, solar-powered, and integrated with Solix’s software platform. This new technology enables continuous field management by ensuring the robot can access the necessary products for 24/7 operations, said Solinftec.
The station also incorporates scouting data from the growing season to ensure the right products are available for day-to-day execution.
North America is an important and expanding market for Solix, and this is the robot’s second year of operations in the U.S. Solinftec has grown its staff and operations within the U.S. during that period.
“Every field has unique characteristics that allow for different weeds to appear in the field, depending on the stage of the crop,” explained Guilherme Guiné, chief operating officer for North America at Solinftec. “We designed the docking station to allow Solix to choose the product that will be used based on the recognition of weeds by our artificial intelligence system, ALICE AI, enabling the use of specific products for each unique situation.”
Solinftec is validating remaining features and concepts as the docking station nears production.
“We plan to equip the station with multiple products and allow the robot to apply the test concept in the field,” shared Guiné. “Solix has the ability to use a small amount of product in a section of the field and check the crop’s response.”
“Solix can then monitor that area and expand the scope of application based on results,” he added. “With this, we can increase the speed of adoption of new products on a large scale, considering the diversity of each field, region, and season.”
The Solix docking station is fully self-contained and designed to support continuous operations. | Credit: Solinftec
Solix docking station analyzes field data
The docking station uses data about the field to help the robot increase farm productivity, said Solinftec. The robot surveys the entire field at a plant-by-plant level, providing data and information on crop health, weed infestation, and specific pests.
When Solix stops for a refill, the docking station loads the correct products and quantities onto the robot. The docking station is also able to flush the robot’s tanks autonomously and safely so that chemicals aren’t mixed on board, Solinftec noted.
The docking station can be refilled while the Solix robot continues its mission in the field, it added.
Editor’s note: The Field Robotics track at RoboBusiness 2024 will feature speakers from FarmWise, u-blox, Bishop-Wisecarver, Forcen, Point One Navigation, Blue River Technology, and SKA Robotics. Register now to attend the event, which will be on Oct. 16 and 17 in Santa Clara, Calif.
Sewage pollution in surface waters in the UK and Ireland is one of the most high-profile environmental issues in the country. Water industry research organisation UKWIR is leading a raft of innovative sewerage projects designed to transform the way water companies manage this issue in the coming five-year Asset Management Plan period for England and Wales – AMP8, which begins on 1 April 2025.
“UKWIR’s research programme aims to create a future where sewerage management is not just an essential service, but a key contributor towards a sustainable and healthy environment,” said Jenni Hughes, UKWIR strategic programme manager.
“Previously, UKWIR research has focused on getting a deeper understanding of the networks, because we need to understand what is happening in the current network before we can make meaningful, long-term improvements.
“Now our focus shifts to the future, with societal needs, environmental protection and resilient infrastructure at the centre. The latest wave of research gives water companies the tools and knowledge they need to navigate the ongoing challenges of sewage management into the next asset management period and beyond.”
Protecting rivers and seas
The impact of sewage and stormwater discharge, agricultural runoff, and urban pollution on river ecosystems in the UK and Ireland is a key area of research for UKWIR.
Dr Nick Mills is UKWIR’s programme lead on the organisation’s Big Question 6: How do we achieve sustainable and resilient sewerage and drainage by 2050? He is also director of environment & innovation at Southern Water.
Mills says, “To optimise the benefits for both people and nature, we need a data-driven river strategy that comprehensively analyses threats to river health. This approach should hold all sectors accountable, while simultaneously empowering them to identify solutions for building and maintaining healthy rivers.
“There is a lot of focus on storm overflows in the media and from the general public. They are a legacy asset that the sector is looking to phase out through a combination of nature-based and sustainable drainage systems, and data-driven engineering approaches.
“More broadly, reducing storm overflows requires urgent, collaborative action from water companies, councils, property owners and the public,” added Mills.
Beating blockages
Sewer blockages are a major concern in the UK, with an estimated 200,000 occurring annually, and FOG – fats, oils and grease – cited as the cause in around 75% of cases. A build-up of FOG hinders the smooth operation of sewer systems and wastewater treatment works (WwTWs), shortens the lifespan of critical assets and increases maintenance costs.
This burden ultimately falls on water companies, which may be forced to raise prices for customers. Moreover, FOG blockages can cause sewer overflows, creating a public health hazard and impacting the environment.
UKWIR projects aiming to tackle this include:
Nature as a stakeholder
Protecting and enhancing waters, and the wildlife and communities they support, requires a combination of grey and green solutions. For water companies looking to create greater social value, nature-based solutions (NbS) such as sustainable drainage systems (SuDS) can offer a powerful and cost-effective approach.
However, there is currently limited sector-wide data on their benefits, reliability and cost-effectiveness compared to traditional engineered and technological solutions. To address this issue, UKWIR currently has a research project aimed at improving the understanding of the whole-life cost, carbon footprint and delivery of retrofit SuDS.
Data and insights
Water companies in the UK are being urged by the information commissioner to be more transparent with the public about sewage spill data.
Using data-driven methods, including artificial intelligence and predictive analytics, is a promising approach to proactively identifying and addressing blockages in sewer systems, ultimately improving environmental protection, public health and regulatory compliance.
Recent UKWIR projects include:
· Modelling sewer inlet capacity restrictions
· Quantifying, managing and communicating the differences in storm overflow spill data between event duration monitoring (EDM) outputs and hydraulic model predictions
Working with customers
Customer behaviour can also play a significant role in reducing the amount of sewage entering watercourses in the UK.
Customer-caused blockages in sewers are a major issue in the UK. Every year in England and Wales, water companies spend millions of pounds dealing with over 300,000 blockages – thousands of which see people’s homes and belongings ruined by sewer flooding.
A recently published UKWIR project – Learning and recommendations from customer behaviour campaigns on blockage reduction – highlights the need for effective customer campaigns to reduce blockages, and emphasises the importance of tailored campaigns, a unified national approach, and collaboration among water companies through a proposed national working group focused on changing customer behaviour around blockages.
UKWIR has announced its direction of travel for research projects from now until 2050. The refreshed strategy aims to bring together global developments in water management with impactful research to address UK-specific industry challenges identified through the UKWIR Big Questions and extensive stakeholder engagement.
For more information, visit: ukwir.org/ukwir-announce-new-research-strategy
I have an app for which I’ve registered a custom URL scheme (I know universal links are preferred these days, but this was a business decision). Now, I want to open the app by scanning a QR code and then show the relevant view by reading the parameters from the URL. Here’s a sample link in the QR code:
my-schema://open-view/123
This is the code I’m working with right now:
@main
struct MyApp: App {

    @State private var viewAction: ViewAction? = nil

    var body: some Scene {
        WindowGroup {
            NavigationView {
                SomeView(action: viewAction) // `viewAction` is set in `handleOpenURL` and passes the retrieved data to subviews – works fine
                ...
            }
            .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { userActivity in
                guard let url = userActivity.webpageURL else { return }
                handleOpenURL(url)
            }
            .onOpenURL(perform: handleOpenURL)
        }
    }
As you can see, I’m using both .onContinueUserActivity and .onOpenURL. That’s because sometimes the former is called for QR code scanning and the latter for any tapped URLs, and sometimes both go through .onOpenURL – I don’t know why.
Now, for navigation, I’m using a NavigationView and it all works fine. However, I’m experiencing some trouble with NavigationView, and since it’s deprecated in favor of NavigationStack, I decided to switch to that.
When using NavigationStack, the app still opens fine from both a tapped link and a scanned QR code. However, neither .onContinueUserActivity nor .onOpenURL is called when scanning the QR code. Only when tapping the link from the QR code are the methods called.
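For reference, the switched-over version is essentially the same code, just with the container swapped (a minimal sketch; everything else is unchanged):

var body: some Scene {
    WindowGroup {
        NavigationStack {
            SomeView(action: viewAction)
        }
        .onContinueUserActivity(NSUserActivityTypeBrowsingWeb) { userActivity in
            guard let url = userActivity.webpageURL else { return }
            handleOpenURL(url)
        }
        .onOpenURL(perform: handleOpenURL)
    }
}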
TL;DR
When using NavigationStack, scanning a QR code to open the app does NOT call the methods I need to be called so I can retrieve the URL parameters and open the respective view.
I have no idea what this has to do with the navigation method, but that’s what I found.