
Fixing “Reference to captured var in concurrently-executing code” in Swift


Published on: July 31, 2024

When you begin migrating to the Swift 6 language mode, you will almost certainly turn on strict concurrency checking first. As soon as you've done this, you'll run into a number of warnings and errors, and these errors can be confusing at times.

I'll start by saying that having a solid understanding of actors, sendability, and data races is a huge advantage when you want to adopt the Swift 6 language mode. Pretty much all of the warnings you'll get in strict concurrency mode tell you about potential issues related to running code concurrently. For an in-depth understanding of actors, sendability, and data races, I highly recommend that you take a look at my Swift Concurrency course, which gives you access to a series of videos, exercises, and my Practical Swift Concurrency book with a single purchase.

With that out of the way, let's take a look at the following warning that you might encounter in your project:

Reference to captured var in concurrently-executing code

This warning tells you that you're capturing a variable inside a body of code that will run asynchronously. For example, the following code will result in this warning:

var task = NetworkTask(
    urlsessionTask: urlSessionTask
)

add(fromTask: urlSessionTask, metaData: metaData, completion: { result in
    Task {
        await task.sendResult(result) // Reference to captured var 'task' in concurrently-executing code; this is an error in the Swift 6 language mode
    }
})

The task variable that we create a few lines earlier is mutable. This means that we can assign a different value to that task at any time, and that could result in inconsistencies in our data. For example, if we assign a new value to the task before the closure starts running, we would have captured the old task, which could be unexpected.
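To see why the compiler is worried, here's a minimal, self-contained sketch (using a hypothetical `Counter` class rather than the `NetworkTask` from this post) showing that an escaping closure captures the variable itself, not the value the variable held when the closure was created:

```swift
final class Counter {
    let id: Int
    init(id: Int) { self.id = id }
}

var counter = Counter(id: 1)

// The closure captures the *variable* `counter`, not its current value.
let report = { print(counter.id) }

counter = Counter(id: 2) // reassigned before the closure runs

report() // prints 2, not 1 — the closure observes the latest value
```

In concurrent code, that reassignment can happen on another thread at any moment, which is exactly the kind of race strict concurrency is warning us about.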

Since strict concurrency is meant to help us make sure that our code runs with as few surprises as possible, Swift wants us to capture a constant value instead. In this case, I'm not mutating task anyway, so it's safe to make it a let:

let task = NetworkTask(
    urlsessionTask: urlSessionTask
)

add(fromTask: urlSessionTask, metaData: metaData, completion: { result in
    Task {
        await task.sendResult(result)
    }
})

This change removes the warning because the compiler now knows for sure that task won't be given a new value at some unexpected time.

Another way to fix this error is to make an explicit capture in the completion closure that I'm passing. This capture happens immediately, as a let, so Swift knows that the captured value won't change unexpectedly:

var task = NetworkTask(
    urlsessionTask: urlSessionTask
)

add(fromTask: urlSessionTask, metaData: metaData, completion: { [task] result in
    Task {
        await task.sendResult(result.mapError({ $0 as any Error }))
    }
})

Alternatively, you could make an explicit constant capture before your Task runs:

var task = NetworkTask(
    urlsessionTask: urlSessionTask
)

let theTask = task
add(fromTask: urlSessionTask, metaData: metaData, completion: { result in
    Task {
        await theTask.sendResult(result)
    }
})

This isn't as elegant, but it can be needed in cases where you do want to pass your variable to a piece of concurrently executing code while also keeping it a mutable property for other objects. It's essentially the exact same thing as making a capture in your completion closure (or directly on the Task if there are no extra wrapping closures involved).

When you first encounter this warning, it might not be immediately obvious why you're seeing it or how you should fix it. In virtually all cases it means that you should either change your var to a let, or perform an explicit capture of your variable, either by making a shadowing let or through a capture list on the first concurrent bit of code that accesses your variable. In the case of the example in this post that's the completion closure, but for you it might be directly on the Task.
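For completeness, here's a minimal, runnable sketch of a capture list placed directly on the Task (using a hypothetical `Logger` actor rather than this post's `NetworkTask`):

```swift
actor Logger {
    private(set) var messages: [String] = []
    func log(_ message: String) { messages.append(message) }
}

var logger = Logger() // a mutable var, like `task` in the examples above

let work = Task { [logger] in // explicit constant capture on the Task itself
    await logger.log("done")
}

await work.value
print(await logger.messages.count) // 1
```

Because `[logger]` copies the current value of the var into a constant at the moment the Task is created, later reassignments of `logger` can no longer race with the Task's body.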

The good, the bad, and the algorithmic

Artificial Intelligence (AI) is a hot topic at the moment. It's everywhere. You probably already use it every day. That chatbot you're talking to about your lost parcel? Powered by conversational AI. The 'recommended' items lined up under your most frequent Amazon purchases? Driven by AI/ML (machine learning) algorithms. You might even use generative AI to help write your LinkedIn posts or emails.

But where does the line stop? When AI tackles monotonous and repetitive tasks, as well as research and content creation, at a much faster pace than any human could, why would we even need humans at all? Is the 'human element' actually required for a business to function? Let's dig deeper into the benefits, challenges, and risks surrounding the best person (or entity?) for the job: robot or human?

Why AI works

AI has the power to optimize business processes and reduce the time spent on tasks that eat into employees' productivity and business output during their working day. Already, companies are adopting AI for a number of functions, whether that's reviewing resumes for job applications, identifying anomalies in customer datasets, or writing content for social media.

And they can do all this in a fraction of the time it would take humans. In cases where early diagnosis and intervention are everything, the deployment of AI can have a hugely positive impact across the board. For example, an AI-enhanced blood test could reportedly help predict Parkinson's disease up to seven years before the onset of symptoms – and that's just the tip of the iceberg.

Thanks to their ability to uncover patterns in vast amounts of data, AI technologies can also support the work of law enforcement agencies, including by helping them identify and predict likely crime scenes and trends. AI-driven tools also have a role to play in combating crime and other threats in the online realm, and in helping cybersecurity professionals do their jobs more effectively.

AI's ability to save businesses time and money is nothing new. Think about it: the less time employees spend on tedious tasks such as scanning documents and uploading data, the more time they can spend on business strategy and growth. In some cases, full-time contracts may not be needed, so the business would spend less money on overheads (understandably, this isn't great for employment rates).

AI-based systems can also help eliminate the risk of human error. There's a reason for the saying 'we're only human'. All of us can make mistakes, especially after five coffees, only three hours of sleep, and a looming deadline ahead. AI-based systems can work around the clock without ever getting tired. In a way, they have a level of reliability you won't get from even the most detail-oriented and methodical human.

The limitations of AI

Make no mistake, however: on closer inspection, things do get a little more complicated. While AI systems can minimize errors associated with fatigue and distraction, they are not infallible. AI, too, can make mistakes and 'hallucinate'; i.e., spout falsehoods while presenting them as if they were correct, especially if there are issues with the data it was trained on or with the algorithm itself. In other words, AI systems are only as good as the data they're trained on (which requires human expertise and oversight).

Continuing this theme, while humans can claim to be objective, we're all susceptible to unconscious bias based on our own lived experiences, and it's hard, impossible even, to turn that off. AI doesn't inherently create bias; rather, it can amplify existing biases present in the data it's trained on. Put differently, an AI tool trained on clean and unbiased data can indeed produce purely data-driven outcomes and remedy biased human decision-making. That said, this is no mean feat, and ensuring fairness and objectivity in AI systems requires continuous effort in data curation, algorithm design, and ongoing monitoring.

The good, the bad, and the algorithmic

A 2022 study showed that 54% of technology leaders said they were very or extremely concerned about AI bias. We've already seen the disastrous consequences that using biased data can have on businesses. For example, due to the use of biased datasets by a car insurance company in Oregon, women are charged roughly 11.4% more for their car insurance than men – even when everything else is exactly the same! This can easily lead to a damaged reputation and loss of customers.

With AI consuming expansive datasets, this brings up the question of privacy. When it comes to personal data, actors with malicious intent may be able to find ways to bypass privacy protocols and access this data. While there are ways to create a more secure data environment across these tools and systems, organizations still need to be vigilant about any gaps in their cybersecurity, given the extra data attack surface that AI brings.

Furthermore, AI can't understand emotions the way (most) humans do. People on the other side of an interaction with AI may feel a lack of the empathy and understanding they would get from a real 'human' interaction. This can impact customer/user experience, as shown by the game World of Warcraft, which lost millions of players after replacing its customer service team – which used to be real people who would even go into the game themselves to show players how to perform actions – with AI bots that lack that humor and empathy.

With its limited dataset, AI's lack of context can cause issues around data interpretation. For example, cybersecurity experts may have background knowledge of a particular threat actor, enabling them to identify and flag warning signs that a machine might miss if something doesn't align perfectly with its programmed algorithm. It's these intricate nuances that have the potential for huge consequences further down the line, for both the business and its customers.

So while AI may lack context and understanding of its input data, humans lack an understanding of how their AI systems work. When AI operates in 'black boxes', there is no transparency into how or why the tool produced the output or decisions it presented. Being unable to see the 'workings out' behind the scenes can cause people to question its validity. Furthermore, if something goes wrong or the input data is poisoned, this 'black box' scenario makes it hard to identify, manage, and solve the issue.

Why we need people

Humans aren't perfect. But when it comes to communicating and resonating with people, and making important strategic decisions, surely humans are the best candidates for the job?

Unlike AI, people can adapt to evolving situations and think creatively. Without the predefined rules, limited datasets, and prompts AI relies on, humans can use their initiative, knowledge, and past experiences to tackle challenges and solve problems in real time.

This is particularly important when making ethical decisions and balancing business (or personal) goals with societal impact. For example, AI tools used in hiring processes may not consider the broader implications of rejecting candidates based on algorithmic biases, and the knock-on consequences this could have for workplace diversity and inclusion.

Because the output from AI is created by algorithms, it also runs the risk of being formulaic. Consider generative AI used to write blogs, emails, and social media captions: repetitive sentence structures can make copy clunky and less engaging to read. Content written by humans will most likely have more nuance, perspective, and, let's face it, personality. Especially for brand messaging and tone of voice, it can be hard to mimic a company's communication style using the strict algorithms AI follows.

With that in mind, while AI might be able to provide a list of potential brand names, for example, it's the people behind the brand who truly understand their audiences and would know what would resonate best. And with human empathy and the ability to 'read the room', humans can better connect with others, fostering stronger relationships with customers, partners, and stakeholders. This is particularly valuable in customer service. As mentioned earlier, poor customer service can lead to lost brand loyalty and trust.

Last but not least, humans can adapt quickly to evolving circumstances. If you need an urgent company statement about a recent event, or need to pivot away from a campaign's particular targeted message, you need a human. Reprogramming and updating AI tools takes time, which may not be acceptable in certain situations.

What's the answer?

The ideal approach to cybersecurity is not to rely solely on AI or humans, but to use the strengths of both. This could mean using AI to handle large-scale data analysis and processing while relying on human expertise for decision-making, strategic planning, and communications. AI should be used as a tool to support and enhance your workforce, not replace it.

AI lies at the heart of ESET products, enabling our cybersecurity experts to put their attention into creating the best solutions for ESET customers. Learn how ESET leverages AI and machine learning for enhanced threat detection, investigation, and response.

DiskWarrior Review | Macworld

Apple to split App Store team into two in major reorganization

The App Store team at Apple will undergo some major changes.
Photo: Ed Hardy/Cult of Mac

Matt Fischer, the VP of the App Store since 2010, will leave Apple in October of this year. He reported directly to Phil Schiller.

His departure will follow a major reorganization of the App Store team, intended to avoid further regulatory scrutiny. One team will oversee Apple's own App Store, while the other will manage third-party marketplaces.

App Store team to split into two

The App Store has become a lucrative business for Apple, raking in billions of dollars in commission every quarter. However, this has also subjected the company's unfair and monopolistic business practices to regulatory scrutiny. The European Commission's Digital Markets Act forced Apple to allow app sideloading on the iPhone. It also opened the gate for third-party marketplaces, at least in the EU.

A Bloomberg report claims Apple will reorganize the App Store team to prevent further regulatory issues. After Fischer's departure, Carson Silve will head the new App Store team. He has been at Apple since 2012 and is currently the App Store's Senior Director of Business Management. Ann Thai, who joined the company in 2010, will lead the alternative app distribution team.

As for Fischer, he announced his departure in an internal email to his team on Wednesday. He noted that the reorganization gave him the opportunity to step away from the Cupertino company, as it had been on his mind for some time now.

The company has not officially announced its plans to restructure the App Store team internally.

EU could slap Apple with huge fines

Apple has been facing wrath from regulators due to its monopolistic App Store rules. While it made some changes to comply with the EU's DMA, the Commission believes the company does not fully comply with them.

Apple could face huge fines for breaking the 'steering' rules under the act, which could be as high as 10% of the company's global turnover.



Critical Flaw in WordPress LiteSpeed Cache Plugin Allows Hackers Admin Access


Aug 22, 2024 | Ravie Lakshmanan | Website Security / Vulnerability


Cybersecurity researchers have disclosed a critical security flaw in the LiteSpeed Cache plugin for WordPress that could permit unauthenticated users to gain administrator privileges.

"The plugin suffers from an unauthenticated privilege escalation vulnerability which allows any unauthenticated visitor to gain Administrator-level access, after which malicious plugins could be uploaded and installed," Patchstack's Rafie Muhammad said in a Wednesday report.

The vulnerability, tracked as CVE-2024-28000 (CVSS score: 9.8), has been patched in version 6.4 of the plugin, released on August 13, 2024. It affects all versions of the plugin, including and prior to 6.3.0.1.


LiteSpeed Cache is one of the most widely used caching plugins in WordPress, with over five million active installations.

In a nutshell, CVE-2024-28000 makes it possible for an unauthenticated attacker to spoof their user ID and register as an administrative-level user, effectively granting them the privileges needed to take over a vulnerable WordPress site.

The vulnerability is rooted in a user simulation feature in the plugin that uses a weak security hash whose seed is a trivially guessable random number.

Specifically, there are only one million possible values for the security hash, because the random number generator is derived from the microsecond portion of the current time. What's more, the random number generator is not cryptographically secure, and the generated hash is neither salted nor tied to a particular request or user.
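To illustrate the scale of the problem, here is a rough sketch (hypothetical code, not the plugin's actual implementation) of why a seed taken from the microsecond portion of the current time leaves at most 1,000,000 candidate hashes, a keyspace an attacker can enumerate almost instantly:

```swift
// Stand-in for the plugin's hash: any deterministic, unsalted function of
// the seed has at most as many possible outputs as there are seeds.
func weakHash(seed: UInt32) -> UInt32 {
    var h = seed &* 2_654_435_761 // illustrative multiplicative mixing step
    h ^= h >> 16
    return h
}

// The seed is the microsecond part of the current time: 0..<1_000_000.
// An attacker can therefore precompute every candidate hash up front.
let candidates = (0..<1_000_000).map { weakHash(seed: UInt32($0)) }
print(candidates.count) // 1000000 — small enough to brute-force
```

A properly designed security hash would instead be salted and derived from a high-entropy secret, making precomputation like this infeasible.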

"This is due to the plugin not properly restricting the role simulation functionality, allowing a user to set their current ID to that of an administrator if they have access to a valid hash, which can be found in the debug logs or through brute force," Wordfence said in its own alert.

"This makes it possible for unauthenticated attackers to spoof their user ID to that of an administrator, and then create a new user account with the administrator role using the /wp-json/wp/v2/users REST API endpoint."


It's important to note that the vulnerability cannot be exploited on Windows-based WordPress installations, because the hash generation function relies on a PHP method called sys_getloadavg() that is not implemented on Windows.

"This vulnerability highlights the critical importance of ensuring the strength and unpredictability of values that are used as security hashes or nonces," Muhammad said.

With a previously disclosed flaw in LiteSpeed Cache (CVE-2023-40000, CVSS score: 8.3) already exploited by malicious actors, it is essential that users move quickly to update their installations to the latest version.
