
ios – Fetch contacts by phone number in Swift


Predicates are for filtering the fetched CNContact objects, restricting the results to those that match some criteria. For instance, you might use a predicate to fetch only contacts with phone numbers:

request.predicate = NSPredicate(format: "phoneNumbers.@count > 0")

But it won’t “split” a single CNContact into multiple entries, one for each phone number. You have to do that yourself. You can use the pattern that Joakim outlined (+1). Or, personally, I would use a flatMap method that takes an array of return values and builds an array from that:

extension CNContactStore {
    /// Flat map
    ///
    /// - Parameters:
    ///   - request: The `CNContactFetchRequest`.
    ///   - transform: A closure that returns an array of values to be appended to the final results.
    /// - Returns: A flat array of all of the results.

    func flatMap<T>(request: CNContactFetchRequest, _ transform: (CNContact, UnsafeMutablePointer<ObjCBool>) -> [T]) throws -> [T] {
        var results: [T] = []

        try enumerateContacts(with: request) { contact, stop in
            results += transform(contact, stop)
        }

        return results
    }

    /// Map
    ///
    /// - Parameters:
    ///   - request: The `CNContactFetchRequest`.
    ///   - transform:  A closure that returns a value to be appended to the final results.
    /// - Returns: An array of all of the results.

    func map<T>(request: CNContactFetchRequest, _ transform: (CNContact, UnsafeMutablePointer<ObjCBool>) -> T) throws -> [T] {
        var results: [T] = []

        try enumerateContacts(with: request) { contact, stop in
            results.append(transform(contact, stop))
        }

        return results
    }
}

That’s both a flatMap (for your use-case where you may want to return multiple objects for a given CNContact), as well as a more traditional map rendition (not used here, but it is my typical use-case).

Anyway, you can then use it like so:

let keys = [
    CNContactPhoneNumbersKey as CNKeyDescriptor,
    CNContactFormatter.descriptorForRequiredKeys(for: .fullName)
]

let request = CNContactFetchRequest(keysToFetch: keys)
request.predicate = NSPredicate(format: "phoneNumbers.@count > 0")
request.sortOrder = .userDefault

let formatter = CNContactFormatter()
formatter.style = .fullName

do {
    let results = try store.flatMap(request: request) { contact, _ in
        contact.phoneNumbers.compactMap { phone -> Contact? in
            guard let name = formatter.string(from: contact) else { return nil }
            return Contact(fullName: name, phoneNumber: phone.value.stringValue)
        }
    }

    …
} catch {
    print("Failed to fetch contacts with phone numbers:", error)
}

That returned:

Contact(fullName: "John Appleseed", phoneNumber: "888-555-5512")
Contact(fullName: "John Appleseed", phoneNumber: "888-555-1212")
Contact(fullName: "Kate Bell", phoneNumber: "(555) 564-8583")
Contact(fullName: "Kate Bell", phoneNumber: "(415) 555-3695")
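For completeness, the usage code above assumes a simple Contact value type and a CNContactStore instance named store, neither of which is shown in the answer. A minimal sketch of those assumed pieces (the Contact struct is hypothetical, and contacts access must have been granted before fetching) might look like:

```swift
import Contacts

// Hypothetical model type assumed by the usage code; one name/number pair.
struct Contact {
    let fullName: String
    let phoneNumber: String
}

let store = CNContactStore()

// Contacts access must be granted before fetching; request it up front
// and only run the fetch once `granted` is true.
store.requestAccess(for: .contacts) { granted, error in
    guard granted else { return }
    // perform the fetch here
}
```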

Big on prevention, even bigger on AI



Hundreds of cybersecurity professionals, analysts and decision-makers came together earlier this month for ESET World 2024, a conference that showcased the company’s vision and technological advancements and featured a number of insightful talks about the latest trends in cybersecurity and beyond.

The topics ran the gamut, but it’s safe to say that the subjects that resonated the most included ESET’s cutting-edge threat research and perspectives on artificial intelligence (AI). Let’s now briefly look at some sessions that covered the topic that’s on everybody’s lips these days – AI.

Back to basics

First off, ESET Chief Technology Officer (CTO) Juraj Malcho gave the lay of the land, offering his take on the key challenges and opportunities afforded by AI. He wouldn’t stop there, however, and went on to seek answers to some of the fundamental questions surrounding AI, including “Is it as revolutionary as it’s claimed to be?”.

Juraj Malcho, Chief Technology Officer, ESET

The current iterations of AI technology are mostly in the form of large language models (LLMs) and various digital assistants that make the tech feel very real. However, they’re still rather limited, and we must thoroughly define how we want to use the tech in order to empower our own processes, including its uses in cybersecurity.

For example, AI can simplify cyber defense by deconstructing complex attacks and reducing resource demands. That way, it enhances the security capabilities of short-staffed business IT operations.

Demystifying AI

Juraj Jánošík, Director of Artificial Intelligence at ESET, and Filip Mazán, Sr. Manager of Advanced Threat Detection and AI at ESET, went on to present a comprehensive view into the world of AI and machine learning, exploring their roots and distinguishing features.

Juraj Jánošík, ESET’s Director of Artificial Intelligence, and Filip Mazan, Sr. Manager of Advanced Threat Detection

Mr. Mazán demonstrated how they’re fundamentally based on human biology, whereby AI networks mimic some aspects of how biological neurons function to create artificial neural networks with varying parameters. The more complex the network, the greater its predictive power, leading to the advancements seen in digital assistants like Alexa and LLMs like ChatGPT or Claude.

Later, Mr. Mazán highlighted that as AI models become more complex, their utility can diminish. As we approach the recreation of the human brain, the increasing number of parameters necessitates thorough refinement. This process requires human oversight to constantly monitor and fine-tune the model’s operations.

And pigs might fly (generative AI models can be masterfully artistic)

Indeed, leaner models are sometimes better. Mr. Mazán described how ESET’s strict use of internal AI capabilities results in faster and more accurate threat detection, meeting the need for rapid and precise responses to all manner of threats.

He also echoed Mr. Malcho and highlighted some of the limitations that beset large language models (LLMs). These models work based on prediction and involve connecting meanings, which can easily get muddled and result in hallucinations. In other words, the utility of these models only goes so far.

Other limitations of current AI tech

Additionally, Mr. Jánošík went on to address other limitations of contemporary AI:

  • Explainability: Current models consist of complex parameters, making their decision-making processes difficult to understand. Unlike the human brain, which operates on causal explanations, these models function through statistical correlations, which aren’t intuitive to humans.
  • Transparency: Top models are proprietary (walled gardens), with no visibility into their inner workings. This lack of transparency means there is no accountability for how these models are configured or for the results they produce.
  • Hallucinations: Generative AI chatbots often generate plausible but incorrect information. These models can exude high confidence while delivering false information, leading to mishaps and even legal issues, such as after Air Canada’s chatbot presented false information about a discount to a passenger.

Luckily, the limits also apply to the misuse of AI technology for malicious activities. While chatbots can easily formulate plausible-sounding messages to support spearphishing or business email compromise attacks, they aren’t that well-equipped to create dangerous malware. This limitation is due to their propensity for “hallucinations” – producing plausible but incorrect or illogical outputs – and their underlying weaknesses in generating logically connected and functional code. As a result, creating new, effective malware typically requires the intervention of an actual expert to correct and refine the code, making the process harder than some might assume.

Finally, as pointed out by Mr. Jánošík, AI is just another tool that we need to understand and use responsibly.

The rise of the clones

In the next session, Jake Moore, Global Cybersecurity Advisor at ESET, gave a taste of what’s currently possible with the right tools, from the cloning of RFID cards and hacking CCTVs to creating convincing deepfakes – and how it can put corporate data and finances at risk.

Among other things, he showed how easy it is to compromise the premises of a business by using a well-known hacking gadget to copy employee entrance cards or to hack (with permission!) a social media account belonging to the company’s CEO. He went on to use a tool to clone his likeness, both facial and voice, to create a convincing deepfake video that he then posted on one of the CEO’s social media accounts.

Jake Moore, Global Security Advisor, ESET

The video – which had the would-be CEO announce a “challenge” to bike from the UK to Australia and racked up more than 5,000 views – was so convincing that people started to suggest sponsorships. Indeed, even the company’s CFO was fooled by the video, asking the CEO about his future whereabouts. Only a single person wasn’t fooled – the CEO’s 14-year-old daughter.

In a few steps, Mr. Moore demonstrated the danger that lies with the rapid spread of deepfakes. Indeed, seeing is no longer believing – businesses, and people themselves, need to scrutinize everything they come across online. And with the arrival of AI tools like Sora that can create video based on a few lines of input, dangerous times could be nigh.

Finishing touches

The final session dedicated to the nature of AI was a panel that included Mr. Jánošík, Mr. Mazán, and Mr. Moore and was helmed by Ms. Pavlova. It started off with a question about the current state of AI, where the panelists agreed that the latest models are awash with many parameters and need further refinement.

The AI panel discussion was chaired by Victoria Pavlova, UK editor of CRN Magazine

The discussion then shifted to the immediate dangers and concerns for businesses. Mr. Moore emphasized that a significant number of people are unaware of AI’s capabilities, which bad actors can exploit. Although the panelists concurred that sophisticated AI-generated malware is not currently an imminent threat, other dangers, such as improved phishing email generation and deepfakes created using public models, are very real.

Additionally, as highlighted by Mr. Jánošík, the greatest danger lies in the data privacy aspect of AI, given the amount of data these models receive from users. In the EU, for example, the GDPR and AI Act have set some frameworks for data protection, but that isn’t enough since these are not global acts.

Current AI presents both opportunities and some real dangers
Present AI presents each alternatives and a few actual risks

Mr. Moore added that enterprises should make sure that their data stays in-house. Enterprise versions of generative models can fit the bill, obviating the “need” to rely on (free) versions that store data on external servers, potentially putting sensitive corporate data at risk.

To address data privacy concerns, Mr. Mazán suggested companies should start from the bottom up, tapping into open-source models that can work for simpler use cases, such as the generation of summaries. Only if these turn out to be inadequate should businesses move to cloud-powered solutions from other parties.

Mr. Jánošík concluded by saying that companies often overlook the drawbacks of AI use – guidelines for secure use of AI are indeed needed, but even common sense goes a long way towards keeping their data safe. As encapsulated by Mr. Moore in an answer concerning how AI should be regulated, there is a pressing need to raise awareness about AI’s potential, including its potential for harm. Encouraging critical thinking is crucial for ensuring safety in our increasingly AI-driven world.

The Anker SOLIX F3800 Power Station is $1,300 off right now!




Regular power banks and battery packs are nice, but if you want something that can truly handle it all, the Anker SOLIX F3800 Power Station is the company’s biggest battery that’s still considered portable. This thing is so powerful it can power whole homes, RVs, or charge your electric vehicle. It’s not cheap at $3,999, but right now, there’s a really nice deal that saves you a whole $1,300, bringing the price down to a more manageable $2,699.

Get the Anker SOLIX F3800 Power Station for $2,699

This sale is available from Amazon, but you will need to manually clip the $1,300 coupon on the Amazon product page.

The Anker SOLIX F3800 Power Station is something else. It has a massive 3,840Wh battery. To put that in perspective, that is enough energy to charge a phone over 450 times. It can also run an average fridge for nearly 15 hours, or a window AC for about 3.7 hours.

There’s also no need to worry about wattage, as the Anker SOLIX F3800 can output up to 6,000W, both at 120V and 240V. That’s enough to keep almost any appliance running, but if you have special needs, you can hook up two units for a total of 12,000W. It even has special NEMA 14-50 and L14-30 ports for powering your RV or EV directly.

Of course, you get plenty of other ports, including six AC outlets, three USB-C connections, and a car socket. And if you want to go off the grid, it supports solar charging up to 2,400W. Additionally, you can monitor the battery state using Bluetooth or Wi-Fi.

If you want in on this deal, go ahead and act soon, as the price should return to normal relatively soon. It’s not often you save $1,300 on anything!

Extra deal: Anker SOLIX C1000 Power Station

Is the Anker SOLIX F3800 Power Station a bit too much for your needs? Maybe you want something powerful, but much more portable. The Anker SOLIX C1000 Power Station is easy to take on your camping trips and weekend adventures. It has a smaller, yet still very substantial 1,056Wh battery that can charge a phone over 90 times. It can also output 1,800W, with peaks of 2,400W, so it can still power 99% of appliances. There are six AC outlets, along with two USB-C ports, two USB-A ports, and a car socket. Not to mention it’s much cheaper!

Here’s what to expect from every M4 Mac – and when they’ll arrive



Singapore, US expand AI partnership to focus on upskilling youth and women




JK1991/Getty Images

Singapore and the US are widening their collaboration on artificial intelligence (AI) to focus on building up skills among youth and women.

A new AI Talent Bridge initiative expands on the US-Singapore Women in Tech Partnership Program that the two countries launched in 2022. The latest plan aims to bolster skills in emerging technology, including AI, with an emphasis on youth and women, according to a joint statement released by US Secretary of Commerce Gina Raimondo and Singapore’s Deputy Prime Minister and Minister for Trade and Industry Gan Kim Yong.

Also: Transparency is sorely lacking amid growing AI interest

To be rolled out in the coming months, the efforts here are part of the Partnership for Growth and Innovation pact that the two countries launched in October 2021 to identify bilateral trade opportunities, specifically across these four areas: digital economy and smart cities, supply chain, healthcare, and clean energy and environmental technology.

The pact encompasses shared objectives to tap the benefits of AI and “harness its potential for good,” the two government agencies said. “The US and Singapore have prioritized development and adoption of interoperable governance frameworks for the trusted, safe, secure, and responsible development, deployment, and evaluation of AI technologies,” they said.

Last October, Singapore’s Infocomm Media Development Authority (IMDA) and the US National Institute of Standards and Technology (NIST) said both countries had synced up their respective AI frameworks to ease compliance and will continue to drive “safe, trustworthy, and responsible” AI developments.

The mapping exercise between IMDA’s AI Verify and NIST’s AI Risk Management Framework aims to harmonize international AI governance frameworks, offer greater clarity on requirements, and reduce compliance costs.

Moving forward, Singapore’s Ministry of Communications and Information (MCI) will set up an AI dialogue with industry and government representatives to discuss the two countries’ investments, governance models, and workforce development in AI.

Also: Microsoft wants to arm 2.5 million people in Asean with AI skills

With the new AI Talent Bridge plan, the Ministry and the US Department of Commerce aim to continue their AI partnership to advance “an inclusive and forward-looking agenda” for economic growth and to boost AI competitiveness for both countries.

“We believe the rise of AI, including generative AI (GenAI), brings with it new and developing opportunities, including the ability to enhance economic and social welfare and digital inclusion, to accelerate and advance socially beneficial research and scientific discovery, to support more competitive and environmentally sustainable economic growth, and to promote fair and competitive markets,” MCI said.

The ministry said almost 6,000 US organizations currently operate in Singapore, with bilateral trade supporting nearly 250,000 jobs across the US.

Also: Tech giants hatch a plan for AI job losses: Reskill 95 million in 10 years

According to MCI, technology spending in Singapore tipped at SG$22 billion (USD$16.3 billion) last year, while US companies’ existing and committed capital investments in AI over the next few years, alongside Singapore business partners, have exceeded SG$50 billion (USD$37 billion). Organizations from both countries have also pledged to boost the AI capabilities of more than 130,000 workers in Singapore.

Their joint efforts in AI governance further indicate a need to mitigate the challenges that come with the rapid adoption of AI.

“Both sides recognize that the testing and evaluation of AI technologies should take into account trustworthiness considerations that can support the objectives of AI governance frameworks,” MCI said.

“We believe AI governance should take into account relevant international standards and internationally recognized principles and guidelines, including those on explainability, transparency, accountability, fairness, inclusivity, robustness, reproducibility, security, safety, data governance, human-AI configuration, inclusive growth, and societal and environmental well-being.”

Also: Singapore looks to boost AI with plans for quantum computing and data centers

NIST and IMDA will continue to collaborate on the next generation of AI, including mapping their respective frameworks for GenAI, spanning testing, guidelines, and benchmarks.

Also, the US AI Safety Institute, which sits under NIST, and Singapore’s Digital Trust Centre, which is funded by IMDA and the National Research Foundation, will work together to advance the science of AI safety. This will provide a critical link in a global network of AI safety institutes and other government-supported scientific institutions, MCI said.