
How to dynamically set views using JSON in Swift iOS?


I am working on an iOS app that needs to load and display views dynamically from a remote JSON file (main.json). This JSON contains coordinates (x, y, w, h) and image names.

I have the code below, which sets the views dynamically, but I don't know how to set these views up first in Storyboard and then assign them dynamically using this code:

{
    let templateDataUrl = URL(string: url!.absoluteString + "/" + "main.json")!

    let task = URLSession.shared.dataTask(with: templateDataUrl) { data, response, error in
        guard let data = data, error == nil else {
            print("Error loading template data: \(error?.localizedDescription ?? "Unknown error")")
            errorType().errorStatusBar()
            return
        }
        do {
            let templateData = try JSONDecoder().decode(TemplateData.self, from: data)

            DispatchQueue.main.async { [self] in

                screenwidth = bounds.size.width
                screenheight = bounds.size.height

                let frameUrl = URL(string: url!.absoluteString + "/" + "frame.png")

                if let frameUrl = frameUrl {

                    SDWebImageManager.shared.loadImage(with: frameUrl, options: .highPriority, progress: nil, completed: { image, data, error, cacheType, finished, url in

                        if let error = error {
                            print("Error loading frame image: \(error.localizedDescription)")
                            errorType().errorStatusBar()
                        } else {

                            self.subImage.image = image
                        }
                    })
                }

                let image = UIImage(named: "frame")
                let imageSize = image!.size

                let width = (imageSize.width * screenwidth) / imageSize.height
                self.scrollViewTemplate.contentSize = CGSize(width: width, height: screenwidth)
                self.scrollSubView.frame = CGRect(origin: .zero, size: self.scrollViewTemplate.contentSize)

                for data in templateData.result
                {
                    let xx = Double(data.x) ?? 0.0
                    let yy = Double(data.y) ?? 0.0
                    let ww = Double(data.w) ?? 0.0
                    let hh = Double(data.h) ?? 0.0

                    let newX = (width * xx) / imageSize.width
                    let newY = (screenwidth * yy) / imageSize.height
                    let newW = (width * ww) / imageSize.width
                    let newH = (screenwidth * hh) / imageSize.height

                    let rect = CGRect(x: newX, y: newY, width: newW, height: newH)
                    print(rect)

                    if let image = data.image
                    {
                        let maskImage = UIImageView()
                        maskImage.sd_setImage(with: URL(string: url!.absoluteString + "/" + image)!)
                        maskImage.frame = CGRect(origin: .zero, size: rect.size)

                        maskView = UIView(frame: rect)
                        maskView.clipsToBounds = true
                        maskView.tag = self.maskViewAry.count
                        maskView.mask = maskImage
                        scrollSubView.addSubview(maskView)

                        let maskImageUrl = URL(string: url!.absoluteString + "/" + image)!

                        var imageMask = UIImageView()
                        imageMask = UIImageView(frame: CGRect(origin: .zero, size: rect.size))
                        imageMask.sd_setImage(with: maskImageUrl)
                        imageMask.tag = self.maskViewAry.count
                        imageMask.contentMode = .scaleAspectFit

                        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(doSomethingOnTap))
                        imageMask.isUserInteractionEnabled = true
                        imageMask.addGestureRecognizer(tapGesture)
                        maskView.addSubview(imageMask)

                        imageMask.sd_setImage(with: maskImageUrl, completed: { [weak self] (image, error, cacheType, url) in
                            guard let self = self else { return }
                            if let loadedImage = image {
                                self.maskImageAry.append(loadedImage)
                                self.maskCropImageAry.append(loadedImage)
                            } else {
                                print("Failed to load image: \(String(describing: error))")
                            }
                        })
                        self.maskViewAry.append(maskView)
                        self.scrollSubView.bringSubviewToFront(self.subImage)
                    }
                }

                for (_, i) in maskViewAry.enumerated()
                {
                    imgReplaceImg = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: 30, height: 30)))
                    imgReplaceImg.translatesAutoresizingMaskIntoConstraints = false
                    imgReplaceImg.image = UIImage(named: "replaceImg")
                    imgReplaceImg.contentMode = .scaleAspectFit
                    scrollSubView.addSubview(imgReplaceImg)
                    scrollSubView.bringSubviewToFront(imgReplaceImg)

                    NSLayoutConstraint.activate([
                        imgReplaceImg.centerXAnchor.constraint(equalTo: i.centerXAnchor),
                        imgReplaceImg.centerYAnchor.constraint(equalTo: i.centerYAnchor),
                        imgReplaceImg.widthAnchor.constraint(equalToConstant: 30),
                        imgReplaceImg.heightAnchor.constraint(equalToConstant: 30)
                    ])

                    imgReplaceImgs.append(imgReplaceImg)
                }

            }
        } catch {
            print("Error decoding template data: \(error.localizedDescription)")
        }
    }
    task.resume()
}
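For reference, TemplateData is not shown in the question. A minimal Codable model consistent with the decode call and the loop above might look like the following; the field names and string-typed coordinates are assumptions inferred from the Double(data.x) ?? 0.0 conversions:

```swift
import Foundation

// Hypothetical model for main.json (field names inferred, not confirmed).
struct TemplateItem: Decodable {
    let x: String
    let y: String
    let w: String
    let h: String
    let image: String?   // optional image file name
}

struct TemplateData: Decodable {
    let result: [TemplateItem]
}

// Decoding a small sample payload:
let sample = """
{ "result": [ { "x": "10", "y": "20", "w": "100", "h": "50", "image": "sticker.png" } ] }
""".data(using: .utf8)!

let decoded = try! JSONDecoder().decode(TemplateData.self, from: sample)
print(decoded.result.count)   // 1
```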

This code dynamically loads and renders the views from the JSON.
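The coordinate mapping inside the loop can also be factored into a small pure function, which makes the scaling easier to test in isolation. This is a sketch under the same assumptions as the code above: the JSON rects are in the source image's pixel space, and templateSize is the size the image is drawn at on screen (the scroll view's contentSize):

```swift
import Foundation

/// Maps a rect from the source image's pixel space into the rendered
/// template's coordinate space by scaling each axis independently.
func scaledRect(x: Double, y: Double, w: Double, h: Double,
                imageSize: CGSize, templateSize: CGSize) -> CGRect {
    let scaleX = templateSize.width / imageSize.width
    let scaleY = templateSize.height / imageSize.height
    return CGRect(x: x * scaleX, y: y * scaleY,
                  width: w * scaleX, height: h * scaleY)
}

// A 1000x500 image drawn at 500x250 halves every coordinate:
let r = scaledRect(x: 100, y: 50, w: 200, h: 100,
                   imageSize: CGSize(width: 1000, height: 500),
                   templateSize: CGSize(width: 500, height: 250))
```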

Any guidance would be really appreciated. Thanks in advance!

Artificial Intelligence in National Security: Acquisition and Integration


As defense and national security organizations consider integrating AI into their operations, many acquisition teams are unsure of where to start. In June, the SEI hosted an AI Acquisition workshop. Invited participants from government, academia, and industry described both the promise and the confusion surrounding AI acquisition, including how to choose the right tools to meet their mission needs. This blog post details practitioner insights from the workshop, including challenges in differentiating AI systems, guidance on when to use AI, and matching AI tools to mission needs.

This workshop was part of the SEI's year-long National AI Engineering Study to identify progress and challenges in the discipline of AI Engineering. As the U.S. Department of Defense moves to gain advantage from AI systems, AI Engineering is a critical discipline for enabling the acquisition, development, deployment, and maintenance of those systems. The National AI Engineering Study will gather and clarify the highest-impact approaches to AI Engineering to date and will prioritize the most pressing challenges for the near future. In this spirit, the workshop highlighted what acquirers are learning and the challenges they still face.

Some workshop participants shared that they are already realizing benefits from AI, using it to generate code and to triage documents, enabling team members to focus their time and effort in ways that were not previously possible. However, participants reported common challenges that ranged from general to specific, for example, determining which AI tools can support their mission, how to test those tools, and how to identify the provenance of AI-generated information. These challenges show that AI acquisition is not just about choosing a tool that looks advanced. It is about choosing tools that meet real operational needs, are trustworthy, and fit within existing systems and workflows.

Challenges of AI in Defense and Government

AI adoption in national security has particular challenges that do not appear in commercial settings. For example:

  • The risk is higher and the consequences of failure are more serious. A mistake in a commercial chatbot might cause confusion. A mistake in an intelligence summary could lead to mission failure.
  • AI tools must integrate with legacy systems, which may not support modern software.
  • Most data used in defense is sensitive or classified. It must be safeguarded at all stages of the AI lifecycle.

Assessing AI as a Solution

AI should not be seen as a universal solution for every situation. Workshop leaders and attendees shared the following guidelines for evaluating whether and how to use AI:

  • Start with a mission need. Choose a solution that addresses the requirement or will improve a specific problem. It may not be an AI-enabled solution.
  • Ask how the model works. Avoid systems that function as black boxes. Vendors need to describe the training process of the model, the data it uses, and how it makes decisions.
  • Run a pilot before scaling. Start with a small-scale experiment in a real mission setting before issuing a contract, when possible. Use this pilot to refine requirements and contract language, evaluate performance, and manage risk.
  • Choose modular systems. Instead of looking for flexible solutions, identify tools that can be added or removed easily. This improves the chances of system effectiveness and prevents being tied to one vendor.
  • Build in human oversight. AI systems are dynamic by nature and, along with testing and evaluation efforts, they need continuous monitoring, particularly in higher-risk, sensitive, or classified environments.
  • Look for trustworthy systems. AI systems are not reliable in the same way traditional software is, and the people interacting with them need to be able to tell when a system is working as intended and when it is not. A trustworthy system provides an experience that matches end-users' expectations and meets performance metrics.
  • Plan for failure. Even high-performing models will make mistakes. AI systems should be designed to be resilient so that they detect and recover from issues.

Matching AI Tools to Mission Needs

The specific mission need should drive the selection of a solution, and improvement over the status quo should determine a solution's appropriateness. Acquisition teams should make sure that AI systems meet the needs of the operators and that the system will work in the context of their environment. For example, many commercial tools are built for cloud-based systems that assume constant internet access. In contrast, defense environments are often subject to limited connectivity and higher security requirements. Key considerations include:

  • Make sure the AI system fits within the existing operating environment. Avoid assuming that infrastructure can be rebuilt from scratch.
  • Evaluate the system in the target environment and conditions before deployment.
  • Verify the quality, variance, and source of training data and its applicability to the situation. Low-quality or imbalanced data will reduce model reliability.
  • Set up feedback processes. Analysts and operators must be capable of identifying and reporting errors so that they can improve the system over time.

Not all AI tools will fit into mission-critical operating processes. Before acquiring any system, teams should understand the existing constraints and the possible consequences of adding a dynamic system. That includes risk management: identifying what could go wrong and planning accordingly.

Data, Training, and Human Oversight

Data serves as the cornerstone of every AI system. Identifying appropriate datasets that are relevant for the specific use case is paramount for the system to be successful. Preparing data for AI systems can be a considerable commitment of time and resources.

It is also critical to establish a monitoring system to detect and correct undesirable changes in model behavior, collectively known as model drift, that may be too subtle for users to notice.

It is important to remember that AI is unable to assess its own effectiveness or understand the significance of its outputs. People should not put full trust in any system, just as they would not place complete trust in a new human operator on day one. That is the reason human engagement is required across all stages of the AI lifecycle, from training to testing to deployment.

Vendor Evaluation and Red Flags

Workshop organizers reported that vendor transparency during acquisition is essential. Teams should avoid working with companies that cannot (or will not) explain how their systems work in basic terms related to the use case. For example, a vendor should be willing and able to discuss the sources of data a tool was trained with, the transformations made to that data, the data it will be able to interact with, and the outputs expected. Vendors do not need to reveal intellectual property to share this level of information. Other red flags include

  • limiting access to training data and documentation
  • tools described as "too complex to explain"
  • lack of independent testing or audit options
  • marketing that is overly optimistic or driven by fear of AI's potential

Even if the acquisition team lacks knowledge of technical details, the vendor should still provide clear information about the system's capabilities and their management of risks. The goal is to confirm that the system is suitable, reliable, and able to support real mission needs.

Lessons from Project Linchpin

One of the workshop participants shared lessons learned from Project Linchpin:

  • Use modular design. AI systems should be flexible and reusable across different missions.
  • Plan for legacy integration. Expect to work with older systems. Replacement is usually not practical.
  • Make outputs explainable. Leaders and operators must understand why the system made a particular recommendation.
  • Focus on field performance. A model that works in testing might not perform the same way in live missions.
  • Manage data bias carefully. Poor training data can create serious risks in sensitive operations.

These points emphasize the importance of testing, transparency, and accountability in AI programs.

Integrating AI with Purpose

AI will not replace human decision-making; however, AI can enhance and augment the decision-making process. AI can support national security by enabling organizations to make decisions in less time. It can also reduce manual workload and improve awareness in complex environments. However, none of these benefits happen by chance. Teams must be intentional in their acquisition and integration of AI tools. For optimal outcomes, teams should treat AI like any other critical system: one that requires careful planning, testing, supervision, and strong governance.

Recommendations for the Future of AI in National Security

The future success of AI in national security depends on building a culture that balances innovation with caution and on using adaptive systems, clear accountability, and continual interaction between humans and AI to achieve mission goals effectively. As we look toward future success, the acquisition community can take the following steps:

  • Continue to evolve the Software Acquisition Pathway (SWP). The Department of Defense's SWP is designed to increase the speed and scale of software acquisition. Adjustments to the SWP to provide a more iterative and risk-aware process for AI systems or systems that include AI components will increase its effectiveness. We understand that OSD(A&S) is working on an AI-specific subpath to the SWP with a goal of releasing it later this year. That subpath may address these needed improvements.
  • Explore technologies. Become familiar with new technologies to understand their capabilities, following your organization's AI guidance. For example, use generative AI for tasks that are very low priority and/or where a human review is expected, such as summarizing proposals, generating contracts, and creating technical documentation. People must be careful to avoid sharing private or classified information on public systems and will need to closely check the outputs to avoid sharing false information.
  • Advance the discipline of AI Engineering. AI Engineering supports not only developing, integrating, and deploying AI capabilities, but also acquiring AI capabilities. A forthcoming report on the National AI Engineering Study will highlight recommendations for developing requirements for systems, judging the appropriateness of AI systems, and managing risks.

ios – Preventing Multiple Calls to the loadProducts Function


I have a ProductListScreen, which displays all products. I want to make sure that a user cannot perform multiple concurrent calls to loadProducts. Currently, I am using the following code and it works. But I am looking for better options and maybe even moving the logic of task cancellation inside the Store.

struct ProductListScreen: View {
    
    let category: Category
    @Environment(Store.self) private var store
    @Environment(\.dismiss) private var dismiss
    @State private var showAddProductScreen: Bool = false
    @State private var isLoading: Bool = false
    
    private func loadProducts() async {
        
        guard !isLoading else { return }
        isLoading = true
        
        defer { isLoading = false }
        
        do {
            try await store.loadProductsBy(categoryId: category.id)
        } catch {
            // show error in toast message
            print("Failed to load: \(error.localizedDescription)")
        }
    }
    
    var body: some View {
        ZStack {
            if store.products.isEmpty {
                ContentUnavailableView("No products available", systemImage: "shippingbox")
            } else {
                List(store.products) { product in
                    NavigationLink {
                        ProductDetailScreen(product: product)
                    } label: {
                        ProductCellView(product: product)
                    }
                }.refreshable(action: {
                    await loadProducts()
                })
            }
        }.overlay(alignment: .center, content: {
            if isLoading {
                ProgressView("Loading...")
            }
        })
        .task {
            await loadProducts()
        }
    }
}

Here is my implementation of the Store.

@MainActor
@Observable
class Store {
    
    var categories: [Category] = []
    var products: [Product] = []
    
    let httpClient: HTTPClient
    
    init(httpClient: HTTPClient) {
        self.httpClient = httpClient
    }
    
    func loadProductsBy(categoryId: Int) async throws {
        
        let resource = Resource(endpoint: .productsByCategory(categoryId), modelType: [Product].self)
        products = try await httpClient.load(resource)
    }
}
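As for moving the duplicate-call logic out of the view: one option, sketched below, is to keep a reference to the in-flight Task inside the store and have concurrent callers await it rather than starting a second request. ProductLoader, load(using:), and the [String] result type here are illustrative stand-ins, not part of the original code:

```swift
// Sketch: deduplicate concurrent loads by joining the in-flight Task.
actor ProductLoader {
    private var inFlight: Task<[String], Error>?
    private(set) var requestCount = 0   // for demonstration only

    func load(using fetch: @escaping @Sendable () async throws -> [String]) async throws -> [String] {
        // A request is already running: await its result instead of refetching.
        if let task = inFlight {
            return try await task.value
        }
        requestCount += 1
        let task = Task { try await fetch() }
        inFlight = task
        defer { inFlight = nil }   // clear once the request settles
        return try await task.value
    }
}
```

In the real Store, fetch would wrap httpClient.load(resource) and the result type would be [Product]; the view's isLoading flag then only drives the ProgressView. Cancellation could be exposed by a separate method that calls inFlight?.cancel().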

.NET Aspire's CLI reaches general availability in 9.4 release


Microsoft has announced the release of .NET Aspire 9.4, which the company says is the biggest update yet.

.NET Aspire is a set of tools, templates, and packages that Microsoft provides to enable developers to build distributed apps with observability built in.

With this release, Aspire's CLI is now generally available and includes four core commands: aspire new (use templates to create an app), aspire add (add hosting integrations), aspire run (run the app from any terminal or editor), and aspire config (view, set, and change CLI settings).

Additionally, there are two new beta commands that can be turned on using aspire config set: exec allows developers to execute CLI tools, and deploy allows apps to be deployed to dev, test, or prod environments.

Microsoft also redesigned the experience around its eventing APIs and added an interaction service that allows developers to create custom UX for getting user input. It supports standard text input, masked text input, numeric input, dropdowns, and checkboxes.

.NET Aspire also uses this interaction service to collect missing parameter values by prompting the developer for them before starting a resource that needs them.

Also new in this release are previews for hosting integrations with GitHub Models and Azure AI Foundry, enabling developers to define AI apps in their apphost and then run them locally.

"Aspire streamlines distributed, complex app dev, and an increasingly popular example of that is AI development. If you've been adding agentic workflows, chatbots, or other AI-enabled experiences to your stacks, you know how difficult it is to try different models, wire them up, deploy them (and authenticate to them!) at dev time, and figure out what's actually happening while you debug. But, AI-enabled apps are really just distributed apps with a new kind of container – an AI model! – which means Aspire is perfect for streamlining this dev loop," Microsoft wrote in a blog post.

And finally, .NET Aspire 9.4 adds the ability to use AddExternalService() to model a URL or endpoint as a resource, get its status, and configure it like any other resource in the apphost.

Cisco teams with Hugging Face for AI model anti-malware



  • ClamAV can now detect malicious code in AI models: "We're releasing this capability to the world. For free. In addition to its coverage of traditional malware, ClamAV can now detect deserialization risks in common model file formats such as .pt and .pkl (in milliseconds, not minutes). This enhanced functionality is available today for everyone using ClamAV," Anderson and Fordyce wrote.
  • ClamAV is focused on AI risk in VirusTotal: "ClamAV is the only antivirus engine to detect malicious models in both Hugging Face and VirusTotal – a popular threat intelligence platform that will scan uploaded models."

Prior Cisco-Hugging Face collaborations

An earlier tie-in between Cisco's Foundation AI and Hugging Face helped produce Cerberus, an AI supply chain security analysis model. Cerberus analyzes models as they enter Hugging Face and shares the results in standardized threat feeds that Cisco Security products can use to build and enforce access policies for the AI supply chain, according to a blog from Nathan Chang, product manager with the Foundation AI team.

Cerberus technology will also be integrated with Cisco Secure Endpoint and Secure Email to enable automated blocking of known malicious files during read/write/modify operations, as well as email attachments containing malicious AI supply chain artifacts. Integration with Cisco Secure Access Secure Web Gateway enables Cerberus to block downloads of potentially compromised AI models and block downloads of models from non-approved sources, according to Chang.

"Users of Cisco Secure Access can configure existing access to Hugging Face repositories, block access to potential threats in AI models, block AI models with risky licenses, and enforce compliance policies on AI models that originate from sensitive organizations or politically sensitive regions," Anderson and Fordyce wrote.

Cisco Foundation AI

When Cisco launched Foundation AI back in April, Jeetu Patel, executive vice president and chief product officer for Cisco, described it as "a new team of top AI and security experts focused on accelerating innovation for cybersecurity teams." Patel highlighted the release of the industry's first open-weight reasoning model built specifically for security:

"The Foundation AI Security model is an 8-billion-parameter, open-weight LLM that is designed from the ground up for cybersecurity. The model was pre-trained on carefully curated data sets that capture the language, logic, and real-world knowledge and workflows that security professionals work with every day," Patel wrote in a blog post on the group's introduction.

Customers can use the model as their own AI security base or combine it with their own closed-source model depending on their needs, Patel stated at the time. "And that reasoning framework basically allows you to take any base model, then make that into an AI reasoning model."