
swiftui – iOS WKWebView shows about:blank after app quit and relaunch


I am using WKWebView in my iOS app to display a web page. It works fine while the app is running or backgrounded. However, after I quit the app (swipe up) and relaunch it, the WebView only shows about:blank instead of the expected URL.

The website loads just fine after I delete and reinstall the app.

WebView.swift

struct WebView: UIViewRepresentable {
    
    var url: URL?
    
    var redirect: ((URL) -> Void)?
    
    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.isInspectable = true
        webView.navigationDelegate = context.coordinator
        return webView
    }
    
    func updateUIView(_ webView: WKWebView, context: Context) {
        if let url = url {
            webView.load(URLRequest(url: url))
        }
    }
    
    func makeCoordinator() -> Coordinator {
        Coordinator(parent: self)
    }
    
    class Coordinator: NSObject, WKNavigationDelegate {
        
        var parent: WebView
        
        init(parent: WebView) {
            self.parent = parent
        }
    
        func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
            let javascript = """
                if (document.querySelector('meta[name="viewport"]')) {
                    document.querySelector('meta[name="viewport"]').setAttribute('content', 'width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no');
                } else {
                    const meta = document.createElement('meta');
                    meta.setAttribute('name', 'viewport');
                    meta.setAttribute('content', 'width=device-width, initial-scale=1.0, maximum-scale=1.0, user-scalable=no');
                    document.getElementsByTagName('head')[0].appendChild(meta);
                }
                true;
            """
            
            webView.evaluateJavaScript(javascript) { result, error in
                if let error = error {
                    print("Error injecting JavaScript: \(error)")
                } else {
                    print("JavaScript executed successfully.")
                }
            }
            print("WebView finished loading.")
        }

        func webView(_ webView: WKWebView, didFail navigation: WKNavigation!, withError error: Error) {
            print("WebView failed with error: \(error.localizedDescription)")
        }

        func webView(_ webView: WKWebView, didFailProvisionalNavigation navigation: WKNavigation!, withError error: Error) {
            print("WebView provisional load failed: \(error.localizedDescription)")
        }
        
        func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
            if let url = navigationAction.request.url, url.scheme == "oauthapp" {
                parent.redirect?(url)  // Pass the deep link to your SwiftUI view
                decisionHandler(.cancel) // Cancel navigation
                return
            }
            decisionHandler(.allow)
        }
    }

}

ContentView

func buildAuthURL() -> URL? {
    var parts = URLComponents(string: authURL)
    parts?.queryItems = [
        URLQueryItem(name: "response_type", value: "code"),
        URLQueryItem(name: "client_id", value: clientID),
        URLQueryItem(name: "redirect_uri", value: redirectURI),
        URLQueryItem(name: "code_challenge", value: codeChallenge),
        URLQueryItem(name: "code_challenge_method", value: "S256"),
        URLQueryItem(name: "scope", value: scopes)
    ]
    return parts?.url
}

var body: some View {
    if accessToken == nil {
        WebView(url: buildAuthURL()){ url in
            if let parts = URLComponents(string: url.absoluteString),
               let code = parts.queryItems?.first(where: { $0.name == "code" })?.value {
                print("Authorization code: \(code)")
                authCode = code
                requestAccessToken(code: code)
            } else {
                print("Failed to extract code from URL.")
            }
        }.ignoresSafeArea()
    } else {
        Text("Authorization Code: \((authCode ?? "").prefix(15))...")
        Text("Access Token: \((accessToken ?? "").prefix(15))...")
    }
}

What’s happening:
After quitting and relaunching the app:

  • The WebView appears.
  • But it only shows a blank white screen with about:blank as the URL.

Notes:

  • This only happens after a cold start (i.e., the app was force quit).
  • There is no crash, and everything else in the app works fine.

Question:

How can I make sure that WKWebView properly loads the intended URL instead of displaying about:blank after a cold start?
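For orientation, here is a minimal sketch of a guard that is often tried for this symptom: only issue a load when the target URL actually changes, instead of on every updateUIView pass. The lastLoadedURL property is an assumption that would have to be added to the Coordinator shown above; this is an illustration, not a confirmed fix for the cold-start behavior.

func updateUIView(_ webView: WKWebView, context: Context) {
    guard let url = url else { return }              // URL not ready yet; do nothing
    if context.coordinator.lastLoadedURL != url {    // avoid redundant reloads
        context.coordinator.lastLoadedURL = url
        webView.load(URLRequest(url: url))
    }
}

// Assumed addition to the Coordinator class above:
// var lastLoadedURL: URL?

If the URL built on a cold start is nil at first and only becomes available later, this variant still loads it as soon as updateUIView runs with a non-nil value.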

Robots-Blog | Inclusion project with low-cost robot wins ROIBOT Award from igus


Wittekindshofer Werkstätten build an accessible workstation with a low-cost robot for 4,970 euros

Cologne, April 10, 2025 – Daniel Hillebrand has tetraspasticity, which makes controlled movement of his limbs impossible. Nevertheless, he can work independently – thanks to an automated workstation that the Diakonische Stiftung Wittekindshofer Werkstätten in Bad Oeynhausen built on a tight budget with a low-cost robot from igus. For this creative inclusion project, the foundation has now received the ROIBOT Award. The competition honors innovative and economical automation projects implemented successfully with igus products. The other prize winners include the Dutch company Paperfoam, the French research institute CNRS, and the Politecnico university in Milan.

Daniel Hillebrand sits in a wheelchair and moves a joystick with his chin. With it he controls a robot arm that sorts plastic components. Several hours a day, without outside help. “Daniel is used to being almost completely dependent on help in his life,” says Torsten Jeschke, electrician and educator at the Wittekindshofer Werkstätten. “Thanks to the new system, he can now work independently despite his severe paralysis.” For him, that is heaven on earth. “The robot is cool,” Daniel Hillebrand confirms. “I had to get into the technology first, but by now everything runs really well. The best part is when the sack is full after a long day’s work.”

“An automation project that is especially moving for us at igus.”
Standard industrial robots would have been unaffordable for the Wittekindshofer Werkstätten and too complex to control. Jeschke therefore put together a more affordable solution that is about as easy to operate as a computer game – using the low-cost robotics platform RBTX from igus. Its centerpiece, and Daniel Hillebrand’s arm replacement, is the ReBeL, an articulated robot arm made of high-performance plastic that costs just 4,970 euros. igus ran the ROIBOT competition for the third time to honor companies and organizations that use the RBTX marketplace to implement particularly smart and economical automation projects. The winners receive vouchers for robotics hardware worth up to 5,000 euros. “For us it is truly moving to see how the Wittekindshofer Werkstätten managed, with limited financial resources and all the more imagination, to put together an automation project that improves a person’s life so much. We hope they can use the 5,000-euro voucher to realize further projects of this kind in the future,” says Alexander Mühlens, head of the Low-Cost Automation business unit at igus and patron of the ROIBOT Awards. igus itself has signed the Good Work Charter of the German Mechanical Engineering Industry Association (VDMA) and has thereby committed itself to robotics making a positive contribution to society. The charter emphasizes that robotics and automation technologies not only increase productivity but can also improve people’s lives by optimizing working conditions and creating new opportunities.

The other award winners: robot components for quality assurance, astroparticle physics, and automated fruit harvesting
Second place and 2,500 euros in robotics hardware go to Paperfoam. The Dutch company fitted the igus ReBeL articulated arm robot with a camera to spot-check its bio-based and recyclable packaging for production defects. The solution reduces the physical strain on employees while increasing production quality. Third place and 1,000 euros go to the French research institute Centre national de la recherche scientifique (CNRS) for developing a calibration device for a telescope used in astroparticle physics. By using lubrication-free linear axes from igus, the designers achieve high precision and low maintenance. The special prize for educational institutions, also worth 1,000 euros, goes to the Politecnico technical university in Milan. Using the ReBeL robot arm, it built a mobile manipulator that makes fruit harvesting more efficient and less labor-intensive through automation. “The winners prove that automation today is no longer just a question of money,” Mühlens concludes. “Even with small budgets and creativity, economical automation solutions with a fast return on investment can be realized. We are already looking forward to seeing more exciting and low-cost automation projects at the next ROIBOT Award.”

Learn more about the ROIBOT Award and the winners at:
https://www.igus.de/automation/service/gewinner-roibot
 



BurgerBot in Los Gatos, California, automates the fast-food line


It’s happening: The robots are taking our jobs. No sick days, no bathroom breaks, and no more curly hairs in your buns. Just cold, hard efficiency. More specifically, BurgerBot is a new fast-food joint where robots are doing all the work that humans aren’t interested in, like burger assembly lines.

In Los Gatos, California, one of the San Francisco Bay Area’s more affluent areas, a shiny new fancy fast-food concept has just popped up inside one of its trendy upscale brunch spots. ABB Robotics and BurgerBots have teamed up and unleashed a pair of IRB 360 FlexPickers and YuMi cobots (collaborative robots) to slap out some tasty burgers for the masses – in 27 seconds, flat.

These machines don’t just stack US$18 all-beef patties, special sauce, lettuce, cheese, pickles, and onions on a sesame seed bun with surgical precision onto a QR-coded tray – they’re claimed to make perfectly consistent burgers every single time with zero attitude. A far cry from little Billy, who’s never had a job before and is on day one of training, experiencing rush hour for the first time.

But before we pull out the protest signs and start a picket line, here’s some food for thought: A full staff of humans is employed at the restaurant. The bots (for now) only handle the burger production operation – from grinding the meat and griddling it up to tossing it onto a conveyor belt assembly line. They then assemble the ingredients and kick a complete, ready-to-eat burger back to a human server to be dished out to a waiting guest.

The robots do the repetitive stuff while supposedly leaving people more time for hospitality and other people-y things.

BurgerBots final assembly: an $18 burger, still in its QR-coded tray

ABB

“The food service industry is dynamic and demanding, and our technology brings industrial-grade consistency, efficiency and reliability to this space,” said Marc Segura, President of ABB Robotics Division. “By taking on repetitive and time-consuming tasks, robots allow staff to focus on what matters most – creating memorable dining experiences.”

ABB surveyed 1,250 hospitality workers and found that 67% actually want robots to take over boring, gross, and dangerous tasks, and 63% were excited by the prospect of a robot making their job easier. In the end, automation isn’t necessarily about replacing humans, it’s about upgrading the whole system.

When the washing machine replaced the washboard, people everywhere rejoiced.

BurgerBots isn’t just a Silicon Valley tech gimmick. It’s designed for scalability, hygiene, and efficiency. ABB’s compact robotic cell combines the FlexPicker 360 – which grabs and stacks veggies and the like – and the YuMi robot for final assembly as a box rolls down a conveyor. The system uses real-time inventory tracking from lettuce to condiments and everything in between.

Robot burger-making in 27 seconds!

BurgerBots has only been open for roughly 24 hours at the time of writing, so time will tell how it performs … though this isn’t ABB’s first foray into robotic food prep. In 2021, ABB’s IRB 4600 ‘bot helped power Roboeatz’s ARK (Autonomous Robotic Kitchen) in Latvia – claimed to be the world’s most advanced autonomous kitchen, able to whip up over 1,000 recipes from 80 fresh ingredients.

According to 2025 data from the World Economic Forum, automation and AI could lead to the loss of roughly 92 million human jobs (about 8% of all jobs) by 2030. Toward the top of the at-risk list are positions like cashiers and fast-food workers. The first jobs likely to be replaced will be dangerous, repetitive, or tedious ones – and roles that don’t particularly require high social or emotional intelligence.

That being said, on the BurgerBot website, the company is accepting resumes from qualified humans. Just not for the burger-making position.

Source: ABB Robotics



objective c – How to save an ics file to the iOS Calendar app using UIActivityViewController?


Using UIActivityViewController, I am able to share, e.g., save a contact to Apple’s Contacts app. However, when I attempt something analogous for a task in the form of an ics file, UIActivityViewController does not display an option to save to Calendar.

When the user taps share, my app packages the task as an .ics file and opens the UIActivityViewController, which shows the .ics file as the item to share, complete with a calendar icon.

However, the option to save to Calendar is not shown, only options to Copy or Save to Files. The Calendar app is installed on the simulator.

I know getting UIActivityViewController to recognize apps for sharing can be tricky, but is there any way to force it to show the option to save to Calendar in the case of an .ics file? (Code below is Objective-C but any Swift solution would be great too.)

Here is the code to launch the activityViewController:

NSURL*icsurl = [self getICSURLFromItem:_item];
    
NSArray *activityItems;

if (image == nil) {
    activityItems = @[text,icsurl];
}
else {
    activityItems =@[image,text,icsurl];
}
UIActivityViewController *activityViewController = [[UIActivityViewController alloc] initWithActivityItems: activityItems applicationActivities:nil];

[self presentViewController:activityViewController animated:YES completion:nil];


// And the code to create the ics file, verified working:

-(NSURL*) getICSURLFromItem:(Items *)item {
    // build ICS
    NSMutableArray *mutableArray = [[NSMutableArray alloc] init];
    //required
    [mutableArray addObject:@"BEGIN:VCALENDAR"];
    [mutableArray addObject:@"VERSION:2.0"];
    [mutableArray addObject:@"PRODID:-//Acme Inc//Acme//EN"];
    [mutableArray addObject:@"METHOD:PUBLISH"];
    [mutableArray addObject:@"BEGIN:VEVENT"];
    [mutableArray addObject:item.summary];
    [mutableArray addObject:item.description];
    [mutableArray addObject:item.timezone];
    [mutableArray addObject:item.start];
    [mutableArray addObject:item.end];
    [mutableArray addObject:item.stamp];
    [mutableArray addObject:item.last];
    [mutableArray addObject:statusconfirmed];
    [mutableArray addObject:sequence];
  
    NSString * storedusername = [[NSUserDefaults standardUserDefaults] objectForKey:@"userName"];
    NSString * storedemail = [[NSUserDefaults standardUserDefaults] objectForKey:@"emailAddress"];
    NSString *organizer = [NSString stringWithFormat:@"ORGANIZER;CN=\"%@ at Acme\":mailto:%@", storedusername, storedemail];
    [mutableArray addObject:organizer];
   
    [mutableArray addObject:@"END:VEVENT"];
    
    [mutableArray addObject:@"END:VCALENDAR"];
   
    NSString *ICSString = [mutableArray componentsJoinedByString:@"\n"];
    NSString *ICSFilePath;
    NSString *humanFileName = item.task;
    NSString *fullFileName = [humanFileName stringByAppendingString: @".ics"];
    ICSFilePath = [cachesPathString  stringByAppendingPathComponent:fullFileName];
    [ICSString writeToFile:ICSFilePath atomically:YES encoding:NSUnicodeStringEncoding error:nil];
           
    NSURL * ICSURL = [[NSURL alloc] initFileURLWithPath:ICSFilePath];
    return ICSURL;
}

On the simulator it does present options to copy or save to Files, but that’s it. The simulator does have Calendar installed.
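Since the question explicitly welcomes Swift, here is a minimal sketch of one alternative worth noting: the list of activities in the share sheet is decided by the system and the installed app extensions, so rather than forcing a Calendar entry to appear there, the event can be added directly with EventKit. The function name and its parameters are placeholders (they are not part of the Objective-C code above), and the app would need a calendar usage description in Info.plist.

import EventKit

// Sketch only: add the task straight to the user's calendar instead of
// relying on the share sheet to offer a Calendar option.
func addTaskToCalendar(title: String, startDate: Date, endDate: Date) {
    let store = EKEventStore()
    store.requestAccess(to: .event) { granted, error in
        guard granted, error == nil else {
            print("Calendar access not granted: \(String(describing: error))")
            return
        }
        let event = EKEvent(eventStore: store)
        event.title = title
        event.startDate = startDate
        event.endDate = endDate
        event.calendar = store.defaultCalendarForNewEvents
        do {
            try store.save(event, span: .thisEvent)
            print("Event saved to Calendar.")
        } catch {
            print("Failed to save event: \(error)")
        }
    }
}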

CNTXT AI Launches Munsit: The Most Accurate Arabic Speech Recognition System Ever Built


In a defining moment for Arabic-language artificial intelligence, CNTXT AI has unveiled Munsit, a next-generation Arabic speech recognition model that is not only the most accurate ever created for Arabic, but one that decisively outperforms global giants like OpenAI, Meta, Microsoft, and ElevenLabs on standard benchmarks. Developed in the UAE and tailored for Arabic from the ground up, Munsit represents a powerful step forward in what CNTXT calls “sovereign AI”—technology built in the region, for the region, yet globally competitive.

The scientific foundations of this achievement are laid out in the team’s newly published paper, Advancing Arabic Speech Recognition Through Large-Scale Weakly Supervised Learning, which introduces a scalable, data-efficient training method that addresses the long-standing scarcity of labeled Arabic speech data. That method—weakly supervised learning—has enabled the team to build a system that sets a new bar for transcription quality across both Modern Standard Arabic (MSA) and more than 25 regional dialects.

Overcoming the Data Drought in Arabic ASR

Arabic, despite being one of the most widely spoken languages globally and an official language of the United Nations, has long been considered a low-resource language in the field of speech recognition. This stems from both its morphological complexity and a shortage of large, diverse, labeled speech datasets. Unlike English, which benefits from countless hours of manually transcribed audio data, Arabic’s dialectal richness and fragmented digital presence have posed significant challenges for building robust automatic speech recognition (ASR) systems.

Rather than waiting for the slow and costly process of manual transcription to catch up, CNTXT AI pursued a radically more scalable path: weak supervision. Their approach began with an enormous corpus of over 30,000 hours of unlabeled Arabic audio collected from diverse sources. Through a custom-built data processing pipeline, this raw audio was cleaned, segmented, and automatically labeled to yield a high-quality 15,000-hour training dataset—one of the largest and most representative Arabic speech corpora ever assembled.

This process did not rely on human annotation. Instead, CNTXT developed a multi-stage system for generating, evaluating, and filtering hypotheses from multiple ASR models. These transcriptions were cross-compared using Levenshtein distance to select the most consistent hypotheses, then passed through a language model to evaluate their grammatical plausibility. Segments that failed to meet defined quality thresholds were discarded, ensuring that even without human verification, the training data remained reliable. The team refined this pipeline through several iterations, each time improving label accuracy by retraining the ASR system itself and feeding it back into the labeling process.
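The paper’s exact selection rule is not reproduced in this article, but the consensus idea is straightforward to illustrate: compute the word-level Levenshtein distance between every pair of hypotheses and keep the one closest to all the others. A minimal sketch of that step (an illustration of the technique, not CNTXT’s implementation):

// Word-level edit distance between two tokenized hypotheses.
func levenshtein(_ a: [String], _ b: [String]) -> Int {
    var prev = Array(0...b.count)
    for (i, wordA) in a.enumerated() {
        var curr = [i + 1] + Array(repeating: 0, count: b.count)
        for (j, wordB) in b.enumerated() {
            curr[j + 1] = wordA == wordB
                ? prev[j]
                : min(prev[j], prev[j + 1], curr[j]) + 1
        }
        prev = curr
    }
    return prev[b.count]
}

// Keep the hypothesis with the smallest total distance to the others;
// minimizing the total is the same as minimizing the average.
func mostConsistentHypothesis(_ hypotheses: [String]) -> String? {
    let tokenized = hypotheses.map { $0.split(separator: " ").map(String.init) }
    let totals = tokenized.map { h in tokenized.reduce(0) { $0 + levenshtein(h, $1) } }
    guard let best = totals.indices.min(by: { totals[$0] < totals[$1] }) else { return nil }
    return hypotheses[best]
}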

Powering Munsit: The Conformer Architecture

At the heart of Munsit is the Conformer model, a hybrid neural network architecture that combines the local sensitivity of convolutional layers with the global sequence modeling capabilities of transformers. This design makes the Conformer particularly adept at handling the nuances of spoken language, where both long-range dependencies (such as sentence structure) and fine-grained phonetic details are crucial.

CNTXT AI implemented a large variant of the Conformer, training it from scratch using 80-channel mel-spectrograms as input. The model consists of 18 layers and comprises roughly 121 million parameters. Training was conducted on a high-performance cluster using eight NVIDIA A100 GPUs with bfloat16 precision, allowing for efficient handling of large batch sizes and high-dimensional feature spaces. To handle tokenization of Arabic’s morphologically rich structure, the team used a SentencePiece tokenizer trained specifically on their custom corpus, resulting in a vocabulary of 1,024 subword units.

Unlike conventional supervised ASR training, which typically requires every audio clip to be paired with a carefully transcribed label, CNTXT’s method operated entirely on weak labels. These labels, though noisier than human-verified ones, were optimized through a feedback loop that prioritized consensus, grammatical coherence, and lexical plausibility. The model was trained using the Connectionist Temporal Classification (CTC) loss function, which is well suited for unaligned sequence modeling—essential for speech recognition tasks where the timing of spoken words is variable and unpredictable.

Dominating the Benchmarks

The results speak for themselves. Munsit was tested against leading open-source and commercial ASR models on six benchmark Arabic datasets: SADA, Common Voice 18.0, MASC (clean and noisy), MGB-2, and Casablanca. These datasets collectively span dozens of dialects and accents across the Arab world, from Saudi Arabia to Morocco.

Across all benchmarks, Munsit-1 achieved an average Word Error Rate (WER) of 26.68 and a Character Error Rate (CER) of 10.05. By comparison, the best-performing version of OpenAI’s Whisper recorded an average WER of 36.86 and CER of 17.21. Meta’s SeamlessM4T, another state-of-the-art multilingual model, came in even higher. Munsit outperformed every other system on both clean and noisy data, and demonstrated particularly strong robustness in noisy conditions, a critical factor for real-world applications like call centers and public services.

The gap was equally stark against proprietary systems. Munsit outperformed Microsoft Azure’s Arabic ASR models, ElevenLabs Scribe, and even OpenAI’s GPT-4o transcribe feature. These results are not marginal gains—they represent an average relative improvement of 23.19% in WER and 24.78% in CER compared to the strongest open baseline, establishing Munsit as the clear leader in Arabic speech recognition.
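For readers unfamiliar with the metric, “relative improvement” here is the usual (baseline − candidate) / baseline ratio. A tiny sketch with a made-up baseline, purely to illustrate the arithmetic (the per-benchmark baseline values behind the 23.19% figure are not quoted in this article):

func relativeImprovement(baseline: Double, candidate: Double) -> Double {
    (baseline - candidate) / baseline * 100.0
}

// Hypothetical baseline WER of 34.0 against Munsit-1's 26.68 average:
print(relativeImprovement(baseline: 34.0, candidate: 26.68))  // ≈ 21.5%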

A Platform for the Future of Arabic Voice AI

While Munsit-1 is already transforming the possibilities for transcription, subtitling, and customer support in Arabic-speaking markets, CNTXT AI sees this launch as just the beginning. The company envisions a full suite of Arabic-language voice technologies, including text-to-speech, voice assistants, and real-time translation systems—all grounded in sovereign infrastructure and regionally relevant AI.

“Munsit is more than just a breakthrough in speech recognition,” said Mohammad Abu Sheikh, CEO of CNTXT AI. “It is a declaration that Arabic belongs at the forefront of global AI. We have proven that world-class AI does not need to be imported — it can be built here, in Arabic, for Arabic.”

With the rise of region-specific models like Munsit, the AI industry is entering a new era—one where linguistic and cultural relevance are not sacrificed in the pursuit of technical excellence. In fact, with Munsit, CNTXT AI has shown they are one and the same.