
ios – Initial AVPlayer Playback Shows Blank Image For Part of Video


I have an array of AVURLAsset videos that I merge sequentially via AVMutableComposition. I export the merged video using AVAssetExportSession, wait until its status is .completed, and then play back the merged video with AVPlayer.

The issue I'm stuck on is that during playback, one sub-video of the merged video shows a blank image while its audio continues to play. What's more confusing is that when I replay the merged video, only then does the image/content of that sub-video display.

I tried adding an observer on the AVPlayerItem status to wait until the item is ready to play, but it still has that weird glitch only on the first playback…

Any guidance or workaround would be appreciated.

func export() {
    guard let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) else { return }

    exporter.outputURL = outputUrl
    exporter.outputFileType = .mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = mainComposition

    exporter.exportAsynchronously { [weak self] in
        guard let self = self else { return }

        if exporter.status == .cancelled || exporter.status == .failed {
            // Handle cancellation/failure here.
        } else if exporter.status == .completed {
            self.play(video: exporter.asset)
        }
    }
}

override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {

    if keyPath == "status" {

        if let item = object as? AVPlayerItem {

            if item.status == .readyToPlay {

                print("Item is ready to play")

                if !didStartPlayback {
                    DispatchQueue.main.async {
                        self.avPlayerPlayback.play()
                    }
                }

            } else if item.status == .failed {
                // Handle the failed item here.
            }
        }

    } else if keyPath == "rate", let player = object as? AVPlayer {

        if player.rate > 0, !didStartPlayback {
            didStartPlayback = true
        }
    }
}

Intel accidentally leaks monstrous 9,324-pin socket for “Diamond Rapids” Xeon CPUs



In a nutshell: Intel has accidentally spilled some beans on its next-next-gen server processors. Code-named Diamond Rapids, these upcoming Xeon CPUs will slot into the company’s Oak Stream platform – and if the leaked details are accurate, they are going to be an absolute unit.

The Xeon leak originated from Intel’s own website, where the company briefly posted details about a $900 test interposer for Oak Stream before quickly removing the page. But that was not before eagle-eyed Twitter user @harukaze5719 managed to capture a screenshot.

The screenshot reveals that Diamond Rapids will use a monster of a socket called LGA 9324 – which has a staggering 9,324 pins. That makes it nearly six times larger than the LGA 1700 socket used for consumer CPUs, and more than double the size of the LGA 4677 socket powering Intel’s latest 4th- and 5th-gen Xeon chips.

Intel has already started seeding sample test tools for Oak Stream to partners, and we can expect the first Diamond Rapids CPUs to ship in late 2025 or early 2026. As for the socket, the screenshot lists it as up for pre-orders, with shipments expected to begin in the final quarter of this year.

As for why Diamond Rapids needs such a gargantuan socket, the leading theory is that it will pack significantly more cores and memory channels than previous Xeon chips. Rumor has it Diamond Rapids could get up to 16 DRAM channels along with support for PCIe 6.0.

Of course, that is all speculation for now. What we do know with greater certainty is that Diamond Rapids will follow Intel’s 18A “Clearwater Forest” Xeons in the product roadmap. Clearwater Forest is still in early testing, having just powered on alongside Panther Lake for the first time this month, as revealed by Intel.

Diamond Rapids will utilize Intel’s 14A process node when it eventually arrives in a year or two. And with that monstrous LGA 9324 socket, it seems clear Intel is prepping something truly big to take on AMD’s own high-core-count Epyc “Venice” chips in the next round of the server CPU wars.



The Shokz OpenFit Air would’ve been perfect if not for my glasses



Two years ago, I thought my quest for the best running earbuds was over. The Beats Fit Pro were simply perfect in every way. Then I moved to the suburbs. The Beats’ ambient mode was no match for North Jersey drivers’ blatant disregard for speed limits. Open-ear headphones seemed like the obvious answer. But which ones? For a few weeks, I dabbled with some bone conduction headphones, but I missed my bass drops too much. That, in turn, led me to try the $119.95 Shokz OpenFit Air.

While Shokz is best known for its bone conduction tech, the OpenFit Air uses traditional air conduction like any other wireless buds. The speaker sits above your ear canal, while ear hooks hold them securely in place. I generally don’t prefer ear hooks; comfort tends to be hit and miss. But I’ll give anything a go if it means I get to have situational awareness without giving up my bass.

The OpenFit Air hook around your ears, while the speaker sits right above your ear canal.

The good news: the bass is better. The bad news is, it’s not by much. In truth, no open-ear headphones are great at bass, but these are at least less muddy than bone conduction. If I’m running next to a busy street, I actually can hear the rumbly “five-star Michelin” verse in “God’s Menu” by Stray Kids. It’s not crystal clear, but I can hear it instead of sighing and wishing I had my Beats Fit Pro. (All bets are off if you’re near a highway, though.)

Aside from bass, the speaker position means you can hear your content quite well, but open-ear buds always have limitations. I wore these on several commutes to the office. A quiet NJ Transit bus or train? No problem. A loud New York City subway drowned out my audiobooks to the point where I gave up listening to anything at all. I never got any complaints about sound leakage, but I imagine that’s because the NYC subway is too loud for anyone to hear anything.

At 8.7g, the OpenFit Air are quite lightweight.

As exercise buds, these are a great option. They get about six hours on a single charge, and a 10-minute charge gets you two hours of use, which isn’t too shabby unless you’re an ultramarathoner. With IP55, you can work up a nice sweat or handle a bit of rain, though I’d be wary of a full-on downpour. They’re comfy and quite secure – that is, until you put some glasses on.

While the hooks are pliable and relatively thin, they took up a lot of space behind my ears. On days when I didn’t wear contacts, the top of my ears felt crowded and weighed down. That heavy sensation made me paranoid that the Air would fall off, and I’d go to readjust them. Often, that ended up with me accidentally triggering the touch controls. Even when I didn’t wear glasses, I accidentally dislodged the Air every time I brushed my bangs behind my ears. I don’t love that, as someone who’s been traumatized one too many times by chasing after a bud that fell out of my ear mid-run. To be fair, you may not have these issues. Some of this is likely down to the shape of my ears and the fact that my glasses have chunkier arms. But if you really want a secure fit, a wraparound design is a better choice.

$120

The Shokz OpenFit Air are a pair of lightweight open-ear buds that use air conduction and have six hours of battery life.

To my surprise, I ended up preferring the Air for everyday wear. Maybe it’s just me, but bone conduction headphones scream fitness in a way that feels a little weird when I’m running errands. These remind me more of the sleeker, more discreet Bose Ultra Open Earbuds. My favorite way to wear these was when I was cooking or cleaning. I could easily listen to my audiobooks or podcasts, keep an ear out for beeping appliances, and keep the naughty cat from trying to eat my dinner straight from the pan.

If I had better vision, these could’ve ended my quest for a Beats Fit Pro alternative. They’re quite affordable at $120, the bass is respectable for a pair of open-ear buds, and they’re comfy. I want something just a smidge more secure for days when I don’t feel like popping lenses in. Alas, the hunt continues.

Migrate Amazon Redshift from DC2 to RA3 to accommodate increasing data volumes and analytics demands



This is a guest post by Valdiney Gomes, Hélio Leal, Flávia Lima, and Fernando Saga from Dafiti.

As businesses strive to make informed decisions, the amount of data being generated and required for analysis is growing exponentially. This trend is no exception for Dafiti, an ecommerce company that recognizes the importance of using data to drive strategic decision-making processes. With the ever-increasing volume of data available, Dafiti faces the challenge of effectively managing and extracting valuable insights from this vast pool of information to gain a competitive edge and make data-driven decisions that align with company business objectives.

Amazon Redshift is widely used for Dafiti’s data analytics, supporting approximately 100,000 daily queries from over 400 users across three countries. These queries include both extract, transform, and load (ETL) and extract, load, and transform (ELT) processes and one-time analytics. Dafiti’s data infrastructure relies heavily on ETL and ELT processes, with approximately 2,500 unique processes run daily. These processes retrieve data from around 90 different data sources, resulting in updates to approximately 2,000 tables in the data warehouse and 3,000 external tables in Parquet format, accessed through Amazon Redshift Spectrum and a data lake on Amazon Simple Storage Service (Amazon S3).

The growing need for storage space to maintain data from over 90 sources, and the functionality available on the new Amazon Redshift node types, including managed storage, data sharing, and zero-ETL integrations, led us to migrate from DC2 to RA3 nodes.

In this post, we share how we handled the migration process and provide further impressions of our experience.

Amazon Redshift at Dafiti

Amazon Redshift is a fully managed data warehouse service, and was adopted by Dafiti in 2017. Since then, we’ve had the opportunity to follow many innovations and have gone through three different node types. We started with 115 dc2.large nodes; with the launch of Redshift Spectrum and the migration of our cold data to the data lake, we greatly improved our architecture and migrated to four dc2.8xlarge nodes. RA3 introduced many features, allowing us to scale and pay for compute and storage independently. That is what brought us to the current moment, where we have eight ra3.4xlarge nodes in the production environment and a single-node ra3.xlplus cluster for development.

Given our scenario, where we have many data sources and a lot of new data being generated every second, we came across a problem: the 10 TB we had available in our cluster was insufficient for our needs. Although most of our data is currently in the data lake, more storage space was needed in the data warehouse. This was solved by RA3, which scales compute and storage independently. Also, with zero-ETL, we simplified our data pipelines, ingesting tons of data in near real time from our Amazon Relational Database Service (Amazon RDS) instances, while data sharing enables a data mesh approach.

Migration process to RA3

Our first step towards migration was to understand how the new cluster should be sized; for this, AWS provides a recommendation table.

Given the configuration of our cluster, consisting of four dc2.8xlarge nodes, the recommendation was to switch to ra3.4xlarge.

At this point, one concern we had was regarding the reduction in the amount of vCPU and memory. With DC2, our four nodes provided a total of 128 vCPUs and 976 GiB; in RA3, even with eight nodes, these values were reduced to 96 vCPUs and 768 GiB. However, performance was improved, with workloads processing 40% faster in general.
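The totals above can be sanity-checked from the per-node specs they imply (32 vCPUs and 244 GiB per dc2.8xlarge node, 12 vCPUs and 96 GiB per ra3.4xlarge node); a quick sketch:

```python
# Per-node figures implied by the cluster totals quoted above.
dc2 = {"nodes": 4, "vcpu_per_node": 32, "gib_per_node": 244}  # dc2.8xlarge
ra3 = {"nodes": 8, "vcpu_per_node": 12, "gib_per_node": 96}   # ra3.4xlarge

# Cluster-wide totals: (vCPUs, memory in GiB).
dc2_totals = (dc2["nodes"] * dc2["vcpu_per_node"], dc2["nodes"] * dc2["gib_per_node"])
ra3_totals = (ra3["nodes"] * ra3["vcpu_per_node"], ra3["nodes"] * ra3["gib_per_node"])

print(dc2_totals)  # (128, 976)
print(ra3_totals)  # (96, 768)
```

Despite the roughly 25% drop in total vCPUs, the RA3 cluster ran workloads about 40% faster in our case.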

AWS offers Redshift Test Drive to validate whether the configuration chosen for Amazon Redshift is right for your workload before migrating the production environment. At Dafiti, given the particularities of our workload, which gives us some flexibility to make changes in specific windows without affecting the business, it wasn’t necessary to use Redshift Test Drive.

We performed the migration as follows:

  1. We created a new cluster with eight ra3.4xlarge nodes from the snapshot of our four-node dc2.8xlarge cluster. This process took around 10 minutes to create the new cluster with 8.75 TB of data.
  2. We turned off our internal ETL and ELT orchestrator to prevent our data from being updated during the migration period.
  3. We changed the DNS to point to the new cluster, transparently for our users. At this point, only one-time queries and those made by Amazon QuickSight reached the new cluster.
  4. After the read query validation stage was complete and we were satisfied with the performance, we reconnected our orchestrator so that the data transformation queries could run in the new cluster.
  5. We removed the DC2 cluster and completed the migration.
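Step 1 above maps directly onto the Redshift API's restore-from-snapshot operation. A minimal sketch of the request, with hypothetical cluster and snapshot identifiers (the actual boto3 call is shown in comments so the snippet can be inspected without AWS credentials):

```python
# Parameters for restoring a DC2 snapshot into a new RA3 cluster via the
# Redshift RestoreFromClusterSnapshot API. Both identifiers below are
# hypothetical placeholders.
restore_params = {
    "ClusterIdentifier": "dafiti-ra3-prod",      # hypothetical new cluster name
    "SnapshotIdentifier": "dc2-final-snapshot",  # hypothetical snapshot name
    "NodeType": "ra3.4xlarge",
    "NumberOfNodes": 8,
}

# With credentials configured, this would be executed as:
#   import boto3
#   boto3.client("redshift").restore_from_cluster_snapshot(**restore_params)
print(restore_params["NodeType"], restore_params["NumberOfNodes"])  # ra3.4xlarge 8
```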

The following diagram illustrates the migration architecture.

Migration architecture

During the migration, we defined some checkpoints at which a rollback would be performed if something undesirable happened. The first checkpoint was in Step 3, where a reduction in performance in user queries would lead to a rollback. The second checkpoint was in Step 4, if the ETL and ELT processes presented errors or there was a loss of performance compared to the metrics collected from the processes run on DC2. In both cases, the rollback would simply occur by changing the DNS to point to DC2 again, because it would still be possible to rebuild all processes within the defined maintenance window.

Results

The RA3 family introduced many features, allowed scaling, and enabled us to pay for compute and storage independently, which changed the game at Dafiti. Before, we had a cluster that performed as expected, but limited us in terms of storage, requiring daily maintenance to keep control of disk space.

The RA3 nodes performed better, and workloads ran 40% faster in general. This represents a significant decrease in the delivery time of our critical data analytics processes.

This improvement became even more pronounced in the days following the migration, thanks to Amazon Redshift’s ability to optimize caching and statistics and apply performance recommendations. Additionally, Amazon Redshift is able to provide recommendations for optimizing our cluster based on our workload demands through Amazon Redshift Advisor, and offers automatic table optimization, which played a key role in achieving a seamless transition.

Moreover, the storage capacity jump from 10 TB to several PB solved Dafiti’s main challenge of accommodating growing data volumes. This substantial increase in storage capabilities, combined with the unexpected performance improvements, demonstrated that the migration to RA3 nodes was a successful strategic decision that addressed Dafiti’s evolving data infrastructure requirements.

Data sharing has been used since the moment of migration, to share data between the production and development environments, but the natural evolution is to enable the data mesh at Dafiti through this resource. The limitation we had was the need to enable case sensitivity, which is a prerequisite for data sharing, and which forced us to change some broken processes. But that was nothing compared to the benefits we’re seeing from migrating to RA3.
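The case-sensitivity change mentioned above corresponds to Redshift's `enable_case_sensitive_identifier` configuration parameter, set at the cluster parameter-group level. A sketch of the request payload (the parameter group name is a hypothetical placeholder, and the boto3 call is left commented so the snippet needs no AWS credentials):

```python
# Payload for Redshift's ModifyClusterParameterGroup API, enabling
# case-sensitive identifiers (a prerequisite for data sharing).
case_sensitivity_change = {
    "ParameterGroupName": "dafiti-ra3-params",  # hypothetical group name
    "Parameters": [
        {
            "ParameterName": "enable_case_sensitive_identifier",
            "ParameterValue": "true",
            "ApplyType": "static",  # static parameters require a cluster reboot
        }
    ],
}

# With credentials configured, this would be executed as:
#   import boto3
#   boto3.client("redshift").modify_cluster_parameter_group(**case_sensitivity_change)
print(case_sensitivity_change["Parameters"][0]["ParameterName"])
```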

Conclusion

In this post, we discussed how Dafiti handled migrating to Redshift RA3 nodes, and the benefits of this migration.

Do you want to know more about what we’re doing in the data area at Dafiti? Check out the following resources:

 The content and opinions in this post are those of Dafiti’s authors and AWS is not responsible for the content or accuracy of this post.


About the Authors

Valdiney Gomes is Data Engineering Coordinator at Dafiti. He worked for many years in software engineering, migrated to data engineering, and currently leads an amazing team responsible for the data platform for Dafiti in Latin America.

Hélio Leal is a Data Engineering Specialist at Dafiti, responsible for maintaining and evolving the entire data platform at Dafiti using AWS solutions.

Flávia Lima is a Data Engineer at Dafiti, responsible for maintaining the data platform and providing data from many sources to internal customers.

Fernando Saga is a data engineer at Dafiti, responsible for maintaining Dafiti’s data platform using AWS solutions.

The Only Prompting Framework for Every Use


Image by freestockcenter on Freepik

 

With the emergence of large language models, prompt engineering has become a crucial skill. Put simply, prompting is how humans interact with machines. Engineering the prompt means communicating the requirement effectively so that the machine’s responses are contextual, relevant, and accurate.

 

The Framework

The prompt engineering framework shared in this article significantly enhances your interactions with AI systems. Let’s learn to create powerful prompts by following the six-step framework: persona, context, task, show me how, expected output, and tone.

 

Prompting framework (image by the author)

 

1. Persona

Think of a persona as the go-to person or domain expert you’d approach to solve a particular task. A persona is similar, just that the expert is now the model you are interacting with. Assigning a persona to the model is equivalent to giving it a role or identity that helps set the right level of expertise and perspective for the task at hand.

Example: “As an expert in sentiment analysis through customer care conversations…”

The model, trained on a huge corpus of data, is now instructed to tap into the knowledge and perspective of a data scientist performing sentiment analysis.

 

2. Context

Context provides the background information and the scope of the task that the model must be aware of. Such an understanding of the situation may include information, filters, or constraints that define the environment in which the model needs to respond.

Example: “… analyzing call records to understand the customer pain points and their sentiments from the call details between a customer and agent”

This context highlights the specific case of call center data analysis. Providing context is akin to an optimization problem: giving too much context can obscure the actual objective, while providing too little limits the model’s ability to respond appropriately.

 

3. Task

The task is the specific action that the model must take. This is the entire objective of your prompt that the model must accomplish. I call it 2C – clear and concise – implying the model should be able to understand the expectation.

Example: “… analyze the data and learn to compute the sentiment from any future conversation.”

 

4. Show me how

Note that there is no free lunch. Large language models have been shown to hallucinate, meaning they tend to produce misleading or incorrect results. As Google Cloud explains, “These errors can be caused by a variety of factors, including insufficient training data, incorrect assumptions made by the model, or biases in the data used to train the model.”

One way to limit such behavior is to ask the model to explain how it arrived at the response, rather than just share the final answer.

Example: “Provide a brief explanation highlighting the phrases and the reasoning behind the computed sentiment.”

 

5. Expected Output

Mostly, we need the output in a specified format, structured in a clear and easy-to-follow way. Depending on how the user consumes the information, the output could be organized in the form of a list, a table, or a paragraph.

Example: “Share the response for the given call summary in a 2-pointer format including Customer sentiment and Keywords that reflect the sentiment class…”

 

6. Tone

Although specifying the tone is often considered optional, doing so helps tailor the language to the intended audience. There are various tones the model can adjust its response to, such as casual, direct, cheerful, etc.

Example: “Use a professional yet accessible tone, avoiding overly technical jargon where possible.”

 

Putting It All Together

Great, so we have discussed all six components of the prompting framework. Now, let’s combine them into a single prompt:

“As an expert in sentiment analysis through customer care conversations, you are analyzing call records to understand the customer pain points and their sentiments from the call details between a customer and agent. Analyze the data and learn to compute the sentiment from any future conversation. Provide a brief explanation highlighting the phrases and the reasoning behind the computed sentiment. Share the response for the given call summary in a 2-pointer format including Customer sentiment and Keywords that reflect the sentiment class. Use a professional yet accessible tone, avoiding overly technical jargon where possible.”
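In code, assembling the six components can be as simple as joining them in order. The `build_prompt` helper below is a hypothetical sketch, not part of any library; the component text is taken from the examples above:

```python
def build_prompt(persona, context, task, show_me_how, expected_output, tone):
    """Join the six framework components, in order, into a single prompt."""
    return " ".join([persona, context, task, show_me_how, expected_output, tone])

prompt = build_prompt(
    persona="As an expert in sentiment analysis through customer care conversations,",
    context=("you are analyzing call records to understand the customer pain points "
             "and their sentiments from the call details between a customer and agent."),
    task="Analyze the data and learn to compute the sentiment from any future conversation.",
    show_me_how=("Provide a brief explanation highlighting the phrases and the "
                 "reasoning behind the computed sentiment."),
    expected_output=("Share the response for the given call summary in a 2-pointer format "
                     "including Customer sentiment and Keywords that reflect the sentiment class."),
    tone="Use a professional yet accessible tone, avoiding overly technical jargon where possible.",
)
print(prompt)
```

Keeping the components as separate arguments makes it easy to swap out, say, the persona or the tone without rewriting the whole prompt.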

 

Benefits of Effective Prompting

Not only does this framework lay the groundwork for a clear ask, but it also adds the necessary context and describes the persona to tailor the response to the specific scenario. Asking the model to show how it arrives at the results adds further depth.

Mastering the art of prompting comes with practice and is a continuous process. Practicing and refining prompting skills allows us to extract more value from AI interactions.

It’s similar to experiment design when building machine learning models. I hope this framework provides you with a solid structure; however, don’t feel limited by it. Use it as a baseline to experiment further and keep adjusting based on your specific needs.
 
 

Vidhi Chugh is an AI strategist and a digital transformation leader working at the intersection of product, sciences, and engineering to build scalable machine learning systems. She is an award-winning innovation leader, an author, and a global speaker. She is on a mission to democratize machine learning and break the jargon for everyone to be part of this transformation.