
Fixing Diffusion Models' Limited Understanding of Mirrors and Reflections



Since generative AI began to garner public interest, the computer vision research field has deepened its focus on developing AI models capable of understanding and replicating physical laws; however, the challenge of teaching machine learning systems to simulate phenomena such as gravity and liquid dynamics has been a significant focus of research for at least the past five years.

Since latent diffusion models (LDMs) came to dominate the generative AI scene in 2022, researchers have increasingly focused on LDM architecture's limited capacity to understand and reproduce physical phenomena. Now, this issue has gained greater prominence with the landmark development of OpenAI's generative video model Sora, and the (arguably) more consequential recent release of the open source video models Hunyuan Video and Wan 2.1.

Reflecting Badly

Most research aimed at improving LDM understanding of physics has focused on areas such as gait simulation, particle physics, and other aspects of Newtonian motion. These areas have attracted attention because inaccuracies in basic physical behaviors would immediately undermine the authenticity of AI-generated video.

However, a small but growing strand of research concentrates on one of LDMs' biggest weaknesses – their relative inability to produce accurate reflections.

From the January 2025 paper 'Reflecting Reality: Enabling Diffusion Models to Produce Faithful Mirror Reflections', examples of 'reflection failure' versus the researchers' own approach. Source: https://arxiv.org/pdf/2409.14677

This was also a challenge during the CGI era, and it remains one in video gaming, where ray-tracing algorithms simulate the path of light as it interacts with surfaces. Ray-tracing calculates how virtual light rays bounce off or pass through objects to create realistic reflections, refractions, and shadows.

However, because each additional bounce greatly increases computational cost, real-time applications must trade off latency against accuracy by limiting the number of allowed light-ray bounces.

A representation of a virtually-calculated light-beam in a traditional 3D-based (i.e., CGI) scenario, using technologies and principles first developed in the 1960s, and which came to fruition between 1982-93 (the span between 'Tron' [1982] and 'Jurassic Park' [1993]). Source: https://www.unrealengine.com/en-US/explainers/ray-tracing/what-is-real-time-ray-tracing

For example, depicting a chrome teapot in front of a mirror might involve a ray-tracing process in which light rays bounce repeatedly between reflective surfaces, creating an almost infinite loop with little practical benefit to the final image. In most cases, a reflection depth of two to three bounces already exceeds what the viewer can perceive. A single bounce would result in a black mirror, since the light must complete at least two trips to form a visible reflection.

Each additional bounce sharply increases computational cost, often doubling render times, which makes faster handling of reflections one of the most significant opportunities for improving ray-traced rendering quality.
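
To make the bounce trade-off concrete, below is a deliberately tiny, self-contained ray-tracing sketch (our own illustration, not drawn from any of the works cited here) with one mirror plane and one diffuse red sphere. The max_bounces cap is the budget discussed above: with a cap of one, the reflected leg toward the sphere is never traced and the mirror comes back black; a cap of two or three is already enough for this scene.

import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    # Returns the ray parameter t of the nearest hit, or None.
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

def hit_mirror(origin, direction, plane_x=2.0):
    # A vertical mirror filling the plane x = plane_x, facing the -x side.
    if direction[0] <= 1e-6:
        return None
    t = (plane_x - origin[0]) / direction[0]
    return t if t > 1e-4 else None

def trace(origin, direction, depth, max_bounces):
    t_s = hit_sphere(origin, direction, np.array([0.0, 0.0, 4.0]), 1.0)
    t_m = hit_mirror(origin, direction)
    if t_s is not None and (t_m is None or t_s < t_m):
        return np.array([1.0, 0.2, 0.2])               # diffuse red sphere: no further bounces
    if t_m is not None:
        if depth + 1 >= max_bounces:                   # bounce budget exhausted:
            return np.zeros(3)                         # the mirror renders black
        hit_point = origin + t_m * direction
        reflected = np.array([-direction[0], direction[1], direction[2]])
        return 0.9 * trace(hit_point, normalize(reflected), depth + 1, max_bounces)
    return np.array([0.05, 0.05, 0.08])                # background

eye = np.array([0.0, 0.0, 0.0])
toward_mirror = normalize(np.array([1.0, 0.0, 1.0]))
for cap in (1, 2, 3):
    print(f"max_bounces={cap}:", trace(eye, toward_mirror, depth=0, max_bounces=cap))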

Naturally, reflections occur, and are essential to photorealism, in far less obvious scenarios – such as the reflective surface of a city street or a battlefield after the rain; the reflection of the opposing street in a shop window or glass doorway; or in the glasses of depicted characters, where objects and environments may be required to appear.

A simulated twin-reflection achieved via traditional compositing for an iconic scene in 'The Matrix' (1999).

Image Problems

As a result, frameworks that were popular prior to the advent of diffusion models, such as Neural Radiance Fields (NeRF), along with more recent challengers such as Gaussian Splatting, have had their own struggles producing reflections in a natural way.

The REF2-NeRF project (pictured below) proposed a NeRF-based modeling method for scenes containing a glass case. In this method, refraction and reflection were modeled using elements that were respectively dependent on and independent of the viewer's perspective. This approach allowed the researchers to estimate the surfaces where refraction occurred, specifically glass surfaces, and enabled the separation and modeling of both direct and reflected light components.

Examples from the Ref2Nerf paper. Source: https://arxiv.org/pdf/2311.17116

Other NeRF-facing reflection solutions of the last 4-5 years have included NeRFReN, Reflecting Reality, and Meta's 2024 Planar Reflection-Aware Neural Radiance Fields project.

For GSplat, papers such as Mirror-3DGS, Reflective Gaussian Splatting, and RefGaussian have offered solutions to the reflection problem, while the 2023 Nero project proposed a bespoke method of incorporating reflective qualities into neural representations.

MirrorVerse

Getting a diffusion model to respect reflection logic is arguably harder than doing so with explicitly structural, non-semantic approaches such as Gaussian Splatting and NeRF. In diffusion models, a rule of this kind is only likely to become reliably embedded if the training data contains many varied examples across a wide range of scenarios, making it heavily dependent on the distribution and quality of the original dataset.

Traditionally, adding specific behaviors of this kind is the purview of a LoRA or of fine-tuning the base model; but these are not ideal solutions, since a LoRA tends to skew output towards its own training data, even without prompting, while fine-tunes – besides being expensive – can fork a major model irrevocably away from the mainstream, and engender a host of related custom tools that will never work with any other strain of the model, including the original one.

In general, improving diffusion models requires that the training data pay greater attention to the physics of reflection. However, many other areas are in need of similar special attention. In the context of hyperscale datasets, where custom curation is expensive and difficult, addressing every single weakness in this way is impractical.

Nonetheless, solutions to the LDM reflection problem do crop up from time to time. One recent such effort, from India, is the MirrorVerse project, which offers an improved dataset and training method capable of advancing the state-of-the-art in this particular challenge in diffusion research.

Right-most, the results from MirrorVerse pitted against two prior approaches (central two columns). Source: https://arxiv.org/pdf/2504.15397

As we can see in the example above (the feature image in the PDF of the new study), MirrorVerse improves on recent offerings that tackle the same problem, but it is far from perfect.

In the upper right image, we see that the ceramic jars are somewhat to the right of where they should be, and in the image below, which should technically not feature a reflection of the cup at all, an inaccurate reflection has been shoehorned into the right-hand area, against the logic of natural reflective angles.

Therefore we will take a look at the new method not so much because it may represent the current state-of-the-art in diffusion-based reflection, but equally to illustrate the extent to which this may prove an intractable challenge for latent diffusion models, static and video alike, since the requisite data examples of reflectivity are most likely to be entangled with particular actions and scenarios.

Therefore this particular capability of LDMs may continue to fall short of structure-specific approaches such as NeRF, GSplat, and traditional CGI.

The new paper is titled MirrorVerse: Pushing Diffusion Models to Realistically Reflect the World, and comes from three researchers across the Vision and AI Lab, IISc Bangalore, and the Samsung R&D Institute at Bangalore. The paper has an associated project page, as well as a dataset at Hugging Face, with source code released at GitHub.

Method

The researchers note from the outset the difficulty that models such as Stable Diffusion and Flux have in respecting reflection-based prompts, illustrating the problem adroitly:

From the paper: Current state-of-the-art text-to-image models, SD3.5 and Flux, exhibited significant challenges in producing consistent and geometrically accurate reflections when prompted to generate reflections in the scene.

The researchers have developed MirrorFusion 2.0, a diffusion-based generative model aimed at improving the photorealism and geometric accuracy of mirror reflections in synthetic imagery. Training for the model was based on the researchers' own newly-curated dataset, titled MirrorGen2, designed to address the generalization weaknesses observed in earlier approaches.

MirrorGen2 expands on earlier methodologies by introducing random object positioning, randomized rotations, and explicit object grounding, with the aim of ensuring that reflections remain plausible across a wider range of object poses and placements relative to the mirror surface.

Schema for the generation of synthetic data in MirrorVerse: the dataset generation pipeline applied key augmentations by randomly positioning, rotating, and grounding objects within the scene using the 3D-Positioner. Objects are also paired in semantically consistent combinations to simulate complex spatial relationships and occlusions, allowing the dataset to capture more realistic interactions in multi-object scenes.

To further strengthen the model's ability to handle complex spatial arrangements, the MirrorGen2 pipeline incorporates paired object scenes, enabling the system to better represent occlusions and interactions between multiple elements in reflective settings.

The paper states:

'Categories are manually paired to ensure semantic coherence – for instance, pairing a chair with a table. During rendering, after positioning and rotating the primary [object], an additional [object] from the paired category is sampled and arranged to prevent overlap, ensuring distinct spatial regions within the scene.'

In regard to explicit object grounding, the authors ensured that the generated objects were 'anchored' to the ground in the output synthetic data, rather than 'hovering' inappropriately, which can occur when synthetic data is generated at scale, or with highly automated methods.
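
As a rough illustration of the kind of logic the paper describes (this is our own sketch, not the authors' code, and the category pairs and dimensions are invented), a companion object can be drawn from a manually paired category and re-sampled until its footprint no longer overlaps the primary object, while keeping every object's base on the floor plane so nothing 'hovers':

import random

# Hypothetical category pairings, in the spirit of the paper's chair/table example.
CATEGORY_PAIRS = {"chair": "table", "mug": "tray", "lamp": "desk"}

def footprints_overlap(a, b):
    """Overlap test on (x, z, half_width) footprints, viewed from above."""
    ax, az, ar = a
    bx, bz, br = b
    return abs(ax - bx) < (ar + br) and abs(az - bz) < (ar + br)

def place_pair(primary_category, rng=random):
    # Primary object: random position and rotation in front of the mirror,
    # with y = 0 so its base rests on the floor (the 'grounding' step).
    primary = {
        "category": primary_category,
        "position": (rng.uniform(-1.0, 1.0), 0.0, rng.uniform(1.0, 2.0)),
        "yaw_deg": rng.uniform(0.0, 360.0),
        "half_width": 0.3,
    }
    paired_category = CATEGORY_PAIRS[primary_category]
    # Companion object: re-sample until the two footprints are disjoint.
    for _ in range(100):
        x, z = rng.uniform(-1.0, 1.0), rng.uniform(1.0, 2.0)
        if not footprints_overlap((primary["position"][0], primary["position"][2], 0.3), (x, z, 0.3)):
            companion = {
                "category": paired_category,
                "position": (x, 0.0, z),
                "yaw_deg": rng.uniform(0.0, 360.0),
                "half_width": 0.3,
            }
            return primary, companion
    raise RuntimeError("could not place a non-overlapping companion object")

print(place_pair("chair"))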

Since dataset innovation is central to the novelty of the paper, we will proceed to this part of the coverage sooner than usual.

Data and Tests

SynMirrorV2

The researchers' SynMirrorV2 dataset was conceived to improve the diversity and realism of mirror reflection training data, featuring 3D objects sourced from the Objaverse and Amazon Berkeley Objects (ABO) datasets, with these selections subsequently refined through OBJECT 3DIT, as well as the filtering process from the V1 MirrorFusion project, to eliminate low-quality assets. This resulted in a refined pool of 66,062 objects.

Examples from the Objaverse dataset, used in the creation of the curated dataset for the new system. Source: https://arxiv.org/pdf/2212.08051

Scene construction involved placing these objects onto textured floors from CC-Textures, with HDRI backgrounds from the PolyHaven CGI repository, using either full-wall or tall rectangular mirrors. Lighting was standardized with an area-light positioned above and behind the objects, at a forty-five degree angle. Objects were scaled to fit within a unit cube and positioned using a precomputed intersection of the mirror and camera viewing frustums, ensuring visibility.

Randomized rotations were applied around the y-axis, and a grounding technique was used to prevent 'floating artifacts'.
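
The corresponding normalization steps are simple to sketch. The snippet below (our own illustration, assuming a y-up coordinate system with the floor at y = 0, which may differ from the authors' conventions) scales a mesh's bounding box to fit a unit cube, applies a random rotation about the y-axis, and drops the object onto the floor so it cannot appear to float:

import numpy as np

def normalize_rotate_ground(vertices, rng=np.random.default_rng()):
    """vertices: (N, 3) array of mesh vertex positions; returns a transformed copy."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)

    # 1. Uniform scale so the longest bounding-box edge becomes 1 (fits a unit cube).
    center = (lo + hi) / 2.0
    v = (vertices - center) / (hi - lo).max()

    # 2. Random rotation about the vertical (y) axis.
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot_y = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    v = v @ rot_y.T

    # 3. Grounding: translate vertically so the lowest vertex sits exactly on y = 0.
    v[:, 1] -= v[:, 1].min()
    return v

box = np.array([[x, y, z] for x in (0.0, 3.0) for y in (0.0, 1.0) for z in (0.0, 2.0)])
out = normalize_rotate_ground(box)
print(out.min(axis=0), out.max(axis=0))   # min y is 0.0; extents fit inside a unit cube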

To simulate more complex scenes, the dataset also incorporated multiple objects arranged according to semantically coherent pairings based on ABO categories. Secondary objects were positioned to avoid overlap, creating 3,140 multi-object scenes designed to capture varied occlusions and depth relationships.

Examples of rendered views from the authors' dataset containing multiple (more than two) objects, with illustrations of object segmentation and depth map visualizations seen below.

Training Process

Acknowledging that synthetic realism alone was insufficient for robust generalization to real-world data, the researchers developed a three-stage curriculum learning process for training MirrorFusion 2.0.

In Stage 1, the authors initialized the weights of both the conditioning and generation branches with the Stable Diffusion v1.5 checkpoint, and fine-tuned the model on the single-object training split of the SynMirrorV2 dataset. Unlike the above-mentioned Reflecting Reality project, the researchers did not freeze the generation branch. They then trained the model for 40,000 iterations.

In Stage 2, the model was fine-tuned for an additional 10,000 iterations on the multiple-object training split of SynMirrorV2, in order to teach the system to handle occlusions and the more complex spatial arrangements found in realistic scenes.

Finally, in Stage 3, a further 10,000 iterations of fine-tuning were conducted using real-world data from the MSD dataset, with depth maps generated by the Matterport3D monocular depth estimator.
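
In outline, the curriculum is just a sequence of (dataset split, iteration budget) stages. The sketch below is our own schematic of that control flow, with the 40,000/10,000/10,000 budgets taken from the description above; the dataloader factory and the training step are placeholder callables, not the authors' implementation:

from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class Stage:
    name: str
    split: str
    iterations: int

CURRICULUM = [
    Stage("stage 1: single-object synthetic", "SynMirrorV2/single-object", 40_000),
    Stage("stage 2: multi-object synthetic", "SynMirrorV2/multi-object", 10_000),
    Stage("stage 3: real-world fine-tune", "MSD (with estimated depth)", 10_000),
]

def run_curriculum(make_loader: Callable[[str], Iterable], train_step: Callable[[object], float]):
    """Runs each stage's iteration budget on its own data split, easiest split first."""
    for stage in CURRICULUM:
        batches = iter(make_loader(stage.split))
        for _ in range(stage.iterations):
            loss = train_step(next(batches))
        print(f"finished {stage.name} (last loss: {loss:.4f})")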

Examples from the MSD dataset, with real-world scenes analyzed into depth and segmentation maps. Source: https://arxiv.org/pdf/1908.09101

During training, text prompts were omitted for 20 percent of the training time in order to encourage the model to make optimal use of the available depth information (i.e., a 'masked' approach).

Training took place on four NVIDIA A100 GPUs for all stages (the VRAM spec is not supplied, though it would have been 40GB or 80GB per card). A learning rate of 1e-5 was used with a batch size of 4 per GPU, under the AdamW optimizer.
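
A single training step under those settings might look roughly like the following PyTorch sketch (our own, with a stand-in model signature and loss; only the AdamW optimizer, the 1e-5 learning rate, and the roughly 20 percent prompt dropout reflect details reported above):

import torch

def make_optimizer(model):
    # Reported settings: AdamW at a learning rate of 1e-5.
    return torch.optim.AdamW(model.parameters(), lr=1e-5)

def training_step(model, batch, optimizer, prompt_drop_prob=0.2):
    # Drop the text prompt for ~20% of samples so the model learns to rely
    # on the depth conditioning rather than the caption.
    prompts = [
        "" if torch.rand(()) < prompt_drop_prob else prompt
        for prompt in batch["prompts"]
    ]
    noise_pred = model(batch["noisy_latents"], batch["timesteps"],
                       prompts, batch["depth_maps"])        # stand-in signature
    loss = torch.nn.functional.mse_loss(noise_pred, batch["noise"])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()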

This training scheme progressively increased the difficulty of the tasks presented to the model, beginning with simpler synthetic scenes and advancing toward more challenging compositions, with the intention of developing robust real-world transferability.

Testing

The authors evaluated MirrorFusion 2.0 against the previous state-of-the-art, MirrorFusion, which served as the baseline, and conducted experiments on the MirrorBenchV2 dataset, covering both single and multi-object scenes.

Further qualitative tests were conducted on samples from the MSD dataset and the Google Scanned Objects (GSO) dataset.

The evaluation used 2,991 single-object images from seen and unseen categories, and 300 two-object scenes from ABO. Performance was measured using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS) scores, to assess reflection quality on the masked mirror region. CLIP similarity was used to evaluate textual alignment with the input prompts.

In the quantitative tests, the authors generated images using four seeds for a given prompt, selecting the resulting image with the best SSIM score. The two reported tables of results for the quantitative tests are shown below.
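
For readers who want to reproduce that style of evaluation, the helper below is an illustrative approximation (not the authors' script): it scores only the mirror region, using scikit-image for PSNR and SSIM (the SSIM is approximated by zeroing pixels outside the mask), and keeps the best-scoring image out of the four seeds; the LPIPS and CLIP metrics mentioned above would need their own packages and are omitted here:

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def masked_scores(generated, reference, mirror_mask):
    """generated/reference: float HxWx3 images in [0, 1]; mirror_mask: HxW boolean."""
    psnr = peak_signal_noise_ratio(reference[mirror_mask], generated[mirror_mask], data_range=1.0)
    ssim = structural_similarity(
        reference * mirror_mask[..., None],
        generated * mirror_mask[..., None],
        channel_axis=-1, data_range=1.0,
    )
    return psnr, ssim

def best_of_seeds(candidates, reference, mirror_mask):
    """candidates: one generated image per seed; returns the image with the best SSIM."""
    scored = [(masked_scores(img, reference, mirror_mask)[1], i) for i, img in enumerate(candidates)]
    best_ssim, best_index = max(scored)
    return candidates[best_index], best_ssim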

Left, Quantitative results for single object reflection generation quality on the MirrorBenchV2 single object split. MirrorFusion 2.0 outperformed the baseline, with the best results shown in bold. Right, quantitative results for multiple object reflection generation quality on the MirrorBenchV2 multiple object split. MirrorFusion 2.0 trained with multiple objects outperformed the version trained without them, with the best results shown in bold.

The authors comment:

'[The results] show that our method outperforms the baseline method, and finetuning on multiple objects improves the results on complex scenes.'

The bulk of the results, and those emphasized by the authors, concern qualitative testing. Due to the dimensions of these illustrations, we can only partially reproduce the paper's examples.

Comparison on MirrorBenchV2: the baseline failed to maintain accurate reflections and spatial consistency, showing incorrect chair orientation and distorted reflections of multiple objects, whereas (the authors contend) MirrorFusion 2.0 correctly renders the chair and the sofas, with accurate position, orientation, and structure.

Of these subjective results, the researchers observe that the baseline model failed to accurately render object orientation and spatial relationships in reflections, often producing artifacts such as incorrect rotation and floating objects. MirrorFusion 2.0, trained on SynMirrorV2, the authors contend, preserves correct object orientation and positioning in both single-object and multi-object scenes, resulting in more realistic and coherent reflections.

Below we see qualitative results on the aforementioned GSO dataset:

Comparison on the GSO dataset. The baseline misrepresented object structure and produced incomplete, distorted reflections, while MirrorFusion 2.0, the authors contend, preserves spatial integrity and generates accurate geometry, color, and detail, even on out-of-distribution objects.

Here the authors comment:

'MirrorFusion 2.0 generates significantly more accurate and realistic reflections. For instance, in Fig. 5 (a – above), MirrorFusion 2.0 correctly reflects the drawer handles (highlighted in green), whereas the baseline model produces an implausible reflection (highlighted in red).

'Likewise, for the "White-Yellow mug" in Fig. 5 (b), MirrorFusion 2.0 delivers a convincing geometry with minimal artifacts, unlike the baseline, which fails to accurately capture the object's geometry and appearance.'

The final qualitative test was against the aforementioned real-world MSD dataset (partial results shown below):

Real-world scene results comparing MirrorFusion, MirrorFusion 2.0, and MirrorFusion 2.0, fine-tuned on the MSD dataset. MirrorFusion 2.0, the authors contend, captures complex scene details more accurately, including cluttered objects on a table, and the presence of multiple mirrors within a three-dimensional environment. Only partial results are shown here, due to the dimensions of the results in the original paper, to which we refer the reader for full results and better resolution.

Here the authors note that while MirrorFusion 2.0 performed well on MirrorBenchV2 and GSO data, it initially struggled with the complex real-world scenes in the MSD dataset. Fine-tuning the model on a subset of MSD improved its ability to handle cluttered environments and multiple mirrors, resulting in more coherent and detailed reflections on the held-out test split.

Additionally, a user study was conducted, in which 84% of users are reported to have preferred generations from MirrorFusion 2.0 over the baseline method.

Results of the user study.

Since details of the user study have been relegated to the appendix of the paper, we refer the reader there for the specifics.

Conclusion

Although several of the results shown in the paper are impressive improvements on the state-of-the-art, the state-of-the-art for this particular pursuit is so poor that even an unconvincing aggregate solution can win out with a modicum of effort. The fundamental architecture of a diffusion model is so inimical to the reliable learning and demonstration of consistent physics that the problem itself appears genuinely hard, and not obviously disposed toward an elegant solution.

Further, adding data to existing models is already the standard method of remedying shortfalls in LDM performance, with all the disadvantages listed earlier. It is reasonable to assume that if future high-scale datasets were to pay more attention to the distribution (and annotation) of reflection-related data points, the resulting models would handle this issue better.

Yet the same is true of several other bugbears in LDM output – who can say which of them most deserves the effort and money involved in the kind of solution that the authors of the new paper propose here?

 

First published Monday, April 28, 2025

ios – How to fix animation mixing?


I have code that scrolls through full-screen images forward and backward when tapping the left or right side of the screen. When an image appears on the screen, it plays one of the animation types: top, bottom, left, right, zoomin, zoomout — each of these animations consists of two sub-animations (the first animation is fast, and the second is looped).

The problem is that sometimes, when switching images, the animation coordinates stack. Under normal circumstances, an animation should only affect X, Y, or Scale. But in my case, it sometimes happens that images move diagonally (X + Y), the animation goes beyond the image boundaries, and I see black areas on the screen. This shouldn't happen. How can I fix this?

I'm using removeAllAnimations before each animation, but it doesn't help.

full code:

import UIKit

class ReaderController: UIViewController, CAAnimationDelegate {
    
    var pagesData = [PageData]()
    var index = Int()
    var pageIndex: Int = -1
    
    let pageContainer: UIView = {
        let view = UIView()
        view.translatesAutoresizingMaskIntoConstraints = false
        return view
    }()
    
    let pageViews: [PageLayout] = {
        let view = [PageLayout(), PageLayout()]
        view[0].translatesAutoresizingMaskIntoConstraints = false
        view[1].translatesAutoresizingMaskIntoConstraints = false
        return view
    }()
    
    private weak var currentTransitionView: PageLayout?

    override func viewDidLoad() {
        super.viewDidLoad()

        setupViews()
        setupConstraints()

        pageViews[0].index = index
        pageViews[1].index = index
        pageViews[0].pageIndex = pageIndex
        pageViews[1].pageIndex = pageIndex

        pageTransition(animated: false, direction: "fromRight")
    }
        
    func setupViews() {
        pageContainer.addSubview(pageViews[0])
        pageContainer.addSubview(pageViews[1])
        view.addSubview(pageContainer)
    }
        
    func setupConstraints() {
        pageContainer.topAnchor.constraint(equalTo: view.topAnchor, constant: 0.0).isActive = true
        pageContainer.bottomAnchor.constraint(equalTo: view.bottomAnchor, constant: 0.0).isActive = true
        pageContainer.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 0.0).isActive = true
        pageContainer.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: 0.0).isActive = true

        pageViews[0].topAnchor.constraint(equalTo: pageContainer.topAnchor).isActive = true
        pageViews[0].bottomAnchor.constraint(equalTo: pageContainer.bottomAnchor).isActive = true
        pageViews[0].leadingAnchor.constraint(equalTo: pageContainer.leadingAnchor).isActive = true
        pageViews[0].trailingAnchor.constraint(equalTo: pageContainer.trailingAnchor).isActive = true

        pageViews[1].topAnchor.constraint(equalTo: pageContainer.topAnchor).isActive = true
        pageViews[1].bottomAnchor.constraint(equalTo: pageContainer.bottomAnchor).isActive = true
        pageViews[1].leadingAnchor.constraint(equalTo: pageContainer.leadingAnchor).isActive = true
        pageViews[1].trailingAnchor.constraint(equalTo: pageContainer.trailingAnchor).isActive = true
    }
        
    func loadData(fileName: Any) -> PagesData {
        // Loads and decodes the bundled JSON that describes every page.
        let url = Bundle.main.url(forResource: "text", withExtension: "json")!
        let data = try! Data(contentsOf: url)
        let pages = try! JSONDecoder().decode(PagesData.self, from: data)
        return pages
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            let location = touch.location(in: view)

            if view.safeAreaInsets.left > 30 {
                if location.x > self.view.frame.size.width - (view.safeAreaInsets.left * 1.5) {
                    pageTransition(animated: true, direction: "fromRight")
                } else if location.x < (view.safeAreaInsets.left * 1.5) {
                    pageTransition(animated: true, direction: "fromLeft")
                }
            } else {
                if location.x > self.view.frame.size.width - 40 {
                    pageTransition(animated: true, direction: "fromRight")
                } else if location.x < 40 {
                    pageTransition(animated: true, direction: "fromLeft")
                }
            }
        }
    }
    
    func pageTransition(animated: Bool, direction: String) {
        let result = loadData(fileName: pagesData)

        switch direction {
        case "fromRight":
            pageIndex += 1
        case "fromLeft":
            pageIndex -= 1
        default: break
        }

        pageViews[0].pageIndex = pageIndex
        pageViews[1].pageIndex = pageIndex

        guard pageIndex >= 0 && pageIndex < result.pagesData.count else {
            pageIndex = max(0, min(pageIndex, result.pagesData.count - 1))
            return
        }

        // Swap the visible page view, and reset any in-flight animation state
        // on the incoming image view before configuring it.
        let fromView = pageViews[0].isHidden ? pageViews[1] : pageViews[0]
        let toView = pageViews[0].isHidden ? pageViews[0] : pageViews[1]
        toView.imageView.layer.removeAllAnimations()
        toView.imageView.transform = .identity
        toView.configure(theData: result.pagesData[pageIndex])
        fromView.isHidden = true
        toView.isHidden = false
    }
    
}

class PageLayout: UIView {
            
    var index = Int()
    var pageIndex = Int()
    
    let imageView: UIImageView = {
        let imageView = UIImageView()
        imageView.contentMode = .scaleAspectFill
        imageView.translatesAutoresizingMaskIntoConstraints = false
        return imageView
    }()

    var imageViewTopConstraint = NSLayoutConstraint()
    var imageViewBottomConstraint = NSLayoutConstraint()
    var imageViewLeadingConstraint = NSLayoutConstraint()
    var imageViewTrailingConstraint = NSLayoutConstraint()
    
    var imagePosition = ""
    
    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(imageView)
        setupConstraints()
    }

    func setupConstraints() {

        // Cancel any running animation and reset the transform before the
        // constraints and animation for the new image position are applied.
        imageView.layer.removeAllAnimations()
        imageView.transform = .identity

        removeConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint,
                           imageViewTrailingConstraint])

        switch imagePosition {
            
        case "high":
            
            imageView.rework = .id
            
            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, fixed: -40.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, fixed: 0.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, fixed: 0.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, fixed: 0.0)
            
            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
            
            UIView.animate(withDuration: 2.0, delay: 0, choices: [.curveEaseOut, .allowUserInteraction, .beginFromCurrentState], animations: {
                self.imageView.rework = CGAffineTransform(translationX: 0, y: 40.0)
            }, completion: { _ in
                UIView.animate(withDuration: 6.0, delay: 0, choices: [.curveLinear, .autoreverse, .repeat, .beginFromCurrentState, .allowUserInteraction], animations: {
                    self.imageView.rework = self.imageView.rework.translatedBy(x: 0, y: -40.0)
                }, completion: nil)
            })
            
        case "backside":
            
            imageView.rework = .id
            
            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, fixed: 0.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, fixed: 40.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, fixed: 0.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, fixed: 0.0)
            
            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
            
            UIView.animate(withDuration: 2.0, delay: 0, choices: [.curveEaseOut, .allowUserInteraction, .beginFromCurrentState], animations: {
                self.imageView.rework = CGAffineTransform(translationX: 0, y: -40)
            }, completion: { _ in
                UIView.animate(withDuration: 6.0, delay: 0, choices: [.curveLinear, .autoreverse, .repeat, .beginFromCurrentState, .allowUserInteraction], animations: {
                    self.imageView.rework = self.imageView.rework.translatedBy(x: 0, y: 40)
                }, completion: nil)
            })
            
        case "left":
            
            imageView.rework = .id
            
            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, fixed: 0.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, fixed: 0.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, fixed: -40.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, fixed: 0.0)
            
            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
            
            UIView.animate(withDuration: 2.0, delay: 0, choices: [.curveEaseOut, .allowUserInteraction, .beginFromCurrentState], animations: {
                self.imageView.rework = CGAffineTransform(translationX: 40, y: 0)
            }, completion: { _ in
                UIView.animate(withDuration: 6.0, delay: 0, choices: [.curveLinear, .autoreverse, .repeat, .beginFromCurrentState, .allowUserInteraction], animations: {
                    self.imageView.rework = self.imageView.rework.translatedBy(x: -40, y: 0)
                }, completion: nil)
            })
            
        case "proper":
            
            imageView.rework = .id
            
            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, fixed: 0.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, fixed: 0.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, fixed: 0.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, fixed: 40.0)
            
            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
            
            UIView.animate(withDuration: 2.0, delay: 0, choices: [.curveEaseOut, .allowUserInteraction, .beginFromCurrentState], animations: {
                self.imageView.rework = CGAffineTransform(translationX: -40, y: 0)
            }, completion: { _ in
                UIView.animate(withDuration: 6.0, delay: 0, choices: [.curveLinear, .autoreverse, .repeat, .beginFromCurrentState, .allowUserInteraction], animations: {
                    self.imageView.rework = self.imageView.rework.translatedBy(x: 40, y: 0)
                }, completion: nil)
            })
            
        case "zoomin":
            
            imageView.rework = CGAffineTransformScale(.id, 1.0, 1.0)
            
            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, fixed: 0.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, fixed: 0.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, fixed: 0.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, fixed: 0.0)
            
            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
            
            UIView.animate(withDuration: 2.0, delay: 0, choices: [.curveEaseOut, .allowUserInteraction, .beginFromCurrentState], animations: {
                self.imageView.rework = CGAffineTransform(scaleX: 1.1, y: 1.1)
            }, completion: { _ in
                UIView.animate(withDuration: 6.0, delay: 0, choices: [.curveLinear, .autoreverse, .repeat, .beginFromCurrentState, .allowUserInteraction], animations: {
                    self.imageView.rework = .id
                }, completion: nil)
            })
            
        case "zoomout":
            
            imageView.rework = CGAffineTransformScale(.id, 1.1, 1.1)
            
            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, fixed: 0.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, fixed: 0.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, fixed: 0.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, fixed: 0.0)
            
            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
            
            UIView.animate(withDuration: 2.0, delay: 0, choices: [.curveEaseOut, .allowUserInteraction, .beginFromCurrentState], animations: {
                self.imageView.rework = CGAffineTransform(scaleX: 1.0, y: 1.0)
            }, completion: { _ in
                UIView.animate(withDuration: 6.0, delay: 0, choices: [.curveLinear, .autoreverse, .repeat, .beginFromCurrentState, .allowUserInteraction], animations: {
                    self.imageView.rework = self.imageView.rework.scaledBy(x: 1.1, y: 1.1)
                }, completion: nil)
            })
            
        default:

            imageView.transform = .identity

            imageViewTopConstraint = imageView.topAnchor.constraint(equalTo: topAnchor, constant: 0.0)
            imageViewBottomConstraint = imageView.bottomAnchor.constraint(equalTo: bottomAnchor, constant: 0.0)
            imageViewLeadingConstraint = imageView.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 0.0)
            imageViewTrailingConstraint = imageView.trailingAnchor.constraint(equalTo: trailingAnchor, constant: 0.0)

            addConstraints([imageViewTopConstraint, imageViewBottomConstraint, imageViewLeadingConstraint, imageViewTrailingConstraint])
        }
    }

    func configure(theData: PageData) {
        imageView.image = UIImage(named: "page\(pageIndex + 1)")
        imagePosition = theData.imagePosition
        setupConstraints()
    }

    required init?(coder: NSCoder) {
        fatalError("Not happening")
    }
    
}

struct PagesData: Decodable {
    var pagesData: [PageData]
}

struct PageData: Decodable {
    let textData, textPosition, textColor, shadowColor, textAlignment, imagePosition: String
}

JSON:

{
    "pagesData" : [
        
        {
            "textData" : "",
            "textPosition" : "topLeft",
            "textColor" : "FFFFFF",
            "shadowColor" : "000000",
            "textAlignment" : "left",
            "imagePosition" : "left",
        },
        
        {
            "textData" : "",
            "textPosition" : "bottomLeft",
            "textColor" : "FFFFFF",
            "shadowColor" : "000000",
            "textAlignment" : "left",
            "imagePosition" : "bottom",
        },
        
        {
            "textData" : "",
            "textPosition" : "zoomin",
            "textColor" : "FFFFFF",
            "shadowColor" : "000000",
            "textAlignment" : "left",
            "imagePosition" : "right",
        },
        
        {
            "textData" : "",
            "textPosition" : "bottomCenter",
            "textColor" : "FFFFFF",
            "shadowColor" : "000000",
            "textAlignment" : "left",
            "imagePosition" : "zoomout",
        },
        
        {
            "textData" : "",
            "textPosition" : "topLeft",
            "textColor" : "FFFFFF",
            "shadowColor" : "000000",
            "textAlignment" : "left",
            "imagePosition" : "left",
        },
        
    ]
}

Defending Against HNDL Attacks Today


In the ever-evolving landscape of cybersecurity, HNDL (Harvest Now, Decrypt Later) is emerging as a silent but serious threat. It doesn't require an attacker to break encryption today—it simply bets that they will be able to do so tomorrow.

What is HNDL?

HNDL is a long-term data breach strategy in which adversaries intercept and store encrypted data today, with the intention of decrypting it in the future, when computing power—particularly quantum computing—makes breaking current cryptography feasible. The value of the data doesn't have to be immediate. Sensitive medical records, confidential business contracts, defense communications, or citizen data can all retain strategic value years down the road.

Why Should You Be Concerned?

  • Quantum computing is not science fiction anymore. Progress is accelerating, and while practical quantum decryption may still be years away, threat actors (including state-sponsored groups) are already preparing by harvesting data now.
  • Most encryption used today (like RSA and ECC) will eventually be vulnerable to quantum attacks unless updated with post-quantum cryptography (PQC).
  • You may never know it's happening. Unlike ransomware or denial-of-service attacks, HNDL leaves no immediate trace.

What Should Organizations Do?

You don't need a crystal ball to defend against future risks—you need a roadmap. Here's how to act now:

1. Inventory and Classify Your Cryptographic Assets

  • Start with a crypto-agility assessment: identify all instances of cryptographic use across your infrastructure, including TLS, VPNs, internal apps, backups, and cloud integrations.
  • Categorize the information sensitivity and the longevity of confidentiality required. Anything that must stay confidential for more than 3-5 years is potentially at risk from HNDL (a minimal triage sketch follows below).
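
As a loose illustration of that triage (our own sketch, not a standard or a product; the asset list, algorithm labels, and the five-year threshold are assumptions to be tuned to your own risk model), the idea is simply to flag any data flow whose required confidentiality lifetime could outlast the arrival of a cryptographically relevant quantum computer:

from dataclasses import dataclass

@dataclass
class CryptoAsset:
    name: str
    algorithm: str               # e.g. "RSA-2048", "ECDHE-P256", "AES-256"
    confidentiality_years: int   # how long the protected data must stay secret

# Public-key families expected to fall to a large quantum computer.
QUANTUM_VULNERABLE = {"RSA", "ECDHE", "ECDSA", "DH"}
HNDL_HORIZON_YEARS = 5           # assumption, per the 3-5 year guidance above

def at_hndl_risk(asset):
    family = asset.algorithm.split("-")[0]
    return family in QUANTUM_VULNERABLE and asset.confidentiality_years >= HNDL_HORIZON_YEARS

inventory = [
    CryptoAsset("site-to-site VPN tunnel", "ECDHE-P256", confidentiality_years=10),
    CryptoAsset("public web TLS", "ECDHE-P256", confidentiality_years=1),
    CryptoAsset("encrypted database backups", "RSA-2048", confidentiality_years=25),
]

for asset in inventory:
    print(f"{asset.name}: {'prioritize for PQC' if at_hndl_risk(asset) else 'lower priority'}")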

2. Prioritize Long-Lived, High-Sensitivity Traffic

  • VPN tunnels, database backups, and software distribution systems are prime targets for HNDL.
  • Traffic between core systems, and long-term logs, are especially vulnerable.

3. Adopt Post-Quantum Cryptography Where It Matters Most

  • Begin testing or deploying hybrid cryptographic protocols (e.g., classical + PQC) that meet both current and future security needs.
  • Look for standards-compliant options, such as those aligned with NIST's FIPS 203/204/205 (for ML-KEM, ML-DSA, and SLH-DSA); a conceptual hybrid key-exchange sketch follows below.
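
The usual hybrid construction derives the session key from both a classical and a post-quantum shared secret, so confidentiality holds as long as either primitive survives. The sketch below is conceptual only: the X25519 and HKDF calls use the real Python 'cryptography' package, but ml_kem_encapsulate() is a placeholder for whichever FIPS 203 (ML-KEM) implementation you eventually adopt, and it returns dummy bytes here:

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def ml_kem_encapsulate(peer_kem_public_key):
    """Placeholder for ML-KEM (FIPS 203) encapsulation: returns (ciphertext, shared_secret)."""
    return os.urandom(1088), os.urandom(32)     # dummy stand-in values, NOT real cryptography

def hybrid_session_key(peer_x25519_public_key, peer_kem_public_key):
    ours = X25519PrivateKey.generate()
    classical_secret = ours.exchange(peer_x25519_public_key)
    _kem_ciphertext, pq_secret = ml_kem_encapsulate(peer_kem_public_key)
    # Bind both secrets together: an attacker must break X25519 AND ML-KEM.
    return HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None,
        info=b"hybrid x25519+ml-kem demo",
    ).derive(classical_secret + pq_secret)

# Demo with a locally generated 'peer'; in practice the peer keys come from the handshake.
peer = X25519PrivateKey.generate()
session_key = hybrid_session_key(peer.public_key(), peer_kem_public_key=b"")
print(len(session_key), "byte hybrid session key")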

4. Demand PQC Roadmaps from Your Vendors

  • Ask your security vendors and infrastructure providers what they are doing to support PQC and mitigate HNDL.
  • Seek interoperability assessments and pilot deployments—don't wait for full productization.

5. Protect the Channel, Not Just the Endpoint

  • Even if your endpoints are secure, traffic in transit is vulnerable to interception.
  • Secure communication channels (IPsec, TLS, MACsec) should evolve to support PQC as a top priority.

Final Thought

HNDL isn't just a future problem—it's a present risk disguised by time. Organizations that wait until quantum computers are fully operational will have already lost the battle for their past data. The time to act is now.

If you're headed to #RSAC, you can learn more from the Cisco session on Architecting the Future of Security by Cisco Thought Leader Tim Rowley on Tuesday, April 29th, 2025 at 10:30 am at the Cisco Security Booth #N5845. We'd love to connect and exchange ideas as we prepare for the new frontiers in security.


Quality starts with planning: Building software with the right mindset


When most developers think about testing, they imagine writing unit tests, running test suites, or triaging bugs. But effective testing is far more than that. It is a cornerstone of reliable software delivery. It ensures business continuity, keeps users happy, and helps avoid costly surprises in production. For modern development teams in fast-moving agile or DevOps environments, testing isn't just a box to check; it's a mindset that must be baked into every phase of software development. And that mindset starts long before the first line of code is written.

Too often, quality is seen as the responsibility of QA engineers. Developers write the code, QA tests it, and ops teams deploy it. But in high-performing teams, that model no longer works. Quality isn't one team's job; it's everyone's job.

Architects defining system components, developers writing code, product managers defining features, and release managers planning deployments all contribute to delivering a reliable product. When quality is owned by the entire team, testing becomes a collaborative effort. Developers write testable code and contribute to test plans. Product managers clarify edge cases during requirements gathering. Ops engineers prepare for rollback scenarios. This collective approach ensures that no aspect of quality is left to chance.

“Shift Left” Means Start at the Start

The term “shift left” has been around for a while, but it's often misunderstood. Many assume it simply means writing tests earlier in the development process. That's true, but it's only part of the story.

Shifting left begins not in the build phase, but in planning. It starts when requirements are gathered, when teams first discuss what to build. This is where the seeds of quality are planted. If requirements are unclear, incomplete, or lack consideration of dependencies and edge cases, then no amount of downstream testing can fully protect the product.

For developers, this means engaging early: asking questions about user flows, integration points, edge cases, and business logic. It means partnering with product managers to clarify use cases, and collaborating with QA to develop comprehensive test scenarios from the outset.

Build the Right Thing, the Right Way

One of the biggest causes of software failure isn't building the thing the wrong way, it's building the wrong thing. You can write perfectly clean, well-tested code that works exactly as intended and still fail your users if the feature doesn't solve the right problem.

That's why testing must start with validating the requirements themselves. Do they align with business goals? Are they technically feasible? Have we considered the downstream impact on other systems or components? Have we defined what success looks like?

Developers play a key role here. Asking “what if?” and “why?” during planning sessions helps shape requirements that are not only testable, but meaningful. This upfront curiosity prevents wasted effort later.

Testing Is a Strategy, Not an Afterthought

Testing shouldn't just be about executing scripts after the code is complete. It should be a strategy integrated into the development lifecycle. That includes:

  • Unit Tests: to catch issues at the function or module level
  • Integration Tests: to ensure that components work together as expected
  • End-to-End Tests: to validate user workflows from a real-world perspective
  • Performance Tests: to catch scalability or latency issues before they impact users
  • Exploratory Testing: to uncover unexpected behaviors and edge cases

More importantly, the test plan should be tied to the risk profile of the feature. A small UI tweak doesn't need the same rigor as a critical backend change that touches financial data. Planning this out in advance keeps testing efforts efficient and focused, as the toy sketch below illustrates.
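
The tiers, change flags, and suite names in this sketch are invented purely for illustration:

RISK_TIERS = {
    "low":    ["unit"],
    "medium": ["unit", "integration"],
    "high":   ["unit", "integration", "end_to_end", "performance"],
}

def assess_risk(change):
    """change: dict of simple flags describing what the change touches."""
    if change.get("touches_financial_data") or change.get("touches_auth"):
        return "high"
    if change.get("crosses_service_boundary"):
        return "medium"
    return "low"

def required_suites(change):
    return RISK_TIERS[assess_risk(change)]

print(required_suites({"description": "button colour tweak"}))
print(required_suites({"description": "ledger rounding fix", "touches_financial_data": True}))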

Quality Mindset in Release Management

Often overlooked, release management is a key piece of the quality puzzle. You can have great code and thorough tests, but if your deployment process is flawed, users will still suffer.

That's why the quality mindset must extend to the team responsible for getting code into production. Before anything is deployed, there should be a plan to verify the change in production-like environments, monitor its behavior after release, and roll it back quickly if needed.

For developers, this means partnering with ops and SRE teams early in the lifecycle. Understanding how your code will be deployed, what logging and monitoring will be in place, and how errors will be handled are all part of delivering high-quality software.

The Role of Automation

Automation is a developer's best ally in maintaining quality at scale. Automated tests give fast feedback, reduce human error, and free up time for exploratory testing. But automation is only effective when it's thoughtfully implemented.

Don't aim for 100% test coverage just for the sake of it. Instead, aim for meaningful coverage. Focus on high-risk areas, edge cases, and critical user flows. Make sure your tests are maintainable and provide real value. And always balance speed and depth: fast feedback loops during development, with deeper validation before release.

CI/CD pipelines are also a major component. Every commit should trigger automated tests, and builds should fail fast if critical issues are detected. Developers should treat failing tests as high-priority defects. A minimal sketch of such a fail-fast gate follows below.
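
This is our own illustration of a fail-fast commit gate; it assumes pytest as the test runner (the -x flag stops at the first failure) and the tests/unit and tests/integration paths, and it simply propagates the first non-zero exit code so the pipeline stops immediately:

import subprocess
import sys

def run_gate():
    checks = [
        ["pytest", "-x", "tests/unit"],          # fastest suite first; -x stops at the first failure
        ["pytest", "-x", "tests/integration"],
    ]
    for command in checks:
        result = subprocess.run(command)
        if result.returncode != 0:
            print(f"failing fast: {' '.join(command)} exited with {result.returncode}")
            return result.returncode
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())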

Culture Eats Process for Breakfast

At the end of the day, no amount of tooling or process can compensate for the lack of a quality-driven culture. That culture starts with leadership, but it is reinforced daily by developers who take ownership of the software they build. When developers adopt a quality mindset, software quality becomes a natural outcome.

The next time you kick off a project, remember: testing doesn't start when the code is written. It starts in the first meeting, the first idea, the first whiteboard sketch. A quality mindset isn't something you bolt on at the end; it's something you build in from the beginning.

As a developer, you're not just writing code. You're shaping the reliability, usability, and integrity of the entire product. And that starts with a simple but powerful idea: quality starts with planning.

Enhanced Antibacterial Polylactic Acid-Curcumin Nanofibers for Wound Dressing


Background

Wound healing is a complex physiological process that can be compromised by infection and impaired tissue regeneration. Conventional dressings, typically made from natural fibers such as cotton or linen, offer limited functionality. Nanofiber scaffolds, particularly those based on biocompatible polymers such as PLA, provide high surface area and porosity, making them suitable for controlled drug delivery and tissue interaction.

Curcumin, a bioactive compound derived from turmeric, has demonstrated anti-inflammatory and antibacterial properties. However, its use in wound care is limited by poor solubility and low bioavailability. CNTs offer complementary advantages: they possess intrinsic antibacterial activity and can improve the mechanical properties and drug release profiles of polymer-based systems.

This study investigates the incorporation of CNTs into PLA-curcumin nanofibers to create a multifunctional wound dressing capable of providing both structural support and infection control.

The Current Study

The dressing was produced using electrospinning, a technique suited to fabricating nanofibers with controlled morphology. PLA (molecular weight: 203,000 g/mol) was dissolved in dichloromethane, followed by the addition of curcumin to ensure uniform dispersion. CNTs were incorporated at varying concentrations to assess their effects on the material's structural and functional properties.

Electrospun nanofibers were collected using a standard setup with a controlled flow rate and a fixed needle-to-collector distance. Characterization included Fourier-transform infrared spectroscopy (FTIR) for chemical analysis and scanning electron microscopy (SEM) for morphology. Tensile tests evaluated mechanical strength, while curcumin release profiles were analyzed via in vitro assays. Antibacterial performance was assessed using standard strains of Staphylococcus aureus and Escherichia coli.

Results and Discussion

Incorporating CNTs significantly improved the mechanical strength and thermal stability of the PLA-curcumin nanofibers. Tensile testing showed that even small additions of CNTs enhanced tensile strength compared to pure PLA. Drug release studies confirmed a controlled and sustained release of curcumin, with the rate modulated by CNT concentration. This effect was attributed to changes in scaffold porosity and microstructure.

Antibacterial assays revealed that CNT-containing composites had a marked inhibitory effect on bacterial growth. The PLA-Cur-0.05% CNT formulation showed the highest antibacterial activity, with a 78.95% reduction in microbial growth. While curcumin alone showed limited antibacterial efficacy in the PLA matrix, CNTs appeared to support both dispersion and membrane-disruptive mechanisms, contributing to the improved outcomes.

Water absorption tests further supported the composite's suitability for wound care. While PLA alone exhibited high water uptake, the addition of curcumin and CNTs reduced this absorption. A more moderate water uptake profile is advantageous for managing exudate without compromising the mechanical integrity of the dressing.

Conclusion

This study demonstrates the potential of CNT-enhanced PLA-curcumin nanofiber mats as multifunctional wound dressings. The combination of improved mechanical properties, antibacterial activity, and controlled drug release offers a promising platform for infection management and wound healing support. The design leverages nanostructure engineering to overcome the limitations of conventional materials and drug delivery systems.

Future research should explore in vivo performance, scalability, and further refinement of the composite formulation. Optimizing component ratios and evaluating long-term biocompatibility will be key steps toward clinical application.

Journal Reference

Faal M., et al. (2025). Fabrication and evaluation of polylactic acid-curcumin containing carbon nanotubes (CNTs) wound dressing using electrospinning method with experimental and computational approaches. Scientific Reports. DOI: 10.1038/s41598-025-98393-2, https://www.nature.com/articles/s41598-025-98393-2