
multicast – sFlow on Nexus returning wrong interface values


Hello fellow networking folks,

I am currently trying to build a small monitoring solution for multicast. In our lab we have a Nexus 9000 C93108TC-EX running version 7.0. I want to start with this device and maybe later continue supporting others. The goal is to see, for each interface: "Which multicasts are entering and which are leaving?"

sFlow seems to be a viable solution for this problem since it "simply" samples a defined subset of all the packets passing through the monitored interfaces. For each sampled packet, sFlow provides some additional information. For me, the Source ID index and the Input interface value are the most interesting. I'm sticking to the field descriptions provided by Wireshark since different sources refer to them differently.
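For reference, my base sFlow setup looks roughly like the sketch below (the collector address, agent address, VRF, and interface are placeholders here, not my actual lab values):

```
feature sflow
sflow sampling-rate 4096
sflow counter-poll-interval 30
sflow collector-ip 192.0.2.10 vrf management
sflow agent-ip 192.0.2.1
sflow data-source interface ethernet 1/1
```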

When a packet arrives from outside the switch on a monitored interface, everything works flawlessly. I can compare the two values to the values in the MIB-II interface description. Both values match as they should.

When a packet is leaving the switch, the story goes differently. The Input interface value is correct, so I can still see on which physical interface a packet entered the switch. But the Source ID index always displays hex 0x80000000. It should show the interface I'm monitoring right now, the interface from which the packet was sampled.
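To illustrate why that value looks broken: sFlow v5 packs the flow-sample source ID as an 8-bit type plus a 24-bit index, so a quick decode of what I'm seeing (a standalone sketch, not tied to any particular collector library) gives:

```python
def decode_source_id(raw: int) -> tuple[int, int]:
    """Split an sFlow v5 flow-sample source ID into (type, index).

    The upper 8 bits carry the source type (0 = ifIndex,
    1 = smonVlanDataSource, 2 = entPhysicalEntry); the lower
    24 bits carry the index itself.
    """
    return raw >> 24, raw & 0x00FF_FFFF

# The value my Nexus reports for egress-sampled multicasts:
print(decode_source_id(0x80000000))  # (128, 0) - type 128 is not a defined source type
```

A normal ingress sample would instead decode to type 0 with the sampling interface's ifIndex in the lower 24 bits, which is what makes the comparison against the MIB-II table possible.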

If the situation stays like that, I can only properly monitor incoming multicasts, but I cannot monitor through which interfaces packets leave the switch.

In my view, the Cisco documentation is not really clear on whether this behavior is expected or not. For NX-OS 10.5 I found:

sFlow does not support egress sampling for multicast, broadcast, or unknown unicast packets.

But the NX-OS 7 documentation states:

Egress sFlow of multicast traffic requires hardware multicast global-tx-span configuration.

which I tried. The other sentence in there drove me absolutely nuts:

For an ingress sFlow sample of multicast packets, the out port is reported as multiple ports with the actual number of egress ports. This is not supported on Cisco Nexus 9300-EX and -FX/P platform switches.

Like, what does this even mean? I'd interpret it as: "You can see how many interfaces an incoming packet will go to, but not on your device." But that should not affect what I can see on the sampled egress packet, right?

I guess that either I'm not smart enough to read the documentation correctly, or the documentation is not coherent. So my question is: Is it possible to correctly sample the information for egress multicast traffic with my switch, and if so, what needs to be done?

If it's not possible, I'm interested in how well other vendors support sFlow monitoring of multicast packets (especially Arista). Is it only Cisco implementing it weirdly, or is there a bigger reason for this?

I am also interested in possible alternatives for my implementation and whether you think they could be feasible:

  1. Combine the snooping and group report with the input data (show ip igmp snooping groups). This might be possible but is no true monitoring. I wouldn't know when the switch doesn't pass a packet.
  2. Cycle the sFlow monitoring port. If I monitor just one port at a time, I always know where a multicast enters and where it leaves.
  3. Look at some other interface data (counters or something similar) to see if there are any correlations I can use to match output multicasts to interfaces indirectly.
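For alternative 1, the kind of correlation I have in mind would be parsing the snooping table into a group-to-ports map, roughly like this (the sample text is a made-up sketch in the style of the NX-OS output, not captured from my switch; real columns may differ by release):

```python
import re

# Hypothetical excerpt in the style of "show ip igmp snooping groups".
SAMPLE = """\
Vlan  Group Address   Ver  Type  Port list
10    239.1.1.1       v2   S     Eth1/1 Eth1/2
10    239.1.2.3       v2   S     Eth1/5
"""

def group_ports(output: str) -> dict[str, list[str]]:
    """Map each multicast group to the ports the switch should forward it to."""
    mapping: dict[str, list[str]] = {}
    for line in output.splitlines():
        # VLAN id, group address, version, type, then the port list.
        m = re.match(r"\d+\s+(\d+\.\d+\.\d+\.\d+)\s+\S+\s+\S+\s+(.*)", line)
        if m:
            mapping[m.group(1)] = m.group(2).split()
    return mapping

print(group_ports(SAMPLE))
# {'239.1.1.1': ['Eth1/1', 'Eth1/2'], '239.1.2.3': ['Eth1/5']}
```

This would tell me where the switch *intends* to forward each group, but as noted above it is no true monitoring: it cannot tell me when a packet was actually dropped.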

If you have any ideas, I would appreciate your help.

Hussein Osman, Segment Marketing Director at Lattice Semiconductor – Interview Series



Hussein Osman is a semiconductor industry veteran with over twenty years of experience bringing to market silicon and software products that integrate sensing, processing and connectivity solutions, focusing on innovative experiences that deliver value to the end user. Over the past five years he has led the sensAI solution strategy and go-to-market efforts at Lattice Semiconductor, creating high-performance AI/ML applications. Mr. Osman received his bachelor's degree in Electrical Engineering from California Polytechnic State University in San Luis Obispo.

Lattice Semiconductor (LSCC -12.36%) is a provider of low-power programmable solutions used across communications, computing, industrial, automotive, and consumer markets. The company's low-power FPGAs and software tools are designed to help accelerate development and support innovation across applications from the Edge to the Cloud.

Edge AI is gaining traction as companies seek alternatives to cloud-based AI processing. How do you see this shift impacting the semiconductor industry, and what role does Lattice Semiconductor play in this transformation?

Edge AI is absolutely gaining traction, and it's because of its potential to truly revolutionize entire markets. Organizations across a wide range of sectors are leaning into Edge AI because it's helping them achieve faster, more efficient, and more secure operations — especially in real-time applications — than are possible with cloud computing alone. That's the piece most people tend to focus on: how Edge AI is changing business operations once implemented. But there's this other journey that's happening in tandem, and it starts long before implementation.

Innovation in Edge AI is pushing original equipment manufacturers to design system components that can run AI models despite footprint constraints. That means lightweight, optimized algorithms, specialized hardware, and other advancements that complement and/or amplify performance. This is where Lattice Semiconductor comes into play.

Our Field Programmable Gate Arrays (FPGAs) provide the highly adaptable hardware necessary for designers to meet strict system requirements related to latency, power, security, connectivity, size, and more. They provide a foundation on which engineers can build devices capable of keeping mission-critical Automotive, Industrial, and Medical applications functional. This is a big focus area for our current innovation, and we're excited to help customers overcome challenges and greet the era of Edge AI with confidence.

What are the key challenges that businesses face when implementing Edge AI, and how do you see FPGAs addressing these issues more effectively than traditional processors or GPUs?

Some challenges seem to be truly universal as any technology advances. For example, developers and businesses hoping to harness the power of Edge AI will likely grapple with common challenges, such as:

  • Resource management. Edge AI devices have to perform complex processes reliably while operating within increasingly limited computational and battery capacities.
  • Security. Although Edge AI offers the privacy benefits of local data processing, it raises other security concerns, such as the possibility of physical tampering or the vulnerabilities that come with smaller-scale models.
  • Scalability. Edge AI ecosystems can be extremely diverse in hardware architectures and computing requirements, making it difficult to streamline functions like data management and model updates at scale.

FPGAs offer businesses a leg up in addressing these key issues through their combination of efficient parallel processing, low power consumption, hardware-level security capabilities, and reconfigurability. While these may sound like marketing buzzwords, they are essential features for solving top Edge AI pain points.

FPGAs have traditionally been used for functions like bridging and I/O expansion. What makes them particularly well-suited for Edge AI applications?

Yes, you're exactly right that FPGAs excel in the realm of connectivity — and that's part of what makes them so powerful in Edge AI applications. As you mentioned, they have customizable I/O ports that allow them to interface with a wide array of devices and communication protocols. On top of this, they can perform functions like bridging and sensor fusion to ensure seamless data exchange, aggregation, and synchronization between different system components, including legacy and emerging standards. These capabilities are particularly crucial as today's Edge AI ecosystems grow more complex and the need for interoperability and scalability increases.

However, as we've been discussing, FPGAs' connectivity benefits are only the tip of the iceberg; it's also about how their adaptability, processing power, energy efficiency, and security features are driving results. For example, FPGAs can be configured and reconfigured to perform specific AI tasks, enabling developers to tailor applications to their unique needs and meet evolving requirements.

Can you explain how low-power FPGAs compare to GPUs and ASICs in terms of efficiency, scalability, and real-time processing capabilities for Edge AI?

I won't pretend that hardware like GPUs and ASICs don't have the compute power to support Edge AI applications. They do. But FPGAs truly have an "edge" on these other components in other areas like latency and flexibility. For example, both GPUs and FPGAs can perform parallel processing, but GPU hardware is designed for broad appeal and isn't as well suited to supporting specific Edge applications as that of FPGAs. On the other hand, ASICs are targeted at specific applications, but their fixed functionality means they require full redesigns to accommodate any significant change in use. FPGAs are purpose-built to offer the best of both worlds; they provide the low latency that comes with custom hardware pipelines and room for post-deployment modifications whenever Edge models need updating.

Of course, no single option is the only right one. It's up to each developer to decide what makes sense for their system. They should carefully consider the primary functions of the application, the specific outcomes they're trying to meet, and how agile the design needs to be from a future-proofing perspective. This will allow them to choose the right set of hardware and software components to meet their requirements — we just happen to think that FPGAs are usually the right choice.

How do Lattice's FPGAs enhance AI-driven decision-making at the edge, particularly in industries like automotive, industrial automation, and IoT?

FPGAs' parallel processing capabilities are a great place to start. Unlike sequential processors, the architecture of FPGAs allows them to perform many tasks in parallel, including AI computations, with all the configurable logic blocks executing different operations concurrently. This allows for the high throughput, low latency processing needed to support real-time applications in the key verticals you named — whether we're talking about autonomous vehicles, smart industrial robots, or even smart home devices or healthcare wearables. Moreover, they can be customized for specific AI workloads and easily reprogrammed in the field as models and requirements evolve over time. Last, but not least, they offer hardware-level security features to ensure AI-powered systems remain secure, from boot-up to data processing and beyond.

What are some real-world use cases where Lattice's FPGAs have significantly improved Edge AI performance, security, or efficiency?

Great question! One application that I find really intriguing is the way engineers are using Lattice FPGAs to power the next generation of smart, AI-powered robots. Intelligent robots require real-time, on-device processing capabilities to ensure safe automation, and that's something Edge AI is designed to deliver. Not only is the demand for these assistants growing, but so is the complexity and sophistication of their capabilities. At a recent conference, the Lattice team demonstrated how using FPGAs allowed a smart robot to track the trajectory of a ball and catch it in midair, showing just how fast and precise these machines can be when built with the right technologies.

What makes this so interesting to me, from a hardware perspective, is how design systems are changing to accommodate these applications. For example, instead of relying solely on CPUs or other traditional processors, developers are beginning to integrate FPGAs into the mix. The main benefit is that FPGAs can interface with more sensors and actuators (and a more diverse range of those components), while also performing low-level processing tasks near those sensors to free up the main compute engine for more advanced computations.

With the growing demand for AI inference at the edge, how does Lattice ensure its FPGAs remain competitive against specialized AI chips developed by larger semiconductor companies?

There's no doubt that the pursuit of AI chips is driving much of the semiconductor industry — just look at how companies like Nvidia pivoted from creating video game graphics cards to becoming AI industry giants. However, Lattice brings unique strengths to the table that make us stand out even as the market becomes more saturated.

FPGAs are not just a component we're choosing to invest in because demand is growing; they are a critical piece of our core product line. The strengths of our FPGA offerings — from latency and programmability to power consumption and scalability — are the result of years of technical development and refinement. We also provide a full range of industry-leading software and solution stacks, built to optimize the use of FPGAs in AI designs and beyond.

We've refined our FPGAs through years of continuous improvement driven by iteration on our hardware and software solutions and relationships with partners across the semiconductor industry. We'll continue to be competitive because we'll stay true to that path, working with design, development, and implementation partners to ensure that we're providing our customers with the most relevant and reliable technical capabilities.

What role does programmability play in FPGAs' ability to adapt to evolving AI models and workloads?

Unlike fixed-function hardware, FPGAs can be retooled and reprogrammed post-deployment. This inherent adaptability is arguably their biggest differentiator, especially in supporting evolving AI models and workloads. Considering how dynamic the AI landscape is, developers need to be able to support algorithm updates, growing datasets, and other significant changes as they occur without worrying about constant hardware upgrades.

For example, FPGAs are already playing a pivotal role in the ongoing shift to post-quantum cryptography (PQC). As businesses brace against looming quantum threats and work to replace vulnerable encryption schemes with next-generation algorithms, they're using FPGAs to facilitate a seamless transition and ensure compliance with new PQC standards.

How do Lattice's FPGAs help businesses balance the trade-off between performance, power consumption, and cost in Edge AI deployments?

Ultimately, developers shouldn't have to choose between performance and possibility. Yes, Edge applications are often hindered by computational limitations, power constraints, and increased latency. But with Lattice FPGAs, developers are empowered with flexible, energy efficient, and scalable hardware that's more than capable of mitigating these challenges. Customizable I/O interfaces, for example, enable connectivity to various Edge applications while reducing complexity.

Post-deployment modification also makes it easier to adjust to support the needs of evolving models. Beyond this, preprocessing and data aggregation can take place on FPGAs, lowering the power and computational strain on Edge processors, reducing latency, and in turn lowering costs and increasing system efficiency.

How do you envision the future of AI hardware evolving in the next 5-10 years, particularly in relation to Edge AI and power-efficient processing?

Edge devices will need to be faster and more powerful to handle the computing and energy demands of the ever-more-complex AI and ML algorithms businesses need to thrive — especially as these applications become more commonplace. The capabilities of the dynamic hardware components that support Edge applications will need to adapt in tandem, becoming smaller, smarter and more integrated. FPGAs will need to expand on their current flexibility, offering low latency and low power capabilities for higher levels of demand. With these capabilities, FPGAs will continue to help developers reprogram and reconfigure with ease to meet the needs of evolving models — be they for more sophisticated autonomous vehicles, industrial automation, smart cities, or beyond.

Thank you for the great interview; readers who wish to learn more should visit Lattice Semiconductor.

UK Energy Minister to speak at All-Energy 2025 in Glasgow



Michael Shanks
UK Energy Minister Michael Shanks MP will speak at the opening plenary session.

With five weeks to go before All-Energy opens at Glasgow's SEC, the UK's largest renewable and low carbon energy exhibition and conference has announced that following a keynote address by Scotland's First Minister, The Rt Hon John Swinney MSP, the UK Energy Minister, Michael Shanks MP, will be speaking in the conference opening plenary session on Wednesday 14 May. He will then join a panel discussing Britain's Clean Power Mission.

"When the two Ministers have finished their keynote addresses we move to what I know will be a stimulating panel discussion on the Clean Power Mission 2030," explained Event Manager Anam Khan of RX, owners and organisers of the two-day event. "We are delighted that Minister Shanks is staying on to take part in the panel discussion before undertaking a lightning tour of the exhibition."

The 90-minute session will be chaired by Keith Anderson, CEO of ScottishPower, who will also give a short address after the Lord Provost's Civic Welcome and the showing of a special video message from Professor Sir Jim Skea, Chair of the Intergovernmental Panel on Climate Change (IPCC). This is to remind the audience why Clean Power 2030 and ultimately Net Zero are so important to the planet. Then come the two political keynotes and finally the panel discussion.

In addition to Minister Shanks, panellists Juergen Maier, Chair of Great British Energy; Andrew Lever, Director – Energy Transition at the Carbon Trust; and Councillor Susan Aitken will set the scene from their organisation's perspective, after which the inter-panel discussion will begin.

"The Day 2 (15 May) plenary sessions follow a similar format," explained Anam Khan. "This time Professor Sir Jim McDonald, Principal and Vice Chancellor of the University of Strathclyde, will be in the chair. The keynote speaker will be Chris Stark CBE, Head of the UK's Mission for Clean Power, Department for Energy Security and Net Zero (DESNZ).

"The panellists are Professor Keith Bell, ScottishPower Chair in Smart Grids, University of Strathclyde, Co-Director UK Energy Research Centre, and member of the Climate Change Committee; Tom Glover, UK Country Chair, RWE; Dhara Vyas, Chief Executive, Energy UK; Darren Davidson, Vice President UK & Ireland, Siemens Energy; and Rachel Fletcher, Director of Regulation and Economics, Octopus Energy."

Following the two plenary sessions, the All-Energy conference breaks into 11 parallel streams covering all forms of renewable energy; grid and networks; and decarbonisation of heat (Scotland's Acting Minister for Climate Action, Dr Alasdair Allan MSP, will deliver a keynote address in the first of four sessions on heat). There are also streams and sessions on the decarbonisation of cities (two of them on Glasgow's Net Zero Routemap), transport and industry. In addition to the main conference, there are seven show floor theatres across the exhibition.

At lunchtime on Day 2, Tim Pick MBE, Commissioner, Clean Power 2030 Advisory Commission and Chair of the Offshore Wind Growth Partnership, will be enjoying a fireside chat with Ed Reed, Editor of E-FWD.

Anam Khan explains: "That's not where it ends, far from it. Other topics that come under the conference spotlight include sessions on subjects as varied as investment to skills and training, Power Purchase Agreements (PPAs) to hydrogen, and attracting young people to the industries we serve to energy storage. Added to which there are sessions on Equity, Diversity and Inclusion (ED&I) and Mental Health in the workplace. The full programme is online."

Registration is open at https://www.all-energy.co.uk/PR. All-Energy is free to attend for all with relevant business, governmental and academic interests and includes admission to the major exhibition, the main conference and show floor theatres, and the Civic Reception, held courtesy of the Rt Hon Lord Provost of Glasgow, which is an integral part of the Giant Networking Evening on 14 May at the Glasgow Science Centre.

The All-Energy exhibition has long been renowned for its dynamic atmosphere with a high level of business activity taking place, and this will very much be the case again on 14 and 15 May. This year's 270+ exhibitors come from 17 countries – Canada, China, the Czech Republic, Denmark, France, Finland, Germany, Italy, Ireland, Netherlands, Poland, Portugal, Spain, Sweden, Switzerland, Taiwan – as well as from all over the UK and Northern Ireland. The full exhibitor list is online.

All-Energy is a 'Smart Event', meaning visitors no longer need to collect business cards or carry around flyers or brochures. They just need to look out for the Colleqt QR code at every stand and scan it with their smartphone to quickly capture the information and share their details with exhibitors. They will receive a summary email each day with detailed information on every exhibitor they have scanned.

All-Energy's headline sponsor, Shepherd and Wedderburn, celebrates its eleventh year in the role; other sponsors include Noventa, Hitachi Energy UK, Statkraft, SEFC, Black & Veatch, SGS, Flexitricity, AMSC, XING Mobility Inc and the University of Sheffield. Glasgow Convention Bureau is All-Energy's official partner, and The Society for Underwater Technology is its Learned Society Patron. All combine to make this year's show, set right in the middle of Glasgow's Climate Week and during Glasgow's 850th anniversary celebrations, a very special two days.

For further information visit https://www.all-energy.co.uk/25

safari – iOS WKWebView's addUserScript doesn't work first time for "about:blank"


I am able to demonstrate my issue using the simple example below.

I basically need to use addUserScript to load different scripts based upon which website the web view has navigated to. This works fine for all websites but it doesn't work for about:blank.

For example, the below works fine for hackerNews ("https://news.ycombinator.com") but fails for homePage ("about:blank"):

import UIKit
import WebKit
import SnapKit

extension String {
    static let hackerNews = "https://news.ycombinator.com"
    static let homePage = "about:blank"
}

class ViewController: UIViewController, WKScriptMessageHandler, WKNavigationDelegate {

    override func viewDidLoad() {
        super.viewDidLoad()

        let configuration = WKWebViewConfiguration()
        configuration.mediaTypesRequiringUserActionForPlayback = .all

        let browser = WKWebView(frame: .zero, configuration: configuration)
        browser.isInspectable = true
        browser.navigationDelegate = self

        view.addSubview(browser)
        browser.snp.makeConstraints { make in
            make.edges.equalToSuperview()
        }

        browser.load(URLRequest(url: URL(string: .homePage)!))
    }

    func webView(_ webView: WKWebView, decidePolicyFor navigationAction: WKNavigationAction, decisionHandler: @escaping @MainActor (WKNavigationActionPolicy) -> Void) {

        guard let urlString = navigationAction.request.url?.absoluteString else {
            // The decision handler must always be answered before bailing out.
            decisionHandler(.cancel)
            return
        }

        print("Navigating to url: \(urlString)")
        webView.configuration.userContentController.removeAllUserScripts()
        webView.configuration.userContentController.removeAllScriptMessageHandlers()
        webView.configuration.userContentController.removeAllContentRuleLists()

        let world = WKContentWorld.world(name: "MyWorld")
        webView.configuration.userContentController.add(self, contentWorld: world, name: "handleConsoleLog")

        let userScript = """

function logConsole(err) {
    console.log(`logConsole: ${err}`);
    if (window.webkit) {
        window.webkit.messageHandlers.handleConsoleLog.postMessage(err.toString());
    }
}

logConsole(`Hello: ${window.location}`);

"""

        webView.configuration.userContentController.addUserScript(WKUserScript(source: userScript, injectionTime: .atDocumentStart, forMainFrameOnly: false, in: world))

        decisionHandler(.allow)
    }

    func userContentController(_ userContentController: WKUserContentController, didReceive message: WKScriptMessage) {
        print("userContentController: \(message.name), \((message.body as? String)?.prefix(100))...")
    }

}

The above example does not call my handler on the first page load.

But if I change .homePage in the line:

browser.load(URLRequest(url: URL(string: .homePage)!))

to use .hackerNews, then it works even on the first page load.

If I use the Console tab of the macOS Safari Develop menu inspector and reload the page, then it works too.

It only doesn't work on "about:blank" on the first page load.

How can I get it to work?

Infoblox, Google Cloud partner to protect hybrid and multicloud enterprise resources



"Google Cloud DNS Armor, powered by Infoblox, delivers powerful, preemptive security against a wide range of modern cyber threats by using DNS as a first line of defense. Because nearly every cyberattack touches DNS at some point—whether it's ransomware calling home, data being exfiltrated, or threat actors using domain generation algorithms—DNS Armor is uniquely positioned to detect malicious activity early in the attack chain," Gupta explained.

"The solution inspects DNS queries in real time to identify suspicious behavior like command-and-control communications, zero-day threats, and domains tied to known adversaries. Unlike traditional security tools that react after damage is done, DNS Armor proactively flags and blocks threats—often more than two months before other systems catch them. And because it's natively integrated into Google Cloud, it's easy for customers to activate and manage directly from the console without adding complexity or requiring new infrastructure. This means stronger protection with less effort, and a more secure environment for cloud workloads," Gupta said.

With this product, enterprise customers can activate and configure DNS threat detection directly in the Google Cloud console, and administrators can monitor DNS queries and access DNS threat logs to enable early threat detection.

"As cyber threats grow more sophisticated, the collaboration between Infoblox and Google Cloud delivers a game-changing approach to network security," said Bob Walker, senior domain network engineer, Lloyds Banking Group, in a statement. "Google Cloud's DNS Armor, powered by Infoblox, harnesses the best of both technologies—cutting-edge DNS threat intelligence and scalable cloud architecture—to provide enterprises with robust protection against emerging threats."

According to Infoblox, DNS Armor uses Infoblox DNS technology to gain the visibility needed to find DNS-based attacks with a low false positive rate. That visibility reduces the risk of malware, data breaches, and cyberattacks for Google Cloud customers.

"Infoblox powers Google Cloud's DNS Armor with intelligence beyond just a DNS block list—tracking the activity of potential adversaries to uncover and flag every corner of their malicious network," said Chris Kissel, research vice president, security and trust, IDC, in a statement. "The main challenge in cybersecurity is that it's often reactive, and DNS Armor, powered by Infoblox, provides a preemptive solution for securing cloud workloads that doesn't add more complexity or compute."