
AI updates from the past week: Docker MCP Catalog, Solo.io’s Agent Gateway, and AWS SWE-PolyBench — April 25, 2025


Software companies are constantly trying to add more and more AI features to their platforms, and AI companies are constantly releasing new models and features. It can be hard to keep up with it all, so we’ve written this roundup to share a few notable updates around AI that software developers should know about.

Docker MCP Catalog to launch next month with 100+ verified MCP tools

Docker is introducing new MCP-related offerings to provide developers with tools for working with the Model Context Protocol (MCP).

Coming in May, Docker MCP Catalog will be a marketplace where developers can discover verified and curated MCP tools. The company partnered with several companies to build the catalog, including Stripe, Elastic, Heroku, Pulumi, Grafana Labs, Kong, Neo4j, New Relic, and Continue.dev.

The catalog contains over 100 tools, and each tool comes with publisher verification, versioned releases, and curated collections.
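
For developers who want to try catalog entries once they ship, MCP servers packaged as Docker images can generally be launched over stdio from any MCP client. Below is a minimal sketch using the open-source mcp Python SDK; the Docker image name is a placeholder rather than a real catalog entry, since the announcement did not list specific image names.

# Minimal sketch (assumptions noted): launching an MCP server distributed as a
# Docker image and listing its tools with the open-source `mcp` Python SDK.
# The image name "mcp/example-tool" is hypothetical.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="docker",
    args=["run", "-i", "--rm", "mcp/example-tool"],  # placeholder image name
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())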

Solo.io launches Agent Gateway, Agent Mesh

Agent Gateway is an open source data plane that provides security, observability, and governance for both agent-to-agent and agent-to-tool communication. It supports popular interoperability protocols like Agent2Agent (A2A) and Model Context Protocol (MCP), and also integrates with agent frameworks like LangGraph, AutoGen, Agents SDK, kagent, and Claude Desktop.

Agent Mesh provides security, observability, discovery, and governance across all agent interactions, no matter where the agents are deployed. Key capabilities include multi-tenancy across boundaries and controls, standard agent connectivity with A2A and MCP, automated collection and centralized reporting of agent telemetry, and a self-service agent developer portal to support discovery, configuration, observability, and debugging tools.

AWS creates new benchmark for AI coding agents

SWE-PolyBench is a benchmark that evaluates the coding abilities of AI agents. It consists of more than 2,000 curated issues in four different languages (Java, JavaScript, TypeScript, and Python), a stratified subset of 500 issues for rapid experimentation, a leaderboard with a rich set of metrics, and a variety of tasks, encompassing bug fixes, feature requests, and code refactoring.

The benchmark is publicly available, and its dataset can be accessed on Hugging Face. There is also a paper about SWE-PolyBench on arXiv.
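
For quick experimentation, the dataset can be pulled with the Hugging Face datasets library. The sketch below assumes the repository ID AmazonScience/SWE-PolyBench and a SWE-bench-style column layout; check the dataset card for the exact identifiers.

# Sketch: loading the SWE-PolyBench dataset from the Hugging Face Hub.
# The repository ID, split name, and column name are assumptions; verify
# them against the dataset card before relying on them.
from datasets import load_dataset

dataset = load_dataset("AmazonScience/SWE-PolyBench", split="test")
print(dataset)  # row count and column names
print(dataset[0]["problem_statement"][:200])  # assumed SWE-bench-style column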

“This open approach invites the global developer community to build upon this work and advance the field of AI-assisted software engineering. As coding agents continue to evolve, benchmarks like SWE-PolyBench play a crucial role in ensuring they can meet the diverse needs of real-world software development across multiple programming languages and task types,” AWS wrote in a blog post.

OpenAI adds image generation model to API

OpenAI launched its latest image generation model, gpt-image-1, in ChatGPT last month, and this week, that model was added to the API. This addition will enable developers to add image generation capabilities to their own applications.

“The model’s versatility allows it to create images across diverse styles, faithfully follow custom guidelines, leverage world knowledge, and accurately render text—unlocking countless practical applications across multiple domains,” OpenAI wrote in a blog post.
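
Calling the model from the API follows the usual Images endpoint pattern in the official OpenAI Python SDK; the prompt and size below are purely illustrative.

# Sketch: generating an image with gpt-image-1 via the OpenAI Images API.
# Assumes the OPENAI_API_KEY environment variable is set.
import base64

from openai import OpenAI

client = OpenAI()
result = client.images.generate(
    model="gpt-image-1",
    prompt="A line-art diagram of a containerized microservice architecture",
    size="1024x1024",
)

# gpt-image-1 returns base64-encoded image data
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("diagram.png", "wb") as f:
    f.write(image_bytes)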

NVIDIA NeMo microservices now available

NVIDIA NeMo microservices provide developers with a platform for creating and deploying AI workflows. Developers can use them to create agents that are enhanced with enterprise data and can take user preferences into account.

Among the microservices included in NVIDIA NeMo are:

  • NeMo Customizer, which uses post-training techniques to accelerate fine-tuning
  • NeMo Evaluator, which simplifies the evaluation of AI models on popular benchmarks
  • NeMo Guardrails, which helps developers implement compliance and security safeguards (see the sketch after this list)
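
NeMo Guardrails also exists as an open-source Python library, which gives a feel for the kind of safeguards the microservice packages. Below is a minimal sketch assuming a ./config directory containing the usual config.yml that names the underlying LLM and any input/output rails; it is not taken from NVIDIA’s announcement.

# Sketch: applying guardrails to a chat interaction with the open-source
# nemoguardrails library. Assumes ./config holds a config.yml describing
# the model and the rails to enforce.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "Summarize our refund policy."}
])
print(response["content"])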

“The microservices have become generally available at a time when enterprises are building large-scale multi-agent systems, where hundreds of specialized agents — with distinct goals and workflows — collaborate to tackle complex tasks as digital teammates, working alongside employees to assist, augment and accelerate work across functions,” NVIDIA wrote in a blog post.

https://blogs.nvidia.com/blog/nemo-enterprises-ai-teammates-employee-productivity/

Zencoder acquires Machinet to further improve its AI coding agents

Zencoder, a company that provides an AI coding agent, has announced that it has acquired another company in the AI coding agent business: Machinet.

According to Zencoder, this acquisition will solidify the company’s position in the AI coding assistant market and enable it to expand its multi-integration ecosystem into more development environments.

Machinet is a plugin for JetBrains IDEs, and while Zencoder already supported JetBrains, Machinet had even more specialized expertise in that ecosystem.

Machinet’s domain and market presence will be transferred to Zencoder, and existing Machinet customers will receive instructions on how to transition to Zencoder’s platform.

Veracode adds new AI capabilities to its DAST offering

The latest capabilities are designed to enable organizations to respond to security threats more quickly. The new Enterprise Mode in DAST Essentials includes features like advanced crawling and auditing, AI-assisted auto-login to reduce authentication failures, Internal Scan Management (ISM), an intuitive interface, and real-time flaw reporting.

“DAST Enterprise Mode empowers security teams to work faster, smarter, and safer,” said Derek Maki, head of product at Veracode. “With real-time analysis in a unified platform, it eliminates the challenge of fragmented tools and enables mature, resilient risk management with centralized visibility and control.”


Read last week’s announcements here: AI updates from the past week: New OpenAI models, NVIDIA AI-Q Blueprint, and Anthropic’s Google Workspace integration — April 18, 2025

Widgets on lock screen: FAQ

Posted by Tyler Beneke – Product Manager, and Lucas Silva – Software Engineer

Widgets are now available on your Pixel Tablet lock screen! Lock screen widgets empower users to create a personalized, always-on experience. Whether you want to easily manage smart home devices like lights and thermostats, or build dashboards for quick access and control of vital information, this blog post will answer your key questions about lock screen widgets on Android. Read on to discover when, where, how, and why they’ll be on a lock screen near you.


Lock screen widgets in clockwise order: Clock, Weather, Stocks, Timers, and the Google Home app. In the top right is a customization call-to-action.

Q: When will lock screen widgets be available?

A: Lock screen widgets will be available in AOSP for tablets and mobile starting with the release after Android 16 (QPR1). This update is scheduled to be pushed to AOSP in late summer 2025. Lock screen widgets are already available on Pixel Tablets.

Q: Are there any special requirements for widgets to be allowed on the lock screen?

A: No, widgets allowed on the lock screen have the same requirements as any other widgets. Widgets on the lock screen should follow the same quality guidelines as home screen widgets, including quality, sizing, and configuration. If a widget launches an activity from the lock screen, users must authenticate to launch the activity, or the activity should declare android:showWhenLocked="true" in its manifest entry.

Q: How can I test my widget on the lock screen?

A: Currently, lock screen widgets can be tested on Pixel Tablet devices. You can enable lock screen widgets and add your widget.

Q: Which widgets can be displayed in this experience?

A: All widgets are compatible with the lock screen widget experience. To prioritize user choice and customization, we have made all widgets available. For the best experience, please make sure your widget supports dynamic color and dynamic resizing. Lock screen widgets are sized to roughly 4 cells wide by 3 cells tall on the launcher, but exact dimensions vary by device.

Q: Can my widget opt out of the experience?

A: Important: Apps can choose to restrict the use of their widgets on the lock screen using an opt-out API. To opt out, use the widget category "not_keyguard" in your appwidget info XML file. Place this file in an xml-36 resource folder to ensure backwards compatibility.

Q: Are there any CDD requirements specifically for lock screen widgets?

A: No, there are no special CDD requirements solely for lock screen widgets. However, it is important to ensure that any widgets and screensavers that integrate with the framework adhere to the standard CDD requirements for those features.

Q: Will lock screen widgets be enabled on existing devices?

A: Yes, lock screen widgets launched on the Pixel Tablet in 2024. Other device manufacturers may update their devices as well once the feature is available in AOSP.

Q: Does the device have to be docked to use lock screen widgets?

A: The mechanism that triggers the lock screen widget experience is customizable by the OEM. For example, OEMs can choose to use charging or docking status as triggers. Third-party OEMs will need to implement their own posture detection if desired.

Q: Can OEMs set their own default widgets?

A: Yes! Hardware providers can pre-set and automatically display default widgets.

Q: Can OEMs customize the user interface for lock screen widgets?

A: Customization of the lock screen widget user interface by OEMs is not supported in the initial release. All lock screen widgets will have the same developer experience on all devices.

Lock screen widgets are poised to give your users new ways to interact with your app on their devices. Today you can leverage your existing widget designs and experiences on the lock screen with Pixel Tablets. To learn more about building widgets, please check out our resources on developer.android.com.


This blog post is part of our series: Spotlight Week on Widgets, where we provide resources—blog posts, videos, sample code, and more—all designed to help you design and create widgets. You can read more in the overview of Spotlight Week: Widgets, which will be updated throughout the week.

Phillip Burr, Head of Product at Lumai – Interview Series



Phillip Burr is the Head of Product at Lumai, with over 25 years of experience in global product management, go-to-market, and leadership roles within leading semiconductor and technology companies, and a proven track record of building and scaling products and services.

Lumai is a UK-based deep tech company developing 3D optical computing processors to accelerate artificial intelligence workloads. By performing matrix-vector multiplications using beams of light in three dimensions, their technology offers up to 50x the performance and 90% less power consumption compared to traditional silicon-based accelerators. This makes it particularly well-suited for AI inference tasks, including large language models, while significantly reducing energy costs and environmental impact.

What inspired the founding of Lumai, and how did the idea evolve from University of Oxford research into a commercial venture?

The initial spark was ignited when one of the founders of Lumai, Dr. Xianxin Guo, was awarded an 1851 Research Fellowship at the University of Oxford. The interviewers understood the potential for optical computing and asked whether Xianxin would consider patents and spinning out a company if his research was successful. This got Xianxin’s creative mind firing, and when he, alongside one of Lumai’s other co-founders, Dr. James Spall, had proven that using light to do the computation at the heart of AI could both dramatically boost AI performance and reduce the energy, the stage was set. They knew that existing silicon-only AI hardware was (and still is) struggling to increase performance without significantly increasing power and cost and, hence, if they could solve this problem using optical compute, they could create a product that customers wanted. They took this idea to some VCs who backed them to form Lumai. Lumai recently closed its second round of funding, raising over $10m and bringing in additional investors who also believe that optical compute can continue to scale and meet growing AI performance demand without increasing power.

You’ve had a strong career across Arm, indie Semiconductor, and more — what drew you to join Lumai at this stage?

The short answer is team and technology. Lumai has a strong team of optical, machine learning and data center experts, bringing in experience from the likes of Meta, Intel, Altera, Maxeler, Seagate and IBM (alongside my own experience at Arm, indie, Mentor Graphics and Motorola). I knew that a team of outstanding people so focused on solving the challenge of slashing the cost of AI inference could do amazing things.

I firmly believe that the future of AI demands new, innovative breakthroughs in computing. The promise of being able to offer 50x the AI compute performance as well as cutting the cost of AI inference to 1/10th compared to today’s solutions was simply too good an opportunity to miss.

What were some of the early technical or business challenges your founding team faced in scaling from a research breakthrough to a product-ready company?

The research breakthrough proved that optics could be used for fast and very efficient matrix-vector multiplication. Despite the technical breakthroughs, the biggest challenge was convincing people that Lumai could succeed where other optical computing startups had failed. We had to spend time explaining that Lumai’s approach was very different and that, instead of relying on a single 2D chip, we used 3D optics to reach the levels of scale and efficiency needed. There are of course many steps to get from lab research to technology that can be deployed at scale in a data center. We recognized very early on that the key to success was bringing in engineers who have experience in developing products in high volume and for data centers. The other area is software – it is essential that the standard AI frameworks and models can benefit from Lumai’s processor, and that we provide the tools and frameworks to make this as seamless as possible for AI software engineers.

Lumai’s technology is said to use 3D optical matrix-vector multiplication. Can you break that down in simple terms for a general audience?

AI systems need to do a lot of mathematical calculations called matrix-vector multiplication. These calculations are the engine that powers AI responses. At Lumai, we do this using light instead of electricity. Here’s how it works:

  1. We encode information into beams of light
  2. These light beams travel through 3D space
  3. The light interacts with lenses and special materials
  4. These interactions complete the mathematical operation

By using all three dimensions of space, we can process more information with each beam of light. This makes our approach very efficient – reducing the energy, time and cost needed to run AI systems.

What are the main advantages of optical computing over traditional silicon-based GPUs and even integrated photonics?

Because the rate of advancement in silicon technology has significantly slowed, every step up in performance of a silicon-only AI processor (like a GPU) results in a significant increase in power. Silicon-only solutions consume an incredible amount of power and are chasing diminishing returns, which makes them highly complex and expensive. The advantage of using optics is that once in the optical domain there is almost no power being consumed. Energy is used to get into the optical domain but, for example, in Lumai’s processor we can achieve over 1,000 computation operations for each beam of light, every single cycle, thus making it very efficient. This scalability cannot be achieved using integrated photonics due to both physical size constraints and signal noise, with the number of computation operations of a silicon-photonic solution at only 1/8th of what Lumai can achieve today.

How does Lumai’s processor achieve near-zero latency inference, and why is that such a critical factor for modern AI workloads?

Although we wouldn’t claim that the Lumai processor offers zero latency, it does execute a very large (1024 x 1024) matrix-vector operation in a single cycle. Silicon-only solutions typically divide up a matrix into smaller matrices, which are individually processed step by step and then the results have to be combined. This takes time and results in more memory and energy being used. Reducing the time, energy and cost of AI processing is critical both to allowing more businesses to benefit from AI and to enabling advanced AI in the most sustainable way.
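
To make the tiling point concrete, here is a small illustrative NumPy sketch (not Lumai code): a digital unit that can only multiply small tiles has to sweep over the matrix in many passes and accumulate partial results, whereas a 1024 x 1024 optical matrix-vector unit would consume the whole operation in one pass. The tile size below is an arbitrary choice for illustration.

# Illustrative only: why tiling a matrix-vector product costs extra passes.
# A hypothetical digital unit handles 128x128 tiles; a full-width unit
# performs the same product in a single pass.
import numpy as np

N, TILE = 1024, 128
W = np.random.randn(N, N).astype(np.float32)   # weight matrix
x = np.random.randn(N).astype(np.float32)      # input vector

# Tiled evaluation: (N / TILE)^2 = 64 small multiplies plus accumulation.
y_tiled = np.zeros(N, dtype=np.float32)
for i in range(0, N, TILE):
    for j in range(0, N, TILE):
        y_tiled[i:i + TILE] += W[i:i + TILE, j:j + TILE] @ x[j:j + TILE]

# Single-pass evaluation: one full-width matrix-vector product.
y_full = W @ x

assert np.allclose(y_tiled, y_full, atol=1e-3)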

Can you walk us through how your PCIe-compatible form factor integrates with existing data center infrastructure?

The Lumai processor uses PCIe form factor cards alongside a standard CPU, all within a standard 4U shelf. We are working with a range of data center rack equipment suppliers so that the Lumai processor integrates with their own equipment. We use standard network interfaces, standard software, and so on, so that externally the Lumai processor will look just like any other data center processor.

Data center energy usage is a growing global concern. How does Lumai position itself as a sustainable solution for AI compute?

Data center energy consumption is rising at an alarming rate. According to a report from the Lawrence Berkeley National Laboratory, data center power use in the U.S. is expected to triple by 2028, consuming up to 12% of the nation’s power. Some data center operators are considering installing nuclear power to provide the energy needed. The industry needs to look at different approaches to AI, and we believe that optics is the answer to this energy crisis.

Can you explain how Lumai’s architecture avoids the scalability bottlenecks of current silicon and photonic approaches?

The performance of the first Lumai processor is only the start of what is achievable. We expect that our solution will continue to provide huge leaps in performance: by increasing optical clock speeds and vector widths, all without a corresponding increase in energy consumed. No other solution can achieve this. Standard digital silicon-only approaches will continue to consume more and more cost and power for every increase in performance. Silicon photonics cannot achieve the vector width needed, and hence companies that were pursuing integrated photonics for data center compute have moved to address other parts of the data center – for example, optical interconnect or optical switching.

What role do you see optical computing playing in the future of AI — and more broadly, in computing as a whole?

Optics as a whole will play a huge part in data centers going forward – from optical interconnect, optical networking and optical switching to, of course, optical AI processing. The demands that AI is placing on the data center are the key driver of this move to optical. Optical interconnect will enable faster connections between AI processors, which is essential for large AI models. Optical switching will enable more efficient networking, and optical compute will enable faster, more power-efficient and lower-cost AI processing. Together they will help enable even more advanced AI, overcoming the challenges of the slowdown in silicon scaling on the compute side and the speed limitations of copper on the interconnect side.

Thank you for the great interview. Readers who wish to learn more should visit Lumai.

AI plus digital twins could be the pairing enterprises need



The third point of cooperation is the linkage of AI with transactional workflows. Companies already have applications that take orders, ship goods, move component parts around, and so forth. It’s these applications that currently drive the commercial side of a business, but taking an order or scheduling a shipment doesn’t load a truck or label a package. Digital twins have traditionally been linked to the real world via sensors that detect actual movement and work. How do they capture the commercial applications, the transactions? An open data framework like OpenUSD is a possible answer, and if AI also supports OpenUSD, then there’s a mechanism that not only lets AI “read” workflow data from existing applications, but also generate work to be introduced into those flows.
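
As a rough illustration of what reading and writing workflow data through an open data framework might look like, the sketch below uses the open-source pxr Python bindings for OpenUSD (the usd-core package) to attach a hypothetical order record to a prim alongside a physical asset; the prim paths and attribute names are invented for the example, not part of any standard schema.

# Illustrative sketch only: representing a transactional record (an order)
# next to a physical asset in an OpenUSD stage. Names are invented.
from pxr import Sdf, Usd

stage = Usd.Stage.CreateInMemory()

# A prim standing in for a real-world asset tracked by the digital twin.
stage.DefinePrim("/Warehouse/Truck_07", "Xform")

# A prim carrying transactional data that an AI agent could read or update.
order = stage.DefinePrim("/Warehouse/Orders/Order_1001")
order.CreateAttribute("orderId", Sdf.ValueTypeNames.String).Set("1001")
order.CreateAttribute("assignedVehicle", Sdf.ValueTypeNames.String).Set("/Warehouse/Truck_07")
order.CreateAttribute("status", Sdf.ValueTypeNames.String).Set("awaiting_loading")

print(stage.GetRootLayer().ExportToString())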

All of this, together, makes AI and digital twins a partner in a business at both the transactional and real-world functional level. The new combination introduces a whole new kind of automation: automation of an entire business. Automation that can touch every process, every worker.

Remember that our current IT spending is justified almost entirely by improving the productivity of the 60% of workers who are involved with the commercial/transactional side of things? That has left 40% of the workforce out of the productivity picture, and their value of labor is actually a bit higher than that of the 60% of workers we’ve already reached. What kind of IT spending could reaching those workers justify? Digital twins, combined with AI, could completely remake IT, and completely remake networking.

Why networking? The 40% of “functional” workers, the ones out there pushing boxes, driving, even fighting fires or protecting the population, will need their own data collection to be empowered successfully. We’ll need to know what’s happening in the real world at a level we don’t even approach today. If you believe that autonomous vehicles can navigate the streets, then you believe that we can build a digital twin of those streets and an AI controller to get the vehicle safely (and legally) to its destination. If you believe that, then you believe we can understand and optimize the movement of goods and the provision of services in the same way.

What we’ve learned with AI and with the IoT/digital-twin initiatives so far is that the low apples don’t provide the best ROI, and seeking opportunities for new technologies individually is less likely to be successful than planning to use them cooperatively in order to manage the complexity that’s an everyday part of our work and our lives. And the good news is that, behind the hype, companies are starting to offer the tools needed to harness all the good stuff we’ve talked about, and businesses are starting to see how to adopt those tools. So don’t be distracted by all this AI hype, or by vague claims of autonomous operation. Reality may be closer, and better, than you think.

ios – How to debug root causes for xcodebuild error 65


I’m trying to get a Flutter app to build using match and fastlane.

Match seems to work, as it reports All required keys, certificates and provisioning profiles are installed.

When running build_ios_app() I get the following output:

[08:13:05]: ▸ The following build commands failed:
[08:13:05]: ▸   Archiving workspace Runner with scheme Runner
[08:13:05]: ▸ (1 failure)
[...]
** ARCHIVE FAILED **


The following build commands failed:
    Archiving workspace Runner with scheme Runner
(1 failure)
[08:13:05]: Exit status: 65

+-----------------------------------------+
|            Build environment            |
+---------------+-------------------------+
| xcode_path    | /Applications/Xcode.app |
| gym_version   | 2.227.1                 |
| export_method | app-store               |
| sdk           | iPhoneOS18.4.sdk        |
+---------------+-------------------------+

The command that failed is the following:

/Applications/Xcode.app/Contents/Developer/usr/bin/xcodebuild -workspace ios/Runner.xcworkspace -scheme Runner -configuration Release -destination generic/platform=iOS -archivePath "/Users/builder/Library/Developer/Xcode/Archives/2025-04-24/Runner 2025-04-24 06.01.57.xcarchive" archive

After trying random suggestions from a variety of GitHub issues for several days now, with none of them working, I’m wondering if there is a structured approach to drill down and debug the issue.

When inspecting the log or running the command manually there are lots of note: and warning log lines, but no error / failure or any other telling message.

Not having used xcodebuild before, I’m wondering if there is any option to increase the verbosity (apart from -verbose – which didn’t help) or to somehow dissect this problem to find the root cause (and if by any chance someone knows the solution, that would be even more awesome 😀).