
Cisco Store boosts in-store analytics with Splunk insights


The Cisco Store makes use of unified safety and observability platform to show knowledge into actionable insights and allow strategic resolution making that enhances buy conversions for in-store guests. 

In today's data-driven world, businesses across all industries face the same dilemma: we're generating mountains of data that hold exciting potential, but struggle to effectively harness it. This challenge is largely due to an inability to centrally monitor diverse data sources across increasingly complex technology environments.

For those of us in brick-and-mortar retail, this is especially true. We're not just dealing with data complexity – we're also working to protect customer information, optimize physical spaces and inventory, and keep pace with the analytics advantages of e-commerce.

At the Cisco Store, we grappled with all these same issues. Most recently, we turned our focus to how to bridge the gap with e-commerce and bring online-level analytics and customer insights into our physical store environments. We wanted to know: How can we gain the same depth of understanding about our in-store customers that we have about our online shoppers?

About the Cisco Store: More than just merch

The Cisco Store is the official merchandiser for Cisco branded apparel and accessories. We operate both online and through physical retail spaces, including two permanent locations and various pop-up traveling stores that appear at Cisco Live and other industry events worldwide.

By tapping into the power of Cisco's broad portfolio, we discovered a unique opportunity to revolutionize our brick-and-mortar retail experience. This inspiration led to the Cisco Store Tech Lab – a retail technical showcase environment that demonstrates the full power of Cisco and partner technologies.

The Cisco Store Tech Lab has evolved into a common testing environment and "customer zero" for groundbreaking innovations from Cisco product teams. In our role as customer zero, we deploy and test emerging solutions before general customer release, providing crucial feedback that enables product teams to enhance their offerings for optimal customer satisfaction.

The roadblocks that inspired our transformation

Over the years, our technology stack evolved into a sophisticated yet complex system that generates vast amounts of data from various solutions and devices. This was the root of our data problem – we saw the exciting potential of this data, but lacked the ability to turn it into strategic insights. This resulted in other challenges:

  • Customer visibility and analytics: We were unsure of the demographics of store visitors, which created challenges in determining how to optimize inventory and floor plans, as well as how to personalize experiences.
  • Inventory management: We were experiencing fluctuating demand and insufficient visibility into stock levels at pop-up traveling stores and wanted a way to optimize product availability and minimize sales loss.
  • IT complexity: With our Cisco Store Tech Lab generating all this data, we needed a way to simplify monitoring to know in real time when and where IT issues occur.

We needed a way to gain unified visibility across a variety of data sources within the environment. Splunk enabled us to consolidate these diverse data sources into a single, cohesive view and obtain real-time insights, empowering us to achieve our goal of bringing online-level analytics to our physical store environments.

Overcoming roadblocks: The solutions that paved the way to success

We implemented several Splunk solutions that transformed how we efficiently gather, analyze, and leverage data across our retail ecosystem. These observability solutions integrate with the rest of our technology stack to create a comprehensive monitoring system that bridges the analytics gap between our online and physical stores:

  • Splunk Enterprise: Consolidates diverse data sources into a single view and enables creation of custom dashboards that offer actionable insights to enhance decision making in areas such as store operations, infrastructure performance, inventory management, customer analytics, and security.
  • Splunk IT Service Intelligence (ITSI): Uses a broad range of data in the Splunk platform to produce a holistic and interactive view of the Cisco Store operations and technology stack. This allows for setup of custom KPIs/thresholds (POS times, carbon monoxide, noise levels, etc.) and simple monitoring of health and performance, enabling fast identification and resolution of areas needing attention.
  • Splunk Real User Monitoring (RUM): This capability provides comprehensive, end-to-end visibility into web application user experiences, enabling our team to quickly identify and troubleshoot issues, including a cybersecurity incident where we leveraged historical monitoring data to promptly notify affected customers and minimize damage.
  • Splunk Edge Hub: Processes data locally in real time for areas such as inventory and stocking. This local processing ensures continuity in the event of a network outage and is particularly useful for pop-up and traveling stores where internet access might be unreliable.

The deployment process: Setting up Splunk for real-time insights and strategic impact

Step 1: Deciding which devices and solutions to integrate first

We first determined which aspects of our tech stack to integrate into Splunk. We prioritized based on impact to customer experience and strategic value to the store, including these core systems:

  • Point-of-sale systems: Critical for transaction monitoring and identifying bottlenecks in the purchase process.
  • Smart building technology: Meraki Smart Cameras and Sensors provided invaluable data on store traffic patterns and environmental conditions.
  • Network infrastructure: Essential for ensuring connectivity and optimal performance across all systems.

By focusing on these core systems, we could establish a foundation that would provide immediate value while allowing for future expansion.

Step 2: Setting up and feeding data into Splunk Enterprise

With our priorities established, we began the process of feeding data into Splunk Enterprise. This involved:

  1. Installation: With the help of Splunk Validated Architectures, we installed Splunk Enterprise on our server and ensured it met all the hardware and software prerequisites.
  2. Data connectors: Using Splunk's intuitive interface, we used Splunk's Technology Add-ons, available in Splunkbase, to establish connections with our chosen devices. This involved specifying the data source type, such as syslog or SNMP for network devices, to ensure smooth data collection.
  3. Splunk Universal Forwarders: These were installed on remote devices to gather and forward data to our Splunk deployment, enabling efficient data collection from endpoints that are not directly connected to the server (see the sketch after this list).
  4. Collection protocols and sampling rates: Based on system data types, we determined appropriate sampling rates to ensure a balanced approach between the volume of data collected and the level of detail needed for insightful analysis.
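As a rough illustration of the forwarder step – not the Cisco Store's actual configuration, and with made-up host names and log paths – pointing a universal forwarder at an indexer and monitoring a point-of-sale log can be done from the forwarder's own CLI:

$ # On the remote device running the universal forwarder (host and paths below are placeholders)
$ /opt/splunkforwarder/bin/splunk add forward-server splunk-indexer.example.com:9997
$ /opt/splunkforwarder/bin/splunk add monitor /var/log/pos/transactions.log -sourcetype pos:transaction
$ /opt/splunkforwarder/bin/splunk restart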

Step 3: Customizing dashboards

Once we had our data flowing into Splunk, it was time to visualize the insights through custom dashboards.

  1. Dashboard creation: Using Splunk Dashboard Studio, we created dashboards tailored to display key metrics such as network traffic patterns, system health, and security alerts.
  2. Customization: We customized the dashboards with various visualization panels like charts and graphs, making the information more comprehensive based on the intent of the data.
  3. Alerts and thresholds: Initial alerts and thresholds were configured to trigger notifications based on data patterns and operational norms (an example search follows this list).
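For a sense of what such an alert can be built on, here is the kind of SPL search one might schedule – the index, sourcetype, and field names are placeholders, not the store's actual schema:

$ splunk search 'index=pos sourcetype=pos:transaction | stats avg(checkout_seconds) AS avg_checkout BY register_id | where avg_checkout > 30'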

Step 4: Monitoring analytics for strategic decision-making

With our dashboards in place, we began using the platform to actively monitor our operations and inform strategic decisions. This wasn't just about passive monitoring – it was about generating insights that lead to actions and continuously evolving and improving.

Transformative results that redefined our success

The implementation of Splunk across our technology ecosystem didn't disappoint. Within just a year, we've seen remarkable results. Most importantly, we've achieved our primary mission: bringing online-level analytics to our physical stores.

  • Strengthened operational efficiency: We now have full visibility into our entire technology ecosystem, from back-office systems to in-store devices. This comprehensive monitoring of our complex, interconnected technologies gives us a granular understanding of system interdependencies and potential failure points, allowing for proactive management.
  • Improved network reliability: Critical insights into network health enable us to proactively address connectivity challenges to ensure reliable store operations and optimal customer interactions. The reduction in WiFi bandwidth consumption achieved through Splunk Edge Hub has been particularly helpful for our traveling stores.
  • Real-time monitoring and visibility: With real-time monitoring and end-to-end visibility into user experiences, we can quickly identify and troubleshoot issues. When faced with a cybersecurity incident, the Splunk Real User Monitoring capability was especially invaluable, enabling us to promptly notify affected users and minimize damage.
  • Elevated customer experience: Our improved understanding of customer engagement patterns through custom dashboards allows us to optimize store layouts and create more personalized interactions. The ability to monitor and minimize point-of-sale transaction times also directly enhances the customer shopping experience.
  • Strengthened digital resilience: With the collective use of these Splunk solutions, the Cisco Store is equipped with comprehensive visibility to detect and investigate issues earlier and has better data access and control, allowing us to quickly identify and remediate issues. Additionally, the team can now correlate IT service health with business KPIs and reduce noise by grouping related alerts.
  • Increased revenue: Enhanced inventory management strategies and improved customer insights have contributed directly to revenue growth of 40% year-over-year. By minimizing stock outages and better anticipating customer needs, we've significantly reduced missed sales opportunities.

The transformation we've experienced through integrating Splunk into our tech stack has been game changing. What began as a solution to data visibility challenges has evolved into a comprehensive platform that enhances every aspect of our retail operations.

Our journey with Splunk represents the future of retail analytics—where physical and digital channels are no longer separate domains but part of a continuous, data-rich environment that enables truly informed decision-making. As we continue to iterate and expand our use of these tools, I'm excited to see how far we can push the boundaries of what's possible in modern retail.

 

More Resources:

  • Read the Cisco Store success story to discover more ways we're redefining our retail success
  • To get more information about how the Cisco Store is using smart building technology, read this blog post
  • Discover more Cisco on Cisco success stories
  • For tours and more information about the Cisco Store Tech Lab, explore our videos
  • Browse the Cisco Store

Share:

Why and how to unlock proprietary data to drive AI success


These days, just about every company is using AI – and generally, they're using it via off-the-shelf AI technologies, like Copilot, that offer the same capabilities to every customer.

This begs the question: How can a business actually stand out in the age of AI? Rather than just adopting AI as a way of keeping up with competitors, how can companies leverage AI to gain an actual edge?

The answer is simple, but easily overlooked: Proprietary data. Although much of the conversation surrounding AI transformation focuses on buzzworthy topics like which vendor has the best models or how best to manage evolving AI compliance needs, what arguably matters more than anything else in AI success is the ability to leverage your company's proprietary data to maximum effect.

Here's why, along with tips on how to make the most of proprietary data as part of a modern AI strategy.

The role of proprietary data in AI success

To understand why proprietary data is the key differentiator for AI transformation, you must first understand how cutting-edge generative and agentic AI technology works.

It's all powered by large language models, or LLMs. The thing about these generic LLMs, however, is that they're trained on generic data. They excel at working with publicly available information. But when it comes to understanding the unique needs, priorities and operations of your company, they fall short, because they weren't trained on your company's internal data.

This is where proprietary data comes in. Using techniques like fine-tuning and retrieval augmented generation (RAG), it's possible to provide a pretrained LLM with additional data – including proprietary data unique to a particular organization. Doing so equips the LLM to generate content or guide agent-based decision-making in ways that would be impossible for a model that lacks insight into the internal workings of an organization.
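As a deliberately crude sketch of the retrieval idea behind RAG – real systems use embeddings and a vector store rather than grep, and the directory and file names here are invented – the flow is roughly: fetch relevant internal documents, then prepend them to the prompt before it ever reaches the model:

$ question="How do we handle returns for custom orders?"
$ grep -ril "returns" internal-docs/ | head -3 | xargs cat > context.txt   # retrieve a few relevant internal documents
$ printf 'Answer using only this context:\n%s\n\nQuestion: %s\n' "$(cat context.txt)" "$question" > prompt.txt
$ # prompt.txt now carries proprietary context a generically trained LLM would never have seen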

Hence why proprietary data plays such a crucial role in AI success: It's what differentiates companies that use AI for basic and generic tasks (like responding to customer queries based on publicly available information) from those that leverage AI for complex, bespoke needs (such as troubleshooting a novel customer problem by drawing on internal product documentation).

Unlocking access to proprietary data for AI

Now, connecting leading AI platforms to proprietary data sources is quite easy. For instance, if your company uses Microsoft Copilot, you can configure private data sources with just a few clicks.

But unless the proprietary data you make available to an AI model is properly managed and governed, you're unlikely to enjoy much success in supporting advanced AI use cases. To be effective, proprietary data must meet the following conditions:

  • High quality: The data should be free of errors, redundancies and other quality problems, which could restrict the LLM's ability to interpret it effectively.
  • Available: The data must be consistently available so that the AI service can access it whenever needed.
  • Secure: The data must be secure in the sense that you know which sensitive information it contains and can confirm that it's acceptable to expose that information to a third-party AI service.

Failure to meet these requirements is where organizations tend to fall short when it comes to leveraging proprietary data to bolster the effectiveness of AI tools. Too often, businesses simply point their AI platforms to SharePoint sites, documentation databases or other data resources without having effective data management and governance procedures in place for the information. As a result, the custom data sources add little value.

Building AI-ready data platforms

To avoid this pitfall, businesses must invest in AI-ready data platforms. In other words, they need to deploy the tools, processes and data architectures necessary to manage all of their data effectively.

An AI-ready data platform is capable of taking all the proprietary data owned by an organization and doing the following:

  • Structured and unstructured data processing: Whatever type or form the data exists in – whether it's rows in a database, a Word document on a file system or anything else – the platform must be able to manage it.
  • Data governance: An AI-ready data platform can enforce effective data quality, security and privacy controls over data exposed to AI services.
  • Observability: The data platform should empower the organization to understand how its proprietary data is used, including by third-party AI services.
  • Change management: As data and AI models evolve, the AI-ready data platform must evolve with them so that AI services are always up to date with the latest internal business insights.

These capabilities are the only way to ensure that proprietary data will actually enhance the performance of AI tools. When you build a data platform that unlocks the value of proprietary information in this way, you open the door to several new AI-driven use cases that make your business not just another AI adopter, but an actual standout in the race for AI success.

Understanding devices on Linux systems



$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 6.7G 6.2G 52% /
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 886M 96K 886M 1% /dev/shm
efivarfs 64K 6.0K 53K 11% /sys/firmware/efi/efivars
tmpfs 355M 1.7M 353M 1% /run
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-journald.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-network-generator.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-udev-load-credentials.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-tmpfiles-setup-dev-early.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-sysctl.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-tmpfiles-setup-dev.service
tmpfs 886M 16K 886M 1% /tmp
/dev/sda3 14G 6.7G 6.2G 52% /home
/dev/sda2 974M 358M 549M 40% /boot
/dev/sda1 599M 20M 580M 4% /boot/efi
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-tmpfiles-setup.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-resolved.service
tmpfs 1.0M 0 1.0M 0% /run/credentials/systemd-vconsole-setup.service
tmpfs 178M 180K 177M 1% /run/user/1000
tmpfs 178M 76K 177M 1% /run/user/1001

You can reduce the output to a more concise listing like this (removing the tmpfs file systems):

$ df -h | grep -v tmpfs
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 6.7G 6.2G 52% /
efivarfs 64K 6.0K 53K 11% /sys/firmware/efi/efivars
/dev/sda3 14G 6.7G 6.2G 52% /home
/dev/sda2 974M 358M 549M 40% /boot
/dev/sda1 599M 20M 580M 4% /boot/efi

You can have the df command supply information on a single partition with a command like this:

$ df -h /home
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 14G 6.7G 6.2G 52% /home

Using the mount command without arguments will display all the mounted file systems. To list file systems by type, you can use a command like this that lists only ext4 file systems:

$ mount -t ext4
/dev/sda2 on /boot type ext4 (rw,relatime,seclabel)

Passing the output of the mount command to the column command will provide a listing that will likely be easier to read, since the output will be displayed with separated columns of data.

$ mount | column -t

The fdisk -l command will display details on file systems, but requires root access.

$ sudo fdisk -l
Disk /dev/sda: 14.91 GiB, 16013942784 bytes, 31277232 sectors
Disk model: KINGSTON SNS4151
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9645D103-5519-4B2A-82FB-636FED806E1B

Device Start End Sectors Size Type
/dev/sda1 2048 1230847 1228800 600M EFI System
/dev/sda2 1230848 3327999 2097152 1G Linux extended boot
/dev/sda3 3328000 31277055 27949056 13.3G Linux filesystem

Disk /dev/zram0: 1.73 GiB, 1855979520 bytes, 453120 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

The lspci command will display information on PCI (peripheral component interconnect) devices.
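A few common invocations (these are standard lspci options; output is omitted here since it varies by system):

$ lspci        # one line per PCI device
$ lspci -v     # verbose: capabilities, IRQs, kernel modules
$ lspci -k     # show the kernel driver in use for each device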

Autonomous coding agents: A Codex example


In the past few weeks, several "autonomous background coding agents" have been released.

  • Supervised coding agents: Interactive chat agents that are driven and steered by a developer. Create code locally, in the IDE. Tool examples: GitHub Copilot, Windsurf, Cursor, Cline, Roo Code, Claude Code, Aider, Goose, …
  • Autonomous background coding agents: Headless agents that you send off to work autonomously through a whole task. Code gets created in an environment spun up exclusively for that agent, and usually results in a pull request. Some of them are also runnable locally though. Tool examples: OpenAI Codex, Google Jules, Cursor background agents, Devin, …

I gave a task to OpenAI Codex and some other agents to see what I could learn. The following is a report of one particular Codex run, to help you look behind the scenes and draw your own conclusions, followed by some of my own observations.

The task

We have an internal tool called Haiven that we use as a demo frontend for our software delivery prompt library, and to run some experiments with different AI assistance experiences on software teams. The code for that tool is public.

The task I gave to Codex was regarding the following UI issue:

Actual:

Screenshot of the current filter labels, showing "Client-Research" and "DeliveryManagement"

What I wanted from the agent:

Screenshot of what we want: The labels should be more human readable, "Delivery Management", "Client Research"

Prompt:
When we create the filter labels on the dashboard, we create human readable labels based on the categories we have. "client-research" is turned into "Client-Research", "deliveryManagement" into "DeliveryManagement". Improve the sophistication of the "category-to-human-readable" logic so that the labels are showing to the user as "Client Research" and "Delivery Management"

This task is

  • Relatively small
  • A typical "cosmetics" task that's not super urgent and might get deprioritised repeatedly
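For reference, the renaming the prompt asks for boils down to splitting on hyphens and camelCase boundaries and then title-casing the words. The actual fix belongs in the project's JavaScript helper; the following is merely a GNU sed illustration of the intended transformation:

$ echo "client-research deliveryManagement" | sed -E 's/-/ /g; s/([a-z])([A-Z])/\1 \2/g; s/\b([a-z])/\U\1/g'
Client Research Delivery Management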

Agent logs

I'm sharing the detailed logs here so that you can get an impression of what's going on under the hood.

Here's what the logs view looks like in Codex:

A screenshot of Codex's "Logs" view, showing examples of step summaries and command executions

Collecting information first

The first 17 log entries were all about reading AGENTS.md and README.md:

  • "I need to check the AGENTS.md in the repository root for instructions."
  • looks for AGENTS.md, but I had actually called it AGENT.md, which led to some back and forth until it figured that out
  • reads README.md
  • Looks at .pre-commit, finds a test setup

The following tables list the rest of the log messages verbatim. After each of these messages, the agent executed a command; for some of them I added that exact command for context. I'm sure that the log messages are just summaries of the tokens that were actually happening behind the scenes, because the messages are way too short to represent the full "thinking" the model must have been going through.

Searching for the relevant code with lots of grep

Message Command (samples) Notes
I need to search for "category-to-human-readable" to find where to make changes for the frontend labels. grep -R "human readable" -n
I'll look for any mentions of "category" in the project.
Grep output is huge due to node_modules; I'll limit it to src instead.
I'll search for "to human" in the src directory to find relevant references.
I'll open and check index.js to see what's inside.
I'm checking if the category-to-human-readable transformation is handled in _navigation_items or elsewhere.
I'm going to look for "human readable" to get more info on the topic. grep -R "human-readable" -n ui/src | head
I wonder if there's a function for converting categories into something more readable.
Filtering the search to just the "src" folder to avoid node_modules noise.   second time it's running into this, didn't 'remember'
I'll look for a function like "toHumanReadable". Let's see what I find.
I'm checking the whole repo in case the logic is in the backend or a library. grep -R "Human" -n | head
I'll limit the scope to app/ and ui/src to avoid the large node_modules folder.   third time it's running into this, didn't 'remember'
I think the logic might be missing, so I'll check _navigation_items and other modules for categories. grep -R "human" app -n | head
I think it's time to look a bit deeper. Let's dive in and explore. grep -R "human" app | wc -l To "look deeper", it searches for "human" again, but pipes it into wc instead of head?

Finally, it's getting closer! The labels were previously crudely capitalized via CSS, and it finds that:

Message Command (samples) Notes
I'm checking for references to "capitalize" or category transformation in the frontend. Output:
root@0c5de6379b32:/workspace/haiven# grep -R "capitalize" -n ui/src | head
ui/src/pages/index.js:250: className="capitalize"
ui/src/types/globals.css:1915:.capitalize {
ui/src/types/globals.css:1916: text-transform: capitalize;
ui/src/types/globals.css:1931: text-transform: capitalize;

I'll check around the Tag component in index.js.
Tag display capitalizes words incorrectly due to hyphens or camelCase; I'll look at filter categories.

Now it's starting to check for more relevant code in the app folder, where the backend is. It doesn't find anything, which is correct, there's nothing relevant there

Message Command (samples) Notes
I'll check the repo for how categories are formatted, focusing on hyphens and camelCase. grep -R "categories" -n app | head
Time to try a new search! grep -R "client" -n app | head
I'll take a closer look at the prompts in the test data. grep -R "categories:" -n app/tests | head Why?
It looks like the test data has limited categories. I'll check the docs for any variations. grep -R "client-research" -n
I'll start by opening the files to take a closer look. sed -n '1,100p' ui/src/app/_navigation_items.js

Now it's going back to searching in the ui folder – it found the CSS previously, and is now looking for existing code that it can reuse to solve the task. That's great, because we indeed already have a similar function, and it finds it:

Message Command (samples) Notes
I need a function to convert tags like "client-research" to "Client Research" for filters. grep -R "to.*Readable" -n ui/src | head
I'm opening the toReadableText function inside _dynamic_data_renderer.js to explore it.
I'm moving through the page to get the details I need.

It now finds a gap in the existing function's implementation:

Message Command (samples) Notes
I see that hyphens aren't handled by the function. I need to adjust it for hyphens.
I'll modify toReadableText to handle hyphens, ensuring proper capitalization and spacing. Let's check tests!   It actually didn't check any tests
I'll check _dynamic_data_renderer.js to examine toReadableText and modify it for hyphen-case.
I'll check tests/_sidebar.test.js to review the related categories there.
I'll check other files to see if toReadableText appears anywhere.
I'll check if _sidebar or others contain a toReadableText function.

Update the test for the function that it wants to change:

Message Command (samples) Notes
I'll adjust the function to handle hyphen-case, then update tests to reflect that.
I'll check the test file after line 36 to ensure hyphen-case is covered.
I'll scroll through lines 60 to 120 to examine the rest of the file.
I'm scrolling through the remainder to capture all the relevant information.
I'll add tests for hyphen-case and update toReadableText, along with modifying index.js for categories.
I'll adjust _dynamic_data_renderer.js and index.js to use toReadableText and update tests. git status --short

You can configure some environment startup commands in Codex; I had configured that to be yarn install, which led to the following:

Message Command (samples) Notes
There are untracked .yarn/ and .yarnrc files. I'll check and clean them up.
I'll check the root of 'ui' and remove any unnecessary Yarn 4 entries.
I'll look at the commit history to see if these .yarn files were added later. git log --oneline -n 1
I'll review the repository's .gitignore to confirm if node modules are excluded. grep yarn .gitignore
I see no entry for .yarn or .yarnrc. I'll check and remove them if needed. cat ui/.gitignore
I'll check if I can remove the .yarn or .yarnrc files since they shouldn't be committed.
It looks like yarn.lock changes were made. Time to reset to HEAD.

Actual code changes:

Message Command (samples) Notes
Time to change the code. I'll begin the necessary updates now. sed -n '1,30p' ...
I'm going to add a hyphen to the necessary part of the code.
I'm going to make changes to the index.js file now.
I'll import toReadableText and start using it in the code.
I'll update index.js to import toReadableText and adjust the _dynamic_data_renderer tests.
I need to replace {tag} with {toReadableText(tag)} for better rendering.
I'll take a look at the differences to better understand the changes. git diff
I'll review the changes in index.js to see what's been modified.
I'll add a new test for hyphenated cases in _dynamic_data_renderer.test.js.
I'll consider inserting a test after line 35.

Running our code linter next, good:

Message Command (samples) Notes
I'll format the code using Prettier as instructed. cd ui && npx prettier -w ... Prettier is mentioned in the AGENTS.md. It seems to run prettier successfully, though none of the files were changed
I'll review the logs to gather more information.

Trying to run the tests – unsuccessfully:

Message Command (samples) Notes
I'll run the yarn tests to check if everything's working fine.
Tests might've been cached or need network—output's empty, so I'll check the log.
The lockfile isn't updated. I'll try running 'yarn install' since the environment allows it.
The environment is offline, so yarn install can't be run, but I see a node_modules folder.
I see node_modules are already installed. I'll check the log for why tests failed.
The error happens because Yarn 4 relies on .yarn and .yarnrc, which we removed.   ?
It looks like .yarnrc.yml isn't tracked. I'll also check .yarn for tracking.
I'm double-checking the package.json to see if "haiven-ui" is the correct name.   ?
I'll check the Node version to confirm if it's 18 or 20 by running node -v.   weird, the node version specified in package.json is 22.6
I think switching to npx vitest run might fix the error. Let's try that.
The output is long—now I'll grab the last summary lines. Error: Not implemented: window.getComputedStyle(elt, pseudoElt)
The output is too long, so I'll extract the key points for a summary.

Going on a tangent about setting up the venv, which is completely unnecessary because it only plans changes for frontend code:

Message Command (samples) Notes
The tests failed due to case differences, but now I'll activate the virtual environment.   AGENTS.md contains instructions to activate the venv for the Python backend, but we're in the frontend here, so…
Hmm, the .venv isn't here. I should check if it needs creation or already exists.
It looks like the .venv directory is missing. Let's review AGENT's instructions carefully.
The environment seems offline, but there might be a script to set up dependencies locally.
I can't run "poetry install" offline, but I'll check the .cursor folder for dependencies.
I can't run backend tests with the current setup, but I'll try pytest anyway.

Gave up on the tests, now wrapping up the changes:

Message Command (samples) Notes
Tests failed due to missing dependencies. Now, I'll check status and proceed with committing.
I think the .yarnrc.yml was left over from a previous yarn install. Removing it.
I'll grab the relevant file lines for the citations now.
I'll get the surrounding line range for the changes in index.js. nl -ba ui/src/pages/index.js | sed -n '12,28p'
I'll grab the relevant lines around those changes now.
It's time to finalize the PR message and verify the commit.
UI tests failed due to environment issues. I'll reference the last failure lines. Done!

Screenshot of the final results view in OpenAI Codex

Here is the full pull request created by Codex.

Observations

I found it interesting to see that Codex, and most of the other coding assistants I'm using, are resorting more and more to simple text search to find relevant code. In the log above you see how Codex goes through a bunch of search terms with grep to find relevant pieces of code ("human", "human readable", "humanReadable", …). The reason why I find this interesting is that there have been a lot of seemingly more sophisticated code search mechanisms implemented, like semantic search over codebase indices with vectors / embeddings (Cursor, GH Copilot, Windsurf), or using the abstract syntax tree as a starting point (Aider, Cline). The latter is still quite simple, but doing text search with grep is the simplest possible.

It seems like the tool creators have found that this simple search is still the most effective after all – ? Or are they making some kind of trade-off here, between simplicity and effectiveness?

The remote dev environment is key for these agents to work "in the background"

Here is a screenshot of Codex's environment configuration screen (as of end of May 2024). As of now, you can configure a container image, environment variables, secrets, and a startup script. They point out that after the execution of that startup script, the environment will not have access to the internet anymore, which sandboxes the environment and mitigates some of the security risks.

Screenshot of Codex's environment configuration screen (as of end of May 2024); Shows configurable container image, environment variables, secrets, and a setup script.

For these "autonomous background agents", the maturity of the remote dev environment that is set up for the agent is crucial, and it's a hard challenge. In this case, e.g., Codex didn't manage to run the tests.

And it turned out that when the pull request was created, there were indeed two tests failing because of a regression, which is a shame, because if it had known, it would easily have been able to fix the tests; it was a trivial fix:

Screenshot of the Codex error messages indicating that it couldn't run the tests.

This particular project, Haiven, actually has a scripted developer safety net, in the form of a fairly elaborate .pre-commit configuration. It would be ideal if the agent could execute the full pre-commit before even creating a pull request. However, to run all the steps, it would need to run

  • Node and yarn (to run UI tests and the frontend linter)
  • Python and poetry (to run backend tests)
  • Semgrep (for security-related static code analysis)
  • Ruff (Python linter)
  • Gitleaks (secret scanner)

…and all of those need to be available in the right versions as well, of course (roughly the invocations sketched below).
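As a rough sketch, that full pre-commit pass boils down to commands like these (assumed invocations; the exact hooks live in the project's .pre-commit-config.yaml):

$ (cd ui && npx vitest run)            # frontend tests
$ (cd ui && npx prettier --check .)    # frontend linter
$ poetry run pytest                    # backend tests
$ semgrep scan                         # security-related static analysis
$ ruff check .                         # Python linter
$ gitleaks detect                      # secret scanner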

Figuring out a smooth experience to spin up just the right environment for an agent is key for these agent products, if you want to really run them "in the background" instead of on a developer machine. It is not a new problem, and to an extent a solved problem; after all, we do this in CI pipelines all the time. But it's also not trivial, and at the moment my impression is that environment maturity is still an issue in most of these products, and the user experience to configure and test the environment setups is as frustrating, if not more so, as it can be for CI pipelines.

Solution quality

I ran the same prompt 3 times in OpenAI Codex, 1 time in Google's Jules, and 2 times locally in Claude Code (which isn't fully autonomous though, I needed to manually say 'yes' to everything). Even though this was a relatively simple task and solution, it turns out there were quality differences between the results.

Good news first: the agents came up with a working solution every time (leaving breaking regression tests aside, and to be honest I didn't actually run every single one of the solutions to confirm). I think this task is a good example of the types and sizes of tasks that GenAI agents are already well positioned to work on by themselves. But there were two aspects that differed in terms of quality of the solution:

  • Discovery of existing code that could be reused: In the log here you'll notice that Codex found an existing component, the "dynamic data renderer", that already had functionality for turning technical keys into human readable versions. In the 6 runs I did, only 2 times did the respective agent find this piece of code. In the other 4, the agents created a new file with a new function, which led to duplicated code.
  • Discovery of an additional place that should use this logic: The team is currently working on a new feature that also displays category names to the user, in a dropdown. In one of the 6 runs, the agent actually discovered that and suggested to also change that place to use the new functionality.

Found the reusable code    Went the extra mile and found the additional place where it should be used
Yes                        Yes
Yes                        No
No                         Yes
No                         No
No                         No
No                         No

I put these results into a table to illustrate that in every task given to an agent, we have multiple dimensions of quality, of things that we want to "go right". Each agent run can "go wrong" in one or several of these dimensions, and the more dimensions there are, the less likely it is that an agent gets everything done the way we want it.

Sunk cost fallacy

I've been wondering – let's say a team uses background agents for this kind of task, the types of tasks that are kind of small, and neither important nor urgent. Haiven is an internal-facing tool, and has only two developers assigned at the moment, so this kind of cosmetic fix is actually considered low priority, as it takes developer capacity away from more important things. When an agent only kind of succeeds, but not fully – in which situations would a team discard the pull request, and in which situations would they invest the time to get it the last 20% there, even though spending capacity on this had been deprioritised? It makes me wonder about the tail end of unprioritised effort we might end up with.

Cisco Networking App Market Partners at Cisco Live 2025


Join us at Cisco Live 2025 from June 8-12 in San Diego—or virtually from anywhere—and experience where innovation meets collaboration.

This year, we're thrilled to host more than 25 ecosystem technology partners from the Cisco Networking App Market, all there to showcase their groundbreaking solutions. These partners are not just exhibitors; they're integral to our mission of delivering seamless and comprehensive network automation experiences.

Why Cisco Networking App Market Matters

The Cisco Networking App Market is an exclusive catalog of turn-key technology partner software applications developed on top of Cisco's networking platform. These applications enhance Cisco technology using software and AI, helping customers save time, reduce complexity, grow revenue, and solve industry-specific business problems.

The Market offers a rich catalog of over 350 applications designed to increase network visibility, automate incident resolution, and solve a myriad of customer challenges across all industries – from retail to healthcare, manufacturing to finance, and more.

Learn more about the Cisco Networking App Market at Cisco Live.

Join our Product Strategy Overview session, "Simplifying Networks with Cisco Ecosystem Solutions," on Monday, June 9 @ 3:30pm (catalog session PSOENT-1021). There, we will share more about how Cisco's ecosystem partnerships simplify management, enhance performance, and enable scalable, unified network infrastructures, helping businesses adapt quickly and efficiently.

Engage on-site with our partners

  • Where to find them: Our partners will be both featured and present at various locations throughout the event, including within the sponsorship areas in the World of Solutions, the Cisco Platform Advantage area in the Cisco Showcase, and the DevNet Zone. Be sure to visit these areas to see live demonstrations and engage with our partners.
  • Speaking sessions: You can also attend sessions featuring our partners to learn how they're leveraging Cisco's platform to create innovative solutions. Check the Cisco Live Session Catalog schedule for specific times and locations.
  • Full list of partners: We will have a comprehensive list of all partners on-site in the World of Solutions Cisco Platform Advantage area. The one-page visual below will also serve as a quick guide to help you navigate and connect with the partners that align with your business needs.

 

Cisco Live 2025 is not just an event; it's an opportunity to ignite innovation and foster collaboration. We encourage all attendees to engage with our Cisco Networking App Market partners to explore joint solutions that can drive business success.

Here is a streamlined schedule of Cisco Networking App Market partners hosting sessions at Cisco Live 2025:

Monday, June 9 

  • 1:00 PM – 1:15 PM PDT: VILEWN-2000 - 5 Business Strategies for Optimizing Wi-Fi Performance and Reliability
    Speaker: Anil Gupta, CTO, Wyebot
  • 2:30 PM – 2:45 PM PDT: VILNWT-2000 - End User Experience + NMS = Auvik FTW!
    Speaker: Steve Petryschuk, Director & Tech Evangelist, Auvik
  • 3:30 PM – 3:45 PM PDT: VILSEC-2005 - Asset Intelligence in Action: AI-Powered Attack Surface Management
    Speaker: Joe Alapat, Founder and Chief Strategy Officer, Liongard
  • 4:00 PM – 4:15 PM PDT: VILSEC-1041 - ORDR AI for Total Cisco Visibility, Security, and Policy Automation
    Speaker: Craig Hyps, Fellow, Cisco Live Distinguished Speaker, ORDR

Tuesday, June 10 

  • 1:30 PM – 1:45 PM PDT: VILOBS-2003 - AI-Powered Network Operations: Proactive Detection, Analysis & Automation
    Speaker: Bibi Rosenbach, CEO, NetOp CLD

Wednesday, June 11 

  • 2:00 PM – 2:15 PM PDT: VILOBS-1000 - Solving the Network Visibility Gap with AI: Optimize Digital Experience in Any Environment
    Speaker: Eric Camulli, Customer Success Officer, 7SIGNAL, Inc.
  • 2:35 PM – 3:05 PM PDT: CNCEWN-2000 - Designing the Ultimate Wi-Fi 7 Network with Cisco APs & Ekahau
    Speaker: Matthew Sterling, Sr. Director, Product Marketing & ECSE, Ekahau / Speedtest
  • 3:00 PM – 3:15 PM PDT: VILSEC-1043 - ORDR AI: The Fast Track to Cisco Segmentation and Zero Trust
    Speaker: Craig Hyps, Fellow, Cisco Live Distinguished Speaker, ORDR
  • 3:00 PM – 3:15 PM PDT: VILCOL-1062 - Empowering Cisco Technology with Singlewire Solutions
    Speaker: Ken Rosko, Channel Manager, Singlewire Software

Be sure to schedule your sessions before arriving onsite and get your calendar ready!

Together, we can expand the possibilities of network automation and deliver exceptional outcomes for all our customers and more.

Learn more and request demos at the Cisco Networking App Market here:


We'd love to hear what you think. Ask a Question, Comment Below, and Stay Connected with #CiscoPartners on social!

Cisco Partners Facebook  |  @CiscoPartners X/Twitter  |  Cisco Partners LinkedIn

Share: