Thursday, September 18, 2025

Autonomous Firewall Evolution: AIOps to AgenticOps


IT operations are entering a transformative period in which long-standing approaches like dashboards, alarm floods, and manual fixes are no longer sufficient. In today's hyperconnected landscape, with billions of signals and sophisticated threats at every turn, traditional approaches inevitably lead to blind spots, elevated risk, and constant team fatigue.

Complexity has outpaced human intervention. Data silos, misconfigurations, and growing shortages of skilled personnel demand a shift from reactive troubleshooting to proactive, intelligent action. What the modern enterprise needs isn't just more data; it's smarter, actionable intelligence delivered through systems that can autonomously act, adapt, and secure at scale.

That's why we built AIOps for Firewalls, the first step in moving from reactive firefighting to proactive remediation.

Fig. 1: Security Cloud Control, AIOps Insights dashboard

Cisco AIOps features advanced, purpose-built engines that analyze configurations, health status, diagnostics, and traffic patterns to proactively detect anomalies and configuration drift in real time. By combining statistical and machine learning models with dynamic baselines and fast sliding-window checks, the system delivers precise, context-aware alerts, escalating only when repeated issues breach carefully tuned thresholds. This anomaly detection framework surfaces correlated events with root cause analysis, triggering automated remediations and paving the way for future self-healing actions.
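The escalation logic described above, a rolling baseline plus an alert that fires only after repeated breaches, can be sketched in a few lines. This is an illustrative toy, not Cisco's implementation; the window size, z-score threshold, and patience value are invented:

```python
from collections import deque
from statistics import mean, stdev

class DriftDetector:
    """Toy sliding-window anomaly detector: flag a metric sample as a
    breach when it deviates from a rolling baseline, and escalate to an
    alert only after `patience` consecutive breaches."""

    def __init__(self, window=30, z_threshold=3.0, patience=3):
        self.window = deque(maxlen=window)   # rolling baseline of recent samples
        self.z_threshold = z_threshold       # deviations beyond this many std-devs breach
        self.patience = patience             # consecutive breaches before alerting
        self.breaches = 0

    def observe(self, value):
        """Return "ok", "breach", or "alert" for one metric sample."""
        status = "ok"
        if len(self.window) >= 10:           # need enough history for a baseline
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) > self.z_threshold * sigma:
                self.breaches += 1
                status = "alert" if self.breaches >= self.patience else "breach"
            else:
                self.breaches = 0            # an in-range sample resets escalation
        if status == "ok":
            self.window.append(value)        # anomalous samples stay out of the baseline
        return status
```

Because the baseline is a rolling window, it adapts to gradual traffic shifts, while the patience counter suppresses one-off spikes; only sustained deviations escalate to an alert.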

What really sets this apart is its domain-specific understanding of firewall telemetry, hybrid modeling, adaptive sensitivity controls, and tokenized fault-pattern matching, all driven by behavioral learning from event data.

Building on these intelligent capabilities, Cisco AIOps extends operational excellence with targeted features that address today's most critical security and network management needs.

  • Policy Analyzer & Optimizer: Automated detection and remediation of policy anomalies.
  • Best Practice Recommendations: Tailored guidance that keeps configurations and security at their strongest.
  • Feature Adoption: Ensures teams unlock the benefits of new features.
  • Upgrade Planners: Helps with planning for software updates and renewals.
  • Visibility into User Risks: Integrates with Cisco Identity Intelligence to provide visibility into user risks and mitigation suggestions.
  • Traffic & Capacity Insights: Predicts network anomalies, supporting proactive fixes before users are affected.

Visit the docs page to learn more.

Fig. 2: Security Cloud Control, AIOps Software Upgrade Planner

Cisco AIOps leverages decades of expert insight from TAC cases, config reviews, and bug data embedded into an ML-compatible system. This powers automated, real-time recommendations that identify misconfigurations, strengthen posture, predict bugs and PSIRTs, and guide upgrades.

  • These recommendations aren't generic; they are continually developed and refined from real-world support data and field expertise.
  • Diagnostics are securely collected from devices, structured, and evaluated against field-proven best practices; prioritized findings are surfaced inline, keeping teams ahead of threats and configuration drift.

This approach turns years of operational experience into dynamic guidance, helping teams optimize firewalls and strengthen security without manual effort or the need to constantly consult external experts.

Most innovation rolls out in the cloud, but many customers remain on premises. The question is: how do we deliver the power of AI-driven insights without forcing a complete migration to the cloud?

That's exactly why we built Cloud-Assist.

Cloud-Assist provides AI-driven insights for firewalls, prioritizing data sovereignty and deployment flexibility. It uniquely operates across on-prem, hybrid, and sovereign cloud models without forcing customers to fully migrate to the cloud. Customers control data sharing, selecting their region (EU, APJC, AMER) to maintain compliance. Its "outcome-bound telemetry" means only the data necessary for a specific insight is shared. This approach provides secure, compliant, and consistent AI-powered security across all customer environments, respecting data control while delivering high-value analytics.
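A minimal sketch of what "outcome-bound telemetry" implies, assuming a per-insight allowlist of fields; the insight names and field names below are invented for illustration and are not Cisco's schema:

```python
# Hypothetical illustration of outcome-bound telemetry: each insight
# declares the only telemetry fields it needs, and everything else is
# stripped before any data leaves the customer's environment.
REQUIRED_FIELDS = {
    "upgrade_planning": {"device_model", "software_version", "uptime_days"},
    "capacity_insights": {"device_model", "cpu_util", "conn_per_sec"},
}

def bind_telemetry(insight, record):
    """Return only the fields the named insight is entitled to."""
    allowed = REQUIRED_FIELDS[insight]
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "device_model": "FW-2110",
    "software_version": "7.4.1",
    "uptime_days": 312,
    "cpu_util": 0.41,
    "admin_email": "ops@example.com",   # never needed for these insights, never shared
}
shared = bind_telemetry("upgrade_planning", record)
```

The key design point is that the filter runs on the customer side: the insight's data requirements are declared up front, so sensitive fields never leave the environment at all.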

Visit the Cisco Security Cloud Integration docs page to learn more.

The journey doesn't end here; these breakthroughs in AIOps lay the groundwork for what's next: AgenticOps. The next chapter is powered by AI agents that go beyond recommendations, taking autonomous actions. Picture a digital ops team that never sleeps, constantly learning and acting to shift operations from reactive firefighting to proactive resilience.

The age of static dashboards and endless alerts is closing; the future is agentic, autonomous, and continually adaptive.

Cisco AIOps for Firewalls is here, redefining what's possible in network and security operations. Enterprises can now gain actionable intelligence, agentic automation, and secure excellence across any deployment. Get hands-on, witness the transformation, and join the wave of AI-powered, agent-driven security.


We'd love to hear what you think! Ask a question and stay connected with Cisco Security on social media.

Cisco Security Social Media

LinkedIn
Facebook
Instagram
X


MCP in Practice – O'Reilly


The following was originally published in Asimov's Addendum, September 11, 2025.

Learn more about the AI Disclosures Project here.

1. The Rise and Rise of MCP

Anthropic's Model Context Protocol (MCP) was launched in November 2024 as a way to make tools and platforms model-agnostic. MCP works by defining servers and clients. MCP servers are local or remote endpoints where tools and resources are defined. For example, GitHub released an MCP server that allows LLMs to both read from and write to GitHub. MCP clients are the connection from an AI application to MCP servers; they allow an LLM to interact with context and tools from different servers. An example of an MCP client is Claude Desktop, which allows the Claude models to interact with thousands of MCP servers.
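Concretely, traffic between an MCP client and server is JSON-RPC 2.0. A tool invocation looks roughly like the following; the `tools/call` framing follows the MCP specification, while the `search_issues` tool and its arguments are invented for illustration:

```python
import json

# A minimal sketch of the JSON-RPC 2.0 framing MCP uses between client
# and server. The "tools/call" method and params shape follow the MCP
# spec; the tool itself ("search_issues") is a made-up example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",
        "arguments": {"repo": "octocat/hello-world", "query": "bug"},
    },
}

wire = json.dumps(request)   # what the client actually sends over the transport
decoded = json.loads(wire)   # what the server parses before dispatching the tool
```

Because every server speaks this same envelope, a client like Claude Desktop can dispatch to thousands of servers without per-server integration code; only the tool names and argument schemas differ.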

In a relatively short time, MCP has become the backbone of hundreds of AI pipelines and applications. Major players like Anthropic and OpenAI have built it into their products. Developer tools such as Cursor (a coding-focused text editor, or IDE) and productivity apps like Raycast also use MCP. Additionally, thousands of developers use it to integrate AI models and access external tools and data without having to build an entire ecosystem from scratch.

In earlier work published with AI Frontiers, we argued that MCP can act as a great unbundler of "context," the data that helps AI applications provide more relevant answers to consumers. In doing so, it can help decentralize AI markets. We argued that, for MCP to truly achieve its goals, it requires support from:

  1. Open APIs: So that MCP applications can access third-party tools for agentic use (write actions) and context (read)
  2. Fluid memory: Interoperable LLM memory standards, accessed via MCP-like open protocols, so that the memory context accumulated at OpenAI and other major developers doesn't get stuck there, stopping downstream innovation

We expand upon these two points in a recent policy note, for those looking to dig deeper.

More generally, we argue that protocols like MCP are actually foundational "rules of the road" for AI markets, whereby open disclosure and communication standards are built into the network itself, rather than imposed after the fact by regulators. Protocols are fundamentally market-shaping devices, architecting markets through the permissions, rules, and interoperability of the network itself. They can have a huge effect on how the commercial markets built on top of them function, too.

1.1 But how is the MCP ecosystem evolving?

Yet we don't have a clear idea of the shape of the MCP ecosystem today. What are the most common use cases of MCP? What kind of access is being given by MCP servers and used by MCP clients? Is the data accessed via MCP "read-only" for context, or does it allow agents to "write" and interact with it, for example by editing files or sending emails?

To begin answering these questions, we look at the tools and context that AI agents use via MCP servers. This gives us a clue about what's being built and what's getting attention. In this article, we don't analyze MCP clients, the applications that use MCP servers. We instead limit our analysis to what MCP servers are making available for building.

We assembled a large dataset of MCP servers (n = 2,874), scraped from Pulse.1 We then enriched it with GitHub star-count data on each server. On GitHub, stars are similar to Facebook "likes"; developers use them to show appreciation, bookmark projects, or indicate usage.

In practice, while there were plenty of MCP servers, we found that the top few garnered most of the attention and, likely by extension, most of the use. Just the top 10 servers received nearly half of all GitHub stars given to MCP servers.

Some of our takeaways are:

  1. MCP usage appears to be fairly concentrated. This means that, if left unchecked, a small number of servers and (by extension) APIs could have outsize control over the MCP ecosystem being created.
  2. MCP use (tools and data being accessed) is dominated by just three categories: Database & Search (RAG), Computer & Web Automation, and Software Engineering. Together, they received nearly three-quarters (72.6%) of all stars on GitHub (which we use as a proxy for usage).
  3. Most MCP servers support both read (access context) and write (change context) operations, showing that developers want their agents to be able to act on context, not just consume it.

2. Findings

To start with, we analyzed the MCP ecosystem for concentration risk.

2.1 MCP server use is concentrated

We found that MCP usage is concentrated among a few key MCP servers, judged by the number of GitHub stars each repo received.

Despite there being thousands of MCP servers, the top 10 servers make up nearly half (45.7%) of all GitHub stars given to MCP servers (pie chart below), and the top 10% of servers make up 88.3% of all GitHub stars (not shown).

The top 10 servers received 45.7% of all GitHub stars in our dataset of 2,874 servers.
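The concentration figures come down to a share-of-stars computation over the ranked list, which can be sketched as follows; the star counts below are toy numbers illustrating a heavy-tailed distribution, not our actual dataset:

```python
def top_share(stars, k):
    """Fraction of all stars held by the k most-starred servers."""
    ranked = sorted(stars, reverse=True)
    return sum(ranked[:k]) / sum(ranked)

# Toy numbers, not the real dataset: a heavy-tailed distribution where
# a couple of servers dominate, as in the Pulse scrape.
stars = [61000, 18425, 9000, 4000] + [50] * 200
print(f"top-2 share: {top_share(stars, 2):.1%}")
```

Running the same computation with k = 10 over the real scrape yields the 45.7% figure reported above.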

This means that the majority of real-world MCP users are likely relying on the same few services, made available via a handful of APIs. This concentration likely stems from network effects and practical utility: developers gravitate toward servers that solve common problems like web browsing, database access, and integration with widely used platforms like GitHub, Figma, and Blender. This concentration pattern seems typical of developer-tool ecosystems: a few well-executed, broadly applicable solutions tend to dominate, while more specialized tools occupy smaller niches.

2.2 The top 10 MCP servers really matter

Next, the top 10 MCP servers are shown in the table below, along with their star counts and what they do.

Among the top 10 MCP servers, GitHub, Repomix, Context7, and Framelink are built to assist with software development: Context7 and Repomix by gathering context, GitHub by allowing agents to interact with projects, and Framelink by passing design specs from Figma directly to the model. The Blender server allows agents to create 3D models of anything, using the popular open source Blender application. Finally, Activepieces and MindsDB connect the agent to multiple APIs with one standardized interface: in MindsDB's case, primarily to read data from databases, and in Activepieces's, to automate services.

The top 10 MCP servers with short descriptions, design courtesy of Claude.

The dominance of agentic browsing, in the form of Browser Use (61,000 stars) and Playwright MCP (18,425 stars), stands out. This reflects the fundamental need for AI systems to interact with web content. These tools allow AI to navigate websites, click buttons, fill out forms, and extract data just as a human would. Agentic browsing has surged even though it's far less token-efficient than calling an API. Browsing agents often have to wade through several pages of boilerplate to extract slivers of data that a single API request could return. Because many services lack usable APIs or gate them tightly, browser-based agents are often the simplest, and sometimes the only, way to integrate, underscoring the limits of today's APIs.

Some of the top servers are unofficial. Both the Framelink and Blender MCP servers interact with just a single application, but both are "unofficial" products. This means they aren't formally endorsed by the developers of the application they're integrating with, those who own the underlying service or API (e.g., GitHub, Slack, Google). Instead, they're built by independent developers who create a bridge between an AI client and a service, often by reverse-engineering APIs, wrapping unofficial SDKs, or using browser automation to mimic user interactions.

It's healthy that third-party developers can build their own MCP servers, since this openness encourages innovation. But it also introduces an intermediary layer between the user and the API, which brings risks around trust, verification, and even potential abuse. With open source local servers, the code is transparent and can be vetted. By contrast, remote third-party servers are harder to audit, since users must trust code they can't easily inspect.

At a deeper level, the repos that currently dominate MCP servers highlight three encouraging facts about the MCP ecosystem:

  1. First, several prominent MCP servers support multiple third-party services for their functionality. MindsDB and Activepieces operate as gateways to multiple (often competing) service providers through a single server. MindsDB allows developers to query different databases like PostgreSQL, MongoDB, and MySQL through a single interface, while Taskmaster allows the agent to delegate tasks to a range of AI models from OpenAI, Anthropic, and Google, all without changing servers.
  2. Second, agentic browsing MCP servers are being used to get around potentially restrictive APIs. As noted above, Browser Use and Playwright access web services through a web browser, helping to bypass API restrictions, though they instead run up against anti-bot protections. This circumvents the limits that APIs can impose on what developers are able to build.
  3. Third, some MCP servers do their processing on the developer's computer (locally), making them less dependent on a vendor maintaining API access. Some MCP servers examined here can run entirely on a local computer without sending data to the cloud, meaning that no gatekeeper has the power to cut you off. Of the ten MCP servers examined above, only Framelink, Context7, and GitHub rely on a single cloud-only API dependency that can't be run locally end-to-end on your machine. Blender and Repomix are completely open source and don't require any internet access to work, while MindsDB, Browser Use, and Activepieces have local open source implementations.

2.3 The three categories that dominate MCP use

Next, we grouped MCP servers into different categories based on their functionality.

When we analyzed what types of servers are most popular, we found that three dominated: Computer & Web Automation (24.8%), Software Engineering (24.7%), and Database & Search (23.1%).

Software Engineering, Computer & Web Automation, and Database & Search received 72.6% of all stars given to MCP servers.

The popularity of Software Engineering (24.7%) MCP servers aligns with Anthropic's Economic Index, which found that an outsize portion of AI interactions were related to software development.

The popularity of both Computer & Web Automation (24.8%) and Database & Search (23.1%) also makes sense. Before the advent of MCP, web scraping and database search were highly integrated applications across platforms like ChatGPT, Perplexity, and Gemini. With MCP, however, users can now access that same search functionality and connect their agents to any database with minimal effort. In other words, MCP's unbundling effect is highly visible here.

2.4 Agents interact with their environments

Finally, we analyzed the capabilities of these servers: Are they allowing AI applications just to access data and tools (read), or also to perform agentic operations with them (write)?

Across all but two of the MCP server categories we examined, the most popular MCP servers supported both reading (access context) and writing (agentic) operations, shown in turquoise. The prevalence of servers with combined read and write access suggests that agents are not being built just to answer questions based on data but also to take action and interact with services on a user's behalf.

MCP servers by category. The dotted red line marks 10,000 stars (likes). The most popular servers support both read and write operations by agents; in contrast, almost no servers support write-only operations.

The two exceptions are Database & Search (RAG) and Finance MCP servers, in which read-only access is a commonly granted permission. This is likely because data integrity is critical to ensuring reliability.
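One way to estimate a server's read/write profile is from its tool metadata: the MCP specification defines advisory tool annotations such as `readOnlyHint`. A heuristic aggregation might look like the sketch below; the example server's tools are invented, and because the hints are advisory, this is an estimate rather than a guarantee:

```python
def access_profile(tools):
    """Classify a server as "read", "write", or "read/write" from its
    tools' readOnlyHint annotations. Hints are advisory in the MCP
    spec, so this is a heuristic, not a security boundary."""
    read_only = [t.get("annotations", {}).get("readOnlyHint", False) for t in tools]
    if all(read_only):
        return "read"
    if not any(read_only):
        return "write"
    return "read/write"

# Invented example: a hypothetical database server exposing one
# read-only query tool and one mutating insert tool.
tools = [
    {"name": "run_query", "annotations": {"readOnlyHint": True}},
    {"name": "insert_row", "annotations": {"readOnlyHint": False}},
]
```

In our own analysis the read/write split was categorized from names and descriptions, but annotation-based profiling like this could make such surveys more mechanical as spec adoption grows.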

3. The Importance of Multiple Access Points

Several implications of our analysis can be drawn out at this preliminary stage.

First, concentrated MCP server use compounds the risks of API access being restricted. As we discussed in "Protocols and Power," MCP remains constrained by "what a particular service (such as GitHub or Slack) happens to expose through its API." A few powerful digital service providers have the power to shut down access to their servers.

One important hedge against API gatekeeping is that many of the top servers try not to rely on a single provider. In addition, the following two safeguards are relevant:

  • They offer local processing of data on a user's machine whenever possible, instead of sending the data to a third-party server for processing. Local processing ensures that functionality can't be restricted.
  • If running a service locally isn't possible (e.g., email or web search), the server should still support multiple avenues of getting at the needed context through competing APIs. For example, MindsDB functions as a gateway to multiple data sources; instead of relying on just one database to read and write data, it goes to great lengths to support multiple databases in a single unified interface, essentially making the backend tools interchangeable.
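The interchangeable-backend pattern that MindsDB exemplifies can be sketched as one interface with swappable implementations. The classes below are illustrative stand-ins, not MindsDB's actual code:

```python
from abc import ABC, abstractmethod

class Backend(ABC):
    """One interface the agent sees, regardless of which store answers."""
    @abstractmethod
    def query(self, sql: str) -> list: ...

class InMemoryBackend(Backend):
    """Stand-in for a real database driver (PostgreSQL, MongoDB, ...)."""
    def __init__(self, rows):
        self.rows = rows
    def query(self, sql):
        return self.rows    # a real backend would execute the statement

class Gateway:
    """Routes one unified interface to whichever backend is registered,
    making the stores interchangeable from the agent's point of view."""
    def __init__(self):
        self.backends = {}
    def register(self, name, backend):
        self.backends[name] = backend
    def query(self, name, sql):
        return self.backends[name].query(sql)

gw = Gateway()
gw.register("postgres", InMemoryBackend([("alice",)]))
gw.register("mongo", InMemoryBackend([("bob",)]))
```

Because every backend satisfies the same interface, swapping providers is a one-line registration change rather than a rewrite, which is exactly what blunts the power of any single gatekeeper.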

Second, our analysis points to the fact that current restrictive API access policies may not be sustainable. Web scraping and bots, accessed via MCP servers, are probably being used (at least in part) to circumvent overly restrictive API access, complicating the increasingly common practice of banning bots. Even OpenAI is coloring outside the API lines, using a third-party service to access Google Search's results through web scraping, thereby circumventing its restrictive API.

Expanding structured API access in a meaningful way is vital. This ensures that legitimate AI automation runs through stable, documented endpoints. Otherwise, developers resort to brittle browser automation where privacy and authorization haven't been properly addressed. Regulatory guidance could push the market in this direction, as with open banking in the US.

Finally, encouraging greater transparency and disclosure could help identify where the bottlenecks in the MCP ecosystem are.

  • Developers operating popular MCP servers (above a certain usage threshold) or providing APIs used by top servers should report usage statistics, access denials, and rate-limiting policies. This data would help regulators identify emerging bottlenecks before they become entrenched. GitHub might facilitate this by encouraging these disclosures, for example.
  • Additionally, MCP servers above certain usage thresholds should clearly list their dependencies on external APIs and what fallback options exist if the primary APIs become unavailable. This is not only helpful in determining the market structure but also essential information for the security and robustness of downstream applications.

The goal is not to eliminate all concentration in the network but to ensure that the MCP ecosystem remains contestable, with multiple viable paths for innovation and user choice. By addressing both technical architecture and market dynamics, these suggested tweaks could help MCP achieve its potential as a democratizing force in AI development, rather than merely shifting bottlenecks from one layer to another.


Footnotes

  1. For this analysis, we categorized each repo into one of 15 categories using GPT-5 mini. We then human-reviewed and edited the top 50 servers, which make up around 70% of the total star count in our dataset.

Appendix

Dataset

The full dataset, including descriptions of the categories, can be found here (built by Sruly Rosenblat):

https://huggingface.co/datasets/sruly/MCP-In-Practice

Limitations

There are several limitations to our preliminary research:

  • GitHub stars aren't a measure of download counts or even necessarily of a repo's popularity.
  • Only the name and description were used when categorizing repos with the LLM.
  • Categorization was subject to both human and AI errors, and many servers would likely fit into multiple categories.
  • We only used the Pulse list for our dataset; other lists had different servers (e.g., Browser Use isn't on mcpmarket.com).
  • We excluded some repos from our analysis, such as those that had multiple servers and those we weren't able to fetch the star count for. We may miss some popular servers by doing this.

MCP Server Use Over Time

The growth of the top nine repos' star count over time, from MCP's launch on November 25, 2024, until September 2025.

Note: We were only able to track Browser Use's repo until 40,000 stars; hence the flat line in its graph. In reality, roughly 21,000 stars were added over the following few months. (The other graphs in this post are properly adjusted.)

UTA Announces MAVRC, New Advanced Drone Research Facility


The University of Texas at Arlington recently unveiled the Maverick Autonomous Vehicle Research Center (MAVRC), an outdoor netted drone facility to support testing and development of autonomous technology. Located at UTARI, UT Arlington's research institute in Fort Worth, the 120-by-150-foot structure is 40 feet high and is equipped with 26 motion-capture cameras, wireless internet, and nighttime lighting. A connected 720-square-foot control and observation center allows researchers, students, and industry partners to test advanced autonomous aerial vehicles in a controlled environment compliant with FAA standards.

"Located in Fort Worth, the Aviation and Defense Capital of Texas, this state-of-the-art center positions our region at the forefront of autonomous systems research," said Wayne Atchley, vice president for regional campuses. "MAVRC isn't just a building; it's a hub where faculty, students, and industry come together to solve real-world problems. It will be a catalyst for regional growth, preparing students for emerging careers, building strong industry partnerships, and ensuring Fort Worth remains a leader in advanced air mobility."

According to UTA's press release, MAVRC will support a wide range of projects, from improving drones' ability to navigate obstacles and carry payloads, to testing surveillance and security systems, to monitoring crops and the environment, to inspecting bridges, power lines, and supply chains.

"This is a partnership between the College of Engineering and UTARI," said Peter Crouch, dean of the College of Engineering. "Given the challenging restrictions on flying unmanned aerial vehicles, this space will play a pivotal role in helping us innovate and advance the technology in a safe environment."

Eileen Clements, interim executive director of UTARI, added that MAVRC will also serve industry and government partners.

"This is a good addition to UTARI, where we specialize in applying cutting-edge technologies to real-world engineering problems," she said. "We're excited for all the new opportunities this facility will enable for UTA faculty, researchers, and students, as well as our business partners in the North Texas community."

More information about the MAVRC facility can be found on UTA's website.

About The University of Texas at Arlington (UTA)

Celebrating its 130th anniversary in 2025, The University of Texas at Arlington is a growing public research university in the heart of the thriving Dallas-Fort Worth metroplex. With a student body of over 42,700, UTA is the second-largest institution in the University of Texas System, offering more than 180 undergraduate and graduate degree programs. Recognized as a Carnegie R-1 university, UTA stands among the nation's top 5% of institutions for research activity. UTA and its 280,000 alumni generate an annual economic impact of $28.8 billion for the state. The University has received the Innovation and Economic Prosperity designation from the Association of Public and Land-grant Universities and has earned recognition for its focus on student access and success, considered key drivers of economic growth and social progress for North Texas and beyond.


Swisslog Healthcare, Diligent Robotics to bring last-mile delivery to hospitals



Moxi can securely handle high-risk or tightly controlled medications, including pediatric chemo and narcotics. | Source: Swisslog

Swisslog Healthcare yesterday announced a strategic alliance with Diligent Robotics, the developer of the Moxi robot for hospitals. The partners said they plan to bring autonomous last-mile delivery robots to hospitals across the U.S.

The collaboration complements Swisslog Healthcare's transport and pharmacy automation suite with Diligent Robotics' autonomous mobile manipulation robots (AMMRs). The companies said they will streamline workflows, automate tasks, and empower hospital pharmacies to elevate patient care.

"Our strategic alliance with Diligent Robotics adds a complementary pillar to our automation offerings and reinforces our vision to lead change for better care," stated Cory Kwarta, president and CEO of Swisslog Healthcare. "Last-mile delivery robots reinforce our commitment to innovative solutions and address our customers' evolving needs."

By integrating automation, value-added services, and intelligent software, Swisslog Healthcare said it enables healthcare providers to respond to patients' needs quickly and with greater accuracy. The Buchs, Switzerland-based company claimed that it can lower many sources of operational waste so providers can improve productivity and patient care.

Robots to complement pneumatic tubes in hospitals

The partners noted that the service robotics market is expected to grow 15% to 18% annually over the next three to five years.

"By joining forces, both companies seek to address the growing demand for efficient, end-to-end hospital logistics solutions that drive operational efficiency and allow healthcare staff to focus more on patient care," said Diligent Robotics.

Swisslog Healthcare said the collaboration will strengthen its leadership in healthcare transport automation by extending existing pneumatic tube systems installed by subsidiary TransLogic with Diligent Robotics' AMMRs. The robots will also improve the ability to track and trace items transported in hospitals, it said.

In addition, Diligent and Swisslog said they will address "last-mile" deliveries and transport items too large or delicate for tubes, such as samples.

"For our customers who have been asking for reliable, efficient last-mile delivery robots that integrate seamlessly with existing workflows, this cooperative effort with Diligent Robotics allows us to connect them with a proven solution for smarter and safer deliveries that staff and patients trust," said Ray Castro, director of strategic programs for transport automation at Swisslog Healthcare. He will lead the coordination of Swisslog's activities with Diligent Robotics.

Diligent's Moxi aligns with Swisslog Healthcare's mission

Diligent Robotics said its Moxi robot is already deployed at more than 25 U.S. hospitals. It has successfully completed over 1 million deliveries, including 300,000 pharmacy deliveries, noted the Austin, Texas-based company. This proven track record in fast-paced, high-pressure medical environments aligns with Swisslog Healthcare's mission to enhance operational efficiency in healthcare settings, it said.

“Together with Swisslog Healthcare, we’re setting a new standard for healthcare logistics,” said Dr. Andrea Thomaz, co-founder and CEO of Diligent Robotics. “Swisslog Healthcare’s unmatched reach, combined with their deep trust and decades of experience in pharmacy automation, makes them the ideal partner to bring our autonomous delivery robots to health systems at scale.”

Since its founding in 2017, Diligent has focused on advancing healthcare automation with Moxi. The company has also completed more than 100,000 autonomous elevator rides.

Editor’s note: Vivian Chu, co-founder and chief innovation officer of Diligent Robotics, will be speaking at RoboBusiness 2025, which takes place Oct. 15 and 16 at the Santa Clara Convention Center. She will discuss how real-time adaptation, vision-language agents, and behavior tree recovery help Moxi operate in complex, human-facing environments.

RoboBusiness, the premier event for commercial robotics developers and suppliers, will be co-located with DeviceTalks West, which serves the medical device industry. Registration is now open.



Belgian startup Eagl secures €825K to automate month-end close with AI agents


Ghent-based AI startup Eagl has raised €825,000 in funding to expand its engineering team, scale across Europe, and further develop its AI-native financial operations platform. Founded in April 2025, Eagl aims to transform how in-house finance teams manage month-end close, audits, and reporting by eliminating repetitive manual tasks.

The funding round was led by Syndicate One and CNBB Equity Partners, with participation from several well-known SaaS founders. Backers include Matthias Geeroms (Lighthouse), Joris Van Der Gucht (Silverfin, Ravical), Jeroen De Wit (Teamleader), Lorenz Bogaert (Rydoo, StarApps), Louis Jonckheere (Showpad), Roeland Delrue (Aikido Security), as well as Jorn Vanysacker, Gilles Mattelin, and Wouter Van Respaille (Henchman).

Eagl’s platform integrates directly with ERP and accounting systems to automate accounting and controlling workflows, continuously monitor data quality, and resolve anomalies in real time. By ensuring accuracy and structured data, the system helps finance teams close faster, reduce errors, and shorten audit cycles.

“Most finance teams are stretched thin, fixing problems instead of adding real value. Eagl turns days of manual checking into instant insights, empowering finance leaders to focus on strategic topics while auditors get cleaner books,” said Samuel Van Innis, co-founder and chief executive officer.

“Beyond automating workflows, we continuously monitor data quality and resolve anomalies in real time. This means finance teams can close faster, scale efficiently, and cut down the time and stress of audits,” added Frederik Bakx, co-founder and chief financial officer.

Backers see Eagl as addressing a key bottleneck in financial operations. “Eagl is tackling one of the most frustrating bottlenecks in finance with technology that truly understands business context. That is why many people in the SaaS ecosystem believe Eagl will fundamentally change how finance teams operate,” said Matthias Geeroms, founder and CFO of Belgian unicorn Lighthouse.

Finance leaders currently spend up to 70% of their time gathering, cleaning, and reconciling data rather than analyzing it. By plugging directly into finance stacks, Eagl enables CFOs to ensure accuracy while producing real-time, context-rich reports.


Uber will (once again) test delivering food by drone this year


Uber Eats is testing drone deliveries yet again, this time in partnership with Flytrex. Notably, Flytrex is one of the drone operators behind Walmart’s drone deliveries and already delivers food in some parts of the US. It is one of the four providers that received authorization from the FAA for Beyond Visual Line of Sight (BVLOS) operations, in which pilots can’t directly see the aircraft they’re controlling. The partnership, the companies said, will enable Uber Eats customers to receive orders within minutes while reducing road congestion. They plan to launch their drone delivery service in pilot markets in the US by the end of the year.

As part of the partnership, Uber is also investing in Flytrex, giving it the resources needed to accelerate the deployment of its drone technology in the US. “Autonomous technology is transforming mobility and delivery faster than ever before,” said Sarfraz Maredia, president of autonomous mobility and delivery at Uber. “With Flytrex, we’re entering the next chapter: bringing the speed and sustainability of drone delivery to the Uber Eats platform, at scale, for the first time.”

It looks like Uber expects this pilot program to have a much wider reach than its previous attempt. Back in 2018, Uber CEO Dara Khosrowshahi talked about the possibility of starting a drone-based food delivery service by 2021. Uber, under a product called Uber Elevate, then made a few test deliveries in partnership with McDonald’s in 2020, but the service never got a wide launch. The company also unveiled its own unmanned aerial system capable of vertical take-off and landing, designed to carry food for two people, but it’s unclear what happened to that particular project.

The new Apple Watch Hermès comes with Clarus the Dogcow


The Apple Watch Hermès has always seemed to be little more than a way for Apple to separate fashionistas from their money. But the Series 11 model comes with the ultimate new feature for longtime Apple fans …

Apple Watch Hermès

After the original gold Apple Watch Edition failed to attract the interest Apple had hoped for, the company tried a new tactic for selling a fashion-focused edition of the wearable. The Apple Watch Hermès offered a selection of designer bands that cost more than most of the watches, complete with an exclusive watch face.

Since the initiative has survived from the original Apple Watch through to the latest Series 11, one has to assume it’s reasonably profitable.

Clarus the Dogcow

Macintosh icon designer Susan Kare created the dogcow glyph for the original model back in 1984. It quickly came to be loved by both the Macintosh community and early buyers of the machine.

A nice Easter egg

The latest Series 11 version comes with 24 watch face animations, one for each hour of the day and night. Basic Apple Guy noted that one of these features Clarus the dogcow.

As wonderful as this is, not everyone seems to feel that it quite justifies the $1,400 asking price.

Highlighted accessories

Image: Basic Apple Guy


Oakley Meta Vanguard announced with 3K video recording, enhanced speakers, and an IP67 rating


The Meta Connect 2025 event saw the introduction of three new pairs of smart glasses from the brand. We already covered the Ray-Ban Display and Ray-Ban Meta (Gen 2) in a separate post, and we will now focus on the Oakley Meta Vanguard.


These feature the famous visor design popularized by Oakley and add most of the smart features found in the Ray-Ban Meta (Gen 2), like hands-free access to Meta AI, up to 3K video recording, and open-ear speakers, which Meta claims are the “most powerful” on any of its glasses so far.


All versions of the Vanguard feature Oakley Prizm lenses with gold, black, road, and sapphire color options. At the core, the new glasses feature the same 12MP ultrawide camera with a 122° field of view as the Oakley Meta HSTN, but the sensor is now positioned in the middle, right above the nose pad. This allows videos to feel more realistic, with a first-person perspective.


Oakley Meta Vanguard
Oakley Meta Vanguard
Oakley Meta Vanguard

Oakley Meta Vanguard

The glasses record up to 3K video at 30fps and support slow-motion and hyperlapse recording modes. You get 32GB of storage and support for Wi-Fi 6 and Bluetooth 5.3 connectivity.

These are Meta’s first IP67-rated AI glasses, and they also feature activity sharing with Garmin devices and Strava.


Oakley Meta Vanguard fetaure IP67 ingress protection
Oakley Meta Vanguard fetaure IP67 ingress protection

Oakley Meta Vanguard characteristic IP67 ingress safety

Elsewhere, the Vanguard gets five microphones, with two positioned in the frame and three near the nose area. Meta also talked up the enhanced speakers, which it claims are 6dB louder than those of the Oakley Meta HSTN.

Battery life is rated at up to 6 hours of continuous audio playback and 9 hours of typical usage. The case provides an additional 36 hours of charge, and a full charge should take 75 minutes.

The Oakley Meta Vanguard will be available in the US, Canada, UK, Ireland, France, Italy, Spain, Austria, Belgium, Australia, Germany, Sweden, Norway, Finland, Denmark, Switzerland, and the Netherlands at launch. They are also set to arrive in Mexico, India, Brazil, and the United Arab Emirates later this year.

They are up for pre-order for $499/€549/£499 from meta.com and oakley.com. Open sales start on October 21.

Source

Faye Zhang on Using AI to Improve Discovery – O’Reilly


O’Reilly Media

Generative AI in the Real World: Faye Zhang on Using AI to Improve Discovery

In this episode, Ben Lorica and AI engineer Faye Zhang talk about discoverability: using AI to build search and recommendation engines that actually find what you want. Listen in to learn how AI goes way beyond simple collaborative filtering, pulling in many different kinds of data and metadata, including images and voice, to get a much better picture of what any object is and whether it’s something the user would want.

About the Generative AI in the Real World podcast: In 2023, ChatGPT put AI on everyone’s agenda. In 2025, the challenge will be turning those agendas into reality. In Generative AI in the Real World, Ben Lorica interviews leaders who are building with AI. Learn from their experience to help put AI to work in your enterprise.

Check out other episodes of this podcast on the O’Reilly learning platform.

Transcript

This transcript was created with the help of AI and has been lightly edited for clarity.

0:00: Today we have Faye Zhang of Pinterest, where she’s a staff AI engineer. And so with that, welcome to the podcast.

0:14: Thanks, Ben. Huge fan of the work. I’ve been fortunate to attend both the Ray and NLP Summits, where I know you serve as chair. I also love the O’Reilly AI podcast. The recent episode on A2A and the one with Raiza Martin on NotebookLM were really inspirational. So, great to be here.

0:33: All right, so let’s jump right in. One of the first things I really wanted to talk to you about is this work around PinLanding. You’ve published papers, but I guess at a high level, Faye, maybe describe for our listeners: What problem is PinLanding trying to address?

0:53: Yeah, that’s a great question. I think, briefly, it’s trying to solve this trillion-dollar discovery crisis. We’re living through the greatest paradox of the digital economy. Essentially, there’s infinite inventory but very little discoverability. Picture one example: A bride-to-be asks ChatGPT, “Now, find me a wedding dress for an Italian summer vineyard ceremony,” and she gets great general advice. But meanwhile, somewhere in Nordstrom’s hundreds of catalogs, there sits the perfect terracotta Soul Committee dress, never to be found. And that’s a $1,000 sale that will never happen. When you multiply this by a billion searches across Google, SearchGPT, and Perplexity, we’re talking about a $6.5 trillion market, according to Shopify’s projections, where every failed product discovery is money left on the table. So that’s what we’re trying to solve: essentially, the semantic organization of platform catalogs versus user context or search.

2:05: So, before PinLanding was developed, if you look across the industry and other companies, what would be the default, the incumbent system? And what would be insufficient about that incumbent system?

2:22: Researchers have been working on this problem across the past decade; we’re definitely not the first. I think number one is understanding catalog attribution. Back in the day, there was the multitask R-CNN generation, as we remember, [that could] identify fashion shopping attributes. So you would pass the system an image, and it would identify, okay: This shirt is pink, and the material may be silk. And then, recently, thanks to the leverage of large-scale VLMs (vision language models), this problem has become much easier.

3:03: And then I think the second route people come in by is through the content organization itself. Back in the day, [there was] research on joint graph modeling over shared attribute similarity. A lot of ecommerce stores also do, “Hey, if people like this, you may also like that,” and that relationship graph gets captured in their organization tree as well. We utilize a vision large language model and then the foundation model CLIP by OpenAI to recognize what this content or piece of clothing could be for. And then we connect that between LLMs to discover all possibilities, like scenarios, use case, price point, to connect the two worlds together.

3:55: To me that suggests you have some rigorous eval process, or maybe a separate team doing eval. Can you describe for us at a high level what eval looks like for a system like this?

4:11: Definitely. There are internal and external benchmarks. For the external ones, there’s Fashion200K, a public benchmark anyone can download from Hugging Face, which is a standard for how accurately your model predicts fashion items. We measure performance using recall top-k metrics, which tell you whether the true label appears among the top-k predicted attributes, and as a result, we were able to see 99.7% recall for the top ten.
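The recall@k metric described here can be sketched in a few lines of Python. This is a generic illustration of the metric, not Pinterest’s evaluation code, and the attribute labels below are made up:

```python
def recall_at_k(predictions, true_labels, k=10):
    """Fraction of examples whose true label appears in the top-k predictions.

    predictions: list of ranked label lists (best guess first), one per example
    true_labels: list of ground-truth labels, one per example
    """
    hits = sum(1 for preds, label in zip(predictions, true_labels)
               if label in preds[:k])
    return hits / len(true_labels)

# Toy example: 2 of 3 items have their true attribute in the top-2 guesses
preds = [["red", "silk"], ["blue", "cotton"], ["green", "wool"]]
labels = ["silk", "linen", "green"]
print(recall_at_k(preds, labels, k=2))  # 2/3
```

Benchmarks like Fashion200K report exactly this kind of number at several values of k (top-1, top-5, top-10).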

4:47: The other topic I wanted to talk to you about is recommendation systems. Clearly there’s now talk about, “Hey, maybe we can move beyond correlation and go towards reasoning.” Can you [tell] our audience, who may not be steeped in state-of-the-art recommendation systems, how you would describe the state of recommenders these days?

5:23: For the past decade, [we’ve been] seeing tremendous movement from foundational shifts in how RecSys essentially operates. Just to call out a few big themes I’m seeing across the board: Number one, it’s kind of moving from correlation to causation. Back then it was, hey, a user who likes X may also like Y. But now we actually understand why contents are connected semantically. And our LLM AI models are able to reason about users’ preferences and what they actually are.

5:58: The second big theme is probably the cold start problem, where companies leverage semantic IDs to solve for new items by encoding content, understanding the content directly. For example, if this is a dress, then you understand its color, style, theme, etc.

6:17: And I believe there are other bigger themes we’re seeing; for example, Netflix is merging from [an] isolated system into a unified intelligence. Just this past year, Netflix [updated] their multitask architecture with shared representations, into one they call the UniCoRn system, to enable company-wide improvements [and] optimizations.

6:44: And finally, on the frontier side, this is actually what I learned at the AI Engineer Summit from YouTube. It’s a DeepMind collaboration, where YouTube is now using a large recommendation model, essentially teaching Gemini to speak the language of YouTube: hey, a user watched this video, then what might [they] watch next? So a lot of very exciting capabilities happening across the board for sure.

7:15: Sometimes it sounds like the themes from years past still map over in the following sense, right? So there’s content, the difference being that now you have these foundation models that can understand the content more granularly. They can go deep into the videos and understand, hey, this video is similar to that video. And then the other source of signal is behavior. So these are still the two main buckets?

7:53: Correct. Yes, I would say so.

7:55: And so the foundation models help you on the content side but not necessarily on the behavior side?

8:03: I think it depends on how you want to see it. For example, on the embedding side, which is a kind of representation of a user entity, there have been transformations [since] back in the day with the BERT transformer. Now there’s long-context encapsulation. And those are all with the help of LLMs. And so we can better understand users, not just their next or last clicks, but “hey, [in the] next 30 days, what might a user like?”

8:31: I’m not sure this is happening, so correct me if I’m wrong. The other thing I would imagine the foundation models can help with is, for some of these systems (like YouTube, for example, or maybe Netflix is a better example), thumbnails are important, right? The fact that you now have models that can generate multiple variants of a thumbnail on the fly means you can run more experiments to figure out user preferences and user tastes, correct?

9:05: Yes, I would say so. I was lucky enough to be invited to one of the engineer community dinners, [and was] speaking with the engineer who actually works on the thumbnails. Apparently it was all personalized, and the process you mentioned enabled their rapid iteration of experiments, and it definitely yielded very positive results for them.

9:29: For the listeners who don’t work on recommendation systems, what are some general lessons from recommendation systems that typically map to other kinds of ML and AI applications?

9:44: Yeah, that’s a great question. A lot of the concepts still apply. For example, knowledge distillation. I know Indeed was trying to tackle this.

9:56: Maybe first define what you mean by that, Faye, in case listeners don’t know what that is.

10:02: Yes. So knowledge distillation is essentially, in a model sense, learning from a parent model with larger parameters that has better world knowledge (and the same with ML systems), to distill into smaller models that can operate much faster but still, hopefully, encapsulate the learning from the parent model.

10:24: So I think what Indeed faced back then was the classic precision versus recall trade-off in production ML. Their binary classifier needs to really filter the batch of jobs that you would recommend to candidates. But this process is obviously very noisy, and sparse training data can cause latency and other constraints. In the work they published, they couldn’t really get effective separation of résumé content from Mistral and maybe Llama 2. And then they were happy to learn [that] out-of-the-box GPT-4 achieved something like 90% precision and recall. But obviously GPT-4 is more expensive and has close to 30 seconds of inference time, which is much slower.

11:21: So what they did was use the distillation concept to fine-tune GPT-3.5 on labeled data, and then distill it into a lightweight BERT-based model using temperature-scaled softmax, and they were able to achieve millisecond latency and a comparable recall-precision trade-off. So I think that’s one of the learnings we see across the industry: traditional ML techniques still work in the age of AI. And I think we’re going to see a lot more of this in production work as well.
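The temperature-scaled softmax distillation Zhang describes can be sketched in plain Python. This is a generic illustration of the idea, not Indeed’s actual pipeline: the logits are made-up numbers, and a real setup would compute this loss inside a training loop, usually mixed with ordinary cross-entropy on hard labels.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher T flattens the distribution."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    The student is trained to minimize this, so it mimics the teacher's
    full output distribution rather than just its top label.
    """
    p = softmax(teacher_logits, temperature)   # soft targets from the teacher
    q = softmax(student_logits, temperature)   # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # e.g. scores from a fine-tuned large model (illustrative)
student = [3.0, 1.5, 0.5]   # e.g. a lightweight BERT-style model (illustrative)
print(distillation_loss(teacher, student))  # small positive number; 0 if they match
```

The softened targets carry more information than hard labels (how wrong the other classes are), which is what lets a small, fast model recover much of a large model’s behavior.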

11:57: By the way, one of the underappreciated things in the recommendation system space is actually UX, in some ways, right? Because good UX for delivering the recommendations can actually move the needle. How you present your recommendations might make a material difference.

12:24: I think that’s very much true. Although I can’t claim to be an expert on it, because I know most recommendation systems deal with monetization, so it’s tricky to position, “Hey, what does my user click on, engage with, share through social, versus what percentage of that…

12:42: And it’s also very platform specific. You can think of TikTok as one single feed: the recommendation is just the feed. But YouTube is, you know, the stuff on the side or whatever. And then Amazon is something else. Spotify and Apple [too]. Apple Podcasts is something else. But in each case, I think those of us on the outside underappreciate how much these companies invest in the actual interface.

13:18: Yes. And I think there are multiple iterations happening on any given day, [so] you might see a different interface than your friends or family because you’re actually being grouped into A/B tests. So it’s very much true that the engagement and performance of the UX affect a lot of the search/rec system as well, beyond the data we just talked about.

13:41: Which brings to mind another topic I’ve been thinking about over many, many years, which is this notion of experimentation. Many of the most successful companies in the space have also invested in experimentation tools and platforms, where people can run experiments at scale. Those experiments can be done much more easily and monitored in a much more principled way, so that anything they do is backed by data. So I think companies underappreciate the importance of investing in such a platform.

14:28: I think that’s very much true. A lot of larger companies actually build their own in-house A/B testing or experimentation frameworks. Meta does; Google has their own; and even different cohorts of products (monetization, social, and so on) have their own niche experimentation platforms. So that thesis is very much true.

14:51: The last topic I wanted to talk to you about is context engineering. I’ve talked to a number of people about this. Every six months, the context window for these large language models expands. But clearly you can’t just stuff the context window full, because one, it’s inefficient, and two, the LLM can still make mistakes, because it’s not going to efficiently process that entire context window anyway. So talk to our listeners about this emerging area called context engineering. How is that playing out in your own work?

15:38: I think this is a fascinating topic, where you’ll hear people passionately say, “RAG is dead.” And it’s really, as you mentioned, [that] our context windows get much, much bigger. For example, back in April, Llama 4 had this staggering 10 million token context window. The logic behind this argument is quite simple: If the model can indeed handle millions of tokens, why not just dump everything in instead of doing retrieval?

16:08: I think there are quite a few fundamental limitations to this. I know folks from Contextual AI are passionate about this. Number one is scalability. A lot of the time in production, at least, your knowledge base is measured in terabytes or petabytes, not tokens; something even larger. And number two, I think, would be accuracy.

16:33: The effective context windows are very different, honestly, between what we see and what’s advertised in product launches. We see performance degrade long before the model reaches its “official limits.” And then number three is probably efficiency, and that kind of aligns with human behavior as well. Do you read an entire book every time you need to answer one simple question? So I think context engineering has slowly evolved from a buzzword a few years ago into an engineering discipline.

17:15: I’m appreciative that the context windows are increasing. But at some level, I also recognize that to some extent, it’s a feel-good move on the part of the model builders. It makes us feel good that we can put more things in there, but it may not actually help us answer the question precisely. Actually, a few years ago, I wrote a kind of tongue-in-cheek post called “Structure Is All You Need.” Basically, whatever structure you have, you should use it to help the model, right? If it’s in a SQL database, then maybe you can expose the structure of the data. If it’s a knowledge graph, you leverage whatever structure you have to give the model better context. So criticizing this whole notion of just stuffing the model with as much information as possible, for all the reasons you gave, is valid. But also, philosophically, it doesn’t make sense to do that anyway.

18:30: What are the things you’re looking forward to, Faye, in terms of foundation models? What kinds of developments in the foundation model space are you hoping for? And are there any developments that you think are under the radar?

18:52: I think, to better utilize the concept of “context engineering,” there are essentially two loops. There’s, number one, the inner loop of what happens within the LLMs. And then there’s the outer loop: What can you do as an engineer to optimize a given context window, etc., to get the best results out of the product? Within the context loop, there are a few techniques we can use: For example, there’s vector retrieval plus Excel or regex extraction. There are metadata filters. And then for the outer loop (this is a very common practice), people are using LLMs as rerankers, sometimes a cross-encoder. The thesis is, hey, why would you overburden an LLM with ranking 20,000 items when there are things you can do to reduce it to the top hundred or so? All of this (context assembly, deduplication, and diversification) helps take production [work] from a prototype to something more real-time, reliable, and able to scale.

20:07: One of the things I wish for (and I don’t know, this is maybe wishful thinking) is that the models could be a little more predictable; that would be nice. By that, I mean, if I ask a question in two different ways, it’ll basically give me the same answer. The foundation model builders could somehow improve predictability and maybe provide us with a bit more explanation for how they arrive at the answer. I understand they’re giving us the tokens, and maybe some of the reasoning models are a bit more transparent, but give us an idea of how these things work, because it’ll influence what kinds of applications we’d be comfortable deploying them in. For example, agents. If I’m using an agent to use a bunch of tools, but I can’t really predict its behavior, that impacts the types of applications I’d be comfortable using a model for.

21:18: Yeah, definitely. I very much resonate with this, especially now that most engineers have, you know, AI-powered coding tools like Cursor and Windsurf. As a user, I very much appreciate the train of thought you mentioned: why an agent does certain things. Why is it navigating between repositories? What is it looking at while it’s making this call? I think those are very much appreciated. I know there are other approaches; look at Devin, the fully autonomous engineering peer. It just takes tasks, and you don’t know where it goes. But I think in the near future there will be a nice marriage between the two. Well, now that Windsurf is part of Devin’s parent company.

22:05: And with that, thank you, Faye.

22:08: Awesome. Thank you, Ben.

WatchGuard warns of critical vulnerability in Firebox firewalls



WatchGuard has released security updates to address a remote code execution vulnerability impacting the company’s Firebox firewalls.

Tracked as CVE-2025-9242, this critical security flaw is caused by an out-of-bounds write weakness that can allow attackers to remotely execute malicious code on vulnerable devices following successful exploitation.

CVE-2025-9242 affects firewalls running Fireware OS 11.x (end of life), 12.x, and 2025.1, and was fixed in versions 12.3.1_Update3 (B722811), 12.5.13, 12.11.4, and 2025.1.1.
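For administrators with many devices, the fixed versions above lend themselves to a quick triage script. This is an unofficial sketch, not WatchGuard tooling: it simplifies version strings (build suffixes such as "_Update3" are ignored, and only the branches listed in the advisory are mapped), so results should always be confirmed against the vendor advisory.

```python
def parse_version(v):
    """Turn '12.5.13' into a comparable tuple; ignores suffixes like '_Update3'."""
    core = v.split("_")[0]
    return tuple(int(p) for p in core.split("."))

# Minimum fixed release per branch, per the advisory (11.x is end of life)
FIXED = {
    (12, 3): (12, 3, 1),
    (12, 5): (12, 5, 13),
    (12, 11): (12, 11, 4),
    (2025, 1): (2025, 1, 1),
}

def is_patched(version):
    v = parse_version(version)
    if v[0] == 11:
        return False  # end-of-life branch: no fix available, upgrade required
    fixed = FIXED.get(v[:2])
    # Unknown branches report False here so they get manually reviewed
    return fixed is not None and v >= fixed

print(is_patched("12.5.12"))   # False: below the 12.5.13 fix
print(is_patched("12.11.4"))   # True
```

Tuple comparison makes `2025.1` sort below `2025.1.1`, so the still-vulnerable 2025.1 release is correctly flagged.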

While Firebox firewalls are only vulnerable to attack if they are configured to use IKEv2 VPN, WatchGuard added that they may still be open to compromise, even if the vulnerable configurations have been deleted, if a branch office VPN to a static gateway peer is still configured.

“An Out-of-bounds Write vulnerability in the WatchGuard Fireware OS iked process may allow a remote unauthenticated attacker to execute arbitrary code. This vulnerability affects both the mobile user VPN with IKEv2 and the branch office VPN using IKEv2 when configured with a dynamic gateway peer,” the company warned in a Wednesday advisory.

“If the Firebox was previously configured with the mobile user VPN with IKEv2 or a branch office VPN using IKEv2 to a dynamic gateway peer, and both of those configurations have since been deleted, that Firebox may still be vulnerable if a branch office VPN to a static gateway peer is still configured.”





Product branch: Vulnerable firewalls
Fireware OS 12.5.x: T15, T35
Fireware OS 12.x: T20, T25, T40, T45, T55, T70, T80, T85, M270, M290, M370, M390, M470, M570, M590, M670, M690, M440, M4600, M4800, M5600, M5800, Firebox Cloud, Firebox NV5, FireboxV
Fireware OS 2025.1.x: T115-W, T125, T125-W, T145, T145-W, T185

WatchGuard also provides a temporary workaround for administrators who cannot immediately patch devices running vulnerable software configured with Branch Office VPN (BOVPN) tunnels to static gateway peers.

This requires them to disable dynamic peer BOVPNs, add new firewall policies, and disable the default system policies that handle VPN traffic, as outlined in a support document that provides detailed instructions on how to secure access to BOVPNs that use IPSec and IKEv2.

While this critical vulnerability is not yet being exploited in the wild, admins are still advised to patch their WatchGuard Firebox devices, as threat actors consider firewalls an attractive target. For instance, the Akira ransomware gang is actively exploiting CVE-2024-40766, a year-old critical-severity vulnerability, to compromise SonicWall firewalls.

Back in April 2022, the Cybersecurity and Infrastructure Security Agency (CISA) also ordered federal civilian agencies to patch an actively exploited bug impacting WatchGuard Firebox and XTM firewall appliances.

WatchGuard collaborates with over 17,000 security resellers and service providers to protect the networks of more than 250,000 small and mid-sized companies worldwide.
