It increasingly feels like we are in for a Cambrian explosion of AI agents in the coming months. The reported performance of Anthropic's Mythos and OpenAI's Spud models is off the charts. But one challenge remains: the web is too human for agents.
The internet, its software, and the authentication infrastructure that gates it are all built for humans. That creates barriers for agents (or, more precisely, for agent developers). MFA, CAPTCHAs, and so many other mechanisms were designed with humans at their center.
Gradient portfolio company Anchor Browser recently launched OmniConnect to manage end-to-end authentication for agents on behalf of developers. This is a huge step toward making the web more traversable for agents, particularly agents that need to access high-value data sitting behind complex authentication flows.
Is the web really too human?
There's a security heuristic called "impossible travel." If the same account logs in from New York and then Singapore 20 minutes later, the system flags it, because no human can physically move that fast. It's an elegant name for a cheap check that runs at login time and catches compromised credentials.
Unfortunately, agents travel at machine speed and can originate from cloud infrastructure spanning multiple continents.
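The heuristic above fits in a few lines. This is a minimal illustration, not any vendor's actual implementation; the 900 km/h threshold (roughly airliner cruise speed) is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_HUMAN_SPEED_KMH = 900.0  # assumed threshold: roughly airliner cruise speed

@dataclass
class Login:
    lat: float
    lon: float
    at: datetime

def haversine_km(a: Login, b: Login) -> float:
    """Great-circle distance between two login locations, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(h))

def impossible_travel(prev: Login, curr: Login) -> bool:
    """Flag the login if the implied speed between the two sessions
    exceeds what a human could physically manage."""
    dist = haversine_km(prev, curr)
    hours = (curr.at - prev.at).total_seconds() / 3600.0
    if hours <= 0:
        return dist > 10.0  # simultaneous logins from distant places
    return dist / hours > MAX_HUMAN_SPEED_KMH
```

New York to Singapore in 20 minutes implies a speed on the order of 46,000 km/h, so the session gets flagged; an agent fanning out across cloud regions trips this check constantly.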
Impossible travel isn't a quirky edge case. It's one of dozens of web security mechanisms that test the same hypothesis: is this user a human being? Session timeouts assume humans walk away. CAPTCHAs test for eyeballs. Device fingerprinting assumes one device equals one person. Rate limiting assumes human click speed. IP reputation assumes a stable location. Behavioral analytics score mouse movements and keystroke timing against biological baselines.
Most of the web's trust layer is, at its core, a human detector. Agents aren't human, and that's a problem for agent developers working on behalf of their humans.
The Agent Builder's Problem
We’ve invested in teams building browser agents for healthcare systems, financial services portals, and logistics freight boards. The AI works fine (thanks to the devoted work of many, many AI engineers and parallel Claude sessions). For many of these startups, the limiter is actually the authentication layer. Many staff a forward-deployed engineer (FDE) at a customer just to build a series of browser-use-based auth integrations into legacy software. These FDEs have the unenviable job of handling a mess of authentication.
Authentication & session lifecycle: There is no standard login interface for the web. Every app authenticates differently: custom forms, OAuth, SSO, SAML, MFA, multi-step wizards. Sessions are opaque. Cookies expire, tokens rotate, servers force re-auth based on internal risk scoring the agent can't predict. The agent must continuously detect auth state by interpreting the page itself, re-execute full login flows, and resume mid-task.
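The session-lifecycle problem can be sketched as a loop. The callables below are hypothetical stand-ins rather than a real browser API; the point is that auth state has to be inferred from the page itself, and the login flow re-run mid-task whenever the server invalidates the session.

```python
# Markers that suggest the session has been invalidated; in practice every
# app has its own, and the agent must interpret the page to find them.
LOGIN_MARKERS = ("sign in", "session expired", "verify your identity")

def looks_logged_out(page_text: str) -> bool:
    """Infer auth state from page content -- there is no standard signal."""
    text = page_text.lower()
    return any(marker in text for marker in LOGIN_MARKERS)

def run_step(fetch_page, login_flow, do_step, max_reauths: int = 3):
    """Execute one task step, transparently re-authenticating when the
    session dies (cookie expiry, token rotation, server-side risk scoring)."""
    for _ in range(max_reauths):
        page = fetch_page()
        if looks_logged_out(page):
            login_flow()      # re-execute the full multi-step login
            continue
        return do_step(page)  # resume the task where it left off
    raise RuntimeError("could not establish an authenticated session")
```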
Browser fingerprint: Bot detection stacks (Cloudflare, Akamai, PerimeterX) aggregate dozens of signals into a confidence score: canvas rendering, WebGL, font enumeration, screen dimensions. The TLS handshake leaks whether you're Chrome on macOS or headless Chromium in a container (JA3/JA4 fingerprinting), and some WAFs block on this alone. A cloud browser has a detectably non-human fingerprint across all of these vectors, and you need to pass every one.
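To make the TLS-fingerprinting point concrete, here is a simplified sketch of how a JA3 digest is computed: the ClientHello fields are joined into a canonical comma- and dash-separated string and MD5-hashed. Real implementations also strip GREASE values, which this sketch omits.

```python
from hashlib import md5

def ja3_digest(version, ciphers, extensions, curves, point_formats) -> str:
    """Simplified JA3: canonicalize the ClientHello fields and hash them.
    Identical TLS stacks collide; headless Chromium in a container hashes
    differently from desktop Chrome, which is what WAFs key on."""
    fields = [
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return md5(",".join(fields).encode()).hexdigest()
```

The digest is deterministic, so a single mismatched cipher list is enough to separate a cloud browser from the client it claims to be.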
Network identity: Cloud IPs are flagged by default. You need residential-grade IPs with geographic consistency, or you trigger impossible travel detection and the session dies. Even with a proxy, browsers can leak real IPs through WebRTC or DNS.
Anti-bot & behavioral: CAPTCHAs have evolved. Cloudflare Turnstile relies on behavioral signals, not image puzzles. WAFs score mouse trajectories, scroll velocity, click timing. JS challenge pages probe your runtime. You need a full browser with convincing behavioral simulation.
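What "convincing behavioral simulation" means in practice: trajectories that curve and jitter the way a biological hand does, rather than the straight line a naive script draws. A toy sketch (illustrative only) using a quadratic Bezier curve with noise:

```python
import random

def human_mouse_path(start, end, steps=30, jitter=3.0):
    """Interpolate along a quadratic Bezier curve with a random control
    point and per-step jitter, so the path is curved and noisy rather
    than a perfectly straight machine line."""
    (x0, y0), (x1, y1) = start, end
    # A random control point offset from the midpoint bends the path.
    cx = (x0 + x1) / 2 + random.uniform(-80, 80)
    cy = (y0 + y1) / 2 + random.uniform(-80, 80)
    path = []
    for i in range(steps + 1):
        t = i / steps
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        path.append((x + random.uniform(-jitter, jitter),
                     y + random.uniform(-jitter, jitter)))
    path[0], path[-1] = (x0, y0), (x1, y1)  # pin the endpoints exactly
    return path
```

Production WAFs score far more than geometry (timing distributions, scroll physics, runtime probes), which is why this layer alone is a full engineering problem.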
These systems will all have to evolve, and they already are. Last year I wrote about Cloudflare’s partnership with Anchor Browser, Browserbase, and a few other infra providers to equip agents with a cryptographic passport that helps them navigate the web.
I’m thrilled to see Anchor Browser continuing their market-leading work! Yesterday, Idan and the team launched OmniConnect, which handles all types of authentication complexity on behalf of the agent. Its “Self-Healing Session Recovery” is a huge asset for any agent developer moving quickly.
The Application’s Problem
Of course, this is a two-sided problem, and I’m highly interested in companies helping merchants, applications, infra providers, and others manage the inbound from “agents”.
A well-built agent using the stack above is indistinguishable from a human in your WAF, analytics, and session logs. You can't see them. You can't scope them (your IAM has no concept of "agent acting on behalf of user X," and agents bypass OAuth entirely by logging in through the UI). You can't meter them (per-seat pricing assumes human velocity, not 10,000 tasks per day on one login).
And your audit trail can't tell human actions from agent actions, which is a compliance problem before it's a technical one. Should we trust DAU/MAU data anymore if it doesn't break out agent vs. human “users”? I'm not sure that I would!
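If IAM did have a first-class notion of "agent acting on behalf of user X," it might look something like the sketch below. The field names (on_behalf_of, tasks_per_day) are hypothetical, not any existing standard.

```python
from dataclasses import dataclass

@dataclass
class AgentPrincipal:
    """Hypothetical IAM principal: an agent with delegated, metered authority."""
    agent_id: str
    on_behalf_of: str   # the human who delegated authority -- auditable
    scopes: frozenset   # narrower than the delegating user's own rights
    tasks_per_day: int  # metered by work done, not by seat
    used_today: int = 0

    def authorize(self, scope: str) -> bool:
        """Permit one task if it is in scope and under the daily meter."""
        if scope not in self.scopes or self.used_today >= self.tasks_per_day:
            return False
        self.used_today += 1
        return True
```

Scoping and metering fall out of the same object: the merchant sees who delegated the authority, limits what the agent can touch, and charges per task instead of per seat.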
Applications, merchants, compute providers, media, etc. all want this traffic but may want to interact with it differently. Customers automating on top of your product means it's valuable (most of the time). The move isn't to block agents. It's to serve, scope, and price them.
If you’re starting a company to help anyone who lives on the web understand, interface with, and transact with agents, please reach out!
It’s time to build [agent auth products]
Both sides want the same handshake.
The agent: "I'm an agent, here's who authorized me, here's what I need."
The merchant: "Here's your scope, your rate limit, your price."
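In code terms, the exchange might look like the payloads below. The names are illustrative assumptions, loosely inspired by header-based proposals like Web Bot Auth, not a published standard.

```python
# Illustrative handshake payloads; field names are assumptions, not a spec.
agent_hello = {
    "agent": "https://agent-operator.example",  # I'm an agent
    "authorized_by": "user-1234",               # here's who authorized me
    "requested_scope": "read:freight-board",    # here's what I need
}

merchant_reply = {
    "granted_scope": "read:freight-board",      # here's your scope
    "rate_limit": "600/hour",                   # your rate limit
    "price_usd_per_1k_requests": 0.25,          # your price
}
```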
With Cloudflare’s Web Bot Auth and products like Anchor Browser’s OmniConnect, we are on a path to enable agents to genuinely interop with the world on behalf of their users. I’d be remiss not to plug another great Gradient portfolio company, Sapiom, which is working on financial interoperability here.
But HTTPS took roughly 15 years to reach near-universal adoption, and that had a clear forcing function (Google ranked you higher). Agent identity standards have no equivalent incentive yet, and in fact, there are certainly parties that would like to avoid this kind of thing. Look no further than Slack’s significant rate limits for agents.
So in the meantime, the infrastructure, networking, and identity opportunities are plentiful! Together, we can de-humanize the web!