Introducing Browserless MCP Server: Connect Browser Automation and AI

Your assistant can already write browser code. The missing piece is execution. Browserless's new MCP server gives compatible clients direct access to hosted browser automation, so your assistant can scrape pages, crawl entire sites, search the web, map sites, run performance audits, export content, download files, and run custom Puppeteer code, all without you standing up new browser infrastructure first.

What you can do with the MCP

The MCP turns your AI client into a browser-aware operator. Instead of stopping at code suggestions, it can act on the live web through Browserless's hosted server and managed browser infrastructure.

That gives you a cleaner path for workflows such as:

  • Searching the web and scraping the top results.
  • Extracting content from pages that may need a normal fetch, a proxy, full rendering, or CAPTCHA handling.
  • Downloading files triggered by browser interactions.
  • Exporting pages as HTML, PDFs, images, or offline ZIP archives.
  • Mapping a site before deciding what to scrape.
  • Running custom Puppeteer code when the built-in tools are not enough.
  • Crawling entire sites and scraping every discovered page into structured, LLM-ready output.
  • Running Lighthouse audits to check performance, accessibility, SEO, and best practices.

Prerequisites

Before you start with the MCP, you'll need:

  • A Browserless account. Get an API token from your account dashboard, or use OAuth sign-in where supported.
  • An MCP-compatible client (Claude Desktop, Cursor, VS Code, Windsurf, or Claude.ai).

Supported tools in the Browserless MCP

The hosted server exposes eight tools. That mix makes the MCP useful in day-to-day work: you can stay on a higher-level tool for common jobs, then drop to custom browser code only when you need more control.

Smart Scraper

Use browserless_smartscraper when you want one page in and usable output out. It uses cascading strategies that can move from HTTP fetch to proxying, a headless browser, and CAPTCHA solving, then returns the best successful result automatically.

Supported formats include markdown, html, screenshot, pdf, and links, with markdown as the default.
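As a hedged illustration, a client-issued tool call might look something like the following. The name/arguments envelope follows the MCP tools/call convention; the url and format argument names are assumptions for illustration, and only the tool name and format values come from the docs above.

```json
{
  "name": "browserless_smartscraper",
  "arguments": {
    "url": "https://example.com/pricing",
    "format": "markdown"
  }
}
```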

Function

Use browserless_function when your assistant needs full control over the page. It runs custom Puppeteer JavaScript on Browserless, passes your function a page object and optional context, and expects a { data, type } return shape so you can control the response payload and content type.
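To make the contract concrete, here is a minimal sketch (not the official template) of the shape browserless_function runs: an async function that receives the page object and optional context, and returns { data, type } to control the response payload and content type.

```javascript
// Sketch of a browserless_function handler. The page object is
// supplied by Browserless; context is optional caller-provided data.
async function handler({ page, context }) {
  const target = (context && context.url) || "https://example.com";
  await page.goto(target, { waitUntil: "domcontentloaded" });
  const title = await page.title();
  // The { data, type } return shape controls the response payload
  // and its content type.
  return {
    data: JSON.stringify({ title }),
    type: "application/json",
  };
}
```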

Download

Use browserless_download when the thing you need is a file, not page text. It runs Puppeteer code that triggers a browser download and returns the downloaded file, making it useful for CSV exports, reports, generated PDFs, and other browser-initiated assets.
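A hedged sketch of the kind of Puppeteer code browserless_download could run; the URL and selector below are hypothetical. Browserless captures the file the click initiates and returns it to the client.

```javascript
// Hypothetical example: navigate to a reports page and click an
// export control, which starts the browser download that
// browserless_download captures and returns.
async function triggerCsvExport(page) {
  await page.goto("https://example.com/reports", { waitUntil: "networkidle2" });
  await page.click("#export-csv"); // hypothetical selector
}
```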

Export

Use browserless_export when you want a page artifact rather than extracted content. It exports a URL in its native format, and can bundle linked assets into a ZIP archive for offline use when you enable includeResources.
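As a rough sketch, an export call with asset bundling enabled might pass arguments like these. Only includeResources is documented above; the url argument name is an assumption for illustration.

```json
{
  "name": "browserless_export",
  "arguments": {
    "url": "https://example.com/report",
    "includeResources": true
  }
}
```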

Search

Use browserless_search when discovery comes first. It searches across the web, news, or images, and can optionally scrape each result into formats such as markdown, HTML, links, or screenshots.

Map

Use browserless_map when your target is one site and you want a clean URL inventory before you scrape anything. It discovers pages through sitemap parsing and link extraction, then returns the mapped URLs with optional titles and descriptions, and can rank them by relevance to a query.

Performance

Use browserless_performance when you want to audit a page's quality, not scrape its content. It runs a Lighthouse performance audit on any URL and returns scores and detailed metrics for accessibility, best practices, performance, PWA, and SEO. Optionally filter by category or supply performance budgets to check against.

Crawl

Use browserless_crawl when you need to scrape an entire site, not just a single page. It asynchronously crawls from a seed URL, follows links up to a configurable depth, and scrapes every discovered page into structured output (markdown, HTML, or raw text) along with metadata. Set waitForCompletion to false to get a crawl ID immediately and poll for results as pages complete.
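The async pattern above can be sketched as a simple start-then-poll loop. Everything here is illustrative: startCrawl and getCrawlStatus are hypothetical stand-ins for however your client issues the browserless_crawl tool calls, and the crawlId/done/pages field names are assumptions, not documented response fields.

```javascript
// Hedged sketch of the async crawl pattern: start with
// waitForCompletion set to false, get an ID back immediately,
// then poll until the crawl reports completion.
async function crawlAndCollect(startCrawl, getCrawlStatus, seedUrl) {
  const { crawlId } = await startCrawl({
    url: seedUrl,
    waitForCompletion: false, // documented option; returns a crawl ID
  });
  let status;
  do {
    await new Promise((r) => setTimeout(r, 50)); // poll interval
    status = await getCrawlStatus(crawlId);
  } while (!status.done);
  return status.pages;
}
```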

Learn more about our four new API endpoints (Smart Scrape, Search, Map, and Crawl) in our announcement blog.

How to set up the new MCP

Connect to the hosted server

Browserless provides the MCP as a hosted server at https://mcp.browserless.io/mcp. It's ready to use with no installation or environment variables required, keeping setup light if you already work in an MCP-compatible client.

{
  "mcpServers": {
    "browserless": {
      "url": "https://mcp.browserless.io/mcp",
      "headers": {
        "Authorization": "Bearer your-token-here"
      }
    }
  }
}

Choose the right auth method

The hosted server supports three auth methods:

  1. OAuth for clients that support it.
  2. An Authorization header for clients that accept custom headers.
  3. A token query parameter for URL-only clients.

When more than one method is present, Browserless evaluates them in this order: Authorization header first, then token, then OAuth JWT.

That straightforward setup path works no matter which client your team uses. OAuth is the cleanest option for clients that support it. Token-in-URL is there for clients that only accept a connector URL. The evaluation order just determines which method wins if more than one is present in a single request -- it's not a recommendation.
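For URL-only clients, the token travels in the connector URL itself. This small sketch just shows how that URL is assembled; swap in a real API token from your dashboard.

```javascript
// Build the token-in-URL connector address for URL-only clients.
function buildConnectorUrl(token) {
  const url = new URL("https://mcp.browserless.io/mcp");
  url.searchParams.set("token", token);
  return url.toString();
}

console.log(buildConnectorUrl("YOUR_TOKEN"));
// https://mcp.browserless.io/mcp?token=YOUR_TOKEN
```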

Client-specific setup for Claude Desktop, Cursor, VS Code, Windsurf, and Claude.ai

The main difference between each client is whether it accepts headers or only a URL. Here's how to get started with all five.

Claude Desktop

Claude Desktop connects directly to the hosted MCP server with a standard mcpServers entry. It also supports OAuth, so it can prompt you to sign in with your Browserless account instead of requiring a pasted token.

{
  "mcpServers": {
    "browserless": {
      "url": "https://mcp.browserless.io/mcp",
      "headers": {
        "Authorization": "Bearer your-token-here"
      }
    }
  }
}

Cursor

Cursor uses the same configuration as Claude Desktop, shown above. It also supports OAuth, so Cursor will prompt you to sign in with your Browserless account instead of requiring a pasted token.

VS Code

VS Code uses an mcp.servers entry in settings.json, with the server type set to http. Other than that, the setup is nearly identical:

{
  "mcp": {
    "servers": {
      "browserless": {
        "type": "http",
        "url": "https://mcp.browserless.io/mcp",
        "headers": {
          "Authorization": "Bearer your-token-here"
        }
      }
    }
  }
}

Windsurf

Windsurf uses the same configuration pattern as Claude Desktop and Cursor:

{
  "mcpServers": {
    "browserless": {
      "url": "https://mcp.browserless.io/mcp",
      "headers": {
        "Authorization": "Bearer your-token-here"
      }
    }
  }
}

Claude.ai

Claude.ai custom connectors only accept a URL, so Browserless documents token auth through the query string instead of headers. That is the main client-specific edge case worth calling out.

  1. Go to Settings > Connectors in Claude.ai.
  2. Click Add custom connector.
  3. Enter a name (e.g., Browserless) and the following URL: https://mcp.browserless.io/mcp?token=your-token-here
  4. Click Add.

Route MCP traffic to the right Browserless region

By default, the hosted MCP server connects to the US West Browserless region in San Francisco.

Browserless also supports regional routing for London and Amsterdam, using either the x-browserless-api-url header or the browserlessUrl query parameter for URL-only clients.

  • US West -- San Francisco (default): https://production-sfo.browserless.io
  • Europe -- London: https://production-lon.browserless.io
  • Europe -- Amsterdam: https://production-ams.browserless.io
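For URL-only clients, the token and the documented browserlessUrl parameter can be combined into one connector URL. A sketch of that assembly:

```javascript
// Build a region-routed connector URL for URL-only clients using
// the browserlessUrl query parameter; the endpoint values come from
// the region table above.
function buildRegionalUrl(token, regionEndpoint) {
  const url = new URL("https://mcp.browserless.io/mcp");
  url.searchParams.set("token", token);
  url.searchParams.set("browserlessUrl", regionEndpoint);
  return url.toString();
}
```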

The MCP in action with real-world examples

Here are two examples of how you can use the Browserless MCP in an AI client.

Use Claude Desktop to research a topic without leaving the chat

Instead of asking your assistant for a summary based on stale context, you can let it search live sources and scrape the relevant pages in the same workflow. The assistant evolves from something that suggests research into something that can actually do the research for you.

A practical example is competitive research. You can ask Claude Desktop to find recent coverage on a topic, scrape the top results into markdown, and turn that into a brief without switching tabs or wiring a separate script.

Use Cursor or VS Code to pull live page data while you code

Inside an editor, the useful MCP pattern is tight feedback between code and the live web. Your assistant can scrape a page, inspect the output, and help you adjust your ingestion or automation logic in the same session.

When you're building scraping pipelines or debugging extraction issues, this functionality is especially useful. Instead of bouncing between your IDE, a browser window, and a terminal, you can ask the assistant to check the live page and refine your implementation while you stay in flow.

How to use the MCP with other Browserless features

Use Smart Scraper for resilient extraction

The clearest connection is browserless_smartscraper, which surfaces the Smart Scrape API capability as an MCP tool.

If you're already using the /smart-scrape REST endpoint in a pipeline, the MCP tool gives your assistant the same cascading extraction inline, with no need to switch to a separate API call.

Use Search and Map as discovery layers

browserless_search and browserless_map work well as the top of the funnel. Search helps your assistant find relevant public pages, while Map helps it discover and rank pages within a known site. From there, you can move into scraping, exports, or custom browser code only for the URLs that matter.

Add Browserless Docs MCP alongside it

Browserless now offers two MCP servers for two different jobs. The Browserless MCP Server is for live browser work, while the Docs MCP is the read-only companion for documentation lookups -- no API token required. Use both together if you want both execution and docs access inside the same client.

When to use MCP vs. REST APIs vs. BrowserQL

These three integration paths solve different problems. The easiest way to think about them is where the operator lives: in your AI client, in your application code, or in a query-driven automation layer.

Use MCP when your assistant should do the browser work

Choose MCP when the workflow starts inside Claude Desktop, Cursor, VS Code, Windsurf, or another MCP-compatible client. It's the right fit when you want your assistant to search, scrape, export, download, map, or run browser code as part of an interactive workflow.

Use REST APIs when Browserless belongs inside your product or jobs

Choose REST APIs when browser work needs to live inside your application stack. It's usually the better path for backend services, scheduled jobs, ingestion pipelines, or internal tools where you want predictable programmatic calls rather than assistant-driven tool use.

Use BrowserQL when GraphQL-style browser automation fits better

Choose BrowserQL when you need declarative, multi-step browser automation with built-in stealth mode and CAPTCHA solving.

BrowserQL uses a GraphQL-based query model that can chain navigation, extraction, and interaction steps in a single request, making it a good fit for complex workflows that go beyond what a single tool call can express.

Here's a table to help you choose:

  • If the operator is your assistant, use MCP: live research, editor workflows, prompt-driven browser tasks, and debugging in AI clients.
  • If the operator is your app, use REST APIs: production pipelines, backend jobs, scheduled tasks, and direct programmatic integrations.
  • If the operator is a query layer, use BrowserQL: multi-step declarative automation, stealth browsing, CAPTCHA solving, and complex extraction workflows.

Browserless MCP FAQs

Do you need to install or self-host anything to use it?

Not for the hosted version. Browserless documents the server as ready to use with no installation or environment variables required.

Which clients work with it?

Browserless documents setup for Claude.ai, Claude Desktop, Cursor, VS Code, and Windsurf, and the server can work with other MCP-compatible clients as well.

How does authentication work?

You can authenticate with OAuth in supported clients, an Authorization header, or a token query parameter.

Can it handle JavaScript-heavy pages or harder scrape targets?

Yes. browserless_smartscraper uses cascading strategies that can escalate from HTTP fetch to proxies, a headless browser, and CAPTCHA solving, while browserless_function gives you custom Puppeteer control when you need it.