
How to Use Proxies with Selenium in 2026 — Complete Setup Guide

Hieu Nguyen

Selenium is the most widely used browser automation framework in the world, and pairing it with proxies is essential for any serious scraping or testing workflow. Without proxies, your requests come from a single IP address, which means rate limits, CAPTCHAs, and outright bans within minutes on most protected websites.

This guide covers everything you need to know about configuring proxies with Selenium in both Python and JavaScript (Node.js). We will walk through basic setup, authenticated proxies, automatic IP rotation, and headless mode — all with working code examples you can copy into your projects today.

Selenium proxy setup overview

Why Use Proxies with Selenium?

Selenium drives a real browser (Chrome, Firefox, Edge), which makes it excellent for scraping JavaScript-rendered content. However, real browsers also send consistent fingerprints. Combining Selenium with rotating residential proxies solves two problems simultaneously: you get a fresh IP for each session, and the residential nature of the IP makes your automated browser look like a regular user.

Common reasons to proxy your Selenium sessions include:

  • Avoiding IP bans when scraping e-commerce, social media, or search engines
  • Geo-targeting to see localized content, pricing, or ads
  • Running parallel sessions that each appear to come from different users
  • QA and testing web applications from multiple regions simultaneously

Prerequisites

Before we start, make sure you have the following installed:

  • Python 3.9 or later (for Python examples)
  • Node.js 18 or later (for JavaScript examples)
  • Google Chrome browser
  • ChromeDriver matching your Chrome version

For Python, install the required packages:

```bash
pip install selenium webdriver-manager
```

For Node.js, install the Selenium WebDriver package:

```bash
npm install selenium-webdriver chromedriver
```

Installing Selenium dependencies

Basic Proxy Setup in Python

The simplest way to route Selenium traffic through a proxy is to pass the proxy address as a Chrome option. Here is a complete working example:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

PROXY_HOST = "gate.resproxy.io"
PROXY_PORT = "7777"

chrome_options = Options()
chrome_options.add_argument(f"--proxy-server=http://{PROXY_HOST}:{PROXY_PORT}")

# Optional: ignore certificate errors from the proxy
chrome_options.add_argument("--ignore-certificate-errors")

service = Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=chrome_options)

try:
    driver.get("https://httpbin.org/ip")
    print(driver.page_source)
finally:
    driver.quit()
```

This script launches Chrome, routes all traffic through the specified proxy, navigates to httpbin.org to verify the IP, and then closes the browser. The `--proxy-server` argument accepts HTTP, HTTPS, and SOCKS5 proxies via the `http://`, `https://`, and `socks5://` scheme prefixes.
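Since only the scheme prefix changes between protocols, a small helper (hypothetical, not part of Selenium) makes the variants explicit:

```python
def proxy_server_arg(host, port, scheme="http"):
    """Build Chrome's --proxy-server flag; Chrome accepts http://,
    https://, and socks5:// scheme prefixes in this argument."""
    return f"--proxy-server={scheme}://{host}:{port}"

# HTTP proxy (the common case):
print(proxy_server_arg("gate.resproxy.io", 7777))
# SOCKS5 proxy: only the prefix differs
print(proxy_server_arg("gate.resproxy.io", 7777, scheme="socks5"))
```

Pass the returned string straight to `chrome_options.add_argument()`.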

Basic Proxy Setup in JavaScript (Node.js)

The equivalent setup in Node.js uses the selenium-webdriver package:

```javascript
const { Builder } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");

const PROXY_HOST = "gate.resproxy.io";
const PROXY_PORT = "7777";

async function runWithProxy() {
  const options = new chrome.Options();
  options.addArguments(`--proxy-server=http://${PROXY_HOST}:${PROXY_PORT}`);
  options.addArguments("--ignore-certificate-errors");

  const driver = await new Builder()
    .forBrowser("chrome")
    .setChromeOptions(options)
    .build();

  try {
    await driver.get("https://httpbin.org/ip");
    const body = await driver.findElement({ tagName: "body" });
    console.log(await body.getText());
  } finally {
    await driver.quit();
  }
}

runWithProxy();
```

Both examples follow the same pattern: create options, attach the proxy argument, build the driver, and run your automation.

Selenium proxy authentication flow

Proxy Authentication with Selenium

Most premium proxy providers, including ResProxy, require username and password authentication. Selenium does not support proxy authentication natively through Chrome options, but there are two reliable workarounds.

Method 1: Chrome Extension for Auth (Python)

You can create a lightweight Chrome extension at runtime that injects authentication credentials:

```python
import os
import zipfile

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

PROXY_HOST = "gate.resproxy.io"
PROXY_PORT = 7777
PROXY_USER = "your_username"
PROXY_PASS = "your_password"

def create_proxy_auth_extension(host, port, user, pwd):
    manifest = '''{
    "version": "1.0.0",
    "manifest_version": 2,
    "name": "Proxy Auth",
    "permissions": [
        "proxy", "tabs", "unlimitedStorage", "storage",
        "<all_urls>", "webRequest", "webRequestBlocking"
    ],
    "background": {"scripts": ["background.js"]},
    "minimum_chrome_version": "22.0.0"
}'''

    background = '''var config = {
    mode: "fixed_servers",
    rules: {
        singleProxy: {
            scheme: "http",
            host: "%s",
            port: parseInt(%s)
        },
        bypassList: ["localhost"]
    }
};
chrome.proxy.settings.set({value: config, scope: "regular"}, function() {});
chrome.webRequest.onAuthRequired.addListener(
    function(details) {
        return {
            authCredentials: {username: "%s", password: "%s"}
        };
    },
    {urls: ["<all_urls>"]},
    ["blocking"]
);''' % (host, port, user, pwd)

    ext_path = "proxy_auth_ext.zip"
    with zipfile.ZipFile(ext_path, "w") as zf:
        zf.writestr("manifest.json", manifest)
        zf.writestr("background.js", background)
    return ext_path

ext_path = create_proxy_auth_extension(PROXY_HOST, PROXY_PORT, PROXY_USER, PROXY_PASS)

chrome_options = Options()
chrome_options.add_extension(ext_path)

service = Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=chrome_options)

try:
    driver.get("https://httpbin.org/ip")
    print(driver.page_source)
finally:
    driver.quit()
    os.remove(ext_path)
```

This method creates a temporary Chrome extension that handles the 407 Proxy Authentication Required response automatically. It works reliably with both HTTP and HTTPS targets. Two caveats: extensions only load in headless mode when you use the newer `--headless=new` flag, and recent Chrome releases are phasing out Manifest V2 extensions, so verify this approach against the Chrome version you ship with.

Method 2: Proxy URL with Credentials (Node.js)

In Node.js, Chrome ignores credentials embedded directly in the `--proxy-server` URL, so the usual workaround is to run a local forwarding proxy that handles the upstream authentication and point Chrome at it:

```javascript
const { Builder } = require("selenium-webdriver");
const chrome = require("selenium-webdriver/chrome");
// proxy-chain (npm install proxy-chain) runs a local forwarder that
// injects the upstream credentials for you.
const proxyChain = require("proxy-chain");

const PROXY = "http://your_username:your_password@gate.resproxy.io:7777";

async function runAuthenticated() {
  // Start the local forwarder and get a credential-free local URL.
  const localProxyUrl = await proxyChain.anonymizeProxy(PROXY);

  const options = new chrome.Options();
  options.addArguments(`--proxy-server=${localProxyUrl}`);
  options.addArguments("--ignore-certificate-errors");

  const driver = await new Builder()
    .forBrowser("chrome")
    .setChromeOptions(options)
    .build();

  try {
    await driver.get("https://httpbin.org/ip");
    const body = await driver.findElement({ tagName: "body" });
    console.log(await body.getText());
  } finally {
    await driver.quit();
    await proxyChain.closeAnonymizedProxy(localProxyUrl, true);
  }
}

runAuthenticated();
```

Proxy rotation with Selenium sessions

Rotating Proxies with Selenium

For scraping at scale, you need a different IP for each page load or each session. With ResProxy's rotating endpoint, every new connection automatically gets a fresh IP. Here is how to implement rotation in a multi-page scraping loop:

```python
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

URLS = [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3",
    "https://example.com/page4",
    "https://example.com/page5",
]

def create_driver_with_proxy():
    chrome_options = Options()
    chrome_options.add_argument("--proxy-server=http://gate.resproxy.io:7777")
    chrome_options.add_argument("--ignore-certificate-errors")
    chrome_options.add_argument("--headless=new")
    service = Service(ChromeDriverManager().install())
    return webdriver.Chrome(service=service, options=chrome_options)

for url in URLS:
    driver = create_driver_with_proxy()
    try:
        driver.get(url)
        title = driver.title
        print(f"URL: {url} | Title: {title}")
        time.sleep(1)
    finally:
        driver.quit()
```

Each iteration creates a new driver instance, which forces a new connection through the rotating proxy endpoint, so each request typically arrives from a different IP. For high-volume jobs, you can instead use sticky sessions to keep the same IP across multiple pages within a single driver session.
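Sticky-session syntax varies by provider; many rotating residential services pin the exit IP by embedding a session token in the proxy username. The `-session-` separator below is an illustrative assumption, so check ResProxy's dashboard for the exact format:

```python
import uuid

def sticky_username(base_user, session_id=None):
    # Hypothetical username format: many providers keep the same exit IP
    # for as long as the same session token appears in the proxy username.
    # Verify the "-session-" separator against your provider's docs.
    session_id = session_id or uuid.uuid4().hex[:8]
    return f"{base_user}-session-{session_id}"

# Reusing a token keeps the IP across page loads; a new token rotates it.
print(sticky_username("your_username", "a1b2c3d4"))
```

You would then feed the resulting username into whichever authentication method you use, such as the extension-based approach shown earlier.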

Running Selenium in Headless Mode with Proxies

Headless mode runs Chrome without a visible window, which is essential for server-side scraping. Modern Chrome uses the `--headless=new` flag:

```python
chrome_options = Options()
chrome_options.add_argument("--headless=new")
chrome_options.add_argument("--no-sandbox")
chrome_options.add_argument("--disable-dev-shm-usage")
chrome_options.add_argument("--proxy-server=http://gate.resproxy.io:7777")
chrome_options.add_argument("--window-size=1920,1080")

# Set a realistic user agent
chrome_options.add_argument(
    "user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36"
)
```

Key tips for headless mode with proxies:

  • Always set a realistic user agent — the default headless user agent contains "HeadlessChrome", which is trivially detectable
  • Set a standard window size (1920x1080 or 1366x768) to avoid fingerprint anomalies
  • Use `--disable-blink-features=AutomationControlled` to remove the `navigator.webdriver` flag
  • Add `--no-sandbox` and `--disable-dev-shm-usage` when running in Docker or CI environments
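These tips can be collected into one helper. This is a sketch, not part of Selenium, and the user-agent string is just a current-looking example:

```python
def hardened_headless_args(proxy_url, width=1920, height=1080):
    """Chrome arguments combining the headless tips above."""
    ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
          "AppleWebKit/537.36 (KHTML, like Gecko) "
          "Chrome/124.0.0.0 Safari/537.36")
    return [
        "--headless=new",
        f"--proxy-server={proxy_url}",
        f"--window-size={width},{height}",
        f"user-agent={ua}",                               # hide "HeadlessChrome"
        "--disable-blink-features=AutomationControlled",  # drop navigator.webdriver
        "--no-sandbox",                                   # for Docker/CI
        "--disable-dev-shm-usage",                        # for Docker/CI
    ]

args = hardened_headless_args("http://gate.resproxy.io:7777")
```

Apply them with `for arg in args: chrome_options.add_argument(arg)` before building the driver.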

Selenium headless mode proxy configuration

Using Firefox with Proxies

While Chrome is the most popular choice, Firefox works equally well with Selenium proxies:

```python
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

firefox_options = Options()
firefox_options.set_preference("network.proxy.type", 1)
firefox_options.set_preference("network.proxy.http", "gate.resproxy.io")
firefox_options.set_preference("network.proxy.http_port", 7777)
firefox_options.set_preference("network.proxy.ssl", "gate.resproxy.io")
firefox_options.set_preference("network.proxy.ssl_port", 7777)
firefox_options.set_preference("network.proxy.no_proxies_on", "localhost,127.0.0.1")

driver = webdriver.Firefox(options=firefox_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
```

Firefox proxy configuration uses preferences rather than command-line arguments, giving you more granular control over proxy behavior.

Error Handling and Retry Logic

Production-grade Selenium scraping needs robust error handling:

```python
import time

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium.common.exceptions import WebDriverException, TimeoutException
from webdriver_manager.chrome import ChromeDriverManager

def scrape_with_retry(url, max_retries=3):
    for attempt in range(max_retries):
        driver = None
        try:
            chrome_options = Options()
            chrome_options.add_argument("--proxy-server=http://gate.resproxy.io:7777")
            chrome_options.add_argument("--headless=new")
            chrome_options.page_load_strategy = "normal"

            service = Service(ChromeDriverManager().install())
            driver = webdriver.Chrome(service=service, options=chrome_options)
            driver.set_page_load_timeout(30)

            driver.get(url)
            return driver.page_source

        except TimeoutException:
            print(f"Attempt {attempt + 1}: Timeout loading {url}")
        except WebDriverException as e:
            print(f"Attempt {attempt + 1}: WebDriver error: {e.msg}")
        finally:
            if driver:
                driver.quit()

        time.sleep(2 ** attempt)  # exponential backoff

    raise Exception(f"Failed to load {url} after {max_retries} attempts")
```

This pattern creates a fresh driver (and therefore a fresh proxy IP) on each retry. The exponential backoff prevents hammering the target site when it is under load.

Best Practices Summary

After years of running Selenium with proxies at scale, here are the practices that matter most:

  1. One driver per task — Create a new driver instance for each independent scraping task to get a fresh IP
  2. Set page load timeouts — Proxied connections can be slower; set a 30-second timeout and handle failures gracefully
  3. Use residential proxies — Datacenter IPs are increasingly detected by anti-bot systems; residential IPs from ResProxy's rotating pool have much higher success rates
  4. Rotate user agents alongside proxy rotation — Matching the same user agent to different IPs raises flags
  5. Add realistic delays — Random delays between 1-5 seconds mimic human browsing patterns
  6. Monitor success rates — Track your hit rate and switch providers if it drops below 90 percent
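For point 4, a minimal sketch of user-agent rotation: keep a small pool of realistic strings and draw a fresh one whenever you create a new driver. The strings below are examples, not an exhaustive or authoritative list:

```python
import random

USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
]

def random_ua_argument():
    # Pick a fresh user agent each time a new driver (and proxy IP) is
    # created, so the same UA string is not seen from many different IPs.
    return f"user-agent={random.choice(USER_AGENTS)}"

print(random_ua_argument())
```

Pass the result to `chrome_options.add_argument()` inside whatever helper builds your driver.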

Getting Started

If you are new to proxy-powered browser automation, start with the getting started guide to set up your ResProxy account and generate credentials. For the official Selenium documentation including advanced WebDriver configuration, visit selenium.dev.

With the right proxy setup, Selenium becomes a powerful tool for reliable, scalable web automation that can handle even the most protected websites.

Hieu Nguyen

Founder & CEO

Founder of ResProxy and JC Media Agency. Over 5 years of experience in proxy infrastructure, digital advertising, and SaaS product development. Building premium proxy solutions for businesses worldwide.