
Python Requests Proxy — How to Set Up & Rotate Proxies

Hieu Nguyen

The Python requests library is the most popular HTTP client in the Python ecosystem, and it has first-class proxy support. Whether you are building a web scraper, monitoring prices, or collecting data from APIs, understanding how to route your requests through proxies is fundamental to any production data pipeline.

This guide covers everything from basic proxy setup to advanced rotation strategies, SOCKS5 configuration, session management, and error handling patterns used in real-world scraping systems.

Python requests proxy configuration

Basic Proxy Setup

The requests library accepts a proxies dictionary that maps protocol schemes to proxy URLs:

```python
import requests

proxies = {
    "http": "http://gate.resproxy.io:7777",
    "https": "http://gate.resproxy.io:7777",
}

response = requests.get("https://httpbin.org/ip", proxies=proxies)
print(response.json())  # Output: {"origin": "185.xxx.xxx.xxx"}
```

The dictionary keys http and https tell requests which proxy to use for each protocol. In most cases, you will use the same proxy for both.
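Beyond plain scheme keys, the proxies dictionary also accepts scheme://hostname keys to route only a specific host through a given proxy. One way to check which proxy requests would pick for a URL is its select_proxy helper; the hostnames and ports below are illustrative:

```python
import requests
from requests.utils import select_proxy

proxies = {
    # Default proxy for all plain-HTTP traffic
    "http": "http://gate.resproxy.io:7777",
    # Host-specific entry: only this HTTPS host goes through port 8888
    "https://api.example.com": "http://gate.resproxy.io:8888",
}

# The host-specific key wins for matching URLs
print(select_proxy("https://api.example.com/v1/data", proxies))

# Other hosts fall back to the scheme-level key, or no proxy at all
print(select_proxy("http://other-site.com/page", proxies))
```

This is handy when only one target site needs a residential IP and the rest of your traffic can go direct or through a cheaper gateway.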

Proxy Authentication

Add credentials directly to the proxy URL:

```python
import requests

proxies = {
    "http": "http://username:password@gate.resproxy.io:7777",
    "https": "http://username:password@gate.resproxy.io:7777",
}

response = requests.get("https://httpbin.org/ip", proxies=proxies)
print(response.json())
```

If your password contains special characters like @ or :, URL-encode them:

```python
from urllib.parse import quote

password = "p@ss:word!"
encoded_password = quote(password, safe="")
proxy_url = f"http://username:{encoded_password}@gate.resproxy.io:7777"

proxies = {"http": proxy_url, "https": proxy_url}
```

Python proxy authentication and sessions

Using Sessions for Persistent Proxy Configuration

When making multiple requests, use a Session object to avoid repeating the proxy configuration:

```python
import requests

session = requests.Session()
session.proxies = {
    "http": "http://username:password@gate.resproxy.io:7777",
    "https": "http://username:password@gate.resproxy.io:7777",
}

# All requests through this session use the proxy automatically
response1 = session.get("https://httpbin.org/ip")
response2 = session.get("https://httpbin.org/headers")
response3 = session.get("https://httpbin.org/user-agent")

print(response1.json())
```

Sessions also maintain cookies and connection pooling, which improves performance when making many requests to the same host.
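Sessions are also a natural place to attach transport-level behavior once. As a sketch (the gateway URL is a placeholder, and make_proxy_session is a helper name introduced here), a factory that pairs a proxy with automatic retries for transient failures:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_proxy_session(proxy_url: str, retries: int = 3) -> requests.Session:
    """Build a Session preconfigured with a proxy and transport-level retries."""
    session = requests.Session()
    session.proxies = {"http": proxy_url, "https": proxy_url}

    # Retry on connection errors and common transient 5xx responses,
    # with exponential backoff between attempts
    retry = Retry(
        total=retries,
        backoff_factor=0.5,
        status_forcelist=[500, 502, 503, 504],
    )
    adapter = HTTPAdapter(max_retries=retry)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session

session = make_proxy_session("http://username:password@gate.resproxy.io:7777")
```

Because the retries live in the mounted adapter, every request made through the session inherits them without per-call boilerplate.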

Environment Variables

You can configure proxies globally using environment variables, which requests picks up automatically:

```python
import os

os.environ["HTTP_PROXY"] = "http://username:password@gate.resproxy.io:7777"
os.environ["HTTPS_PROXY"] = "http://username:password@gate.resproxy.io:7777"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

import requests

# Now all requests use the proxy without explicit configuration
response = requests.get("https://httpbin.org/ip")
```

This is useful for containerized deployments where you want to configure proxies at the infrastructure level.
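To confirm what requests will actually resolve from those variables, including NO_PROXY exclusions, you can query its environment helper directly. The gateway URL below is a placeholder, and the output depends on whatever else is set in your environment:

```python
import os

import requests.utils

os.environ["HTTPS_PROXY"] = "http://username:password@gate.resproxy.io:7777"
os.environ["NO_PROXY"] = "localhost,127.0.0.1"

# For a normal host, requests resolves the proxy mapping from the environment
print(requests.utils.get_environ_proxies("https://httpbin.org/ip"))

# An excluded host returns an empty mapping, so the proxy is bypassed
print(requests.utils.get_environ_proxies("http://localhost:8000/health"))
```

This is a quick sanity check when a container seems to ignore its proxy settings.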

Rotating Proxies

For web scraping at scale, you often need a different IP for each request to avoid rate limits and blocks. Here are three approaches:

Approach 1: ResProxy's Rotating Endpoint

This is the simplest method: ResProxy's gateway automatically assigns a new IP to each connection.

```python
import requests

proxies = {
    "http": "http://username:password@gate.resproxy.io:7777",
    "https": "http://username:password@gate.resproxy.io:7777",
}

urls = [
    "https://httpbin.org/ip",
    "https://httpbin.org/ip",
    "https://httpbin.org/ip",
]

for url in urls:
    # Each request gets a new IP automatically
    response = requests.get(url, proxies=proxies, timeout=30)
    print(f"IP: {response.json()['origin']}")
```

Approach 2: Manual Rotation from a Proxy List

If you have a list of proxy addresses, rotate through them:

```python
import itertools
import random

import requests

PROXY_LIST = [
    "http://user:pass@proxy1.resproxy.io:7777",
    "http://user:pass@proxy2.resproxy.io:7777",
    "http://user:pass@proxy3.resproxy.io:7777",
    "http://user:pass@proxy4.resproxy.io:7777",
    "http://user:pass@proxy5.resproxy.io:7777",
]

# Round-robin rotation
proxy_cycle = itertools.cycle(PROXY_LIST)

def get_with_rotation(url):
    proxy = next(proxy_cycle)
    proxies = {"http": proxy, "https": proxy}
    return requests.get(url, proxies=proxies, timeout=30)

# Random rotation
def get_with_random_proxy(url):
    proxy = random.choice(PROXY_LIST)
    proxies = {"http": proxy, "https": proxy}
    return requests.get(url, proxies=proxies, timeout=30)
```

Approach 3: Smart Rotation with Health Tracking

Track proxy health and avoid failing proxies:

```python
import random
import time
from collections import defaultdict

import requests

class ProxyRotator:
    def __init__(self, proxy_list):
        self.proxies = proxy_list
        self.failures = defaultdict(int)
        self.last_used = {}
        self.max_failures = 3
        self.cooldown = 60  # seconds

    def get_proxy(self):
        available = [
            p for p in self.proxies
            if self.failures[p] < self.max_failures
        ]
        if not available:
            # Reset failures after cooldown
            self.failures.clear()
            available = self.proxies

        proxy = random.choice(available)
        self.last_used[proxy] = time.time()
        return proxy

    def report_success(self, proxy):
        self.failures[proxy] = max(0, self.failures[proxy] - 1)

    def report_failure(self, proxy):
        self.failures[proxy] += 1

    def fetch(self, url, max_retries=3):
        for attempt in range(max_retries):
            proxy = self.get_proxy()
            proxies = {"http": proxy, "https": proxy}
            try:
                response = requests.get(url, proxies=proxies, timeout=30)
                if response.status_code == 200:
                    self.report_success(proxy)
                    return response
                else:
                    self.report_failure(proxy)
            except requests.RequestException:
                self.report_failure(proxy)

            time.sleep(2 ** attempt)

        raise Exception(f"Failed to fetch {url} after {max_retries} retries")

# Usage
rotator = ProxyRotator([
    "http://user:pass@gate.resproxy.io:7777",
    "http://user:pass@gate.resproxy.io:7778",
    "http://user:pass@gate.resproxy.io:7779",
])

response = rotator.fetch("https://example.com/data")
```

SOCKS5 proxy with Python requests

SOCKS5 Proxy Support

To use SOCKS5 proxies with requests, install it with the socks extra, which pulls in the PySocks dependency:

```bash
pip install "requests[socks]"
```

Then configure the proxy with the socks5:// or socks5h:// scheme:

```python
import requests

proxies = {
    "http": "socks5h://username:password@gate.resproxy.io:1080",
    "https": "socks5h://username:password@gate.resproxy.io:1080",
}

response = requests.get("https://httpbin.org/ip", proxies=proxies)
print(response.json())
```

Use socks5h:// (with the "h") to have the proxy handle DNS resolution, which prevents DNS leaks and is more secure.

Error Handling

Production scraping code must handle proxy-related errors gracefully:

```python
import time

import requests
from requests.exceptions import (
    ProxyError,
    ConnectTimeout,
    ReadTimeout,
    ConnectionError,
)

def robust_fetch(url, proxies, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = requests.get(
                url,
                proxies=proxies,
                timeout=(10, 30),  # (connect_timeout, read_timeout)
                headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
            )
        except ConnectTimeout:
            print(f"Connection timeout on attempt {attempt + 1}")
        except ReadTimeout:
            print(f"Read timeout on attempt {attempt + 1}")
        except ProxyError as e:
            print(f"Proxy error: {e}")
        except ConnectionError:
            print(f"Connection failed on attempt {attempt + 1}")
        else:
            if response.status_code == 200:
                return response
            elif response.status_code == 407:
                # Bad credentials will not fix themselves, so fail immediately
                raise ProxyError("Authentication failed; check credentials")
            elif response.status_code == 429:
                wait = 2 ** (attempt + 2)
                print(f"Rate limited, waiting {wait}s...")
                time.sleep(wait)
            else:
                print(f"HTTP {response.status_code} on attempt {attempt + 1}")

        time.sleep(2 ** attempt)

    raise Exception(f"Failed to fetch {url} after {max_retries} attempts")
```

The tuple timeout (10, 30) sets a 10-second connection timeout and a 30-second read timeout separately, which is more robust than a single timeout value.

Concurrent Requests with Proxies

For maximum throughput, use concurrent.futures to send multiple proxied requests in parallel:

```python
import requests
from concurrent.futures import ThreadPoolExecutor, as_completed

proxies = {
    "http": "http://user:pass@gate.resproxy.io:7777",
    "https": "http://user:pass@gate.resproxy.io:7777",
}

urls = [f"https://example.com/product/{i}" for i in range(1, 101)]

def fetch(url):
    try:
        r = requests.get(url, proxies=proxies, timeout=30)
        return {"url": url, "status": r.status_code, "size": len(r.content)}
    except Exception as e:
        return {"url": url, "error": str(e)}

with ThreadPoolExecutor(max_workers=10) as executor:
    futures = {executor.submit(fetch, url): url for url in urls}
    for future in as_completed(futures):
        result = future.result()
        print(result)
```

Keep max_workers between 5 and 20 to avoid overwhelming the proxy or target server.

Best Practices

  1. Always set timeouts — Never make a request without a timeout; hanging connections waste proxy bandwidth
  2. Use sessions for repeated requests to the same host — connection pooling improves performance
  3. URL-encode passwords containing special characters to prevent parsing errors
  4. Use socks5h:// for SOCKS5 to prevent DNS leaks
  5. Track proxy health and avoid failing proxies in rotation logic
  6. Add realistic headers (User-Agent, Accept-Language) alongside your proxy to reduce detection
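Several of these practices can be combined in a small helper. This is a sketch, not a definitive implementation: the gateway hostname is a placeholder, and build_proxy_url and hardened_get are helper names introduced here.

```python
import requests
from urllib.parse import quote

# Realistic default headers (practice 6); values are illustrative
DEFAULT_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-US,en;q=0.9",
}

def build_proxy_url(username: str, password: str, host: str, port: int) -> str:
    """URL-encode credentials so characters like @ or : do not break parsing (practice 3)."""
    return f"http://{quote(username, safe='')}:{quote(password, safe='')}@{host}:{port}"

def hardened_get(session: requests.Session, url: str) -> requests.Response:
    """GET with explicit connect/read timeouts and realistic headers (practices 1 and 6)."""
    return session.get(url, headers=DEFAULT_HEADERS, timeout=(10, 30))

# Session reuse gives connection pooling for repeated requests (practice 2)
proxy_url = build_proxy_url("username", "p@ss:word!", "gate.resproxy.io", 7777)
session = requests.Session()
session.proxies = {"http": proxy_url, "https": proxy_url}
```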

Getting Started

Set up your ResProxy credentials through the getting started guide and start making proxied requests in minutes. Use rotating residential proxies for the highest success rates on protected websites.

For the full proxy configuration reference, see the official Python requests documentation.

Hieu Nguyen

Founder & CEO

Founder of ResProxy and JC Media Agency. Over 5 years of experience in proxy infrastructure, digital advertising, and SaaS product development. Building premium proxy solutions for businesses worldwide.