Setting up rotating proxies in Python is straightforward with the requests library. Whether you are scraping product data, monitoring search rankings, or collecting market intelligence, rotating proxies ensure your requests come from different IP addresses — dramatically reducing the chance of getting blocked. In this tutorial, we will walk through everything from basic configuration to production-ready error handling and advanced rotation strategies.

Prerequisites
Before we start, make sure you have the following:
- Python 3.8+ installed on your system
- requests library (pip install requests)
- A proxy provider account — we will use ResProxy throughout this guide. Sign up at app.resproxy.io and grab your credentials from the dashboard.

Optional but recommended:
- urllib3 for connection pooling and retry logic
- python-dotenv for storing credentials securely in environment variables
Basic Setup
Install the requests library and configure your proxy credentials. The rotating proxy endpoint automatically assigns a new IP for each request.
```python
import requests

proxy_host = "gate.resproxy.io"
proxy_port = "7000"
proxy_user = "your_username"
proxy_pass = "your_password"

proxies = {
    "http": f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}",
    "https": f"http://{proxy_user}:{proxy_pass}@{proxy_host}:{proxy_port}",
}

response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
print(response.json())
# Output: {"origin": "185.xxx.xxx.xxx"} — a different IP each time
```
Each time you run this script, you will see a different IP address in the response. That is the rotating proxy at work — the provider automatically assigns a fresh IP from its pool for every connection.

Sending Multiple Requests with Rotation
For scraping jobs, you typically need to send hundreds or thousands of requests. Here is a pattern that rotates IPs automatically:
```python
import requests
import time

proxies = {
    "http": "http://user:pass@gate.resproxy.io:7000",
    "https": "http://user:pass@gate.resproxy.io:7000",
}

urls = [
    "https://example.com/page/1",
    "https://example.com/page/2",
    "https://example.com/page/3",
]

for url in urls:
    try:
        resp = requests.get(url, proxies=proxies, timeout=30)
        print(f"{url} — Status: {resp.status_code}")
        time.sleep(1)  # polite delay between requests
    except requests.exceptions.RequestException as e:
        print(f"Failed: {url} — {e}")
```
The time.sleep(1) adds a one-second delay between requests. This reduces the chance of triggering rate limiters on the target site. For heavily protected sites, increase the delay to 2-5 seconds.
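A fixed one-second delay is easy for anti-bot systems to fingerprint, since real users never click at perfectly regular intervals. A small variation on the loop above adds random jitter to each pause (the helper name polite_delay is ours, not part of any library):

```python
import random
import time

def polite_delay(base=1.0, jitter=0.5):
    """Sleep for base seconds plus a random extra of up to jitter
    seconds, so request timing does not look machine-regular."""
    delay = base + random.uniform(0, jitter)
    time.sleep(delay)
    return delay
```

Replace time.sleep(1) in the loop with polite_delay() and each pause lands somewhere between 1.0 and 1.5 seconds.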
Error Handling
Production scrapers must handle failures gracefully. The most common errors when using proxies are:
- 407 Proxy Authentication Required — your credentials are wrong or expired
- 429 Too Many Requests — the target site is rate limiting you
- 503 Service Unavailable — the target or proxy is temporarily overloaded
- ConnectionError / Timeout — network issues or proxy endpoint down
Here is a robust retry function with exponential backoff:
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry
import time

def create_session(proxies):
    session = requests.Session()
    session.proxies.update(proxies)
    retry_strategy = Retry(
        total=3,
        backoff_factor=2,
        status_forcelist=[429, 500, 502, 503, 504],
        raise_on_status=False,  # hand back the final response so raise_for_status() can inspect it
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    return session

def fetch_with_retry(session, url, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = session.get(url, timeout=30)
            response.raise_for_status()
            return response
        except requests.exceptions.HTTPError as e:
            if e.response.status_code == 407:
                raise Exception("Proxy auth failed — check credentials")
            if e.response.status_code == 429:
                wait = 2 ** (attempt + 1)
                print(f"Rate limited. Waiting {wait}s...")
                time.sleep(wait)
            else:
                print(f"HTTP {e.response.status_code} on attempt {attempt + 1}")
        except (requests.exceptions.ConnectionError, requests.exceptions.Timeout):
            print(f"Connection failed on attempt {attempt + 1}")
            time.sleep(2)
    return None
```
This function retries up to three times with exponential backoff (2s, 4s, 8s). It distinguishes between auth failures (which should stop immediately) and rate limits (which should wait and retry).
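For reference, the manual backoff arithmetic used above can be written out as a standalone schedule (a sketch independent of any network call; the function name is ours):

```python
def backoff_schedule(max_retries=3, base=2):
    # Mirrors wait = base ** (attempt + 1) from fetch_with_retry:
    # 2s after the first failure, 4s after the second, 8s after the third.
    return [base ** (attempt + 1) for attempt in range(max_retries)]

print(backoff_schedule())  # [2, 4, 8]
```

Doubling the wait on every attempt gives a struggling target (or an exhausted proxy pool) progressively more room to recover before you give up.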

Advanced: Session Sticky IPs
For multi-step workflows — like logging into a site, navigating to a dashboard, and downloading a report — you need the same IP across multiple requests. This is called a "sticky session."
Most proxy providers support sticky sessions by appending a session ID to your credentials:
```python
import requests
import random

session_id = random.randint(100000, 999999)

proxies = {
    "http": f"http://user-session-{session_id}:pass@gate.resproxy.io:7000",
    "https": f"http://user-session-{session_id}:pass@gate.resproxy.io:7000",
}

session = requests.Session()
session.proxies.update(proxies)

# All requests in this session use the same IP
session.get("https://example.com/login")
session.get("https://example.com/dashboard")
session.get("https://example.com/export")
```
When you change the session_id, you get a new sticky IP. This lets you control exactly when rotation happens.
Custom Rotation Logic
Sometimes you need more control over rotation — for example, rotating after a certain number of requests or when a specific error occurs:
```python
import requests
import random

class ProxyRotator:
    def __init__(self, user, password, host, port):
        self.user = user
        self.password = password
        self.host = host
        self.port = port
        self.request_count = 0
        self.rotate_every = 10
        self._new_session()

    def _new_session(self):
        sid = random.randint(100000, 999999)
        self.proxies = {
            "http": f"http://{self.user}-session-{sid}:{self.password}@{self.host}:{self.port}",
            "https": f"http://{self.user}-session-{sid}:{self.password}@{self.host}:{self.port}",
        }
        self.request_count = 0

    def get(self, url, **kwargs):
        if self.request_count >= self.rotate_every:
            self._new_session()
        kwargs.setdefault("timeout", 30)
        kwargs["proxies"] = self.proxies
        resp = requests.get(url, **kwargs)
        self.request_count += 1
        return resp

# Usage
rotator = ProxyRotator("user", "pass", "gate.resproxy.io", "7000")
for i in range(50):
    resp = rotator.get("https://httpbin.org/ip")
    print(resp.json()["origin"])
```
This rotator switches to a new IP every 10 requests. You can adjust rotate_every based on the target site's tolerance.
Testing Your Setup
Before running a full scraping job, verify your proxy configuration:
```python
import requests

proxies = {
    "http": "http://user:pass@gate.resproxy.io:7000",
    "https": "http://user:pass@gate.resproxy.io:7000",
}

# Test 1: Verify IP rotation
print("Testing IP rotation...")
ips = set()
for i in range(5):
    r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
    ip = r.json()["origin"]
    ips.add(ip)
    print(f"  Request {i+1}: {ip}")
print(f"Unique IPs: {len(ips)} out of 5 requests")

# Test 2: Verify geo-targeting
r = requests.get("https://ipinfo.io/json", proxies=proxies, timeout=30)
data = r.json()
print(f"Location: {data.get('city')}, {data.get('country')}")
```
You should see different IPs for each request. If you see the same IP repeatedly, check that your proxy endpoint is configured for rotating mode (not sticky).
Best Practices
- Store credentials in environment variables — never hardcode passwords in your scripts. Use python-dotenv or your OS's secret manager.
- Set timeouts on every request — a missing timeout can hang your script indefinitely. Use 30 seconds as a default.
- Add User-Agent headers — many sites block requests with the default Python requests User-Agent. Set a realistic browser User-Agent string.
- Log everything — track success rates, response times, and error types. This data helps you tune rotation intervals and delays.
- Respect robots.txt — check the target site's robots.txt before scraping. Ethical scraping builds a sustainable practice.
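The first three practices fit in a few lines. A minimal sketch, assuming your credentials live in PROXY_USER and PROXY_PASS environment variables (those variable names are our choice) and using the same endpoint as the earlier examples:

```python
import os
import requests

# Credentials come from the environment, never from source code.
proxy_user = os.environ.get("PROXY_USER", "")
proxy_pass = os.environ.get("PROXY_PASS", "")
proxy_url = f"http://{proxy_user}:{proxy_pass}@gate.resproxy.io:7000"

proxies = {"http": proxy_url, "https": proxy_url}

# A realistic browser User-Agent; the default "python-requests/x.y" is widely blocked.
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0 Safari/537.36"
}

# Every request gets an explicit timeout:
# requests.get(url, proxies=proxies, headers=headers, timeout=30)
```

With python-dotenv, a load_dotenv() call at the top of the script pulls the same variables from a local .env file during development.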
Learn more about what residential proxies are and view rotating proxy pricing. For a complete beginner guide, visit our getting started page.
See the official Python requests library docs for additional proxy configuration options.
FAQ
Do I need a different library for SOCKS5 proxies?
Yes. Out of the box, requests only speaks HTTP(S) to proxies. For SOCKS5, install the socks extra (pip install requests[socks]) and change your proxy URL scheme to socks5://.
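With the extra installed, the only change from the earlier examples is the URL scheme. A sketch (using socks5h:// instead would also route DNS resolution through the proxy):

```python
# Requires: pip install requests[socks]
socks_url = "socks5://user:pass@gate.resproxy.io:7000"

proxies = {
    "http": socks_url,
    "https": socks_url,
}

# Then use it exactly as before:
# requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
```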
How many concurrent requests can I run?
This depends on your proxy plan and the target site. Most rotating proxy plans support hundreds of concurrent connections. Use Python's concurrent.futures.ThreadPoolExecutor for parallel requests.
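A minimal sketch of the ThreadPoolExecutor pattern — fetch below is a stub standing in for a proxied requests.get call, so the shape is visible without a live proxy:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(url):
    # In a real scraper this would be something like:
    #   return requests.get(url, proxies=proxies, timeout=30)
    return f"fetched {url}"

urls = [f"https://example.com/page/{i}" for i in range(1, 6)]

# Submit all URLs to a pool of worker threads; each worker's
# connection gets its own IP from the rotating endpoint.
with ThreadPoolExecutor(max_workers=5) as pool:
    futures = {pool.submit(fetch, url): url for url in urls}
    results = [f.result() for f in as_completed(futures)]

print(len(results))  # 5
```

Keep max_workers at or below your plan's concurrency limit, and remember that parallel requests multiply your hit rate on the target site, so delays still matter per worker.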
Why am I getting 407 errors?
A 407 error means proxy authentication failed. Double-check your username, password, and proxy endpoint. Make sure there are no extra spaces in your credentials.
Should I use sessions or individual requests?
Use requests.Session() for better performance — it reuses TCP connections and handles cookies automatically. For rotating proxies, a session still gets a new IP per request unless you enable sticky mode.