A reverse proxy is a server that sits in front of one or more origin servers: it accepts client requests, forwards each to a chosen backend, and returns the response as if it came from the origin itself. It hides internal topology and centralizes cross-cutting concerns. Most reverse proxies operate at layer 7, handling HTTP and TLS termination, though pure L4 TCP proxies exist. Unlike a forward proxy, which represents clients to the internet, a reverse proxy represents your services to clients.
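The forwarding loop above can be sketched in a few dozen lines of standard-library Python. This is a toy, not a production proxy: it handles only GET, uses ephemeral ports, and the handler and function names (`OriginHandler`, `demo`, and so on) are illustrative. The point is the shape: the client talks only to the proxy, which relays the request to the origin and the response back.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class OriginHandler(BaseHTTPRequestHandler):
    """Stands in for a backend service behind the proxy."""
    def do_GET(self):
        body = b"hello from origin"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, fmt, *args):
        pass  # keep demo output quiet

def make_proxy_handler(origin_port):
    class ProxyHandler(BaseHTTPRequestHandler):
        """Accepts the client request, forwards it to the origin,
        and relays the response as if it were its own."""
        def do_GET(self):
            url = f"http://127.0.0.1:{origin_port}{self.path}"
            with urllib.request.urlopen(url) as upstream:
                status = upstream.status
                body = upstream.read()
            self.send_response(status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        def log_message(self, fmt, *args):
            pass
    return ProxyHandler

def demo():
    # Bind both servers to ephemeral ports so the sketch is self-contained.
    origin = ThreadingHTTPServer(("127.0.0.1", 0), OriginHandler)
    proxy = ThreadingHTTPServer(
        ("127.0.0.1", 0), make_proxy_handler(origin.server_address[1]))
    for srv in (origin, proxy):
        threading.Thread(target=srv.serve_forever, daemon=True).start()
    try:
        # The client only ever sees the proxy's address.
        with urllib.request.urlopen(
                f"http://127.0.0.1:{proxy.server_address[1]}/") as resp:
            return resp.read().decode()
    finally:
        origin.shutdown()
        proxy.shutdown()

if __name__ == "__main__":
    print(demo())  # hello from origin
```

Everything a real proxy adds, from routing to retries to header rewriting, hangs off that forwarding step.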
Core functions include load balancing with health checks, using algorithms such as round-robin, least connections, and latency-aware schemes like EWMA. A reverse proxy can also handle retries, circuit breaking, connection pooling, compression, caching, authentication, rate limiting, WAF rules, and host- or path-based routing. It normalizes protocols and features, for example speaking HTTP/2 or HTTP/3 to clients while using HTTP/1.1 to backends, and supporting gRPC and WebSockets. Reverse proxies also form the ingress layer in Kubernetes and the data plane of service meshes.
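To make the algorithms concrete, here is a minimal least-connections picker: track in-flight requests per backend and route each new request to the least-loaded one. The class and the backend names are hypothetical; real balancers add locking, health checks, and weights.

```python
from collections import Counter

class LeastConnectionsBalancer:
    """Routes each request to the backend with the fewest in-flight
    requests; ties break in insertion order. Illustrative sketch only."""
    def __init__(self, backends):
        self.active = Counter({b: 0 for b in backends})

    def acquire(self):
        # Pick the backend with the smallest in-flight count.
        backend = min(self.active, key=self.active.__getitem__)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        # Call when the proxied request completes.
        self.active[backend] -= 1

lb = LeastConnectionsBalancer(["app-1", "app-2", "app-3"])
first = lb.acquire()   # all idle, tie -> "app-1"
second = lb.acquire()  # "app-1" busy -> "app-2"
lb.release(first)      # "app-1" finishes
third = lb.acquire()   # "app-1" is idle again
print(first, second, third)  # app-1 app-2 app-1
```

Round-robin is even simpler (cycle through the list), while EWMA-style schemes replace the raw in-flight count with a decayed average of observed latency per backend.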
Operational details matter. Preserve client identity with X-Forwarded-For or the PROXY protocol, and set X-Forwarded-Proto and Host correctly. Tune timeouts, buffering, and backpressure for streaming and WebSockets, and choose deliberately between session affinity and stateless tokens. Harden TLS with modern cipher suites, enable OCSP stapling and HSTS, and use weighted routing for canary or blue-green deployments. Avoid a single point of failure by running multiple proxy instances. Common options include Nginx, HAProxy, Envoy, Traefik, Apache httpd, and managed offerings such as AWS ALB, Cloudflare, and GCP's HTTP(S) Load Balancer.
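The identity-preserving headers can be sketched as a pure function that builds what the proxy sends upstream. The X-Forwarded-* names are the de facto convention the text mentions; the function name and example addresses are illustrative.

```python
def forwarded_headers(headers, client_ip, scheme, host):
    """Builds the upstream headers a proxy should send so backends can
    recover the real client IP, original scheme, and requested host.
    Illustrative sketch; real proxies also validate trusted upstreams."""
    out = dict(headers)
    prior = headers.get("X-Forwarded-For")
    # Append this hop's client address to any existing chain rather
    # than overwriting it, so multi-proxy paths stay visible.
    out["X-Forwarded-For"] = f"{prior}, {client_ip}" if prior else client_ip
    out["X-Forwarded-Proto"] = scheme  # scheme the client actually used
    out["Host"] = host                 # preserve the client-facing host
    return out

hdrs = forwarded_headers(
    {"X-Forwarded-For": "203.0.113.7"},  # an earlier proxy hop
    client_ip="198.51.100.2",
    scheme="https",
    host="example.com",
)
print(hdrs["X-Forwarded-For"])  # 203.0.113.7, 198.51.100.2
```

Backends should only trust this chain when the request arrived from a known proxy, since clients can send arbitrary X-Forwarded-For values themselves.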