Where The Idea Came From
While working on a migration for a client running three high-traffic WordPress sites last month, I got an idea for writing up the NGINX WordPress configuration I keep reaching for. Their old Apache-and-mod_php setup was burning RAM and choking under concurrent requests, and the migration pushed median response times from 1.2 seconds down to around 180ms without us touching a single line of PHP.
This post covers the NGINX side — the nginx.conf tweaks, the server block, the PHP-FPM pool, and the FastCGI cache. Tuning MySQL, compiling PHP from source, and hardening the WordPress admin are all out of scope. For PHP-FPM internals, the upstream docs are solid and I won't repeat them here.
Notes and Scope
Versions used: NGINX 1.24.0, PHP-FPM 8.2, WordPress 6.4, Ubuntu 22.04 LTS. The config carries over to anything from NGINX 1.18 through 1.26 with one caveat: as of 1.25.1 the http2 parameter on the listen directive is deprecated in favor of a standalone "http2 on;" directive, though the old form still works.
What this post does not cover: firewall rules, OWASP security headers beyond HSTS, automated deployment (we use Ansible for that), and WordPress-side caching plugins.
Opinionated Take: Skip mod_php, Commit to PHP-FPM
I don’t deploy Apache for WordPress anymore, and I push clients toward the same. NGINX with PHP-FPM decouples the web server from the PHP interpreter, so a slow PHP request stops blocking static file delivery. On an engagement for a law firm we consult for, the previous vendor had tuned Apache’s MPM for months and still hit worker exhaustion under load. First day on NGINX, the symptom vanished.
That is not a dig at Apache — it is a deployment-model argument. You get better concurrency for the same hardware.
Core nginx.conf Settings
Start with the main file at /etc/nginx/nginx.conf. These are the values I touch on every deployment:
worker_processes auto;  # one per CPU core
pid /var/run/nginx.pid;

events {
    worker_connections 768;  # tune to server hardware
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_types text/plain text/css application/json application/javascript
               text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/sites-enabled/*;
}
I set worker_processes to auto rather than hard-coding 8 — same result on an 8-core box, correct result everywhere else. Gzip on the listed MIME types usually cuts response bodies by 60 to 80 percent.
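The ballpark is easy to sanity-check locally with the gzip CLI. A rough sketch (the sample file and its contents are made up; real ratios depend on the asset):

```shell
# Generate a repetitive CSS-like file and compare raw vs gzipped size.
yes 'body { margin: 0; padding: 0; color: #333; }' | head -n 500 > sample.css
orig=$(wc -c < sample.css)
comp=$(gzip -c sample.css | wc -c)
echo "original: ${orig} bytes, gzipped: ${comp} bytes"
rm sample.css
```

Highly repetitive text like minified CSS is close to a best case; JSON API responses land nearer the low end of the range.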
PHP-FPM Pool for WordPress
Create a dedicated pool at /etc/php/8.2/fpm/pool.d/wordpress.conf so WordPress is not sharing resources with anything else on the host:
[wordpress]
user = wordpress
group = wordpress
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
pm = dynamic
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 6
If NGINX runs on the same box, 127.0.0.1:9000 is fine. Switching to a Unix socket (/var/run/php/php8.2-fpm.sock) shaves off a little network-stack overhead — in my testing the gain is real but small, around 3 to 5 percent. If you do switch, update fastcgi_pass in the server block to match.
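A quick way to sanity-check pm.max_children is to divide the RAM you can afford to give PHP by the average worker footprint. The numbers below are placeholders, not measurements from this deployment:

```shell
# Hypothetical budget: 1 GiB for PHP-FPM, ~50 MiB per worker after warm-up.
# Measure your real per-worker RSS with: ps -o rss= -C php-fpm8.2
php_ram_mb=1024
avg_worker_mb=50
max_children=$((php_ram_mb / avg_worker_mb))
echo "pm.max_children = ${max_children}"
```

If the computed number is far from 20, adjust pm.max_children and the spare-server values proportionally.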
Server Block: Redirect and TLS
Two server blocks — one redirects HTTP to HTTPS, the other does the real work. Site root, PHP handoff, and the upload ceiling live here:
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    include ssl.conf;

    server_name .example.com;
    root /home/wordpress/www;
    index index.php;

    client_body_in_file_only clean;
    client_body_buffer_size 32K;
    client_max_body_size 300M;
    send_timeout 10s;

    # location blocks go here
}
The client_max_body_size of 300M is there because every WordPress site I have inherited eventually needs to upload a large media file. Set it once, forget it.
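NGINX is only half of the upload ceiling, though: PHP enforces its own limits, and the smaller value wins. Mirror the 300M in php.ini (these are the standard PHP directive names; the values are chosen here to match client_max_body_size):

```ini
; In /etc/php/8.2/fpm/php.ini -- PHP rejects uploads above these
; limits regardless of what NGINX allows
upload_max_filesize = 300M
post_max_size = 300M
```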
Location Blocks
Three location blocks matter — static assets, the WordPress rewrite, and the PHP handler:
location ~* ^.+\.(jpg|jpeg|png|gif|ico|css|js)$ {
    access_log off;
    expires max;
}

location / {
    try_files $uri $uri/ /index.php?q=$uri&$args;
}

location ~ \.php$ {
    if (!-e $request_filename) { return 404; }
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    # Ubuntu's fastcgi_params does not set SCRIPT_FILENAME, and without
    # it PHP-FPM cannot locate the script to execute
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_cache;
}
The try_files line is the WordPress permalink rewrite — same behavior Apache gets from .htaccess, but evaluated once by NGINX rather than on every request.
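For comparison, this is the stock rewrite block Apache re-evaluates from .htaccess on each request to get the same permalink behavior:

```apache
# Standard WordPress rewrite rules (what try_files replaces)
RewriteEngine On
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
```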
FastCGI Cache for Anonymous Traffic
Two pieces here. The zone definition has to live in the http context of nginx.conf, because fastcgi_cache_path is not allowed inside a server or location block:

fastcgi_cache_path /tmp/cache levels=1:2 keys_zone=phpcache:10m
                   inactive=30m max_size=500M;

The per-site directives go into /etc/nginx/fastcgi_cache so every WordPress site on the host can include them:

fastcgi_cache phpcache;
fastcgi_cache_key "$scheme$host$request_uri";
fastcgi_cache_min_uses 2;
fastcgi_cache_use_stale updating timeout;
fastcgi_cache_valid 200 301 302 30m;  # without a 200 entry, successful pages are never cached
fastcgi_cache_valid 404 1m;
fastcgi_cache_valid 500 502 504 5m;
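To verify the cache is actually doing something, expose its status as a response header in the server block. The header name X-Cache is my own convention; $upstream_cache_status is a standard NGINX variable:

```nginx
# Shows HIT, MISS, BYPASS, STALE, etc. in curl -I output
add_header X-Cache $upstream_cache_status;
```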
Caveat: FastCGI cache will serve stale content to logged-in users if you don’t add a bypass. The fix is a cookie check — skip the cache when wordpress_logged_in_* is present. If the site has no logged-in front-end users, you can skip the bypass entirely.
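A minimal sketch of that bypass, assuming the phpcache zone from above. The cookie check goes in the server block; the last two directives belong inside the PHP location:

```nginx
# Skip the cache whenever a WordPress login cookie is present
set $skip_cache 0;
if ($http_cookie ~* "wordpress_logged_in_") {
    set $skip_cache 1;
}

# inside location ~ \.php$
fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;
```

fastcgi_cache_bypass skips reading from the cache; fastcgi_no_cache skips writing to it. You want both, or a logged-in user's page ends up cached for everyone.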
Validate and Reload
After every change, test the config before reloading so a typo does not take the site down:
sudo nginx -t
sudo systemctl reload nginx
This dry-run-before-apply habit shows up in a lot of ops work. We covered the Windows side of it in tracing Windows boot and service init with Sysinternals, and the same mindset applied to release hygiene in our post on module manifest versioning.
Takeaway
This is the baseline we drop in on every WordPress NGINX engagement — tune worker counts and the FastCGI cache size to the hardware, but leave the structure alone. If you want us to roll this out on your infrastructure or audit what you already have running, reach out here.

