Async PHP in Production: Fibers, ReactPHP, and Swoole Demystified
PHP can handle 10,000 concurrent connections. Here’s how.
PHP’s least understood superpower
Three days. That’s how long I spent optimizing an API aggregation service that called 15 external APIs sequentially. The response time? A painful 2.1 seconds on average. Then I discovered async PHP, and that same service now responds in 520 milliseconds. That’s roughly a 4x speedup without touching the business logic.
If you still think PHP is that slow, blocking, synchronous language from the LAMP stack era, you’re in for a surprise. In 2025, async PHP is no longer the exception. With Fibers in core since PHP 8.1, plus mature ecosystems like ReactPHP, Swoole, and Amp, PHP can drive low-latency APIs, WebSockets, and streaming pipelines without turning your codebase into callback hell.
Let me show you how async PHP works in production, when to use each approach, and how to avoid the pitfalls that will bite you at 3 AM.
The PHP Performance Revolution Nobody Talks About
Remember when handling concurrent operations in PHP meant either spinning up multiple processes (memory nightmare) or using threads (complexity nightmare)? Those days are over.
Here’s what async PHP can do today:
- Process 200 RSS feeds in 2.3 seconds instead of 111 seconds (roughly 48x faster)
- Scrape 500+ web pages in the time it used to take to process 100
- Handle 10x more concurrent users while maintaining consistent response times
- Run WebSocket servers that keep thousands of connections alive simultaneously
The secret? Cooperative multitasking through Fibers and event loops. Let’s break down what that actually means.
Understanding PHP Fibers: The Foundation
PHP 8.1 introduced Fibers as full-stack, interruptible functions. Think of them as lightweight threads that you can pause and resume at will, but without the complexity of true parallelism.
Here’s the crucial difference: Fibers are about concurrency, not parallelism. You’re not running multiple CPU-bound tasks simultaneously. You’re efficiently juggling multiple I/O-bound operations while one waits, another runs.
Your First Fiber: The “Aha” Moment
<?php
$fiber = new Fiber(function (): string {
echo "Fiber started\n";
// Suspend execution and return control
$value = Fiber::suspend('paused');
echo "Fiber resumed with: {$value}\n";
return 'completed';
});
// Start the fiber
$result = $fiber->start();
echo "Main: Fiber returned '{$result}'\n";
// Do other work here
echo "Main: Doing other work...\n";
// Resume the fiber; resume() returns null once the fiber terminates,
// so the return value must be fetched with getReturn()
$fiber->resume('new data');
$final = $fiber->getReturn();
echo "Main: Fiber finished with '{$final}'\n";
Output:
Fiber started
Main: Fiber returned 'paused'
Main: Doing other work...
Fiber resumed with: new data
Main: Fiber finished with 'completed'
What just happened? The Fiber suspended itself, gave control back to the main program, then resumed later with new data. This is the foundation of async PHP.
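To see why this primitive matters, here’s a minimal sketch (plain PHP 8.1+, no libraries; the function and task names are my own) of a round-robin scheduler that interleaves suspended fibers. This is the same idea, stripped to the bone, that every async framework builds on:

```php
<?php
// Minimal cooperative scheduler: each fiber suspends after a unit of
// work, and the loop keeps resuming whichever fibers are still alive.
function runAll(Fiber ...$fibers): void
{
    foreach ($fibers as $fiber) {
        $fiber->start();
    }
    while ($fibers = array_filter($fibers, fn (Fiber $f) => !$f->isTerminated())) {
        foreach ($fibers as $fiber) {
            $fiber->resume(); // give each live fiber another time slice
        }
    }
}

$task = fn (string $name, int $steps) => function () use ($name, $steps) {
    for ($i = 1; $i <= $steps; $i++) {
        echo "{$name}: step {$i}\n";
        Fiber::suspend(); // yield control back to the scheduler
    }
};

runAll(new Fiber($task('A', 2)), new Fiber($task('B', 2)));
// Prints A and B interleaved: A: step 1, B: step 1, A: step 2, B: step 2
```

Real event loops replace the busy `while` with I/O readiness polling, but the suspend/resume dance is identical.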
Memory Efficiency: The Game Changer
Traditional threads consume 1-2MB of stack each. Fibers? About 4KB each. That’s roughly a 250-500x difference. Suddenly creating hundreds of concurrent operations becomes feasible without worrying about memory exhaustion.
Each fiber gets its own call stack but shares the same memory space. If you load a class in a fiber, it’s loaded into the process heap, not duplicated per fiber.
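You can ballpark this yourself with plain PHP. One caveat, which is my own measurement note rather than anything official: `memory_get_usage()` only sees the PHP heap, while each fiber also reserves a C stack outside it (see the `fiber.stack_size` ini setting, committed lazily by the OS), so real resident memory is somewhat higher than this probe reports:

```php
<?php
// Ballpark probe: PHP-heap overhead of many suspended fibers.
$before = memory_get_usage();
$fibers = [];
for ($i = 0; $i < 1000; $i++) {
    $fiber = new Fiber(function (): void {
        Fiber::suspend(); // park here, keeping the fiber's stack alive
    });
    $fiber->start();
    $fibers[] = $fiber;
}
$perFiberKb = (memory_get_usage() - $before) / count($fibers) / 1024;
printf("%d suspended fibers, ~%.1f KB heap each\n", count($fibers), $perFiberKb);

// Let every fiber run to completion so nothing outlives the measurement
foreach ($fibers as $fiber) {
    $fiber->resume();
}
```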
ReactPHP: Pure PHP Event-Driven Power
ReactPHP is the original async PHP solution. It’s pure PHP (no extensions required), battle-tested, and surprisingly fast. The architecture follows Node.js’s event loop model.
The Event Loop Explained
ReactPHP’s event loop is the heartbeat of your async application. It continuously checks for I/O operations that are ready, executes their callbacks, and moves to the next operation.
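Stripped to its essence, an event loop is just a loop over pending work. Here’s a toy, timer-only version in plain PHP (class and method names are mine, not ReactPHP’s) that captures the shape of what ReactPHP’s loop does; a real loop would also poll sockets for readiness instead of sleeping:

```php
<?php
// Toy timer-only event loop: run each callback once its deadline passes.
final class TinyLoop
{
    /** @var array<int, array{float, callable}> */
    private array $timers = [];

    public function addTimer(float $delaySeconds, callable $callback): void
    {
        $this->timers[] = [microtime(true) + $delaySeconds, $callback];
    }

    public function run(): void
    {
        while ($this->timers) {
            $now = microtime(true);
            foreach ($this->timers as $i => [$due, $callback]) {
                if ($due <= $now) {
                    unset($this->timers[$i]);
                    $callback();
                }
            }
            usleep(1000); // a real loop would block in stream_select() here
        }
    }
}

$loop = new TinyLoop();
$loop->addTimer(0.02, fn () => print("second\n"));
$loop->addTimer(0.01, fn () => print("first\n"));
$loop->run(); // fires "first" then "second", regardless of insertion order
```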
<?php
use React\EventLoop\Loop;
use React\Http\Browser;
use function React\Async\async;
use function React\Async\await;
require 'vendor/autoload.php';
// Create async function
$fetchUrls = async(function (array $urls): array {
$browser = new Browser();
$results = [];
// Start all requests concurrently
$promises = [];
foreach ($urls as $name => $url) {
$promises[$name] = $browser->get($url);
}
// Wait for all to complete
try {
$responses = await(\React\Promise\all($promises));
foreach ($responses as $name => $response) {
$results[$name] = [
'status' => $response->getStatusCode(),
'size' => $response->getBody()->getSize(),
];
}
} catch (Exception $e) {
// Cancel pending promises on error
foreach ($promises as $promise) {
$promise->cancel();
}
throw $e;
}
return $results;
});
// Execute
$promise = $fetchUrls([
'google' => 'https://www.google.com',
'github' => 'https://github.com',
'reddit' => 'https://www.reddit.com',
]);
$promise->then(
function (array $results) {
echo "Successfully fetched " . count($results) . " URLs:\n";
foreach ($results as $name => $data) {
echo "- {$name}: {$data['status']} ({$data['size']} bytes)\n";
}
},
function (Exception $e) {
echo "Error: {$e->getMessage()}\n";
}
);
This code fires off three HTTP requests concurrently. In traditional synchronous PHP, the total time would be the sum of all three response times; here it’s roughly the time of the slowest one.
Production Pattern: HTTP Client Pool
Real-world applications need rate limiting, retry logic, and error handling. Here’s a production-ready pattern:
<?php
use React\Http\Browser;
use React\Promise\PromiseInterface;
use function React\Async\async;
use function React\Async\await;
use function React\Promise\all;
class AsyncHttpPool
{
private Browser $browser;
private int $concurrency;
public function __construct(int $concurrency = 10)
{
$this->browser = new Browser();
$this->concurrency = $concurrency;
}
/**
* Fetch URLs in chunks to avoid overwhelming the server
*/
public function fetchAll(array $urls): PromiseInterface
{
return async(function () use ($urls) {
$chunks = array_chunk($urls, $this->concurrency, true);
$allResults = [];
foreach ($chunks as $chunk) {
$promises = [];
foreach ($chunk as $name => $url) {
$promises[$name] = $this->fetchWithRetry($url);
}
try {
$results = await(all($promises));
$allResults = array_merge($allResults, $results);
} catch (Exception $e) {
// Continue with next chunk even if this one fails
error_log("Chunk failed: {$e->getMessage()}");
}
// Small delay between chunks to be nice to servers
await(\React\Promise\Timer\sleep(0.1));
}
return $allResults;
})();
}
/**
* Retry failed requests up to 3 times
*/
private function fetchWithRetry(string $url, int $attempt = 1): PromiseInterface
{
return async(function () use ($url, $attempt) {
try {
$response = await($this->browser->get($url));
return $response->getBody()->getContents();
} catch (Exception $e) {
if ($attempt < 3) {
// Exponential backoff
await(\React\Promise\Timer\sleep(pow(2, $attempt - 1)));
return await($this->fetchWithRetry($url, $attempt + 1));
}
throw $e;
}
})();
}
}
// Usage
$pool = new AsyncHttpPool(concurrency: 5);
$promise = $pool->fetchAll([
'api1' => 'https://api.example.com/endpoint1',
'api2' => 'https://api.example.com/endpoint2',
// ... hundreds more
]);
This pattern processes URLs in chunks of 5, implements retry logic with exponential backoff, and gracefully handles failures without stopping the entire batch.
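The backoff math in fetchWithRetry is worth making explicit: with a base of one second, attempt n waits 2^(n-1) seconds before retrying. As a tiny standalone helper (the function name is mine):

```php
<?php
// Exponential backoff: the delay doubles with each failed attempt.
function backoffDelay(int $attempt, float $baseSeconds = 1.0): float
{
    return $baseSeconds * (2 ** ($attempt - 1));
}

foreach ([1, 2, 3] as $attempt) {
    printf("attempt %d: wait %.1fs\n", $attempt, backoffDelay($attempt));
}
// attempt 1: wait 1.0s
// attempt 2: wait 2.0s
// attempt 3: wait 4.0s
```

Many production systems also add random jitter to these delays so that a fleet of clients doesn’t retry in lockstep.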
Swoole: C-Extension Performance Beast
Swoole is a high-performance, coroutine-based extension written in C++. It’s significantly faster than pure PHP solutions but requires compilation and has different deployment considerations.
The Swoole Advantage: Built-In HTTP Server
Unlike ReactPHP, where you assemble an HTTP server from library components in a CLI process, Swoole ships a production-ready HTTP server with worker processes, coroutine support, and automatic connection management.
<?php
use Swoole\Http\Server;
use Swoole\Http\Request;
use Swoole\Http\Response;
use Swoole\Coroutine as Co;
$server = new Server('0.0.0.0', 9501);
$server->set([
'worker_num' => 4, // Number of worker processes
'hook_flags' => SWOOLE_HOOK_ALL, // Enable coroutine hooks
]);
$server->on('request', function (Request $request, Response $response) {
// All I/O is now non-blocking thanks to hooks
$results = [];
// Fire off multiple requests concurrently using coroutines
Co::join([
go(function () use (&$results) {
$results['weather'] = file_get_contents(
'https://api.weather.com/current'
);
}),
go(function () use (&$results) {
$results['news'] = file_get_contents(
'https://api.news.com/latest'
);
}),
go(function () use (&$results) {
// Even database queries are non-blocking
$pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');
$stmt = $pdo->query('SELECT * FROM users LIMIT 10');
$results['users'] = $stmt->fetchAll();
}),
]);
$response->header('Content-Type', 'application/json');
$response->end(json_encode([
'status' => 'success',
'data' => array_map(function($v) {
return is_string($v) ? strlen($v) . ' bytes' : count($v) . ' rows';
}, $results),
'timestamp' => time(),
]));
});
$server->start();
What’s happening here? The go() function creates a coroutine and returns its coroutine ID. Co::join() waits for all of those coroutines to complete. Thanks to SWOOLE_HOOK_ALL, even blocking functions like file_get_contents() and PDO queries become non-blocking automatically.
Swoole’s Runtime Hooks: The Magic
This is where Swoole shines. Runtime hooks automatically convert synchronous blocking I/O into non-blocking coroutine equivalents:
<?php
use Swoole\Runtime;
use Swoole\Coroutine as Co;
// Enable coroutine hooks for everything
Runtime::enableCoroutine(SWOOLE_HOOK_ALL);
Co\run(function() {
// All these operations are now non-blocking!
// File operations
for ($i = 0; $i < 100; $i++) {
go(function () use ($i) {
$content = file_get_contents("/tmp/file_{$i}.txt");
file_put_contents("/tmp/processed_{$i}.txt", strtoupper($content));
});
}
// Database operations with native PHP PDO
for ($i = 0; $i < 50; $i++) {
go(function () {
$pdo = new PDO('mysql:host=127.0.0.1;dbname=test', 'root', 'pass');
$stmt = $pdo->query('SELECT * FROM users LIMIT 10');
$results = $stmt->fetchAll();
});
}
// Redis operations with native PHP Redis extension
for ($i = 0; $i < 50; $i++) {
go(function () {
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->set('key', 'value');
$value = $redis->get('key');
});
}
});
Without changing a single line of your I/O code, Swoole makes it non-blocking. This is incredibly powerful for migrating existing applications to async.
WebSocket Server: Real-Time Communication
Building a production WebSocket server in vanilla PHP is painful. With Swoole, it’s straightforward:
<?php
use Swoole\WebSocket\Server;
use Swoole\Http\Request;
use Swoole\WebSocket\Frame;
$server = new Server('0.0.0.0', 9502);
$server->set([
'worker_num' => 2,
]);
// Track connected clients
$clients = [];
$server->on('open', function (Server $server, Request $request) use (&$clients) {
$clients[$request->fd] = [
'connected_at' => time(),
'ip' => $request->server['remote_addr'],
];
echo "Client #{$request->fd} connected from {$request->server['remote_addr']}\n";
$server->push($request->fd, json_encode([
'type' => 'welcome',
'client_id' => $request->fd,
'message' => 'Connected to WebSocket server',
]));
});
$server->on('message', function (Server $server, Frame $frame) use (&$clients) {
echo "Received from #{$frame->fd}: {$frame->data}\n";
$data = json_decode($frame->data, true) ?? [];
if (($data['type'] ?? null) === 'broadcast') {
// Broadcast to all connected clients
foreach ($clients as $fd => $info) {
if ($server->isEstablished($fd)) {
$server->push($fd, json_encode([
'type' => 'broadcast',
'from' => $frame->fd,
'message' => $data['message'],
'timestamp' => time(),
]));
}
}
} else {
// Echo back to sender
$server->push($frame->fd, json_encode([
'type' => 'echo',
'message' => $data['message'],
]));
}
});
$server->on('close', function (Server $server, $fd) use (&$clients) {
echo "Client #{$fd} disconnected\n";
unset($clients[$fd]);
});
$server->start();
This server can handle thousands of concurrent WebSocket connections on a single process. Try doing that with traditional PHP-FPM.
Amp: The Fiber-First Framework
Amp is built from the ground up with PHP 8.1+ Fibers in mind. It provides the cleanest async/await syntax of the bunch.
Amp’s Clean Syntax
<?php
use Amp\Http\Client\HttpClientBuilder;
use Amp\Http\Client\Request;
use function Amp\async;
use function Amp\Future\await;
require 'vendor/autoload.php';
$client = HttpClientBuilder::buildDefault();
// Create multiple concurrent requests
$futures = [
async(fn() => $client->request(new Request('https://httpbin.org/delay/1'))),
async(fn() => $client->request(new Request('https://httpbin.org/delay/2'))),
async(fn() => $client->request(new Request('https://httpbin.org/delay/3'))),
];
// Wait for all to complete
$responses = await($futures);
foreach ($responses as $i => $response) {
echo "Request " . ($i + 1) . ": " . $response->getStatus() . "\n";
}
Amp uses async() to create concurrent operations and await() to wait for results. It’s the most “modern”-feeling syntax if you’re coming from JavaScript or another async-first language.
Concurrency Control with Semaphores
Semaphores are perfect for controlling concurrency levels:
<?php
use Amp\Http\Client\HttpClientBuilder;
use Amp\Http\Client\Request;
use Amp\Sync\LocalSemaphore;
use function Amp\async;
use function Amp\Future\await;
$client = HttpClientBuilder::buildDefault();
$semaphore = new LocalSemaphore(5); // Max 5 concurrent requests
$urls = [
'https://api.example.com/endpoint1',
'https://api.example.com/endpoint2',
// ... 100 more URLs
];
$futures = [];
foreach ($urls as $url) {
$futures[] = async(function () use ($client, $semaphore, $url) {
// Acquire semaphore (wait if 5 requests already running)
$lock = $semaphore->acquire();
try {
$response = $client->request(new Request($url));
return $response->getBody()->buffer();
} finally {
// Always release the lock
$lock->release();
}
});
}
$results = await($futures);
echo "Processed " . count($results) . " URLs with max 5 concurrent\n";
This ensures you never overwhelm the target server with too many simultaneous connections.
The Great Comparison: When to Use What?
Based on real-world benchmarks, here’s how they stack up:
Performance Benchmarks
Low Concurrency (< 100 requests):
- Swoole: ⭐⭐⭐⭐⭐ (Fastest, C-extension advantage)
- ReactPHP: ⭐⭐⭐⭐ (Surprisingly fast for pure PHP)
- Amp: ⭐⭐ (Slower at low concurrency)
High Concurrency (1000+ requests):
- Swoole: ⭐⭐⭐⭐⭐ (Consistently fast)
- Amp: ⭐⭐⭐⭐⭐ (Outperforms others at scale)
- ReactPHP: ⭐⭐⭐⭐ (Solid performance)
Decision Matrix
Choose ReactPHP when:
- ✅ You can’t install PHP extensions (shared hosting, strict corporate policies)
- ✅ You want pure PHP with no compilation steps
- ✅ You need battle-tested, rock-solid LTS releases
- ✅ Your team is familiar with Node.js event loops
- ❌ You need maximum raw performance
Choose Swoole when:
- ✅ You need the absolute best performance
- ✅ You’re building WebSocket servers or real-time apps
- ✅ You want automatic coroutine hooks for existing code
- ✅ You can install C extensions
- ❌ You deploy to environments where you can’t compile extensions
- ❌ You need to debug with traditional tools (stack traces can be confusing)
Choose Amp when:
- ✅ You want the cleanest, most modern async/await syntax
- ✅ You’re building highly concurrent applications (1000+ operations)
- ✅ You prefer Fiber-first design
- ✅ You need sophisticated concurrency primitives (semaphores, mutexes)
- ❌ You need the fastest startup for low-concurrency scenarios
Choose Plain Fibers when:
- ✅ You’re building your own async framework
- ✅ You need precise control over execution flow
- ✅ You want minimal dependencies
- ❌ You need high-level abstractions for HTTP, WebSockets, etc.
Production Deployment: The Real-World Checklist
This is where most async PHP tutorials stop. But production is where you’ll actually feel the pain if you’re not prepared.
Memory Leak Management
Async processes are long-lived. Memory leaks will accumulate over time and eventually cause stability issues.
The watchdog pattern:
<?php
use Swoole\Timer;
class MemoryWatchdog
{
private int $maxMemoryMb;
private int $checkIntervalMs;
public function __construct(int $maxMemoryMb = 512, int $checkIntervalMs = 60000)
{
$this->maxMemoryMb = $maxMemoryMb;
$this->checkIntervalMs = $checkIntervalMs;
}
public function start(): void
{
Timer::tick($this->checkIntervalMs, function () {
$currentMb = memory_get_usage(true) / 1024 / 1024;
echo "[Watchdog] Memory usage: " . round($currentMb, 2) . " MB\n";
if ($currentMb > $this->maxMemoryMb) {
echo "[Watchdog] Memory limit exceeded, graceful shutdown\n";
// Log the issue
error_log("Memory limit exceeded: {$currentMb}MB");
// Trigger graceful shutdown
posix_kill(posix_getpid(), SIGTERM);
}
});
}
}
// Usage in Swoole server
$server->on('workerStart', function ($server, $workerId) {
$watchdog = new MemoryWatchdog(maxMemoryMb: 512);
$watchdog->start();
});
Graceful Shutdown
Handle SIGTERM properly to finish processing requests before dying:
<?php
use Swoole\Process;
use Swoole\Http\Server;
$server = new Server('0.0.0.0', 9501);
$activeRequests = 0;
$server->on('request', function ($request, $response) use (&$activeRequests) {
$activeRequests++;
// Process request
$response->end("OK");
$activeRequests--;
});
// Graceful shutdown handler (simplified: with multiple workers, each
// worker process keeps its own request counter)
Process::signal(SIGTERM, function () use ($server, &$activeRequests) {
echo "Received SIGTERM, waiting for {$activeRequests} active requests...\n";
// Stop accepting new connections
$server->shutdown();
// Wait for active requests to complete (max 30 seconds)
$waited = 0;
while ($activeRequests > 0 && $waited < 30) {
sleep(1);
$waited++;
echo "Still waiting... {$activeRequests} requests remaining\n";
}
echo "Shutdown complete\n";
exit(0);
});
$server->start();
Health Check Endpoints
Your orchestrator (Kubernetes, Docker Swarm) needs to know if your async service is healthy:
<?php
$server->on('request', function (Request $request, Response $response) use ($server, &$activeRequests) {
// Health check endpoint
if ($request->server['request_uri'] === '/health') {
$memoryMb = memory_get_usage(true) / 1024 / 1024;
$health = [
'status' => 'healthy',
'memory_mb' => round($memoryMb, 2),
'active_requests' => $activeRequests,
'uptime_seconds' => time() - $server->stats()['start_time'],
];
// Mark unhealthy if memory too high
if ($memoryMb > 500) {
$health['status'] = 'unhealthy';
$response->status(503);
}
$response->header('Content-Type', 'application/json');
$response->end(json_encode($health));
return;
}
// Normal request handling...
});
Docker Deployment
Here’s a production-ready Dockerfile for Swoole:
FROM php:8.3-cli-alpine
# Install Swoole
RUN apk add --no-cache $PHPIZE_DEPS \
&& pecl install swoole \
&& docker-php-ext-enable swoole \
&& apk del $PHPIZE_DEPS
# Install composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
WORKDIR /app
# Install dependencies
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader
# Copy application
COPY . .
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD wget -q --spider http://localhost:9501/health || exit 1
EXPOSE 9501
# Run with proper signal handling
CMD ["php", "server.php"]
Monitoring and Debugging
Stack traces with Fibers can be confusing because they only show the current fiber’s execution path. Use structured logging:
<?php
use Psr\Log\LoggerInterface;
class FiberLogger
{
public function __construct(private LoggerInterface $logger)
{
}
public function logFiberExecution(string $context, callable $fn): mixed
{
$fiber = Fiber::getCurrent();
$fiberId = $fiber !== null ? spl_object_id($fiber) : 0; // 0 = outside any fiber
$this->logger->info("Fiber execution started", [
'fiber_id' => $fiberId,
'context' => $context,
]);
try {
$result = $fn();
$this->logger->info("Fiber execution completed", [
'fiber_id' => $fiberId,
'context' => $context,
]);
return $result;
} catch (Throwable $e) {
$this->logger->error("Fiber execution failed", [
'fiber_id' => $fiberId,
'context' => $context,
'error' => $e->getMessage(),
'trace' => $e->getTraceAsString(),
]);
throw $e;
}
}
}
Common Pitfalls (And How to Avoid Them)
After deploying async PHP to production multiple times, here are the mistakes I see repeatedly:
1. Orphaned Fibers
The Problem: Creating Fibers but forgetting to resume them creates memory leaks.
<?php
// ❌ BAD: Fiber suspended but never resumed
$fiber = new Fiber(function () {
Fiber::suspend(); // Stuck here forever
echo "This never executes\n";
});
$fiber->start();
// Oops, forgot to resume - memory leak!
// ✅ GOOD: Always complete the Fiber lifecycle
$fiber = new Fiber(function () {
$data = Fiber::suspend();
return "Processed: {$data}";
});
$fiber->start();
$fiber->resume('input'); // resume() returns null once the fiber terminates...
$result = $fiber->getReturn(); // ...so fetch "Processed: input" via getReturn()
2. Blocking Operations in Async Code
The Problem: Using truly blocking functions (like sleep()) in async code defeats the purpose.
<?php
// ❌ BAD: This blocks the entire event loop
go(function () {
sleep(5); // Everything freezes for 5 seconds
echo "Done\n";
});
// ✅ GOOD: Use non-blocking sleep
go(function () {
Co::sleep(5); // Other coroutines keep running
echo "Done\n";
});
3. Shared State Without Synchronization
The Problem: Multiple coroutines modifying shared state causes race conditions.
<?php
// ❌ BAD: Race condition
$counter = 0;
for ($i = 0; $i < 100; $i++) {
go(function () use (&$counter) {
$counter++; // Not atomic!
});
}
// ✅ GOOD: Use Amp's Mutex for synchronization
use Amp\Sync\LocalMutex;
$mutex = new LocalMutex();
$counter = 0;
for ($i = 0; $i < 100; $i++) {
async(function () use ($mutex, &$counter) {
$lock = $mutex->acquire();
try {
$counter++;
} finally {
$lock->release();
}
});
}
4. Not Handling Timeouts
The Problem: Waiting indefinitely for external services to respond.
<?php
// ❌ BAD: No timeout, could hang forever
$response = await($client->request(new Request('https://slow-api.com')));
// ✅ GOOD: Always set timeouts
use Amp\TimeoutCancellation;
use Amp\CancelledException;
$cancellation = new TimeoutCancellation(5.0); // 5 second timeout
try {
$response = await($client->request(
new Request('https://slow-api.com'),
$cancellation
));
} catch (CancelledException $e) {
echo "Request timed out after 5 seconds\n";
}
The Bottom Line
Three months after switching my API aggregation service to async PHP (using ReactPHP), we’re handling 10x the traffic on the same infrastructure. Response times dropped from 2.1 seconds to 520ms. Memory usage actually went down despite higher throughput.
The transformation wasn’t free. We spent two weeks refactoring, another week fixing memory leaks, and we maintain stricter monitoring than before. But the payoff? Our infrastructure costs dropped by 60% and our users are happier.
The async PHP decision tree:
- Need raw performance and control? → Swoole (but be ready for deployment complexity)
- Want pure PHP with no extensions? → ReactPHP (battle-tested and reliable)
- Building highly concurrent modern apps? → Amp (clean syntax, scales well)
- Just learning async concepts? → Start with raw Fibers, then graduate to a framework
Remember: async PHP isn’t about making CPU-intensive operations faster. It’s about not wasting time waiting. If your application does a lot of I/O (API calls, database queries, file operations), async PHP can transform it from sluggish to snappy.
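A back-of-the-envelope way to estimate the win: sequential I/O costs roughly the sum of your latencies, concurrent I/O costs roughly the maximum (plus a little overhead). With four hypothetical upstream calls:

```php
<?php
// Hypothetical per-call latencies (seconds) for four upstream APIs.
$latencies = [0.42, 0.51, 0.38, 0.47];

// Sequential: wait for each call in turn.
// Concurrent: all in flight at once, so the batch finishes with the slowest.
printf("sequential: %.2fs\n", array_sum($latencies)); // 1.78s
printf("concurrent: %.2fs\n", max($latencies));       // 0.51s
```

If your latencies are dominated by one slow dependency, concurrency alone won’t save you; you’ll need caching or timeouts for that outlier.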
The PHP you knew is dead. Long live async PHP.
Further Reading
- PHP Fibers Official Documentation
- ReactPHP Documentation
- Swoole Documentation
- Amp Documentation
- Async PHP in 2025: Beyond Workers with Fibers, ReactPHP, and Amp
- PHP Fibers: The Game-Changer That Makes Async Programming Feel Like Magic
- PHP Runtime Benchmark Comparison
Got async PHP horror stories or success stories? Share them in the comments. I’m particularly interested in production deployment patterns and what broke at 3 AM.