<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Marcel Moll]]></title><description><![CDATA[PHP enthusiast with interests in clean code, architecture and developer experience.]]></description><link>https://marcelmoll.dev/</link><image><url>https://marcelmoll.dev/favicon.png</url><title>Marcel Moll</title><link>https://marcelmoll.dev/</link></image><generator>Ghost 5.88</generator><lastBuildDate>Sat, 18 Apr 2026 20:43:29 GMT</lastBuildDate><atom:link href="https://marcelmoll.dev/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Symfony Messenger: What the Documentation Does Not Cover]]></title><description><![CDATA[<p>Last year we took over a Symfony e-commerce application that was struggling under a combination of high customer traffic and a constant flood of write operations from external systems: product updates, price changes, availability feeds, all hitting the application simultaneously. The Messenger setup was already in place. 
Messages were being</p>]]></description><link>https://marcelmoll.dev/symfony-messenger-what-the-documentation-does-not-cover/</link><guid isPermaLink="false">69a5f5937ec5f735e0bef075</guid><category><![CDATA[PHP]]></category><category><![CDATA[Symfony]]></category><category><![CDATA[Deep Dive]]></category><dc:creator><![CDATA[Marcel Moll]]></dc:creator><pubDate>Fri, 06 Mar 2026 18:51:21 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1584444734476-2e8392368b07?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDc1fHxNZXNzZW5nZXJ8ZW58MHx8fHwxNzcyNDg0MDI4fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1584444734476-2e8392368b07?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDc1fHxNZXNzZW5nZXJ8ZW58MHx8fHwxNzcyNDg0MDI4fDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Symfony Messenger: What the Documentation Does Not Cover"><p>Last year we took over a Symfony e-commerce application that was struggling under a combination of high customer traffic and a constant flood of write operations from external systems: product updates, price changes, availability feeds, all hitting the application simultaneously. The Messenger setup was already in place. Messages were being dispatched. Workers were running. On paper, everything was fine.</p><p>In practice, the workers were consuming 600MB of memory and climbing. The queue had thousands of unprocessed messages backed up. The failure transport (never monitored) contained over 400 entries, some of them weeks old. And because product availability messages were sharing a transport with payment confirmation messages, a slow availability handler was blocking payment confirmations for customers trying to check out.</p><p>No single thing was catastrophically wrong. The problems were configuration decisions that looked reasonable in isolation and fell apart under load. 
We fixed them: restructured the transports, configured proper retry strategy, added memory limits, set up failure transport monitoring, separated handler concerns. The instability went away.</p><p>This is a writeup of what we changed and why. It assumes you know Symfony. It is not a getting-started guide.</p><hr><h2 id="your-message-classes-will-outlive-your-code">Your Message Classes Will Outlive Your Code</h2><p>Most developers treat message classes like DTOs: a constructor, some readonly properties, done. That works fine until you deploy a change, a hundred messages with the old payload are still sitting in the queue, and deserialization starts throwing.</p><p>A message class is not a DTO. It is a versioned contract between two processes that may be running different code. Unlike a DTO, you can&apos;t just change it and move on.</p><p>Versioning is not optional, it is inevitable. Adding a required constructor parameter to a message class will fail deserialization for any messages already sitting in the queue with the old payload. You have two options: make new parameters optional with a default, or version the class explicitly.</p><p>Optional parameters are fine for purely additive changes:</p><pre><code class="language-php">final class SendInvoice
{
    public function __construct(
        public readonly int $orderId,
        public readonly ?string $locale = null, // added in v2, defaults gracefully
    ) {}
}
</code></pre><p>For anything that changes the meaning of the message, not just the signature, version it explicitly:</p><pre><code class="language-php">// Keep SendInvoice alive until its queue is fully drained.
// Only remove it in a follow-up deployment.
final class SendInvoiceV2
{
    public function __construct(
        public readonly int $orderId,
        public readonly string $locale,
        public readonly string $templateId,
    ) {}
}
</code></pre><p>The deployment sequence matters: first deploy&#xA0;<code>SendInvoiceV2</code>&#xA0;alongside both handlers, wait for the&#xA0;<code>SendInvoice</code>&#xA0;queue to drain completely, then remove the old class in a separate deployment. During the drain window you need both handlers live:&#xA0;<code>SendInvoiceHandler</code>&#xA0;consuming the old queue,&#xA0;<code>SendInvoiceV2Handler</code>&#xA0;consuming new dispatches. Do not remove the old handler before the old queue is empty. This is a two-deployment operation, not one.</p><p>Skip the drain step and you get deserialization exceptions that are impossible to reproduce locally, appearing hours after a deployment when an old worker picks up a message for a class that no longer exists.</p><p>Sometimes the drain window is not an option. If the queue has tens of thousands of messages and the client needs the change deployed now, you have two paths. The first is a compatibility shim: keep the old message class but make it deserialize into the new handler by implementing a custom&#xA0;<code>DenormalizerInterface</code>&#xA0;that maps old payloads to the new shape. It is extra code but it eliminates the drain dependency entirely. The second is a forced drain: temporarily scale workers up aggressively to exhaust the queue fast, then deploy. Neither is elegant, but both are better than deploying a breaking change against a live queue and discovering the consequences at 3am.</p><p>The secondary concern, and it matters just as much, is entity state drift. Don&apos;t put Doctrine entities in messages. By the time the handler runs, the order that was&#xA0;<code>pending</code>&#xA0;when the message was dispatched might now be&#xA0;<code>cancelled</code>. Fetch fresh state in the handler, carry only the intent-critical values that must reflect the moment of dispatch:</p><pre><code class="language-php">// This looks convenient. It will eventually cause a subtle bug.
final class ProcessRefund
{
    public function __construct(public readonly Order $order) {}
}

// This forces the handler to work with current reality,
// while preserving the amount the customer was actually charged.
final class ProcessRefund
{
    public function __construct(
        public readonly int $orderId,
        public readonly string $currency,
        public readonly int $amountCents,
    ) {}
}
</code></pre><p>Currency and amount are explicit not because they cannot be fetched from the order (they can), but because they represent the intent at the moment of dispatch. If the order amount is corrected between dispatch and consumption, the refund should reflect what was charged, not what was later edited.</p><hr><h2 id="transport-configuration-the-defaults-will-eventually-let-you-down">Transport Configuration: The Defaults Will Eventually Let You Down</h2><p>In the application we inherited, all messages shared a single&#xA0;<code>doctrine://</code>&#xA0;transport. Product availability updates, price syncs, order confirmations, invoice generation: all in the same queue, all consuming from the same database table under load. That is not a configuration mistake exactly. It is what you get when you follow the getting-started guide and never revisit it.</p><p>The Doctrine transport is recommended because it requires no additional infrastructure. Fair enough. But it uses&#xA0;<code>SELECT ... FOR UPDATE SKIP LOCKED</code>&#xA0;to claim messages. Multiple workers polling the same table means database load that scales with worker count, not message volume. Under the kind of write pressure we were seeing (external systems pushing thousands of product updates per hour), the lock contention was visible in slow query logs and was adding latency to the same database handling customer requests.</p><p><s>If you are on PostgreSQL, switch to the native PostgreSQL transport:</s></p><pre><code class="language-dotenv">MESSENGER_TRANSPORT_DSN=postgresql://user:password@localhost/dbname
</code></pre><p><s>It uses&#xA0;<code>LISTEN/NOTIFY</code>. Workers sleep until a message arrives instead of polling. The migration from&#xA0;<code>doctrine://</code>&#xA0;is a drop-in replacement. There is no good reason to stay on the polling transport if you are already on PostgreSQL.</s></p><p>Turn off&#xA0;<code>auto_setup</code>&#xA0;in production:</p><pre><code class="language-yaml">transports:
    async:
        dsn: &apos;%env(MESSENGER_TRANSPORT_DSN)%&apos;
        options:
            auto_setup: false
</code></pre><p><code>auto_setup: true</code>&#xA0;creates the&#xA0;<code>messenger_messages</code>&#xA0;table the first time a message is dispatched. Convenient in development. In production, your deployment process should own schema changes. Create the table via a Doctrine migration and turn&#xA0;<code>auto_setup</code>&#xA0;off outside local development.</p><p>For Redis, set&#xA0;<code>stream_max_entries</code>. Redis Streams are append-only by default and grow indefinitely without trimming:</p><pre><code class="language-dotenv">MESSENGER_TRANSPORT_DSN=redis://localhost:6379/messages/symfony/consumer?stream_max_entries=2000
</code></pre><p>The number depends on your throughput and how much recent history you want visible for debugging. We use 2000 in our e-commerce setups. Enough to see recent activity, bounded enough that memory usage stays predictable.</p><p><strong>Separate transports by concern.</strong>&#xA0;This was the most impactful single change we made to the inherited application. Payment confirmations and product availability updates have completely different latency requirements, different failure modes, different acceptable retry windows. One slow handler type on a shared transport holds up everything else:</p><pre><code class="language-yaml">framework:
    messenger:
        transports:
            catalog_sync:
                dsn: &apos;%env(MESSENGER_TRANSPORT_DSN)%&apos;
                options:
                    queue_name: catalog_sync
            orders_high:
                dsn: &apos;%env(MESSENGER_TRANSPORT_DSN)%&apos;
                options:
                    queue_name: orders_high
        routing:
            App\Message\UpdateProductAvailability: catalog_sync
            App\Message\UpdateProductPrice: catalog_sync
            App\Message\SendOrderConfirmation: orders_high
            App\Message\SendInvoice: orders_high
</code></pre><p>Give each transport a dedicated worker. A slow catalog sync handler on a shared transport was the direct cause of delayed payment confirmations in the system we inherited. Separate workers, separate queues, separate failure modes:</p><pre><code class="language-bash"># Run dedicated workers per transport
php bin/console messenger:consume catalog_sync --memory-limit=128M --time-limit=3600
php bin/console messenger:consume orders_high --memory-limit=256M --time-limit=3600
</code></pre><h3 id="correction-march-2026"><strong>Correction (March 2026)</strong></h3><p>The advice on switching to the PostgreSQL-native transport is outdated, and I should have caught this before publishing.</p><p>The article states that&#xA0;<code>doctrine://</code>&#xA0;uses polling and that switching to the native PostgreSQL transport is necessary to get&#xA0;<code>LISTEN/NOTIFY</code>&#xA0;behaviour. That was true before Symfony 5.1. Since then it is not &#x2014; though I should be precise about what &quot;since 5.1&quot; actually means, because the history here is slightly messier than a clean version bump suggests.</p><p>The feature was introduced in&#xA0;<a href="https://github.com/symfony/symfony/pull/35485?ref=marcelmoll.dev">PR #35485</a>&#xA0;by K&#xE9;vin Dunglas and merged into the 5.1 branch. The official documentation picked it up in 5.2. There were also meaningful bug fixes to the PostgreSQL connection behaviour in 5.2 and 5.3, so if you were on 5.1 at launch you may have hit rough edges. The canonical advice has been &quot;use 5.1 or later&quot;, but in practice &quot;5.3 or later&quot; is where it became solid.</p><p>The factory code tells the story clearly:</p><pre><code class="language-php">$useNotify = $options[&apos;use_notify&apos;] ?? true;

if ($useNotify &amp;&amp; $driverConnection-&gt;getDatabasePlatform() instanceof PostgreSQLPlatform) {
    $connection = new PostgreSqlConnection($configuration, $driverConnection);
}</code></pre><p><code>use_notify</code>&#xA0;defaults to&#xA0;<code>true</code>. If your connection is PostgreSQL, Symfony detects the platform and routes you to&#xA0;<code>PostgreSqlConnection</code>&#xA0;with&#xA0;<code>LISTEN/NOTIFY</code>&#xA0;automatically. You do not need to change your DSN. You do not need to migrate anything.</p><p>The line &quot;there is no good reason to stay on the polling transport if you are already on PostgreSQL&quot; is wrong. You are not on the polling transport. You never were, assuming a maintained Symfony version.</p><p>Set&#xA0;<code>use_notify: false</code>&#xA0;only if you have a specific reason to opt out. Otherwise, leave it alone and it works.</p><hr><h2 id="retry-strategy-the-defaults-are-too-aggressive">Retry Strategy: The Defaults Are Too Aggressive</h2><p>Three retries. One second delay. Multiplier of two. Retries at 1, 2, and 4 seconds. Total window: about 7 seconds before a message lands in the failure transport.</p><p>Consider what that covers: a service restarting after a deployment typically needs 20 to 60 seconds. A rate-limited external API may respond with 429 for several minutes. A transient network partition can last longer than 7 seconds. The default retry window is too short for anything involving an external dependency, and in a system receiving constant feed updates from external systems, external dependencies are everywhere.</p><p>A more realistic starting point:</p><pre><code class="language-yaml">transports:
    catalog_sync:
        dsn: &apos;%env(MESSENGER_TRANSPORT_DSN)%&apos;
        retry_strategy:
            max_retries: 5
            delay: 5000        # 5 seconds
            multiplier: 3
            max_delay: 300000  # cap at 5 minutes
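            # Effective delay for attempt n: min(delay * multiplier^(n-1), max_delay)
            # => 5s, 15s, 45s, 135s, 300s (the cap turns what would be 405s into 300s)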
</code></pre><p>Retries at 5s, 15s, 45s, 2min 15s, and then 5min (capped by&#xA0;<code>max_delay</code>). Without the cap, retry 5 would be 405 seconds. The&#xA0;<code>max_delay</code>&#xA0;brings it down to a sensible ceiling. That spread covers most transient failures without waiting so long that the failure transport fills up during a routine dependency outage.</p><p>Classify your failures explicitly. This is where most Messenger implementations waste their retry budget: retrying errors that will never resolve, burning through retries in seconds, flooding the failure transport with messages that should have been discarded immediately:</p><pre><code class="language-php">#[AsMessageHandler]
final class SyncProductAvailabilityHandler
{
    public function __construct(
        private readonly ProductRepository $productRepository,
        private readonly WarehouseApiClient $warehouseClient,
        private readonly LoggerInterface $logger,
    ) {}

    public function __invoke(UpdateProductAvailability $message): void
    {
        $product = $this-&gt;productRepository-&gt;findBySku($message-&gt;sku);

        if ($product === null) {
            // The product no longer exists in our catalog.
            // Retrying will not change that. Discard immediately.
            throw new UnrecoverableMessageHandlingException(
                sprintf(&apos;Product SKU %s not found, discarding availability update&apos;, $message-&gt;sku)
            );
        }

        try {
            $availability = $this-&gt;warehouseClient-&gt;getAvailability($message-&gt;sku);
            $product-&gt;updateAvailability($availability);
            $this-&gt;productRepository-&gt;save($product);
        } catch (WarehouseApiRateLimitException $e) {
            // The API will accept this later. Worth retrying.
            throw new RecoverableMessageHandlingException(
                &apos;Warehouse API rate limited, will retry&apos;,
                previous: $e
            );
        } catch (WarehouseApiProductNotFoundException $e) {
            // The warehouse does not know this SKU either. Retrying is pointless.
            throw new UnrecoverableMessageHandlingException(
                sprintf(&apos;SKU %s unknown to warehouse API&apos;, $message-&gt;sku),
                previous: $e
            );
        }
    }
}
</code></pre><p><code>UnrecoverableMessageHandlingException</code>&#xA0;bypasses the retry strategy entirely and goes straight to the failure transport.&#xA0;<code>RecoverableMessageHandlingException</code>&#xA0;forces a retry even for exception types Messenger would not normally retry. In the application we inherited, unclassified exceptions from a warehouse API that was frequently rate-limiting were consuming the entire retry budget in 7 seconds, then flooding the failure transport. Classifying them correctly, and extending the retry window, reduced failure transport entries by roughly 80%; the only handler change was the catch blocks that do the classifying.</p><hr><h2 id="handlers-must-be-idempotent">Handlers Must Be Idempotent</h2><p>We found this out the hard way. When we scaled the catalog workers from one process to three, two handlers ran simultaneously on the same SKU. Both read the current stock figure of 1. Both decremented it. We ended up with -1 inventory on a product that had just sold its last unit. The handlers had been written assuming sequential execution. Three parallel workers made that assumption false.</p><p>A message can be delivered more than once under normal operating conditions. Not just from bugs. A worker can process a message successfully and crash before sending the acknowledgement. The transport redelivers. On a Redis or AMQP transport with manual acknowledgement, a network hiccup between the handler completing and the&#xA0;<code>ack</code>&#xA0;being sent means the broker considers the message undelivered and requeues it. In a high-volume system this is not an edge case. It is routine.</p><p>If running a handler twice produces different outcomes (decremented stock, duplicate charge, second confirmation email), you have a correctness problem that no amount of retry configuration will fix.</p><p>For catalog sync the fix was straightforward: replace the decrement with an absolute write. The warehouse feed sends the current stock figure, not a delta. 
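The difference is easiest to see side by side (a minimal sketch; the function names are illustrative, not from the project):</p><pre><code class="language-php">// Delta update: a redelivery double-counts. Stock 1 becomes 0 becomes -1.
function applyDelta(int $currentStock): int
{
    return $currentStock - 1;
}

// Absolute write: the handler applies whatever level the feed reported.
// One delivery or five, the end state is the same.
function applyAbsolute(int $currentStock, int $reportedStock): int
{
    return $reportedStock;
}</code></pre><p>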
Writing it twice is harmless. That is natural idempotency and it is the easiest kind to achieve: design the operation as a set rather than an increment wherever the domain allows it.</p><p>For payment handlers the domain does not allow it. A charge is inherently an increment. The fix there is an idempotency key that derives from the business event, not from the dispatch:</p><pre><code class="language-php">// The key is generated when the payment intent is created, before dispatch.
// It is stable: the same order at the same version always produces the same key.
// A double-submit or a redelivery carries the same key. Both are caught.
final class ProcessPayment
{
    public function __construct(
        public readonly int $orderId,
        public readonly string $idempotencyKey, // e.g. &quot;payment-{orderId}-{orderVersion}&quot;
    ) {}
}
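
// A sketch of deriving the key (the helper name and version field are
// assumptions): the same order at the same version always yields the same
// string, so a redelivery or a double-submit collides with the first attempt.
function paymentIdempotencyKey(int $orderId, int $orderVersion): string
{
    return sprintf(&apos;payment-%d-%d&apos;, $orderId, $orderVersion);
}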
</code></pre><p>A UUID generated at dispatch time does not work. If the controller dispatches twice (double form submission, request timeout with retry), each dispatch generates a new UUID and both pass the idempotency check. The key must be stable across any number of deliveries of the same logical event:</p><pre><code class="language-php">#[AsMessageHandler]
final class ProcessPaymentHandler
{
    public function __invoke(ProcessPayment $message): void
    {
        if ($this-&gt;paymentRepository-&gt;existsByIdempotencyKey($message-&gt;idempotencyKey)) {
            $this-&gt;logger-&gt;info(&apos;Duplicate payment message, skipping&apos;, [
                &apos;idempotency_key&apos; =&gt; $message-&gt;idempotencyKey,
                &apos;order_id&apos; =&gt; $message-&gt;orderId,
            ]);
            return;
        }

        // ... process payment, store record with idempotency key
    }
}
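
// The safety net in the schema, as a Doctrine mapping sketch (the Payment
// entity and its fields are assumptions, not from the project). Without the
// unique index, two concurrent deliveries can both pass the exists-check
// above and both reach the payment gateway.
#[ORM\Entity]
#[ORM\UniqueConstraint(name: &apos;uniq_payment_idempotency_key&apos;, columns: [&apos;idempotency_key&apos;])]
class Payment
{
    // ... id, orderId, idempotencyKey, amountCents
}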
</code></pre><p>The database-level uniqueness constraint on&#xA0;<code>idempotency_key</code>&#xA0;is the actual safety net. The&#xA0;<code>existsByIdempotencyKey</code>&#xA0;check is an optimisation that avoids calling the payment gateway unnecessarily. Without the constraint, two concurrent deliveries can both pass the check before either has written the record, and both attempt the charge.</p><p>Design for idempotency when you write the handler, not after you find negative inventory in a live catalog.</p><hr><h2 id="memory-leaks-long-running-processes-require-different-habits">Memory Leaks: Long-Running Processes Require Different Habits</h2><p>Workers are long-running PHP processes and PHP accumulates memory. Leaks that do not matter in a 50ms request lifecycle compound over hours in a worker. In the application we took over, workers had no memory limits and no restart schedule. By morning they were consuming 600MB each and climbing. The server was struggling and workers with no ceiling were a direct contributor.</p><p>The primary source is Doctrine. The entity manager accumulates every object it loads in its identity map and never releases them. In a worker processing thousands of messages per hour, each loading several entities, the identity map grows without bound.</p><p>Symfony&apos;s&#xA0;<code>ResetServicesListener</code>&#xA0;handles this by resetting stateful services after each handled message, and DoctrineBundle additionally clears the entity manager between messages, which empties the identity map. Both are registered by default. Don&apos;t disable them, and verify they are still active if you have customized the worker configuration.</p><p>The second source is your own code: static properties, unbounded service-level caches, third-party libraries holding references. Profile worker memory over time. 
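One lightweight way to capture this is a middleware that records usage after each message (a sketch, not from the project: the class name and log fields are assumptions, and it would need to be registered in the bus middleware list):</p><pre><code class="language-php">final class LogWorkerMemoryMiddleware implements MiddlewareInterface
{
    public function __construct(private readonly LoggerInterface $logger) {}

    public function handle(Envelope $envelope, StackInterface $stack): Envelope
    {
        $envelope = $stack-&gt;next()-&gt;handle($envelope, $stack);

        // memory_get_usage(true) is the same figure the worker&apos;s
        // --memory-limit check compares against.
        $this-&gt;logger-&gt;info(&apos;worker.memory&apos;, [
            &apos;bytes&apos; =&gt; memory_get_usage(true),
            &apos;message&apos; =&gt; get_class($envelope-&gt;getMessage()),
        ]);

        return $envelope;
    }
}</code></pre><p>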
Log&#xA0;<code>memory_get_usage(true)</code>&#xA0;with each message, graph it over an hour, watch for growth that does not plateau after the first few messages.</p><p>The operational fix regardless of whether you have traced every leak:</p><pre><code class="language-bash">php bin/console messenger:consume async --memory-limit=256M --time-limit=3600
</code></pre><p><code>--memory-limit</code>&#xA0;causes the worker to finish the current message and exit cleanly when it crosses the threshold.&#xA0;<code>--time-limit</code>&#xA0;stops it after an hour regardless of memory, so the process manager can start a fresh one. This catches slow drifts that&#xA0;<code>ResetServicesListener</code>&#xA0;does not cover. Both are non-negotiable in production.</p><pre><code class="language-ini">[program:messenger-catalog-worker]
command=php /var/www/app/bin/console messenger:consume catalog_sync --memory-limit=128M --time-limit=3600
user=www-data
numprocs=3
autostart=true
autorestart=true
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/var/log/messenger-catalog.err.log
stdout_logfile=/var/log/messenger-catalog.out.log

[program:messenger-orders-worker]
command=php /var/www/app/bin/console messenger:consume orders_high --memory-limit=256M --time-limit=3600
user=www-data
numprocs=2
autostart=true
autorestart=true
process_name=%(program_name)s_%(process_num)02d
stderr_logfile=/var/log/messenger-orders.err.log
stdout_logfile=/var/log/messenger-orders.out.log
</code></pre><p>The catalog worker runs three processes because catalog sync is high volume and the handlers are fast. The orders worker runs two: lower volume but more critical, and handlers call external APIs that occasionally block.</p><hr><h2 id="graceful-shutdown-sigterm-not-sigkill">Graceful Shutdown: SIGTERM, Not SIGKILL</h2><p>Workers respond to&#xA0;<code>SIGTERM</code>&#xA0;by finishing the current message and exiting cleanly. The message is acknowledged, the worker stops.&#xA0;<code>SIGKILL</code>&#xA0;is not graceful. A message mid-processing is not acknowledged, and depending on your transport&apos;s visibility timeout it may be redelivered or lost.</p><p>This matters at deployment time. If your deployment pipeline stops workers with&#xA0;<code>SIGKILL</code>&#xA0;(the default for many container orchestration setups if you are not explicit), you will occasionally interrupt a handler mid-execution. In an e-commerce context, that might mean a payment is partially processed, an inventory update is half-written, or an invoice is generated but not sent.</p><p>In Kubernetes, set&#xA0;<code>terminationGracePeriodSeconds</code>&#xA0;long enough for your longest-running handler to complete:</p><pre><code class="language-yaml">spec:
  terminationGracePeriodSeconds: 60
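  # The Kubernetes default is 30 seconds. This belongs on the pod template
  # spec of the worker Deployment, not the web Deployment.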
</code></pre><p>Supervisor uses&#xA0;<code>SIGTERM</code>&#xA0;by default and waits for the process to exit before restarting, so it handles this correctly out of the box.</p><p>The piece most deployment guides skip is how workers restart during a code deployment. Supervisor will restart a worker automatically when it exits, but you need the workers running the new code, not stuck processing messages with the old codebase. The reliable approach is to run&#xA0;<code>messenger:stop-workers</code>&#xA0;as a deployment step after your code is on disk but before traffic shifts:</p><pre><code class="language-bash"># In your deployment script, after code deploy and cache warmup:
php bin/console messenger:stop-workers
</code></pre><p>This sets a cache flag that tells each worker to finish its current message and exit cleanly. Supervisor restarts them, they pick up the new code, and the transition happens without a forced kill or a gap in processing. If you are on Kubernetes and running workers as a separate Deployment, a rolling restart achieves the same thing, but only if&#xA0;<code>terminationGracePeriodSeconds</code>&#xA0;is long enough for the current message to finish before the pod is replaced.</p><hr><h2 id="the-failed-transport-your-production-safety-net">The Failed Transport: Your Production Safety Net</h2><p>Every message that exhausts its retries ends up in the failure transport. If you have not configured one, they are dropped silently. Configure it:</p><pre><code class="language-yaml">framework:
    messenger:
        failure_transport: failed
        transports:
            failed:
                dsn: &apos;doctrine://default?queue_name=failed&apos;
</code></pre><p>The 400 entries we found in the inherited application&apos;s failure transport when we took it over (some of them weeks old) were the direct result of nobody monitoring this queue. Product availability for discontinued SKUs, orders referencing deleted customers, payment messages for cancelled subscriptions. Most were legitimately unrecoverable. Some were failures caused by a bug that had since been fixed. Without monitoring, nobody knew.</p><p>Treat the failure transport as a production incident queue. We wrap&#xA0;<code>messenger:failed:show</code>&#xA0;in a cron job, pipe the count into our monitoring platform, and page on-call if it exceeds a threshold. The specific tooling does not matter. Growth in the failure transport should be investigated the same day, not discovered during a quarterly review.</p><p>Failed messages go through the full middleware stack again when you replay them. If the failure was a handler bug you have since fixed, replaying is safe. If it was a transient infrastructure issue that has resolved, same. If it was data-related (entity deleted, third-party account closed), replaying will produce another&#xA0;<code>UnrecoverableMessageHandlingException</code>&#xA0;and the message will be cleaned up. Inspect before replaying blindly:</p><pre><code class="language-bash">php bin/console messenger:failed:show --max=20
php bin/console messenger:failed:retry 42     # specific message
php bin/console messenger:failed:retry        # all: use with caution
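
# For alerting, messenger:stats prints how many messages are waiting in a
# transport (requires a transport that can count, which doctrine:// can).
# The failure transport is just another transport:
php bin/console messenger:stats failed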
</code></pre><p>Use separate failure transports per transport. Mixing catalog sync failures with order failures in the same queue makes triage slower than it needs to be:</p><pre><code class="language-yaml">transports:
    catalog_sync:
        dsn: &apos;%env(MESSENGER_TRANSPORT_DSN)%&apos;
        failure_transport: failed_catalog
    orders_high:
        dsn: &apos;%env(MESSENGER_TRANSPORT_DSN)%&apos;
        failure_transport: failed_orders
    failed_catalog:
        dsn: &apos;doctrine://default?queue_name=failed_catalog&apos;
    failed_orders:
        dsn: &apos;doctrine://default?queue_name=failed_orders&apos;
</code></pre><hr><h2 id="the-doctrine-transaction-gotcha-that-silently-eats-messages">The Doctrine Transaction Gotcha That Silently Eats Messages</h2><p>If you have customized the Messenger middleware stack and your messages are mysteriously ending up in the failure transport with &quot;entity not found&quot; errors that cannot be reproduced locally, check whether&#xA0;<code>DoctrineTransactionMiddleware</code>&#xA0;is still in your chain. Its absence is one of the harder failure modes to diagnose because nothing obviously breaks: the message is dispatched, the worker picks it up, the handler fails cleanly, and the retry clock starts. By the time the transaction that created the entity commits, the message has exhausted its retries.</p><p>The middleware holds transport sends until after the transaction commits. Remove it and dispatch happens immediately, before the row exists. Verify it is present if you have a custom chain:</p><pre><code class="language-yaml">framework:
    messenger:
        buses:
            command.bus:
                middleware:
                    - doctrine_transaction
</code></pre><p>The second scenario bites in a different way. If your handler dispatches a follow-up message and the outer handler later throws, rolling back the transaction, that inner message has already been sent. The follow-up handler runs against data that was never committed. No error, no indication anything is wrong. Just work being done against a row that does not exist.</p><p><code>DispatchAfterCurrentBusStamp</code>&#xA0;defers the inner dispatch until the outer handler completes without throwing:</p><pre><code class="language-php">#[AsMessageHandler]
final class PlaceOrderHandler
{
    public function __invoke(PlaceOrder $message): void
    {
        // ... write order to database inside a transaction

        // Not dispatched until this handler returns successfully.
        // Transaction rollback = this message never leaves the bus.
        $this-&gt;bus-&gt;dispatch(
            new SendOrderConfirmation($message-&gt;orderId),
            [new DispatchAfterCurrentBusStamp()]
        );
    }
}
</code></pre><p>Use this stamp any time you dispatch from inside a handler. The cases where you don&apos;t need it are rarer than the cases where you do.</p><hr><h2 id="stamps-worth-knowing-beyond-delaystamp">Stamps Worth Knowing Beyond DelayStamp</h2><p><code>DelayStamp</code>&#xA0;is well-documented. Three others are just as useful and rarely appear in tutorials.</p><p><strong><code>TransportNamesStamp</code></strong>&#xA0;overrides routing at dispatch time. In our agency work, premium merchants get their order processing routed to a dedicated high-priority transport regardless of what the YAML routing says. Encoding that in YAML would mean a new routing rule for every merchant tier. Encoding it at the dispatch site means the business logic lives where the decision is made:</p><pre><code class="language-php">$stamps = $merchant-&gt;isPremium()
    ? [new TransportNamesStamp([&apos;orders_premium&apos;])]
    : [];

$this-&gt;bus-&gt;dispatch(new ProcessOrder($orderId), $stamps);
</code></pre><p><strong><code>RedeliveryStamp</code></strong>&#xA0;tracks retry count. Access it in your handler when retry state should influence behaviour, for example to escalate to a manual review queue after several failed attempts:</p><pre><code class="language-php">public function __invoke(SyncProductToErp $message, Envelope $envelope): void
{
    $retries = $envelope-&gt;last(RedeliveryStamp::class)?-&gt;getRetryCount() ?? 0;

    if ($retries &gt;= 3) {
        $this-&gt;operationsQueue-&gt;escalate($message-&gt;productId, $retries);
        return;
    }

    $this-&gt;erpClient-&gt;syncProduct($message-&gt;productId);
}
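
// Separate sketch, not from the article: a custom monitoring middleware
// that uses SentToFailureTransportStamp to avoid counting failure-transport
// replays as first-time dispatches. $this-&gt;metrics is a hypothetical
// counter service; the middleware shape follows Messenger&apos;s MiddlewareInterface.
final class DispatchCounterMiddleware implements MiddlewareInterface
{
    public function handle(Envelope $envelope, StackInterface $stack): Envelope
    {
        // Only count envelopes that are not replays from the failure transport
        if ($envelope-&gt;last(SentToFailureTransportStamp::class) === null) {
            $this-&gt;metrics-&gt;increment(&apos;messenger.dispatched&apos;);
        }

        return $stack-&gt;next()-&gt;handle($envelope, $stack);
    }
}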
</code></pre><p><s>Add&#xA0;<code>Envelope $envelope</code>&#xA0;as a second parameter to&#xA0;<code>__invoke</code>&#xA0;and Messenger injects it automatically.</s></p><p><strong><code>SentToFailureTransportStamp</code></strong>&#xA0;marks messages that have entered the failure transport. If your monitoring middleware counts all handled messages, use this to separate replays from first-time dispatches. Otherwise your metrics count failure transport replays as new work.</p><h3 id="correction-march-2026-1">Correction (March 2026)</h3><p>A member of the Symfony community caught something I got wrong in this section, and they were right.<br><br>I wrote that adding&#xA0;<code>Envelope $envelope</code>&#xA0;as a second parameter to&#xA0;<code>__invoke</code>&#xA0;is enough for Messenger to inject it automatically. It isn&apos;t. I checked the framework source after the message came in.&#xA0;<code>HandleMessageMiddleware::callHandler()</code>&#xA0;builds its argument list explicitly: the message, optionally an&#xA0;<code>Acknowledger</code>&#xA0;for batch handlers, and anything added via&#xA0;<code>HandlerArgumentsStamp</code>. The&#xA0;<code>Envelope</code>&#xA0;never makes it into that list on its own.</p><p>The handler code in the example works. Just not for the reason I gave.</p><p>The project this came from has a custom middleware that stamps the envelope as an additional argument before&#xA0;<code>HandleMessageMiddleware</code>&#xA0;runs:</p><pre><code class="language-php">final class InjectEnvelopeMiddleware implements MiddlewareInterface
{
    public function handle(Envelope $envelope, StackInterface $stack): Envelope
    {
        return $stack-&gt;next()-&gt;handle(
            $envelope-&gt;with(new HandlerArgumentsStamp([$envelope])),
            $stack
        );
    }
}</code></pre><p>I wrote the article from memory of working code. The middleware had become invisible infrastructure, something registered once and forgotten. When I described the handler pattern, I described what I saw in the codebase without tracing it back to what was actually making it work.</p><p>The pattern is solid.&#xA0;<code>HandlerArgumentsStamp</code>&#xA0;is the official mechanism for this. If you want the handler signature from the example, register this middleware before&#xA0;<code>HandleMessageMiddleware</code>&#xA0;in your bus configuration and it works exactly as shown.</p><p>The sentence about automatic injection was wrong. The rest stands.</p><hr><h2 id="multiple-buses-earn-the-complexity">Multiple Buses: Earn the Complexity</h2><p>A single bus is the right default. Add a second bus when you need a different middleware stack, not because CQRS says you should. Those are different reasons.</p><p>We learned this the hard way on a project where we introduced a command bus and query bus early, convinced the architectural separation would pay off. It did not. Both buses had identical middleware stacks for the first eight months. The only effect was that every service needed two injected buses, every new developer asked why, and the answer was &quot;for the architecture&quot;. That is not a good answer.</p><p>We now add a query bus only when the middleware difference is concrete. In our current agency setup, the command bus carries&#xA0;<code>doctrine_transaction</code>&#xA0;middleware and the query bus does not. That difference is real and measurable. Reads do not need transaction overhead:</p><pre><code class="language-yaml">framework:
    messenger:
        default_bus: command.bus
        buses:
            command.bus:
                middleware:
                    - doctrine_transaction
            query.bus: ~
</code></pre><pre><code class="language-php">final class ProductService
{
    public function __construct(
        #[Target(&apos;command.bus&apos;)] private readonly MessageBusInterface $commandBus,
        #[Target(&apos;query.bus&apos;)] private readonly MessageBusInterface $queryBus,
    ) {}
}
</code></pre><p>If you cannot name a concrete difference in what each bus does, you do not need two buses yet.</p><hr><h2 id="testing-what-goes-where">Testing: What Goes Where</h2><p>Test handlers as plain PHP classes. Inject mocks, call&#xA0;<code>__invoke</code>, assert the outcome. No Messenger infrastructure:</p><pre><code class="language-php">final class SyncProductAvailabilityHandlerTest extends TestCase
{
    public function testUpdatesAvailabilityForKnownProduct(): void
    {
        $product = new Product(sku: &apos;ABC-123&apos;, availability: 10);

        $repository = $this-&gt;createMock(ProductRepository::class);
        $repository-&gt;method(&apos;findBySku&apos;)-&gt;with(&apos;ABC-123&apos;)-&gt;willReturn($product);
        $repository-&gt;expects($this-&gt;once())-&gt;method(&apos;save&apos;)-&gt;with($product);

        $warehouseClient = $this-&gt;createMock(WarehouseApiClient::class);
        $warehouseClient-&gt;method(&apos;getAvailability&apos;)-&gt;with(&apos;ABC-123&apos;)-&gt;willReturn(25);

        $handler = new SyncProductAvailabilityHandler(
            $repository,
            $warehouseClient,
            $this-&gt;createMock(LoggerInterface::class)
        );

        $handler(new UpdateProductAvailability(sku: &apos;ABC-123&apos;));

        self::assertSame(25, $product-&gt;getAvailability());
    }

    public function testThrowsUnrecoverableForUnknownProduct(): void
    {
        $repository = $this-&gt;createMock(ProductRepository::class);
        $repository-&gt;method(&apos;findBySku&apos;)-&gt;willReturn(null);

        $handler = new SyncProductAvailabilityHandler(
            $repository,
            $this-&gt;createMock(WarehouseApiClient::class),
            $this-&gt;createMock(LoggerInterface::class)
        );

        $this-&gt;expectException(UnrecoverableMessageHandlingException::class);
        $handler(new UpdateProductAvailability(sku: &apos;UNKNOWN&apos;));
    }
}
</code></pre><p>Use the&#xA0;<code>in-memory://</code>&#xA0;transport in functional tests to verify that the right message was dispatched in response to an action. If you are following the transport separation from earlier in this article, map each named transport to&#xA0;<code>in-memory://</code>&#xA0;in your test configuration:</p><pre><code class="language-yaml"># config/packages/test/messenger.yaml
framework:
    messenger:
        transports:
            catalog_sync: &apos;in-memory://&apos;
            orders_high: &apos;in-memory://&apos;
</code></pre><pre><code class="language-php">// Assert that placing an order dispatched a confirmation message
// to the orders_high transport, not to catalog_sync, not missing entirely.
/** @var InMemoryTransport $transport */
$transport = self::getContainer()-&gt;get(&apos;messenger.transport.orders_high&apos;);

self::assertCount(1, $transport-&gt;getSent());
self::assertInstanceOf(SendOrderConfirmation::class, $transport-&gt;getSent()[0]-&gt;getMessage());
self::assertSame(42, $transport-&gt;getSent()[0]-&gt;getMessage()-&gt;orderId);
</code></pre><p>The functional test asserts routing and dispatch. The handler test asserts business logic. They cover different things and should not bleed into each other.</p><hr><h2 id="async-is-not-an-upgrade">Async Is Not an Upgrade</h2><p>Use async because it feels sophisticated, and you will be living with the consequences for a long time.</p><p>In e-commerce specifically, async has costs that are easy to underestimate. An availability update sitting in a queue for 30 seconds is 30 seconds during which a customer can add an out-of-stock product to their cart, proceed through checkout, and reach payment before the handler has run. You have not just delayed a database write. You have created an oversell window. The async layer that was supposed to protect the application under load has introduced a consistency gap directly in the checkout flow.</p><p>The same applies to pricing. A price update dispatched asynchronously means there is a window, however brief, where the displayed price and the stored price disagree. For most products that is tolerable. For a flash sale that starts at midnight, it is not.</p><p>These are not arguments against Messenger. They are arguments for being deliberate about which operations can afford eventual consistency and which cannot. Availability and pricing updates that feed from external systems are genuinely good candidates for async. The volume is too high for synchronous handling and the latency is bounded. But the decision should be made explicitly, with the consistency window understood, not by defaulting to async because the infrastructure is already in place.</p><p>In the application we fixed, async was the right call. The volume of external feed updates was genuinely incompatible with synchronous handling under customer traffic. 
But the correct Messenger setup for that application required transport separation, failure monitoring, idempotent handlers, memory-bounded workers, and a retry strategy matched to the actual failure modes. The component was already installed. None of that was configured.</p><p>That gap, between &quot;Messenger is installed&quot; and &quot;Messenger is working correctly under production conditions&quot;, is what this article is about.</p><hr><h2 id="what-we-actually-delivered">What We Actually Delivered</h2><p>None of the fixes were exotic. The Symfony Messenger component was already doing exactly what it was configured to do. That was the problem.</p><p>A single shared transport meant catalog sync volume could starve order processing. Separating them meant the payment confirmation backlog cleared within minutes. Not because anything got faster, but because the two workloads stopped competing for the same queue.</p><p>Workers without memory limits ran until the server complained. The 600MB figure was not a memory leak in the traditional sense. It was Doctrine&apos;s identity map accumulating every loaded entity in a process that had been running for days.&#xA0;<code>--memory-limit</code>&#xA0;and&#xA0;<code>--time-limit</code>&#xA0;brought it down to a stable 80-120MB range.</p><p>The 400 failure transport entries were the most instructive part. Roughly a third were genuinely unrecoverable: deleted products, closed merchant accounts, SKUs the warehouse had never heard of. They had been burning through retries in 7 seconds and sitting in the failure queue unnoticed for weeks. Proper exception classification would have discarded them on the first attempt. The remaining two thirds were transient failures: rate limits, a deployment window where an upstream service was briefly unreachable. A wider retry window would have resolved all of them without ever touching the failure transport.</p><p>It was not all clean. 
When we separated the transports and restarted the workers, we discovered that three handler classes were not idempotent. That had never mattered with a single slow worker, but became obvious immediately when three catalog workers started running in parallel. The first sign was a product showing -1 stock in the catalog. Two availability handlers had picked up the same SKU simultaneously. Both queried the current stock figure of 1. Both decremented it. Neither knew the other existed. The race had always been possible in theory; the single-worker setup had just made it vanishingly unlikely in practice.</p><p>We had to stop the catalog workers, audit all handler classes that touched shared state, add idempotency keys to the messages, add uniqueness constraints at the database level, and redeploy before restoring full worker concurrency. It took most of a day. The fix itself was not complicated: switch from decrements to absolute writes for the availability handlers, add the idempotency key pattern for the two handlers where that was not possible. Finding every affected handler and being confident we had not missed one was the slow part. This is exactly the kind of thing you want to discover in a staging environment under a load test, not on a live catalog at 11am on a Wednesday.</p><p>That complication aside, the application has been stable under the same traffic and feed volume ever since. The queue depth stays low. The failure transport gets a handful of entries per week, all unrecoverable, all expected.</p><p>That is what a well-configured Messenger setup looks like. Not invisible, but quiet.</p>]]></content:encoded></item><item><title><![CDATA[Fuck Clean Code - Ain't Nobody Got Time for That]]></title><description><![CDATA[Clean code is dead. The AI doesn't care about your variable names. It'll read your 800-line controller and generate another one without blinking. 
But here's the problem: you just taught it everything.]]></description><link>https://marcelmoll.dev/fuck-clean-code-aint-nobody-got-time-for-that/</link><guid isPermaLink="false">69a211fe7ec5f735e0bef053</guid><category><![CDATA[Clean Code]]></category><category><![CDATA[AI]]></category><category><![CDATA[AI Experience]]></category><dc:creator><![CDATA[Marcel Moll]]></dc:creator><pubDate>Tue, 03 Mar 2026 20:04:31 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1608134982564-349bd5379fea?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGZ1Y2slMjBpdHxlbnwwfHx8fDE3NzIyMjkxNDB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1608134982564-349bd5379fea?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fGZ1Y2slMjBpdHxlbnwwfHx8fDE3NzIyMjkxNDB8MA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="Fuck Clean Code - Ain&apos;t Nobody Got Time for That"><p>Clean code is dead. The AI doesn&apos;t care about your variable names, your single responsibility principle, your beautifully extracted service classes. It&apos;ll read your 800-line controller and generate another one without blinking. The humans are leaving the loop.</p><p>Why are we still writing code for them?</p><p>Because the AI is the loop now. And you just taught it everything.</p><hr><h2 id="the-pattern-matcher-in-your-codebase">The Pattern Matcher in Your Codebase</h2><p><strong>AI assistants are pattern matchers. Your codebase is the pattern.</strong></p><p>Every autocomplete suggestion, every refactor request, every &quot;explain this function&quot; runs through a model trying to understand what&#xA0;<em>normal</em>&#xA0;looks like here. What the conventions are. What this team considers acceptable.</p><p>It doesn&apos;t read your best code first. It reads all of it, equally. 
The clean service you wrote on a good Tuesday and the nightmare controller you hacked together at 6pm before a deadline carry the same weight. Same influence. Same lesson absorbed.</p><p>You already have an AI apprentice. It showed up the moment you installed Copilot, opened Cursor, or started pasting code into Claude. No interview. No onboarding. It just started watching and learning.</p><p>A human junior eventually pushes back. Asks &quot;should this really be in the controller?&quot; Gets uncomfortable and starts questioning it. The AI never gets uncomfortable. It sees the fat controller and thinks:&#xA0;<em>this is how we do things here.</em>&#xA0;Then it makes another one. Faster. With more features. In places you haven&apos;t looked at yet.</p><p>The question isn&apos;t whether it learned from you. It&apos;s what you taught it.</p><hr><h2 id="someone-has-a-different-theory">Someone Has a Different Theory</h2><p>Last week, Joppe De Cuyper published &quot;<a href="https://joppe.dev/2026/02/26/i-deleted-my-source-code/?ref=marcelmoll.dev" rel="noreferrer">I deleted my source code.</a>&quot; He built a Symfony app with an empty&#xA0;<code>src/</code>&#xA0;directory. No committed code. On every CI run, an AI agent reads a spec file and a test suite, generates the full implementation, runs the tests, and throws everything away. The spec is the source. Code is a build artifact, like a compiled binary.</p><p>His conclusion: code is becoming disposable. Clean code is a convention we built for humans, not machines. The machine doesn&apos;t care about your naming. It&apos;ll generate an 800-line controller and move on.</p><p>The direction of travel is real. But his proof of concept is a greenfield TODO app. Clean domain. Controlled from the start. 
No legacy, no accumulated edge cases, no fix from 2019 that lived only in the implementation because nobody updated the spec.</p><p>He admits it himself:&#xA0;<em>&quot;Most knowledge in a codebase is implicit.&quot;</em>&#xA0;The bug you fixed last year. The subtle business rule that exists because a client complained once in 2022. The performance hack that solved a real production problem. None of that is in the spec. It&apos;s in the code. Regenerate, and it&apos;s gone.</p><p>His experiment works because he controlled every variable. Most of us are working in codebases that have been out of control for years. There&apos;s no spec to regenerate from. There&apos;s just what was left behind. And the AI is working from it right now, on the ticket that just got assigned.</p><hr><h2 id="the-machine-absolutely-cares-about-your-variable-names">The Machine Absolutely Cares About Your Variable Names</h2><p>This is the part of Joppe&apos;s argument that sounds right and isn&apos;t.</p><p>The interpreter doesn&apos;t care.&#xA0;<code>$x</code>&#xA0;parses identically to&#xA0;<code>$userEmailAddress</code>. The runtime has no opinions about your naming choices.</p><p>The AI assistant isn&apos;t the runtime. It&apos;s an LLM, and LLMs understand code through semantics. The meaning of names. The shape of structures. The relationship between concepts. All of it feeds into how the model reasons about what your code does and what it should do next.</p><p>A method called&#xA0;<code>process()</code>&#xA0;that does three different things depending on an internal flag gives the model nothing. It guesses. Then it commits to that guess, because these models don&apos;t say &quot;I&apos;m not sure.&quot; They pattern-match to the most plausible continuation and ship it.</p><p>A method called&#xA0;<code>markTaskAsCompleteIfSubtasksAreDone()</code>&#xA0;is a different thing entirely. Preconditions, side effects, intent &#x2014; all communicated before a single suggestion is made. 
Every line generated nearby is shaped by that clarity.</p><p>Clean naming isn&apos;t aesthetics. It&apos;s signal. Signal is what separates an AI that makes you faster from one that introduces bugs in files you haven&apos;t opened in months.</p><p>Give it noise, it generates noise. That&apos;s not a metaphor. It&apos;s just how the math works.</p><hr><h2 id="the-debt-doesnt-sit-still-anymore">The Debt Doesn&apos;t Sit Still Anymore</h2><p>You&apos;ve been tolerating a mess. The deadline was real. You&apos;d clean it up later. Only teammates would see it.</p><p>That tolerance used to cost one thing: a developer, slowed down, on a future ticket. Annoying. Manageable. The kind of debt you could argue you&apos;d eventually address.</p><p>The AI changed the math.</p><p>Every bad pattern is now a lesson that gets applied again, faster, in more places, with more confidence, across every interaction with your codebase until someone cleans it up. The debt has an engine now. Tireless. Tasteless. Incapable of telling the difference between your best work and your worst.</p><p>It used to be linear. Now it compounds. It&apos;s been compounding since the day you gave it access.</p><hr><h2 id="specs-and-clean-code-arent-alternatives">Specs and Clean Code Aren&apos;t Alternatives</h2><p>Joppe gets one thing right: specs should be where decisions live. Explicit, testable, precise. If your spec isn&apos;t detailed enough to generate correct code from, it wasn&apos;t detailed enough to build correct code from by hand either.</p><p>But specs describe&#xA0;<em>what</em>. Code is where&#xA0;<em>how</em>&#xA0;lives. How you handle two events arriving out of order. How domain logic stays separated from infrastructure. How the subtle business rule actually gets enforced when things get messy at runtime. No spec captures all of that. The implementation is still full of decisions, and when those decisions are buried in noise, the AI guesses at them. Repeatedly. 
Confidently.</p><p>Precise specs make the AI more capable. Clean code makes it more&#xA0;<em>correct</em>. Not competing priorities. Two layers of the same problem.</p><p>Write better specs. And clean up the code you actually have today, because that&apos;s what the AI is reading right now.</p><hr><h2 id="so-what-did-you-teach-it">So What Did You Teach It?</h2><p>The AI is reading your codebase while you read this. Building a model of what normal looks like. Your patterns. Your shortcuts. Your tolerated messes.</p><p>In six months, when you open a file you don&apos;t recognize, you&apos;ll know whose fault it is.</p><p>You wrote the curriculum. You left the mess on the desk and called it pragmatism.</p>]]></content:encoded></item><item><title><![CDATA[The Hidden Power of Symfony's EventDispatcher]]></title><description><![CDATA[Most developers use the EventDispatcher for simple notifications and never go deeper. That's a mistake. Here's what's actually in the toolbox and when to reach for it.]]></description><link>https://marcelmoll.dev/the-hidden-power-of-symfonys-eventdispatcher/</link><guid isPermaLink="false">69a1c6cb7ec5f735e0bef03d</guid><category><![CDATA[Symfony]]></category><category><![CDATA[Deep Dive]]></category><dc:creator><![CDATA[Marcel Moll]]></dc:creator><pubDate>Thu, 26 Feb 2026 22:36:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1678287759127-1ad7f38855cb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDl8fGRpc3BhdGNofGVufDB8fHx8MTc3MjIwOTkyOXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1678287759127-1ad7f38855cb?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDl8fGRpc3BhdGNofGVufDB8fHx8MTc3MjIwOTkyOXww&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="The Hidden Power of Symfony&apos;s EventDispatcher"><p>Most developers I know use the EventDispatcher the same way. 
They create an event, wire up a listener, dispatch it somewhere, and move on. It works. Nobody complains.</p><p>The problem is that &quot;works fine&quot; is exactly what keeps you from going deeper. I spent years in that spot, using the EventDispatcher for simple notifications and never feeling the need to push further. What I missed were the patterns that actually change how you structure an application. That&apos;s what this is about.</p><h2 id="the-basics-so-were-on-the-same-page">The Basics (So We&apos;re On the Same Page)</h2><p>You know this part. Skip ahead if you want.</p><pre><code class="language-php">class OrderPlaced
{
    public function __construct(public readonly Order $order) {}
}

$this-&gt;dispatcher-&gt;dispatch(new OrderPlaced($order));

#[AsEventListener]
class SendOrderConfirmation
{
    public function __invoke(OrderPlaced $event): void
    {
        // send the email
    }
}
</code></pre><p>The&#xA0;<code>#[AsEventListener]</code>&#xA0;attribute (Symfony 6.0+) wires the listener automatically. On a class with&#xA0;<code>__invoke</code>, Symfony infers the event type from the type hint of the first parameter. Leave it untyped and it fails silently. That one will bite you eventually.</p><h2 id="stoppable-events">Stoppable Events</h2><p>Without stoppable events, a &quot;first responder wins&quot; pattern looks like this:</p><pre><code class="language-php">class ResolvePaymentMethod
{
    public function __invoke(ResolvePaymentMethodEvent $event): void
    {
        if ($event-&gt;getMethod() !== null) {
            return; // someone else already resolved it
        }

        if ($this-&gt;supports($event-&gt;getOrder())) {
            $event-&gt;setMethod(&apos;stripe&apos;);
        }
    }
}
</code></pre><p>Every listener manually checks whether a previous one already ran. It works, but the contract is implicit. You&apos;re trusting that every future developer reads every other listener before writing a new one.</p><p>Stoppable events make the contract explicit:</p><pre><code class="language-php">use Symfony\Contracts\EventDispatcher\Event;

class ResolvePaymentMethod extends Event
{
    private ?string $method = null;

    public function setMethod(string $method): void
    {
        $this-&gt;method = $method;
        $this-&gt;stopPropagation();
    }

    public function getMethod(): ?string
    {
        return $this-&gt;method;
    }
}
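
// A resolver listening on this event (sketch: assumes the event also
// carries the order, as ResolvePaymentMethodEvent did in the earlier
// example). Calling setMethod() stops propagation, so lower-priority
// resolvers simply never run. No manual &quot;already resolved?&quot; checks.
#[AsEventListener(priority: 10)]
final class StripePaymentMethodResolver
{
    public function __invoke(ResolvePaymentMethod $event): void
    {
        if ($this-&gt;supports($event-&gt;getOrder())) {
            $event-&gt;setMethod(&apos;stripe&apos;);
        }
    }
}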
</code></pre><p>The moment a listener sets a method, propagation stops. No manual checks. No scattered conditionals.</p><p>One caveat I&apos;ve learned the hard way: use stoppable events on events you fully own. If a third-party bundle registers a listener on the same event at a higher priority, it can stop propagation before yours runs. Silently. No error, no warning. Run&#xA0;<code>debug:event-dispatcher</code>&#xA0;before you assume your listener is executing.</p><h2 id="priorities-good-for-ordering-unreliable-for-correctness">Priorities: Good for Ordering, Unreliable for Correctness</h2><p>Priorities let you control when listeners run. Higher numbers run first. In Symfony 6.0+ the attribute handles it cleanly:</p><pre><code class="language-php">#[AsEventListener(event: UserRegistered::class, priority: 100)]
class ValidateUserData { ... }

#[AsEventListener(event: UserRegistered::class, priority: 50)]
class CreateDefaultWorkspace { ... }

#[AsEventListener(event: UserRegistered::class, priority: 0)]
class SendWelcomeEmail { ... }

#[AsEventListener(event: UserRegistered::class, priority: -10)]
class LogRegistrationForAnalytics { ... }
</code></pre><p>On Symfony 5.x, set priorities via&#xA0;<code>getSubscribedEvents()</code>. The nested array format exists because a single event can have multiple listeners in the same subscriber, each with their own priority:</p><pre><code class="language-php">public static function getSubscribedEvents(): array
{
    return [
        // Single listener with priority
        UserRegistered::class =&gt; [[&apos;onUserRegistered&apos;, 50]],

        // Multiple listeners on the same event
        OrderPlaced::class =&gt; [[&apos;firstHandler&apos;, 20], [&apos;secondHandler&apos;, 10]],
    ];
}
</code></pre><p>Here&apos;s the thing I keep coming back to with priorities: they&apos;re fine for loose ordering. &quot;Logging should happen after the main work.&quot; Reasonable. But the moment your application&apos;s correctness depends on strict execution order, you&apos;re on shaky ground. Any listener from a bundle, or from a colleague who skipped the architecture docs, can slip in at any priority without warning.</p><p>Use priorities to express&#xA0;<em>when</em>&#xA0;something runs. If you need to express&#xA0;<em>whether</em>&#xA0;something runs based on what came before, that&apos;s a job for explicit service calls, not priority numbers.</p><h2 id="event-subscribers-co-locating-related-behavior">Event Subscribers: Co-locating Related Behavior</h2><p>Subscribers let a single class listen to multiple events. The reason I reach for them is co-location: all the behavior for a given concern lives in one place.</p><pre><code class="language-php">class AuditLogSubscriber implements EventSubscriberInterface
{
    public static function getSubscribedEvents(): array
    {
        return [
            OrderPlaced::class    =&gt; &apos;onOrderPlaced&apos;,
            OrderCancelled::class =&gt; &apos;onOrderCancelled&apos;,
            OrderRefunded::class  =&gt; &apos;onOrderRefunded&apos;,
        ];
    }

    public function onOrderPlaced(OrderPlaced $event): void
    {
        $this-&gt;auditLog-&gt;record(&apos;order_placed&apos;, $event-&gt;order-&gt;getId());
    }

    // ...
}
</code></pre><p>Logging, auditing, metrics: these are the cases where subscribers earn their keep. The behavior is genuinely independent of the dispatching code. Grouping it by concern, all audit logic together regardless of which events trigger it, makes it easier to find and change.</p><p>That said, the discoverability problem is real. A developer debugging a missing audit entry still needs to know&#xA0;<code>AuditLogSubscriber</code>&#xA0;exists. The grouping helps once you know where to look. It doesn&apos;t help you find it from the dispatch site. That&apos;s a general property of event-driven code, and I think it&apos;s worth being honest about before you wire everything this way.</p><h2 id="extension-points-before-and-after-hooks">Extension Points: Before and After Hooks</h2><p>If you&apos;re building something others will extend, events are one of the cleanest mechanisms I&apos;ve found. Inheritance ties you to a class hierarchy. Callback arrays get unwieldy as the number of extension points grows. Events give you explicit contracts that downstream code can hook into without touching yours.</p><p>The pattern I use most often is a before/after pair:</p><pre><code class="language-php">class PreOrderProcessing extends Event
{
    private bool $cancelled = false;

    public function __construct(public readonly Order $order) {}

    public function cancel(): void
    {
        $this-&gt;cancelled = true;
    }

    public function isCancelled(): bool
    {
        return $this-&gt;cancelled;
    }
}

class PostOrderProcessing extends Event
{
    public function __construct(public readonly Order $order) {}
}

class OrderProcessor
{
    public function process(Order $order): void
    {
        if ($this-&gt;dispatcher-&gt;dispatch(new PreOrderProcessing($order))-&gt;isCancelled()) {
            return;
        }

        // ... core processing logic

        $this-&gt;dispatcher-&gt;dispatch(new PostOrderProcessing($order));
    }
}
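
// A listener on the before hook (sketch): cancelling here makes
// OrderProcessor::process() return before the core logic runs.
// isFlaggedForFraudReview() is a hypothetical Order method.
#[AsEventListener]
final class HoldFlaggedOrders
{
    public function __invoke(PreOrderProcessing $event): void
    {
        if ($event-&gt;order-&gt;isFlaggedForFraudReview()) {
            $event-&gt;cancel();
        }
    }
}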
</code></pre><p>A listener on the after hook:</p><pre><code class="language-php">#[AsEventListener]
class NotifyWarehouseAfterOrder
{
    public function __invoke(PostOrderProcessing $event): void
    {
        $this-&gt;warehouseNotifier-&gt;notify($event-&gt;order);
    }
}
</code></pre><p><code>OrderProcessor</code>&#xA0;dispatches two events and knows nothing else. Everything that needs to happen before or after plugs in without touching the core. This is how Symfony&apos;s HttpKernel works.&#xA0;<code>kernel.request</code>,&#xA0;<code>kernel.response</code>,&#xA0;<code>kernel.exception</code>: the routing and security system hangs off those hooks. The kernel knows nothing about any of it directly.</p><p>Where this breaks down is when listeners start needing to feed complex information back to the dispatching code. At that point the indirection is fighting you. A direct service call is clearer.</p><h2 id="the-honest-problem-event-driven-code-is-hard-to-trace">The Honest Problem: Event-Driven Code is Hard to Trace</h2><p>I want to be direct about this because I&apos;ve seen it glossed over in every EventDispatcher tutorial I&apos;ve read.</p><p>When you dispatch an event, the execution path splits across however many listeners are registered. You can&apos;t follow it by reading the code at the dispatch site. You have to know the full listener graph, and in a large codebase that&apos;s not always obvious.</p><p>Symfony gives you tools:</p><pre><code class="language-bash"># See all registered listeners
php bin/console debug:event-dispatcher

# Filter by event. Quote the class name so the shell does not
# strip the backslashes (use double quotes in Windows shells):
php bin/console debug:event-dispatcher &apos;App\Event\UserRegistered&apos;
</code></pre><p>The debug toolbar shows the same for web requests: what fired, in what order, whether propagation stopped. Use both regularly.</p><p>But tooling has limits. In a codebase with thirty listeners, debugging an unexpected side effect means tracing a graph, not reading a call stack. That cognitive overhead is real and it compounds as the codebase grows.</p><p>One thing that helps: if your listeners are doing work that doesn&apos;t need to complete synchronously, dispatch via Symfony Messenger. With the&#xA0;<code>sync</code>&#xA0;transport, messages are handled immediately in the same process. No infrastructure required, but also no retry logic or failure isolation. With an async queue transport, you get retries, failure handling, and full decoupling from the request lifecycle. Either way, you&apos;re separating what happened from what should happen next, which makes both sides easier to reason about. I&apos;ll come back to this in the domain events section.</p><h2 id="when-to-reach-for-it-and-when-not-to">When to Reach for It (and When Not To)</h2><p>After nineteen years of Symfony projects, the framing that&apos;s helped me most is this: events are for concerns that are genuinely independent. Not just loosely coupled. Genuinely independent, meaning neither side needs to know the other exists.</p><p>What that looks like in practice: logging and auditing that doesn&apos;t affect the outcome of the dispatching code. Extension points in bundles where you want downstream hooks without modifying core logic. First-responder chains where the chain stops once one listener resolves a value, and you want that contract in the event class rather than scattered across guard clauses.</p><p>What it doesn&apos;t look like: listeners that need to share state or coordinate with each other. Dispatching code that needs reliable return values or structured error handling from what runs next. Flows where execution order is load-bearing for correctness. 
And situations where the code is simple and stable enough that a direct service call would just be easier to follow. Not everything needs to be decoupled. Indirection has a cost even when it&apos;s justified.</p><p>The test I apply: if I removed the EventDispatcher and replaced this with a direct service call, would the code be meaningfully harder to extend or maintain? If the answer is no, the event is probably not earning its place.</p><h2 id="domain-events-modeling-intent-not-just-plumbing">Domain Events: Modeling Intent, Not Just Plumbing</h2><p>This is the pattern I&apos;d push hardest if someone asked where to invest time beyond the basics.</p><p>Without domain events, side effects accumulate in services:</p><pre><code class="language-php">class PlaceOrderService
{
    // the dependency list that grows with every new side effect
    public function __construct(
        private readonly EntityManagerInterface $entityManager,
        private readonly MailerInterface $mailer,
        private readonly InventoryService $inventory,
    ) {}

    public function place(Order $order): void
    {
        $order-&gt;setStatus(&apos;placed&apos;);
        $this-&gt;entityManager-&gt;flush();
        $this-&gt;mailer-&gt;sendConfirmation($order);
        $this-&gt;inventory-&gt;reserve($order);
        // and so on
    }
}
</code></pre><p>Every new consequence of placing an order becomes another dependency on this service. The list grows. The coupling grows. The service becomes the place where everything ends up because it has to be somewhere.</p><p>Domain events break that apart:</p><pre><code class="language-php">class Order
{
    private array $domainEvents = [];

    public function place(): void
    {
        // ... business logic
        $this-&gt;domainEvents[] = new OrderPlaced($this);
    }

    public function pullDomainEvents(): array
    {
        $events = $this-&gt;domainEvents;
        $this-&gt;domainEvents = [];
        return $events;
    }
}
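
// The recorded event itself is a plain object carrying the aggregate.
// A minimal sketch, mirroring the event classes shown earlier:
class OrderPlaced
{
    public function __construct(public readonly Order $order) {}
}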
</code></pre><p>Then in the application layer, after the flush:</p><pre><code class="language-php">$entityManager-&gt;persist($order);
$entityManager-&gt;flush();

foreach ($order-&gt;pullDomainEvents() as $event) {
    $dispatcher-&gt;dispatch($event);
}
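
// Alternative: route the same events through Messenger so each side effect
// becomes a retryable message. A sketch, assuming a MessageBusInterface
// $bus has been injected into this layer:
//
//     foreach ($order-&gt;pullDomainEvents() as $event) {
//         $bus-&gt;dispatch($event);
//     }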
</code></pre><p><code>place()</code>&#xA0;now expresses intent. It says: this happened. Not: here are all the things that should follow. Each consequence registers itself as a listener. The service stops being the place where everything accumulates.</p><p>The flush ordering is non-negotiable. Dispatch before flush and if the flush fails, you&apos;ve fired side effects for a state change that never committed. I&apos;ve seen this cause hard-to-reproduce bugs in production.</p><p>Even with correct ordering, you&apos;re exposed if dispatching throws mid-loop. Some events have fired, some haven&apos;t. Dispatching via Messenger helps: side effects become retryable messages rather than in-process calls, and a failure in one doesn&apos;t leave the rest in an unknown state.</p><p>For &quot;exactly once&quot; delivery guarantees, the outbox pattern is the real answer. Events are persisted transactionally alongside your domain data, then processed by a dedicated worker. It&apos;s genuinely more complex: a polling mechanism or change data capture tooling like Debezium, idempotency on the consumer side, at-least-once delivery semantics. Reach for it when you actually need those guarantees. Most applications don&apos;t.</p><p>As for when to use this pattern at all: if your entities have real business logic and placing an order means something beyond setting a status field, domain events will pay for themselves quickly. If you&apos;re working in a thin CRUD app where entities are essentially database row representations, the ceremony probably isn&apos;t worth it. The honest test is whether your domain objects have behavior worth expressing. 
If they do, domain events give you a clean way to express it.</p><h2 id="what-ive-landed-on">What I&apos;ve Landed On</h2><p>The EventDispatcher is one of those tools where the basic usage is obvious and the advanced usage is invisible until you&apos;ve needed it and didn&apos;t have it.</p><p>The question that matters is whether the behavior you&apos;re wiring is genuinely independent of the code dispatching the event. If it is, events are a natural fit and they&apos;ll keep your services small and your extension points clean. If it isn&apos;t, you&apos;re adding indirection to a problem that a direct service call would solve more clearly.</p><p>Used well, the EventDispatcher doesn&apos;t add complexity. It moves complexity to where it belongs.</p>]]></content:encoded></item><item><title><![CDATA[AI Writes the Draft. You Own the Mess.]]></title><description><![CDATA[AI writes code that works. That's not the same as code that's good. The moment you merge it, you own it — the shortcuts, the weak names, the missed boundaries. 
Here's why clean code matters more now, not less.]]></description><link>https://marcelmoll.dev/ai-writes-the-draft-you-own-the-mess/</link><guid isPermaLink="false">699dfb207ec5f735e0bef024</guid><category><![CDATA[Developer Experience]]></category><category><![CDATA[Clean Code]]></category><category><![CDATA[PHP]]></category><category><![CDATA[AI]]></category><dc:creator><![CDATA[Marcel Moll]]></dc:creator><pubDate>Tue, 24 Feb 2026 20:11:44 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1549319114-d67887c51aed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fG1lc3N8ZW58MHx8fHwxNzcxOTYyNTczfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<h1 id="why-clean-code-still-matters-in-the-age-of-ai-generated-code">Why Clean Code Still Matters in the Age of AI-Generated Code</h1><img src="https://images.unsplash.com/photo-1549319114-d67887c51aed?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=M3wxMTc3M3wwfDF8c2VhcmNofDF8fG1lc3N8ZW58MHx8fHwxNzcxOTYyNTczfDA&amp;ixlib=rb-4.1.0&amp;q=80&amp;w=2000" alt="AI Writes the Draft. You Own the Mess."><p>Recently I caught myself doing something I wouldn&apos;t have let slide in a code review.</p><p>I&apos;d asked an AI assistant to scaffold a Symfony service. It came back with something reasonable: correct structure, working logic, passing tests. I skimmed it, made one small change, and pushed it. Didn&apos;t think twice.</p><p>Some days later I was in that file chasing a bug and I genuinely couldn&apos;t follow what it was doing. Not because it was broken. Because it had no voice. No intent. It was code written by something that doesn&apos;t have to maintain it afterwards.</p><p>That&apos;s what got me paying closer attention to what I was actually shipping.</p><p>I&apos;ve been writing PHP for the better part of 19 years. 
I&apos;ve watched the language go from something developers apologised for to something they actively choose. I&apos;ve seen frameworks come and go. And I&apos;ve learned, slowly and the hard way, that making things work has never really been the hard part. Making things understandable is the hard part. Right now, I think that skill is quietly getting deprioritised.</p><h2 id="working-and-good-are-not-the-same-thing">Working and good are not the same thing</h2><p>AI is genuinely impressive at producing working code. I don&apos;t want to be unfair about that because it would be dishonest. The tools are fast, they know a lot, and they save real time.</p><p>But there&apos;s a gap between code that works and code that&apos;s good, and AI sits firmly on the working side of it. It doesn&apos;t know your domain. It&apos;s never been burned by a badly-named variable at 11pm on a Friday. It hasn&apos;t had the argument with a colleague about where business logic should live, or watched a codebase become unworkable because nobody drew clear boundaries early enough.</p><p>So it reaches for <code>$data</code> and <code>$result</code> and <code>processItems()</code>. It writes methods that fetch, filter, transform and fire off side effects all in one go, because that&apos;s the most direct path between the prompt and the output. It doesn&apos;t stop to ask whether a class is taking on too much, because responsibility isn&apos;t something it carries.</p><p>None of this is a knock on the tools. It&apos;s just a description of what they are. The problem is when we forget the gap exists.</p><h2 id="you-own-what-you-merge">You own what you merge</h2><p>The thing I keep coming back to is this: the moment AI-generated code lands in your repository, it&apos;s yours. Not the model&apos;s. Yours.</p><p>The architectural shortcuts it took? Yours. The test that only covers the happy path? Yours. 
The class that quietly violates the single responsibility principle because it was easier to generate that way? Also yours.</p><p>I think of it a bit like buying a second-hand car. You&apos;re still going to look under the hood before you take it on the highway. Not because you assume it&apos;s broken, but because you&apos;re the one driving it.</p><p>Teams that are moving fast with AI are, in some cases, quietly building up technical debt at the same speed they&apos;re shipping features. The code works, the sprints look great, and then six months in the codebase starts fighting back. Every change requires understanding ten things that should only require understanding one. That&apos;s when the bill arrives.</p><blockquote>Working code that nobody understands is not an asset. It&apos;s a liability with passing tests.</blockquote><h2 id="the-fundamentals-dont-go-away-they-just-move">The fundamentals don&apos;t go away. They just move.</h2><p>What I&apos;ve found is that the clean code principles I&apos;ve been applying for years don&apos;t matter less in this new workflow. They matter more, because now they&apos;re the filter rather than the output.</p><p><strong>Naming.</strong> AI leans toward generic names because it has no context for your specific domain. It doesn&apos;t know that in your system, a <code>User</code> isn&apos;t just any user. It might be an authenticated account holder with a billing state, an access tier, and a set of permissions that matter a lot. When I rename <code>$filteredUsers</code> to <code>$activeSubscribers</code>, that&apos;s not cosmetic. That&apos;s the difference between code that speaks your domain&apos;s language and code that could have come from anywhere.</p><p><strong>Function size.</strong> Ask AI for a service method and you&apos;ll often get one that does everything in one shot, because that&apos;s what you asked for. It&apos;s trying to be helpful. But that&apos;s not how I want my Symfony services to look. 
Each of those concerns deserves its own home. Splitting it apart isn&apos;t refactoring for the sake of it. It&apos;s making the code navigable.</p><p><strong>Architectural boundaries.</strong> AI has no idea what your architecture looks like. It will reach across layers without hesitation, inject infrastructure into your domain, and build a class that answers the prompt perfectly but fits nowhere cleanly. This is where knowing your own system becomes genuinely important. You can see what the AI can&apos;t.</p><p>If you&apos;re reviewing AI-generated code and you&apos;re not running <strong>quality tools</strong> in CI, you&apos;re leaving a lot to chance. Static analysis catches what tired human eyes skip when the code looks reasonable at a glance. In an AI-heavy workflow it&apos;s not optional.</p><h2 id="the-architects-eye">The architect&apos;s eye</h2><p>The framing that&apos;s helped me most is this: my job is the blueprint, not the bricks.</p><p>Before any AI-generated code lands in my codebase, the important decisions are already made. What are the layers? Where does business logic live? What does the domain language look like? What are the rules around dependencies? Those decisions are mine. They come from understanding the system, its history, and where it needs to go. The AI has no access to any of that. It works from the prompt, not from the architecture.</p><p>So when I review AI output, I&apos;m not just checking whether it works. I&apos;m checking whether it fits. Does this class belong in this layer? Does this name match the language we use in this domain? Does this dependency point in the right direction? Those are architectural questions, and they require architectural knowledge to answer. That&apos;s the thing the AI genuinely cannot bring to the table.</p><p>This also changes what &quot;good review&quot; means. It&apos;s less about catching bugs and more about enforcing coherence. Does this fit the system we&apos;re building? 
If not, it goes back, however cleanly it was generated.</p><blockquote>The AI can build a room. Only you know what building it belongs in.</blockquote><h2 id="think-of-it-as-a-very-fast-junior">Think of it as a very fast junior</h2><p>The mental model I&apos;ve landed on: AI is an exceptionally fast junior developer who&apos;s very good at syntax and not yet great at judgment.</p><p>I wouldn&apos;t merge a junior&apos;s PR without reading it carefully. I wouldn&apos;t assume their naming matched our conventions or that they&apos;d thought through the wider architectural implications. I&apos;d review with genuine curiosity, not looking for reasons to reject but making sure what goes in actually belongs there.</p><p>That&apos;s the posture I try to bring to AI output now. Not suspicion, just attention. The tools have earned a seat at the table. They haven&apos;t earned the right to skip review.</p><p>And the better your architectural instincts are, the faster and more accurate that review gets. You can&apos;t enforce boundaries you haven&apos;t defined. You can&apos;t catch a misplaced dependency if you don&apos;t have a clear picture of where things are supposed to live. The fundamentals aren&apos;t the thing you learn before the real work starts. They are the real work.</p><h2 id="to-wrap-up">To wrap up</h2><p>I want to be clear: I&apos;m not skeptical of AI tools. I use them every day and I&apos;m not going back.</p><p>What I am is someone who&apos;s seen what happens when a team prioritises shipping speed over code quality. The codebases that become genuinely painful to work in don&apos;t usually get there through one big bad decision. They get there through a thousand small ones, each of which seemed fine at the time.</p><p>&quot;It passes the tests&quot; was never the bar. It&apos;s still not.</p><p>Code gets read far more than it gets written. It gets maintained by people who weren&apos;t there when it was made. 
The decisions baked into it compound over time.</p><p><strong>AI changed who writes the first draft. It hasn&apos;t changed any of that. Write for the reader.</strong></p>]]></content:encoded></item></channel></rss>