Signal Hub

PHP news and articles

PHP

11 articles

Dev.to (PHP)
~6 min read · May 6, 2026

EU Digital Product Passport in WooCommerce: Implementing ESPR Compliance with Structured Data and QR Codes

The EU's Ecodesign for Sustainable Products Regulation (ESPR) mandates Digital Product Passports (DPP) for most physical goods sold in Europe starting 2026–2030 (phased by category). If you run a WooCommerce store shipping to EU customers, your products will need machine-readable passports containing lifecycle data, materials, repairability scores, and recycling instructions — all accessible via a physical carrier (QR code, RFID, or datamatrix). This article walks through how we built a self-hosted WooCommerce plugin to generate DPP-compliant structured data and printable QR codes — no third-party SaaS, no ongoing fees. A DPP is a structured data record attached to a physical product containing: Product identity: GTIN/EAN, model, manufacturer, batch/serial Materials & substances: Bill of materials, hazardous substances (SVHC list) Repairability: Spare parts availability, repair manual links, disassembly score Environmental footprint: Carbon footprint (kg CO₂e), energy efficiency class End-of-life: Recycling instructions, collection points, recyclate content % Compliance documents: CE declaration, REACH compliance, test reports The data must be accessible via a carrier (QR code printed on product/packaging) pointing to a URL that returns structured data in a machine-readable format (JSON-LD recommended by the EU). We store DPP data as WooCommerce product meta, mapped to the EU DPP data model: <?php declare(strict_types=1); class SP_DPP_Data_Model { public const FIELDS = [ // Identity 'dpp_gtin' => 'sanitize_text_field', 'dpp_manufacturer_name' => 'sanitize_text_field', 'dpp_manufacturer_country' => 'sanitize_text_field', 'dpp_batch_number' => 'sanitize_text_field', // Materials 'dpp_main_material' => 'sanitize_text_field', 'dpp_recycled_content_pct' => 'absint', 'dpp_hazardous_substances' => 'sanitize_textarea_field', // Repairability 'dpp_repairability_score' => 'sanitize_text_field', 'dpp_spare_parts_url' => 'esc_url_raw', 'dpp_repair_manual_url' => 'esc_url_raw', 'dpp_warranty_years' => 'absint', // Environmental 'dpp_carbon_footprint_kg' => 'sanitize_text_field', 'dpp_energy_class' => 'sanitize_text_field', 'dpp_energy_kwh_year' => 'sanitize_text_field', // End of life 'dpp_recycling_instructions' => 'sanitize_textarea_field', 'dpp_disassembly_time_min' => 'absint', 'dpp_collection_point_url' => 'esc_url_raw', // Compliance 'dpp_ce_declaration_url' => 'esc_url_raw', 'dpp_reach_compliant' => 'absint', ]; public function save_product_meta( int $product_id ): void { if ( ! current_user_can( 'edit_product', $product_id ) ) return; if ( ! isset( $_POST['sp_dpp_nonce'] ) || ! wp_verify_nonce( sanitize_text_field( wp_unslash( $_POST['sp_dpp_nonce'] ) ), 'sp_dpp_save_' . $product_id ) ) return; $product = wc_get_product( $product_id ); if ( ! $product ) return; foreach ( self::FIELDS as $field => $sanitizer ) { if ( isset( $_POST[ $field ] ) ) { $value = call_user_func( $sanitizer, wp_unslash( $_POST[ $field ] ) ); $product->update_meta_data( '_' . $field, $value ); } } $product->save(); } public function get_passport_data( int $product_id ): array { $product = wc_get_product( $product_id ); if ( ! $product ) return []; $data = []; foreach ( array_keys( self::FIELDS ) as $field ) { $data[ $field ] = $product->get_meta( '_' . 
$field ); } return $data; } } The QR code points to a public REST endpoint returning JSON-LD: <?php declare(strict_types=1); class SP_DPP_REST_Controller extends WP_REST_Controller { protected $namespace = 'sp-dpp/v1'; protected $rest_base = 'passport'; public function register_routes(): void { register_rest_route( $this->namespace, '/' . $this->rest_base . '/(?P<product_id>[\d]+)', [ [ 'methods' => WP_REST_Server::READABLE, 'callback' => [ $this, 'get_passport' ], 'permission_callback' => '__return_true', // Public 'args' => [ 'product_id' => [ 'validate_callback' => fn($v) => is_numeric($v), 'sanitize_callback' => 'absint', ], ], ], ] ); } public function get_passport( WP_REST_Request $request ): WP_REST_Response { $product_id = $request->get_param( 'product_id' ); $product = wc_get_product( $product_id ); if ( ! $product || ! $product->is_visible() ) { return new WP_REST_Response( [ 'error' => 'Product not found' ], 404 ); } $model = new SP_DPP_Data_Model(); $data = $model->get_passport_data( $product_id ); $passport = [ '@context' => [ 'schema' => 'https://schema.org/', 'dpp' => 'https://digital-product-passport.eu/ns#', ], '@type' => [ 'schema:Product', 'dpp:DigitalProductPassport' ], '@id' => get_permalink( $product_id ), 'schema:name' => $product->get_name(), 'schema:gtin' => $data['dpp_gtin'], 'dpp:materials' => [ 'dpp:primaryMaterial' => $data['dpp_main_material'], 'dpp:recycledContentPct' => (int) $data['dpp_recycled_content_pct'], 'dpp:hazardousSubstances' => json_decode( $data['dpp_hazardous_substances'] ?: '[]', true ), ], 'dpp:repairability' => [ 'dpp:score' => $data['dpp_repairability_score'], 'dpp:sparePartsUrl' => $data['dpp_spare_parts_url'], 'dpp:repairManual' => $data['dpp_repair_manual_url'], 'dpp:warrantyYears' => (int) $data['dpp_warranty_years'], ], 'dpp:environmental' => [ 'dpp:carbonFootprintKg' => (float) $data['dpp_carbon_footprint_kg'], 'dpp:energyClass' => $data['dpp_energy_class'], 'dpp:energyKwhYear' => (float) $data['dpp_energy_kwh_year'], ], 'dpp:endOfLife' => [ 'dpp:recyclingInstructions' => $data['dpp_recycling_instructions'], 'dpp:disassemblyTimeMin' => (int) $data['dpp_disassembly_time_min'], 'dpp:collectionPointUrl' => $data['dpp_collection_point_url'], ], 'dpp:compliance' => [ 'dpp:ceDeclarationUrl' => $data['dpp_ce_declaration_url'], 'dpp:reachCompliant' => (bool) $data['dpp_reach_compliant'], ], 'dpp:generatedAt' => gmdate( 'c' ), ]; $response = new WP_REST_Response( $passport, 200 ); $response->header( 'Content-Type', 'application/ld+json' ); $response->header( 'Cache-Control', 'public, max-age=3600' ); return $response; } } Endpoint URL: https://yourstore.com/wp-json/sp-dpp/v1/passport/123 <?php declare(strict_types=1); class SP_DPP_QR_Generator { public function generate_qr( int $product_id, int $size = 300 ): string { $passport_url = rest_url( 'sp-dpp/v1/passport/' . $product_id ); $cache_key = 'sp_dpp_qr_' . $product_id . '_' . $size; $cached = get_transient( $cache_key ); if ( $cached ) return $cached; require_once SP_DPP_PATH . 'vendor/phpqrcode/qrlib.php'; ob_start(); QRcode::png( $passport_url, false, QR_ECLEVEL_M, 10, 2 ); $raw_png = ob_get_clean(); $data_uri = 'data:image/png;base64,' . base64_encode( $raw_png ); set_transient( $cache_key, $data_uri, WEEK_IN_SECONDS ); return $data_uri; } public function save_qr_file( int $product_id ): string { $upload_dir = wp_upload_dir(); $dpp_dir = trailingslashit( $upload_dir['basedir'] ) . 'sp-dpp-qr/'; if ( ! file_exists( $dpp_dir ) ) { wp_mkdir_p( $dpp_dir ); file_put_contents( $dpp_dir . 
'index.php', '<?php // Silence is golden.' ); } $filename = 'dpp-qr-' . $product_id . '.png'; require_once SP_DPP_PATH . 'vendor/phpqrcode/qrlib.php'; QRcode::png( rest_url( 'sp-dpp/v1/passport/' . $product_id ), $dpp_dir . $filename, QR_ECLEVEL_M, 10, 2 ); return trailingslashit( $upload_dir['baseurl'] ) . 'sp-dpp-qr/' . $filename; } } <?php add_filter( 'woocommerce_product_data_tabs', function( array $tabs ): array { $tabs['sp_dpp'] = [ 'label' => __( 'Digital Passport', 'sp-dpp' ), 'target' => 'sp_dpp_product_data', 'priority' => 85, ]; return $tabs; } ); add_action( 'woocommerce_product_data_panels', function(): void { global $post; wp_nonce_field( 'sp_dpp_save_' . $post->ID, 'sp_dpp_nonce' ); echo '<div id="sp_dpp_product_data" class="panel woocommerce_options_panel">'; echo '<div class="options_group">'; woocommerce_wp_text_input([ 'id' => 'dpp_gtin', 'label' => __( 'GTIN / EAN', 'sp-dpp' ), 'description' => __( 'Global Trade Item Number', 'sp-dpp' ), 'desc_tip' => true, 'value' => get_post_meta( $post->ID, '_dpp_gtin', true ), ]); woocommerce_wp_text_input([ 'id' => 'dpp_repairability_score', 'label' => __( 'Repairability Score (e.g. 7.5/10)', 'sp-dpp' ), 'value' => get_post_meta( $post->ID, '_dpp_repairability_score', true ), ]); woocommerce_wp_text_input([ 'id' => 'dpp_carbon_footprint_kg', 'label' => __( 'Carbon Footprint (kg CO₂e)', 'sp-dpp' ), 'value' => get_post_meta( $post->ID, '_dpp_carbon_footprint_kg', true ), ]); echo '</div>'; // QR preview $qr = new SP_DPP_QR_Generator(); $src = $qr->generate_qr( $post->ID, 200 ); echo '<div class="options_group">'; echo '<p><strong>' . esc_html__( 'DPP QR Code', 'sp-dpp' ) . '</strong></p>'; echo '<img src="' . esc_attr( $src ) . '" width="150" />'; echo '<p><a href="' . esc_url( rest_url( 'sp-dpp/v1/passport/' . $post->ID ) ) . '" target="_blank">' . esc_html__( 'View Passport JSON-LD', 'sp-dpp' ) . '</a></p>'; echo '</div>'; echo '</div>'; } ); <?php add_filter( 'woocommerce_csv_product_import_mapping_default_columns', function( array $columns ): array { return array_merge( $columns, [ 'GTIN' => 'dpp_gtin', 'Manufacturer' => 'dpp_manufacturer_name', 'Carbon Footprint kg' => 'dpp_carbon_footprint_kg', 'Recycled Content %' => 'dpp_recycled_content_pct', 'Repairability Score' => 'dpp_repairability_score', 'Energy Class' => 'dpp_energy_class', ] ); } ); add_filter( 'woocommerce_product_import_pre_insert_product_object', function( WC_Product $product, array $data ): WC_Product { foreach ( array_keys( SP_DPP_Data_Model::FIELDS ) as $field ) { if ( isset( $data[ $field ] ) ) { $product->update_meta_data( '_' . $field, sanitize_text_field( $data[ $field ] ) ); } } return $product; }, 10, 2 ); Year Categories 2026 Batteries, textiles, electronics 2027 Furniture, steel, cement, chemicals 2028–2030 All remaining categories The DPP endpoint must stay accessible for the full product lifecycle — typically 10+ years after sale. DPP SaaS platforms (Circularise, Renoon, Fairly Made) charge €200–500/mo per brand. For most WooCommerce stores, a one-time plugin generating compliant JSON-LD and QR codes covers the full technical requirement without ongoing costs. The full plugin is on CodeCanyon: EU Digital Product Passport for WooCommerce (ESPR) Questions about the JSON-LD structure or ESPR data requirements? Drop them in the comments.
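One wiring note to round this out: the classes above are shown standalone, so here is a minimal bootstrap sketch for hooking them into WordPress. It assumes the class names and the SP_DPP_PATH constant from the snippets plus a hypothetical includes/ file layout; the hooks themselves (rest_api_init, woocommerce_process_product_meta) are standard WordPress/WooCommerce hooks.

<?php
declare(strict_types=1);

// Hypothetical plugin bootstrap — file layout is an assumption.
define( 'SP_DPP_PATH', plugin_dir_path( __FILE__ ) );

require_once SP_DPP_PATH . 'includes/class-sp-dpp-data-model.php';
require_once SP_DPP_PATH . 'includes/class-sp-dpp-rest-controller.php';
require_once SP_DPP_PATH . 'includes/class-sp-dpp-qr-generator.php';

// Expose the public JSON-LD passport endpoint.
add_action( 'rest_api_init', function (): void {
    ( new SP_DPP_REST_Controller() )->register_routes();
} );

// Persist the DPP meta when a product is saved in the admin
// (save_product_meta() already checks the nonce and capability).
add_action( 'woocommerce_process_product_meta', function ( int $product_id ): void {
    ( new SP_DPP_Data_Model() )->save_product_meta( $product_id );
} );

With that in place, saving a product stores the DPP meta, and the passport endpoint and QR preview resolve immediately.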

Dev.to (PHP)
~5 min read · May 6, 2026

Implementing ZATCA Phase 2 E-Invoicing in WooCommerce: UBL XML, XAdES Signing, and Hash Chains

Saudi Arabia's ZATCA Phase 2 e-invoicing mandate is one of the most technically demanding compliance requirements I've encountered in WooCommerce development. It's not just generating a PDF — it's UBL 2.1 XML, XAdES-BES digital signatures, a cryptographic hash chain, TLV-encoded QR codes, and real-time API submission to government servers. Here's how I built FatooraPro, a WooCommerce plugin that handles the full ZATCA Phase 2 flow. Every invoice must: Be generated as UBL 2.1 compliant XML Be digitally signed with XAdES-BES using a ZATCA-issued certificate Include a TLV-encoded QR code with 8 specific fields Maintain a hash chain (each invoice's hash references the previous one) Be submitted to ZATCA's API — either for clearance (B2B) before delivery, or reporting (B2C) within 24 hours Miss any of these and the invoice is legally invalid. Standard Tax Invoice (B2B — Clearance) before the invoice is delivered to the buyer. ZATCA must approve it. This means your checkout flow has to wait for an API response, or queue it and hold the invoice. Simplified Tax Invoice (B2C — Reporting) In WooCommerce, I detect the invoice type based on whether the buyer has a VAT number (B2B) or not (B2C). ZATCA requires a specific XML namespace and field order. Here's a simplified version of what the XML looks like: <?xml version="1.0" encoding="UTF-8"?> <Invoice xmlns="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2" xmlns:cac="urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2" xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2"> <cbc:ProfileID>reporting:1.0</cbc:ProfileID> <cbc:ID>INV-2026-001</cbc:ID> <cbc:UUID>6f4d3885-0e99-4f13-9c8b-...</cbc:UUID> <cbc:IssueDate>2026-05-06</cbc:IssueDate> <cbc:IssueTime>14:30:00</cbc:IssueTime> <cbc:InvoiceTypeCode name="0200000">388</cbc:InvoiceTypeCode> <!-- Seller info, buyer info, line items, tax totals... --> </Invoice> The InvoiceTypeCode name attribute is a 7-digit bitmask that encodes invoice properties (standard/simplified, credit/debit, etc.). This is where it gets complex. ZATCA requires XAdES-BES (XML Advanced Electronic Signatures, Basic Electronic Signature profile). In PHP: function sign_invoice( string $xml, string $private_key_pem, string $certificate_pem ): string { // 1. Canonicalize the XML (C14N) $dom = new DOMDocument(); $dom->loadXML( $xml ); $canonical = $dom->C14N( false, false ); // 2. Hash the canonicalized XML $digest = base64_encode( hash( 'sha256', $canonical, true ) ); // 3. Build SignedInfo element with digest $signed_info = $this->build_signed_info( $digest ); // 4. Sign the SignedInfo with the private key openssl_sign( $signed_info, $signature_raw, $private_key_pem, OPENSSL_ALGO_SHA256 ); $signature_value = base64_encode( $signature_raw ); // 5. Embed certificate hash and signature into XML return $this->embed_signature( $xml, $signature_value, $certificate_pem, $digest ); } The certificate itself is issued by ZATCA through a 4-step onboarding process: CSR generation → Compliance CSID → Simulation testing → Production CSID. ZATCA requires sequential invoice integrity. Each invoice must contain: ICV (Invoice Counter Value): sequential integer starting at 1 PIH (Previous Invoice Hash): SHA-256 hash of the previous invoice's XML This means invoices are cryptographically linked — you can't insert, delete, or modify an old invoice without breaking the chain. 
In WooCommerce, I store the last hash in a WordPress option and update it atomically after each successful submission: function get_and_increment_icv(): array { // Use DB transaction to prevent race conditions global $wpdb; $wpdb->query( 'START TRANSACTION' ); $current_icv = (int) get_option( 'fatoorapro_icv', 0 ); $previous_hash = get_option( 'fatoorapro_pih', hash( 'sha256', 'NWZlY2ViNjZmZmM4NmYzOGQ5NTI3ODZjNmQ2OTZjNzljMmRiYzIzOWRkNGU5MWI0NjcyOWQ3M2EyN2YzNDkyMQ==' ) ); $new_icv = $current_icv + 1; update_option( 'fatoorapro_icv', $new_icv ); $wpdb->query( 'COMMIT' ); return [ 'icv' => $new_icv, 'pih' => $previous_hash ]; } The default PIH for the first invoice is a SHA-256 hash of a specific base64 string defined in ZATCA's spec. ZATCA's QR code uses Tag-Length-Value encoding, not a simple URL. It must contain 8 fields: function generate_tlv_qr( array $data ): string { $tlv = ''; $fields = [ 1 => $data['seller_name'], // Seller name 2 => $data['vat_number'], // VAT registration number 3 => $data['timestamp'], // Invoice timestamp 4 => $data['total_with_vat'], // Invoice total with VAT 5 => $data['vat_amount'], // VAT amount 6 => $data['xml_hash'], // Invoice XML hash (B2B only) 7 => $data['ecdsa_signature'], // ECDSA signature (B2B only) 8 => $data['public_key'], // Public key (B2B only) ]; foreach ( $fields as $tag => $value ) { $encoded = mb_convert_encoding( $value, 'UTF-8' ); $length = strlen( $encoded ); $tlv .= chr( $tag ) . chr( $length ) . $encoded; } return base64_encode( $tlv ); } B2B clearance requires synchronous submission (blocking). But B2C reporting can be async. I use WP-Cron for retry logic on failures: function schedule_submission( int $invoice_id ): void { wp_schedule_single_event( time() + 30, // 30 second delay 'fatoorapro_submit_invoice', [ $invoice_id ] ); } add_action( 'fatoorapro_submit_invoice', function( int $invoice_id ): void { $result = submit_to_zatca( $invoice_id ); if ( ! $result['success'] ) { $attempts = (int) get_post_meta( $invoice_id, '_zatca_attempts', true ); if ( $attempts < 3 ) { update_post_meta( $invoice_id, '_zatca_attempts', $attempts + 1 ); wp_schedule_single_event( time() + 300, 'fatoorapro_submit_invoice', [ $invoice_id ] ); // retry in 5 min } } } ); ZATCA data is stored per-order, so it needs to work with both classic order posts and HPOS. I use WooCommerce's order meta abstraction: // Works with both post meta and HPOS order tables $order->update_meta_data( '_zatca_status', 'cleared' ); $order->update_meta_data( '_zatca_uuid', $uuid ); $order->update_meta_data( '_zatca_qr', $qr_code ); $order->save(); Getting a production certificate from ZATCA requires 4 steps: CSR generation — create a private key + certificate signing request with your VAT number embedded Compliance CSID — submit CSR to ZATCA, get a compliance certificate back Compliance tests — run 6 predefined test invoices against ZATCA's sandbox Production CSID — exchange compliance certificate for production certificate I built a 4-step wizard in WP Admin that walks store owners through this. Most of them aren't developers, so the UI has to handle the complexity invisibly. I packaged all of this into FatooraPro — a complete ZATCA Phase 2 solution for WooCommerce stores in Saudi Arabia: FatooraPro on CodeCanyon Happy to go deeper on any part — the XAdES signing and hash chain were the most challenging pieces.
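One small addition for anyone testing the QR output: because the payload is plain TLV, a few lines can decode it back into its eight fields. This is a minimal sketch that simply inverts the generate_tlv_qr() routine above (single-byte tags and single-byte lengths, exactly as that function encodes them); it is not an official ZATCA validator.

function decode_tlv_qr( string $qr_base64 ): array {
    $tlv    = base64_decode( $qr_base64 );
    $fields = [];
    $offset = 0;
    $total  = strlen( $tlv );

    while ( $offset + 2 <= $total ) {
        $tag = ord( $tlv[ $offset ] );     // 1-byte tag
        $len = ord( $tlv[ $offset + 1 ] ); // 1-byte length
        $fields[ $tag ] = substr( $tlv, $offset + 2, $len );
        $offset += 2 + $len;
    }

    return $fields; // e.g. [ 1 => seller name, 2 => VAT number, 3 => timestamp, ... ]
}

Scanning a printed invoice with a phone and running the base64 payload through this makes it easy to confirm the seller name, VAT number, and totals survived encoding intact.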

Dev.to (PHP)
~4 min read · May 6, 2026

PHP Fibers and the Future of MySQL Drivers

PHP is moving into a new era. With the introduction of Fibers in PHP 8.1, we can finally build long-running and high-performance applications that handle thousands of concurrent tasks. However, to truly unlock this power, we need database drivers that are built specifically for an asynchronous world. Most existing PHP drivers are blocking. When you send a query, the entire process stands still while waiting for the database. This is a massive bottleneck for modern apps. I built the Hibla MySQL Client to solve this. It is a ground-up implementation of the MySQL Binary Protocol designed specifically for the Hibla ecosystem. In a traditional script, you connect and disconnect for every single request. In an async application, that is simply too slow. Hibla features a built-in Pool Manager that keeps connections warm and ready to use. It includes "Check-on-Borrow" logic. This ensures that every connection you get is healthy and active. It also handles connection resetting. This means session variables or temporary tables from a previous task never leak into the next one. use Hibla\Mysql\MysqlClient; $db = new MysqlClient( config: 'mysql://root:secret@127.0.0.1/app_db', minConnections: 5, maxConnections: 20 ); // You can easily monitor your pool health print_r($db->stats); /* Output includes: active_connections, pooled_connections, waiting_requests, and more. */ Transactions are often the most fragile part of database logic. Deadlocks and lock wait timeouts are common in busy systems. Usually, you have to write manual loops and complex retry logic to handle these transient errors. Hibla builds this directly into the API. You can define a transaction block and tell the client how many times it should try again if an error occurs. It simplifies your code and makes your application much more resilient. use Hibla\Sql\TransactionOptions; await($db->transaction(function($tx) { // This entire block will be retried automatically on deadlocks $balance = await($tx->fetchValue("SELECT balance FROM accounts WHERE id = :id", null, ['id' => 1])); await($tx->execute( "UPDATE accounts SET balance = :b WHERE id = :id", ['b' => $balance - 100, 'id' => 1] )); }, TransactionOptions::default()->withAttempts(3))); When you need to process a million rows, loading them all into memory is not an option. Hibla supports true unbuffered streaming. This allows you to work through massive result sets one row at a time. We also implemented a smart backpressure handler. If your code is processing rows slower than the database can send them, Hibla automatically pauses the underlying socket. It only resumes when your code is ready for more. This keeps your memory usage low and predictable even during heavy data exports. $stream = await($db->stream("SELECT * FROM massive_log_table")); // The stream pauses the socket automatically if the buffer fills up foreach ($stream as $row) { // Process each row one by one await(doComplexProcessing($row)); } echo "Stream finished. Total rows: " . $stream->stats->rowCount; In a standard PHP environment, once you send a query, you are committed. If the user cancels their request or a timeout hits, the database keeps running that query anyway. This wastes server resources and holds up important locks. Hibla solves this with side-channel cancellation. When you cancel a promise in PHP, the client opens a brief secondary connection to the server and issues a KILL QUERY command. The server stops the work immediately. 
use Hibla\EventLoop\Loop; $promise = $db->query("SELECT some_very_slow_calculation()"); // If we decide we do not want this anymore after 1 second... Loop::addTimer(1.0, fn() => $promise->cancel()); try { await($promise); } catch (CancelledException $e) { // The query was killed on the MySQL server instantly echo "Query stopped and server resources were freed."; } This client is not a wrapper around older extensions. It talks directly to the MySQL socket. This allows us to support a wide range of modern features out of the box without any external dependencies. This includes SSL and TLS encryption for secure data transfer. We also support zlib compression to save bandwidth on large queries. The driver handles both Native and SHA256 authentication plugins automatically. // Simple configuration via URI for complex features $db = new MysqlClient( config: 'mysql://user@host/db?ssl=true&compress=true&charset=utf8mb4' ); The goal of this project is to make PHP feel as modern and capable as any other async-first language. By implementing the protocol from scratch, we get absolute control over the connection lifecycle. You can find the project and the full documentation here: https://github.com/hiblaphp/mysql I would love to hear your feedback. Are you building long-running PHP applications? How are you handling database concurrency today?
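One pattern the examples above only imply is the payoff of promise-based queries: starting several of them before awaiting any. A small sketch using only calls that appear in the post ($db->query() returning a promise, and the await() helper as used there) — the table names are made up, and whether the queries actually overlap depends on the pool settings:

use Hibla\Mysql\MysqlClient;

$db = new MysqlClient(config: 'mysql://root:secret@127.0.0.1/app_db');

// Fire three queries without waiting; each call returns a promise immediately.
$users   = $db->query('SELECT COUNT(*) AS c FROM users');
$orders  = $db->query('SELECT COUNT(*) AS c FROM orders');
$reviews = $db->query('SELECT COUNT(*) AS c FROM reviews');

// Awaiting afterwards lets the three round-trips overlap on the event loop
// (and on separate pooled connections) instead of running back-to-back.
$userStats   = await($users);
$orderStats  = await($orders);
$reviewStats = await($reviews);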

Dev.to (PHP)
~4 min read · May 6, 2026

mdparser: a native CommonMark + GFM parser for PHP

Several of my projects do heavy markdown parsing: comment rendering, documentation pipelines, content management. The volume keeps growing, and I've been hitting the point where pure-PHP parsers (Parsedown, league/commonmark, cebe/markdown, michelf) just can't keep up. They're solid libraries, but parsing thousands of documents per request or chewing through 200 KB files in interpreted PHP is slow no matter how well the code is written. I wanted something 10x+ faster that could serve as a drop-in replacement for the common cases. The result is mdparser, a native C extension that wraps cmark-gfm (GitHub's CommonMark parser) and exposes it through a clean PHP 8.3+ OO API. I'm releasing it today.

mdparser vendors a copy of cmark-gfm 0.29.0.gfm.13 directly into the extension's shared object. No external library to link against, no cmake, no runtime dependencies. The entire cmark-gfm codebase compiles alongside the PHP wrapper into a single .so (or .dll on Windows). Four cherry-picked commits from cmark upstream close the 0.29-to-0.31 spec gap, giving full CommonMark 0.31 conformance: 652 out of 652 spec examples pass.

The PHP API is intentionally small. Two classes, one exception:

use MdParser\Parser;
use MdParser\Options;

// Defaults: safe mode on, GFM extensions on.
$parser = new Parser();
echo $parser->toHtml('# Hello');

// Or the static shorthand:
echo Parser::html('# Hello');

// Custom options via named arguments:
$parser = new Parser(new Options(
    smart: true,
    footnotes: true,
    sourcepos: true,
));

// Three output formats:
$html = $parser->toHtml($markdown);
$xml  = $parser->toXml($markdown);
$ast  = $parser->toAst($markdown); // nested PHP arrays

Options is final readonly with 17 boolean fields. The Parser constructor translates those bools into cmark's internal bitmask once, so every subsequent parse call is pure cmark work with zero per-call overhead. Static factory presets (Options::strict(), Options::github(), Options::permissive()) cover common deployment patterns. If you're migrating from Parsedown's line() or cebe/markdown's parseParagraph(), there's toInlineHtml(): inline-only HTML without the wrapping <p> tags. Useful for chat messages, table cells, and short user-facing strings.

Speed was the primary motivation. Measured on PHP 8.4 with each parser in its default configuration:

Parser                 Small (200 B)              Medium (1.8 KB)    Large (200 KB)
mdparser               30,447 ops/s               5,697 ops/s        105 ops/s
Parsedown              1,651 ops/s (18x slower)   325 ops/s (17x)    6 ops/s (17x)
cebe/markdown (GFM)    1,350 ops/s (22x)          374 ops/s (15x)    6 ops/s (16x)
michelf (Extra)        1,006 ops/s (30x)          209 ops/s (27x)    5 ops/s (19x)

15-30x faster, from 200-byte chat messages to 200 KB documents. Your absolute numbers will differ by hardware, but the ratios hold. mdparser processes roughly 100 full CommonMark-spec-sized documents per second on a single core; the pure-PHP parsers manage 5-6. The benchmark uses hrtime(true) around each parse call, 200 iterations with warm-up, and a trimmed mean to filter GC pauses. Reproducible scripts are in the bench/ directory. mdparser covers CommonMark core plus all five GFM extensions.
Here's how it stacks up against the pure-PHP alternatives:

Feature              mdparser       Parsedown   league/cm       cebe GFM   michelf Extra
CommonMark core      full           partial     full            partial    partial
GFM tables           yes            yes         via ext         yes        via Extra
Strikethrough        yes            yes         via ext         yes        no
Task lists           yes            no          via ext         no         no
Autolinks            yes            yes         via ext         yes        no
Tag filter           yes            yes         via ext         partial    no
Smart punctuation    yes            no          via ext         no         no
Footnotes            yes            Extra       via ext         no         yes
Sourcepos            yes            no          yes             no         no
XML output           yes            no          no              no         no
AST output           yes (arrays)   no          yes (objects)   no         no

mdparser is scoped to what cmark-gfm supports: CommonMark core plus five GFM extensions. It doesn't cover definition lists, abbreviations, attribute syntax, heading permalinks, table of contents, YAML front matter, mentions, LaTeX math, emoji shortcodes, or custom containers. If you need those, league/commonmark is the right choice — it's the most featureful pure-PHP option and actively maintained. Speed doesn't help if the feature you need isn't there.

mdparser builds and tests on PHP 8.3, 8.4, and 8.5 across Linux (x86_64), macOS (arm64/x86_64), and Windows (x86/x64, both TS and NTS). CI runs on all three platforms, with an ASAN job on Linux to catch memory issues. Pre-built Windows DLLs ship with each GitHub release.

pie install iliaal/mdparser

PIE handles the download, phpize, configure, make, and install. On a minimal PHP image you'll need git, bison, and libtool-bin as build dependencies. From source:

git clone https://github.com/iliaal/mdparser.git
cd mdparser
phpize && ./configure --enable-mdparser
make -j && sudo make install

GitHub: github.com/iliaal/mdparser
Packagist: packagist.org/packages/iliaal/mdparser
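For anyone migrating an existing Parsedown call site, a thin adapter keeps the rest of the codebase untouched. A sketch using only the API described above (toHtml(), toInlineHtml(), Options::github()); the wrapper class name is just an example:

use MdParser\Parser;
use MdParser\Options;

// Hypothetical adapter exposing the two Parsedown methods a codebase typically uses.
final class MarkdownRenderer
{
    private Parser $parser;

    public function __construct()
    {
        // GFM preset: tables, strikethrough, task lists, autolinks, tag filter.
        $this->parser = new Parser(Options::github());
    }

    // Replacement for Parsedown::text() — full block-level rendering.
    public function text(string $markdown): string
    {
        return $this->parser->toHtml($markdown);
    }

    // Replacement for Parsedown::line() — inline-only, no wrapping <p> tags.
    public function line(string $markdown): string
    {
        return $this->parser->toInlineHtml($markdown);
    }
}

$renderer = new MarkdownRenderer();
echo $renderer->line('A **short** chat message');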

Dev.to (PHP)
~2 min read · May 6, 2026

conclusion

I'm Oussama, a student in digital development. For a recent project named "formulaire.php", I had to build a login flow from A to Z. It's a classic exercise, but a few details make the difference when you want something clean.

Project structure: I split the code into three files so responsibilities don't get mixed. connect.php handles only the connection to the MySQL database. form.php is the interface layer with the input form. loggedin.php is the destination page that validates the user's session.

What I took away from the implementation: at first you want to move fast, but I realized that security has to be the priority. Handling submitted data: in form.php, you have to make sure the data sent via POST is correctly picked up by the processing script. Session security: once the user is logged in, loggedin.php must check that the session actually exists before displaying any sensitive content. That's a point I no longer compromise on. Code organization: isolating the connection in connect.php means you can switch databases by changing a single line instead of going back through every file.

A few practical tips: if you're working on a similar system, always try to use prepared statements (with PDO, for example) to avoid injections. Also, get into the habit of indenting your code properly from the start; it saves hours of hunting for a missing brace in a complex condition. It's a simple project, but it's a solid foundation for the rest of my path toward the Technicien Spécialisé diploma in 2026.
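To make the prepared-statement advice concrete, here is a minimal sketch of how the three files could fit together with PDO. The database name, table, and column names are placeholders, not the project's actual schema.

<?php
// connect.php — only the database connection lives here
$pdo = new PDO(
    'mysql:host=localhost;dbname=projet;charset=utf8mb4',
    'db_user',
    'db_password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

<?php
// Processing script for form.php — a prepared statement keeps user input out of the SQL
require __DIR__ . '/connect.php';
session_start();

$stmt = $pdo->prepare('SELECT id, password_hash FROM users WHERE email = :email');
$stmt->execute(['email' => $_POST['email'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);

if ($user && password_verify($_POST['password'] ?? '', $user['password_hash'])) {
    $_SESSION['user_id'] = $user['id'];
    header('Location: loggedin.php');
    exit;
}

<?php
// loggedin.php — refuse to render anything sensitive without a valid session
session_start();
if (empty($_SESSION['user_id'])) {
    header('Location: form.php');
    exit;
}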

Dev.to (PHP)
~2 min read · May 6, 2026

I didn't expect a template project to turn into a CMS — meet QuickSite

About six months ago — November 2025 — I started a small "make a clean template" side project. That thing is QuickSite: a file-based, API-first website operations platform with a visual editor. It's PHP, runs on Apache or Nginx, no database — JSON files on disk are the source of truth. I'm sharing it now because I genuinely didn't think it would go this far.

What it tries to be (and stay):

- Open source forever — GNU AGPL v3.
- Zero dependencies — it works without any. The recent BYOK/local LLM integration is opt-in and ships with a no-AI fallback path, so the promise still holds.
- No lock-in — your project is plain JSON + PHP files in a folder. Walk away with it whenever you want; no proprietary export, no database to dump.
- Reachable beyond developers — the visual editor is the primary surface. You don't need to read PHP to build a site.
- Multi-layer learning — every visual action surfaces the underlying concept (selectors, events, CSS variables, JSON structures), so curious users naturally pick up real web fundamentals along the way. Not a black box.
- Frontend-first, API integration coming — beta.7 wires pages to live data; beta.8 makes those pages server-rendered for SEO/AEO. Yes, AEO — when LLM-driven discovery becomes a meaningful chunk of traffic, structured server-rendered content matters even more than it already does.

I post a dev log on YouTube every 1-2 weeks about whatever I just built. Repo (the README answers most of the obvious questions): https://github.com/Sangiovanni/quicksite Dev log playlist: https://www.youtube.com/playlist?list=PLULtElcjV8r-o8uVM9bS86ZYk0rvKuZwL This is openly self-promotion; I won't pretend otherwise. But it's also genuinely "tell me what breaks" — feedback at this stage is much more valuable than stars.

Dev.to (PHP)
~6 min read · May 6, 2026

I redirected laravel/nightwatch to my own Postgres and hit 13,400 payloads/s on a single instance

If you run a Laravel app on a hosted observability platform like Nightwatch, you've probably sampled your telemetry down to keep the bill manageable. I wanted to keep all of it. laravel/nightwatch is Laravel's official observability SDK and the instrumentation itself is genuinely good. It's the hosted side that bothered me. Ingestion is usage-priced, throughput is bounded by what you're willing to pay for, and your telemetry lives in someone else's warehouse. Plenty of teams are happy with that trade. Others aren't: high-traffic apps that don't want to sample, regulated stacks where stack traces can't leave the perimeter, smaller teams whose Postgres already has the headroom to absorb the writes. They want the same SDK pointed somewhere else. So I wrote an agent that intercepts Nightwatch's ingest binding and redirects payloads to a local TCP socket, then drains them into a Postgres database I provision. On a single instance it sustains around 13,400 payloads/s. That's enough headroom for an app doing 2,000-5,000 req/s without sampling. Three layers, each chosen to solve a specific bottleneck. laravel/nightwatch │ TCP │ ▼ ReactPHP listener │ ▼ SQLite WAL buffer │ ▼ Postgres (COPY protocol) The ingest path and the drain path are decoupled. Ingest must never block on Postgres. Drain must never lose data if Postgres goes away. The TCP listener is a ReactPHP\Socket\TcpServer running on a single event loop. One process, accepting payloads from many concurrent connections and pushing them into the buffer. PHP-FPM workers don't enter the picture. Nightwatch's ingest binding is hijacked at request shutdown to write to the local TCP socket instead of phoning home to Laravel Cloud. The wire protocol is deliberately minimal: [length]:[version]:[tokenHash]:[payload], with gzip detected by magic byte (0x1f 0x8b) and the xxh128 token hash truncated to 7 chars. The reason it stays that minimal is that the agent never re-encodes the payload. Nightwatch sends JSON, the buffer stores it as-is, and the drain worker is the first process that parses it, only because it needs to route fields to the right columns. Skipping a json_decode/json_encode round-trip on the hot path was worth roughly 30-50µs per payload in profiling, which is a meaningful chunk of the per-payload budget at this rate. Why SQLite for a buffer? Because it's the only embedded database that gives you crash-safe writes at the speed of a memory-mapped file, with zero ops overhead. The pragma sequence matters and broke me once: PRAGMA busy_timeout = 5000; PRAGMA journal_mode = WAL; PRAGMA synchronous = NORMAL; PRAGMA cache_size = -64000; -- ~64 MB PRAGMA mmap_size = 268435456; -- 256 MB busy_timeout has to be set before journal_mode = WAL. If you do it the other way, the first concurrent write under load races and one of the writers gets SQLITE_BUSY immediately instead of waiting. I lost an afternoon to this. synchronous = NORMAL on the buffer is fine because Postgres is the durable store. The buffer just needs to survive a process crash, not a kernel panic. Rows get a single synced column with three states: 0 (pending), 100+workerId (claimed by drain worker N), 1 (drained). Drain workers atomically mark a batch with their own claim value, then SELECT it. The UPDATE is the atomic part; the SELECT just hands the rows to the worker. If a worker dies mid-batch, the parent's SIGCHLD handler releases its claimed rows back to pending. 
The drain worker uses pgsqlCopyFromArray() for the 10 high-volume tables (requests, queries, jobs, logs, cache events, mail, notifications, outgoing requests, scheduled tasks, commands). COPY is roughly 5-10x faster than equivalent multi-row INSERTs at this batch size; the parse-plan overhead per statement disappears, and the wire format is denser. INSERT survives for the exception path (which upserts a grouped issue row by fingerprint) and for per-user counters. COPY can't do upserts, so those stay on the slower path. They're also the lowest-volume tables, so it doesn't matter. The single biggest single-line change for throughput: SET synchronous_commit = off; This is the 2-5x win. The agent drops synchronous_commit on the drain connection because durability is already guaranteed upstream by SQLite WAL. Worst case under crash is that the same batch gets COPY'd twice. Acceptable for a monitoring product. Batch size is 5,000 rows per COPY call. I tested 1k, 5k, 10k, 50k. Past 5k, Postgres write latency dominates and the buffer fills up faster than the drain can clear it. This took me an entire weekend. pcntl_fork() is how the agent spawns N drain workers. Each child needs its own SQLite handle and its own Postgres handle. The naive approach (open both in the parent, fork, and let the children inherit) corrupts the SQLite WAL when the first child exits. The fix is unintuitive: close the parent's SQLite PDO immediately before fork, and recreate it in both the parent and each child after fork. PDO sets up file locks and per-connection state that get partially cloned by fork(2)'s copy-on-write semantics. When the child exits and runs its destructor, it tears down state the parent still thinks it owns. There's no clean error message. You just get random SQLITE_CORRUPT errors hours later with no obvious trigger. For Postgres the same rule applies, but the failure mode is more honest: you immediately get "broken pipe" errors because both processes try to read from the same TCP socket. After all this, ingest tops out around 13,400 payloads/s on a single instance. That's not the SQLite ceiling (the buffer can absorb much faster than that). It's not Postgres (with 4 drain workers and COPY, it sustains ~22,000 rows/s). It's the TCP accept loop on a single PHP event loop. The fix is SO_REUSEPORT and multiple agent processes listening on the same port. Linux kernel distributes new connections across them. macOS doesn't (it just hands every connection to whichever process accepts first), so this is a Linux-only optimization. You don't have to rip out the hosted plan to try this. Set NIGHTOWL_PARALLEL_WITH_NIGHTWATCH=true and the agent's service provider wraps Nightwatch's Core::ingest binding with a fan-out adapter. Every payload goes to both Laravel Cloud and your local TCP socket, so you can run the two side-by-side and compare what you actually use before committing either way. The fan-out runs after Nightwatch has accepted the payload, so it can't break the hosted path you're already paying for. The whole thing is MIT, on Packagist as nightowl/agent, and runs in any Laravel 11 or 12 app: composer require nightowl/agent php artisan nightowl:install php artisan nightowl:agent Repo: github.com/lemed99/nightowl-agent There's a hosted dashboard at usenightowl.com if you don't want to build a UI on top of the Postgres tables yourself. The agent runs fine without it.
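One piece the write-up describes but doesn't show end to end is a claim-and-drain cycle. A rough sketch of a single pass, assuming a buffer table named payloads with id, body, and the synced column described above, and a single-column Postgres staging table named payload_staging (both names are made up); pgsqlCopyFromArray() is the PDO pgsql COPY helper mentioned in the post.

function drain_batch(PDO $sqlite, PDO $pg, int $workerId, int $batchSize = 5000): int
{
    $claim = 100 + $workerId;

    // 1. Atomically claim a batch. The UPDATE is the only step that has to be atomic;
    //    the SELECT below just reads back what this worker now owns.
    $sqlite->exec(
        "UPDATE payloads SET synced = {$claim}
         WHERE id IN (SELECT id FROM payloads WHERE synced = 0 LIMIT {$batchSize})"
    );

    $rows = $sqlite->query("SELECT id, body FROM payloads WHERE synced = {$claim}")
                   ->fetchAll(PDO::FETCH_ASSOC);
    if (!$rows) {
        return 0;
    }

    // 2. COPY the payloads into the staging table. Real code must escape the COPY text
    //    format properly; flattening tabs/newlines is only good enough for a sketch.
    $copyRows = array_map(
        static fn(array $r): string => str_replace(["\t", "\n", "\r"], ' ', $r['body']),
        $rows
    );
    $pg->exec('SET synchronous_commit = off'); // durability is already guaranteed by the SQLite WAL
    $pg->pgsqlCopyFromArray('payload_staging', $copyRows);

    // 3. Mark the claimed rows as drained.
    $ids = implode(',', array_map('intval', array_column($rows, 'id')));
    $sqlite->exec("UPDATE payloads SET synced = 1 WHERE id IN ({$ids})");

    return count($rows);
}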

Dev.to (PHP)
~2 min read · May 6, 2026

Why Choosing the Right Laravel Development Company Matters

In today's fast-paced digital world, businesses need powerful, scalable, and secure web applications to stay competitive. This is where choosing the right Laravel development company becomes crucial. Laravel, one of the most popular PHP frameworks, is known for its elegant syntax, robust features, and ability to build high-performing applications.

What Makes Laravel a Preferred Framework?
Laravel stands out among other frameworks thanks to its developer-friendly environment and advanced capabilities. It offers features like MVC architecture, built-in authentication, routing, and caching, which make development faster and more efficient.

Benefits of Hiring a Laravel Development Company
1. Expertise and experience
2. Time and cost efficiency
3. Custom web solutions
4. Ongoing support and maintenance

Why Choose WebMavens for Laravel Development?
WebMavens ensures that every project is built with precision, keeping both user experience and business goals in mind.

Use Cases of Laravel Development
E-commerce platforms, among others — Laravel's flexibility makes it an ideal choice for startups as well as large enterprises.

Conclusion
Choosing the right Laravel development company is essential for building a successful web application. With the right expertise, tools, and approach, you can create a powerful digital presence for your business. WebMavens offers the perfect blend of experience, innovation, and quality to bring your ideas to life.

Dev.to (PHP)
~6 min read · May 6, 2026

MySQL Query Cache vs Magento Cache: What's the Difference and When to Use Each

Caching is one of the most effective levers you have for speeding up a Magento 2 store. But "caching" is not a single thing — it's a stack of overlapping layers, each operating at a different level of your infrastructure. Two layers that often cause confusion are MySQL's query cache and Magento's built-in cache. They sound similar, and they both exist to serve data faster, but they work in completely different ways and serve completely different purposes. This post breaks down exactly what each one does, where they overlap, where they don't, and how to configure them properly for a production Magento 2 environment.

MySQL's query cache is a server-level feature that stores the result set of a SELECT query alongside the raw SQL string. If the exact same query comes in again — character for character — MySQL returns the cached result without re-executing the query against the tables. It sounds like a win, but the reality is more nuanced. A SELECT query arrives at MySQL; MySQL hashes the query string and checks the query cache. If there's a hit, the result is returned immediately. If there's a miss, the query executes and the result is stored in the cache. The key problem: any write to a table invalidates all cached queries that reference that table. In a busy Magento store, tables like sales_order, catalog_product_entity, quote, and cataloginventory_stock_item are written to constantly. Every new order, every cart update, every stock decrement — all of these flush cached query results.

MySQL deprecated the query cache in 5.7 and removed it entirely in 8.0. The MySQL team found that in high-concurrency environments, the query cache was actually a bottleneck due to the global mutex needed to manage cache invalidation. For Magento stores running MySQL 8.0 (which is now the standard), the query cache is simply not available. If you're still on MySQL 5.7, the query cache is present but disabled by default (query_cache_type = 0). For most Magento workloads, leaving it disabled is the right call.

Magento's cache operates at the application layer, not the database layer. Rather than caching SQL results, it caches serialized PHP objects, rendered HTML blocks, configuration arrays, and more. It uses a cache backend (File, Redis, Varnish) to store and retrieve these objects. Magento ships with several distinct cache types, each serving a specific purpose:

Cache Type         What It Stores
config             Merged XML configuration
layout             Page layout handles and block structure
block_html         Rendered HTML of individual blocks
collections        EAV collection results
reflection         PHP class reflection data
db_ddl             Database table schema metadata
compiled_config    DI compiled configuration
full_page          Complete rendered page HTML (FPC)
translate          Translation strings
eav                EAV attribute metadata

Each of these can be enabled or disabled independently:

bin/magento cache:status
bin/magento cache:enable full_page
bin/magento cache:disable block_html

The most impactful of these is full_page — Magento's Full Page Cache (FPC). When a CMS page, category page, or product page is first rendered, the entire HTML response is stored. Subsequent requests for the same page serve the cached HTML without touching PHP or MySQL at all. FPC can be served by:

- Magento's built-in FPC (file-based or Redis-based, ~50–200ms response)
- Varnish (reverse proxy, ~5–20ms response — the production-grade choice)

Here's the key insight: these two caching layers are almost entirely independent. When Magento serves a cached full page, MySQL is not involved at all.
When Magento's block cache is warm, it may skip several database queries entirely. Conversely, the MySQL query cache (on 5.7) only kicks in when Magento actually executes a SELECT — which it tries hard to avoid by using its own cache first. The interaction diagram looks like this:

Request
└─► Varnish FPC hit? → Serve HTML (MySQL never touched)
    └─► Magento FPC hit? → Serve HTML (MySQL never touched)
        └─► Block cache / config cache hit? → Partial PHP execution
            └─► MySQL query
                └─► MySQL query cache hit? (5.7 only) → Return result
                    └─► Execute query against tables

In practice, a warm Magento cache means MySQL sees dramatically fewer queries. The MySQL query cache, when it was still available, only helped at the very bottom of this chain — for queries that escaped all of Magento's own caching layers.

If you're still on MySQL 5.7, keep the query cache disabled:

# /etc/mysql/mysql.conf.d/mysqld.cnf
query_cache_type = 0
query_cache_size = 0

The write-invalidation behavior combined with Magento's write-heavy workload makes it a net negative in most cases. Spend the memory on innodb_buffer_pool_size instead — that's where your MySQL performance gains are.

# For a server with 16GB RAM dedicated to MySQL
innodb_buffer_pool_size = 10G
innodb_buffer_pool_instances = 8

On MySQL 8.0 there is nothing to configure — the query cache doesn't exist. Focus on InnoDB tuning.

For any production store, use Redis as the cache backend. File-based caching is fine for development but falls apart under load due to filesystem contention.

bin/magento setup:config:set \
  --cache-backend=redis \
  --cache-backend-redis-server=127.0.0.1 \
  --cache-backend-redis-port=6379 \
  --cache-backend-redis-db=0

For the session cache, use a separate Redis instance or at least a separate database index:

bin/magento setup:config:set \
  --session-save=redis \
  --session-save-redis-host=127.0.0.1 \
  --session-save-redis-port=6379 \
  --session-save-redis-db=1

In app/etc/env.php, confirm FPC is set to Redis (or Varnish if applicable):

'cache' => [
    'frontend' => [
        'full_page' => [
            'backend' => 'Cm_Cache_Backend_Redis',
            'backend_options' => [
                'server' => '127.0.0.1',
                'port' => '6379',
                'database' => '2',
            ],
        ],
    ],
],

For high-traffic stores, put Varnish in front:

bin/magento config:set --scope=default system/full_page_cache/caching_application 2

Situation                                Recommended Action
Pages loading slowly, TTFB > 500ms       Enable Varnish or Magento FPC
MySQL CPU spiking under load             Check InnoDB buffer pool, not query cache
Slow admin panel after deploy            Run bin/magento cache:flush
Query cache enabled on MySQL 5.7         Disable it — likely a net negative
Dev environment with slow page loads     Enable all Magento cache types
Session-related slowness                 Move sessions to Redis

MySQL query cache and Magento cache are two different tools solving different problems at different layers of the stack. For modern Magento 2 on MySQL 8.0, the MySQL query cache is not a factor — it no longer exists. Your caching focus should be entirely on Magento's application cache, with Redis as the backend and Varnish (or Magento FPC) handling full-page responses. The real performance wins in Magento come from eliminating PHP and database execution altogether — not from making individual SQL queries slightly faster. Invest in the upper layers of the cache stack first, then tune the database buffer pool, and don't waste time chasing a query cache that either doesn't exist or actively hurts you.
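As a quick sanity check that the Redis-backed layers are actually absorbing traffic, you can read the hit/miss counters on the instance that backs them. A small sketch using the phpredis extension — host, port, and the choice of instance are placeholders, and note that these counters are server-wide, so they are most meaningful on a Redis instance dedicated to the Magento cache:

<?php
// Report the hit ratio of the Redis instance backing Magento's cache.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$stats  = $redis->info('stats');
$hits   = (int) $stats['keyspace_hits'];
$misses = (int) $stats['keyspace_misses'];
$total  = $hits + $misses;

if ($total > 0) {
    printf("Cache hit ratio: %.1f%% (%d hits / %d misses)\n", 100 * $hits / $total, $hits, $misses);
} else {
    echo "No cache traffic recorded yet.\n";
}

A hit ratio that stays low after the cache is warm usually means cache types are disabled or being flushed too aggressively — worth checking before touching the database at all.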

Dev.to (PHP)
~7 min read · May 6, 2026

Email notifications on comments: PHPMailer without Composer

The problem Someone leaves a comment on your blog. You find out whenever you happen to remember to check. No database, no admin panel, no push notification — just a flat file sitting on disk waiting for you to look. That's the deal when you go the no-DB route. Time to fix at least the notification part. There are exactly three ways to send email from PHP, and only one of them is worth your time here. Native mail(): depends on a local sendmail or Postfix config, behaves differently on every host, and on shared hosting it almost always lands straight in spam. You have no control over headers, no TLS, no authentication. Hard pass. DIY SMTP over fsockopen(): technically possible. You open a socket, send EHLO, negotiate STARTTLS, handle AUTH LOGIN, base64-encode credentials, manage timeouts manually. It works right up until Gmail changes something and your handshake breaks at 2am. You're writing a worse version of PHPMailer for no reason. PHPMailer: battle-tested since 2001. It handles encoding, TLS negotiation, authentication errors, timeouts, and multipart bodies. The library is three files. You copy them manually, no Composer needed. This is the only sensible option. Go to the PHPMailer GitHub repository and grab three files from the src/ directory: PHPMailer.php, SMTP.php, and Exception.php. Drop them into blog/lib/PHPMailer/. blog/ └── lib/ └── PHPMailer/ ├── Exception.php ├── PHPMailer.php └── SMTP.php Then require them manually at the top of your notify file: <?php require_once __DIR__ . '/../lib/PHPMailer/Exception.php'; require_once __DIR__ . '/../lib/PHPMailer/PHPMailer.php'; require_once __DIR__ . '/../lib/PHPMailer/SMTP.php'; use PHPMailer\PHPMailer\PHPMailer; use PHPMailer\PHPMailer\Exception; use PHPMailer\PHPMailer\SMTP; No vendor/, no autoload.php, no composer.json. Three files, three requires. Done. The whole thing lives in one function. Here is the complete notify.php: <?php require_once __DIR__ . '/../lib/PHPMailer/Exception.php'; require_once __DIR__ . '/../lib/PHPMailer/PHPMailer.php'; require_once __DIR__ . '/../lib/PHPMailer/SMTP.php'; use PHPMailer\PHPMailer\PHPMailer; use PHPMailer\PHPMailer\Exception; function notify_new_comment(string $post_slug, string $post_title, string $author, string $content): bool { // Guard clause: missing config = silent no-op, not a crash if (!defined('NOTIFY_EMAIL') || !defined('SMTP_USER') || !defined('SMTP_PASS')) { return false; } $mail = new PHPMailer(true); // true = throw exceptions try { // SMTP config $mail->isSMTP(); $mail->Host = 'smtp.gmail.com'; $mail->SMTPAuth = true; $mail->Username = SMTP_USER; $mail->Password = SMTP_PASS; $mail->SMTPSecure = PHPMailer::ENCRYPTION_STARTTLS; $mail->Port = 587; $mail->CharSet = 'UTF-8'; // Recipients $mail->setFrom(SMTP_USER, 'Blog Notifications'); $mail->addAddress(NOTIFY_EMAIL); // Content — plain text is enough for a notification $mail->isHTML(false); $excerpt = mb_substr(strip_tags($content), 0, 200, 'UTF-8'); if (mb_strlen(strip_tags($content), 'UTF-8') > 200) { $excerpt .= '...'; } $mail->Subject = 'New comment on: ' . $post_title; $mail->Body = implode("\n\n", [ 'New comment on your blog.', 'Post: ' . $post_title, 'Author: ' . $author, 'Excerpt:', $excerpt, '---', 'Permalink: ' . SITE_URL . '/blog/' . $post_slug, ]); $mail->send(); return true; } catch (Exception $e) { // Log the error, never expose it to the user error_log('[notify_new_comment] Failed for slug=' . $post_slug . ': ' . 
$mail->ErrorInfo); return false; } } A few deliberate choices worth noting: Guard clause at the top: if the SMTP constants are not defined (local dev, missing config file), the function returns false immediately and silently. No exception, no fatal error, no white page. STARTTLS on port 587: the correct setup for Gmail in 2026. SSL on port 465 also works; STARTTLS is the standard recommendation. Plain text body: this is an internal notification, not a newsletter. HTML adds complexity for zero benefit. mb_substr for the excerpt: comment content can contain any UTF-8 character. substr() would corrupt multibyte sequences. error_log on failure, nothing else: the caller gets a false return value. The visitor never sees an SMTP error. The caller ignores the return value — intentionally. This is best-effort. The comment is the important thing; the notification is a convenience. Gmail has blocked basic password authentication for years. If you try to use your regular Gmail password, you'll get an authentication failure immediately. What you need is an App Password. Go to your Google Account → Security → Two-Factor Authentication → App Passwords. Generate one for "Mail" / "Other". You get a 16-character string. That's your SMTP_PASS. Why not OAuth2? Because OAuth2 involves redirect URIs, token storage, refresh logic, and a 45-minute setup for a blog that receives two comments a week. App Passwords exist precisely for this use case. If you're building a SaaS sending thousands of emails, use a proper transactional provider. For a personal blog, App Password is fine. Credentials go in config.local.php, which is gitignored: <?php // config.local.php — never commit this file define('SMTP_USER', 'you@gmail.com'); define('SMTP_PASS', 'abcd efgh ijkl mnop'); // App Password, spaces are fine define('NOTIFY_EMAIL', 'you@gmail.com'); // where to receive notifications And a safe template to commit instead: <?php // config.local.example.php — commit this, fill it in on each server define('SMTP_USER', ''); define('SMTP_PASS', ''); define('NOTIFY_EMAIL', ''); In .gitignore: config.local.php The critical rule: save the comment first, send the notification after. If the email fails, the comment is still stored. If you did it the other way around, a transient SMTP error could silently drop comments. <?php require_once __DIR__ . '/notify.php'; // ... validation and comment saving happen here ... // Comment is saved at this point. Now try to notify — best-effort. $post_title = get_post_title($slug); // extracts h1 from the post file, falls back to $slug notify_new_comment($slug, $post_title, $comment['author'], $comment['content']); // Redirect regardless of notification outcome header('Location: ' . SITE_URL . '/blog/' . $slug . '#comments'); exit; For get_post_title(), a simple approach is to read the post file, run a regex for the first <h1>, and return the slug as fallback if nothing is found. The notification subject will still make sense. A few things worth being explicit about: Credentials are gitignored. The live credentials never appear in version control. The example file contains only empty strings. No user input in email headers. The Subject and From fields are built from internal data only (post slug, post title from the file itself). The comment content goes in the body, where header injection is not a concern. PHPMailer also sanitizes headers internally. Errors go to error_log, nowhere else. SMTP error messages can contain the username, partial passwords, or server details. 
None of that should reach the HTTP response. The guard clause prevents crashes on misconfigured environments. A missing config.local.php in staging or local dev will not throw an uncaught exception. Gmail daily limit: 500 emails. If your blog gets more than 500 comments per day, you have bigger problems to deal with first. For a personal site, this limit is irrelevant. SMTP adds ~1-2 seconds to the POST request. PHPMailer opens a TCP connection, negotiates TLS, authenticates, sends. On a typical server with decent latency to Gmail's SMTP, this takes 1-2 seconds. Since the handler immediately redirects after, the user does not wait for this — the redirect happens, the browser follows it, and the SMTP work finishes in the background. Acceptable. Actually, it is not background: PHP is synchronous and the redirect header is buffered. The SMTP call blocks before the redirect is sent. On most setups this is imperceptible. If it becomes a problem, the fix is fastcgi_finish_request() on PHP-FPM or a proper job queue — neither of which is worth adding for 2 comments a week. Port 587 may be blocked on some shared hosting. Some hosts only allow outbound connections on port 25, or block SMTP entirely and want you to use their relay. Check your host's documentation. Port 465 (SSL) is the common alternative. No queue, no retry. If the SMTP call fails (Gmail rate limit, network blip, wrong password), the notification is lost. The comment is not. For a personal blog, losing an occasional notification is acceptable. Adding a retry queue with file-based persistence would be 10x the code for a marginal benefit. About 40 lines of wrapper, 3 PHPMailer files, and a gitignored config. The interesting parts — TLS handshake, encoding, error handling — are PHPMailer's problem. That's the entire point of using a library.
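For completeness, here is one way the get_post_title() helper from the integration snippet could look, matching the description above (read the post file, grab the first <h1>, fall back to the slug). The flat-file layout — one HTML file per slug under a posts/ directory — is a guess; adjust the path to your setup.

<?php
function get_post_title(string $slug): string
{
    // basename() keeps the slug from escaping the posts directory; path layout is an assumption.
    $path = __DIR__ . '/../posts/' . basename($slug) . '.html';

    if (!is_readable($path)) {
        return $slug; // fallback: the slug still makes a usable subject line
    }

    $html = file_get_contents($path);

    // First <h1>…</h1> in the file, tags stripped, whitespace collapsed.
    if (preg_match('/<h1[^>]*>(.*?)<\/h1>/is', $html, $m)) {
        $title = trim(preg_replace('/\s+/', ' ', strip_tags($m[1])));
        if ($title !== '') {
            return $title;
        }
    }

    return $slug;
}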

Dev.to (PHP)
~2 min read · May 6, 2026

Bridge: Write Logic Once, Compile Everywhere | Cahyanudien Blogs

Sometimes the problem isn’t complexity. It’s repetition. In almost every project, I end up writing the same logic twice. Validation rules Price calculations Data transformations Once in backend (PHP), It’s not hard. fragile. One change → forget to sync → subtle bug. And over time, that duplication becomes technical debt. What if the logic itself was the source of truth? Not PHP. Not TypeScript. Just logic. So I started experimenting with a small language: function calculateTax(price: float, rate: float): float { let tax = price * rate return tax } Then compile it into: PHP (for backend) TypeScript (for frontend) Node.js (for shared tooling) That became Bridge. Bridge is not a framework. It’s a CLI compiler. Input: .bridge file Output: real code (PHP / TS / Node) No runtime. Just transformation. I asked myself the same thing. Schemas are good for structure. You can’t express: calculations conditional logic transformations …cleanly in JSON. Bridge sits in that gap: structured logic, not just structured data I kept it intentionally small. Lexer → tokenize input Parser → build AST Compiler → generate target code Each target has its own emitter: PHP TypeScript Node.js Nothing fancy. Just predictable output. Bridge: function calculateTax(price: float, rate: float): float { let tax = price * rate return tax } TypeScript: export function calculateTax(price: number, rate: number): number { const tax = price * rate; return tax; } PHP: function calculateTax(float $price, float $rate): float { $tax = $price * $rate; return $tax; } Same logic. A language without tooling is painful. So I built a simple VS Code extension: syntax highlighting snippets 👉 https://marketplace.visualstudio.com/items?itemName=FlagoDNA.bridge-vscode And the CLI: https://www.npmjs.com/package/@cas8398/bridge-cli Source code: https://github.com/cas8398/bridge-cli https://github.com/cas8398/bridge-vscode Bridge is not trying to replace real languages. Right now: No classes No async No complex types It focuses on: small, deterministic business logic That constraint is intentional. The moment it tries to do everything, it becomes another language problem. Same input → same output builds trust. And boring problems are perfect for automation. Better type system More targets (Python, Go) Watch mode (bridge watch) Real-world integrations Or maybe it stays small. That’s fine too. Bridge is just an attempt to reduce friction. Not a big framework. Just a small layer between logic and implementation. If this resonates with you, try it. Or break it. Both are useful.
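To make the lexer → parser → emitter pipeline concrete, here is a toy sketch of the last stage in PHP: walking a tiny AST and emitting PHP source for the calculateTax example. The node shapes are purely illustrative — they are not Bridge's actual internal format.

<?php
// A toy PHP emitter for a minimal function AST. Node shapes are assumptions, not Bridge's real ones.
function emit_php(array $fn): string
{
    $params = implode(', ', array_map(
        fn(array $p) => "float \${$p['name']}",
        $fn['params']
    ));

    $body = '';
    foreach ($fn['body'] as $stmt) {
        $body .= match ($stmt['kind']) {
            'let'    => "    \${$stmt['name']} = \${$stmt['left']} {$stmt['op']} \${$stmt['right']};\n",
            'return' => "    return \${$stmt['value']};\n",
        };
    }

    return "function {$fn['name']}({$params}): float {\n{$body}}\n";
}

$ast = [
    'name'   => 'calculateTax',
    'params' => [['name' => 'price'], ['name' => 'rate']],
    'body'   => [
        ['kind' => 'let', 'name' => 'tax', 'left' => 'price', 'op' => '*', 'right' => 'rate'],
        ['kind' => 'return', 'value' => 'tax'],
    ],
];

echo emit_php($ast);
// function calculateTax(float $price, float $rate): float {
//     $tax = $price * $rate;
//     return $tax;
// }

A TypeScript emitter would be the same walk with different string templates — which is essentially the "same logic, different targets" idea the post describes.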