Signal Hub

C / C++ news and articles


10 articles

Dev.to (C/C++)
~6 min read · May 6, 2026

Finding Memory Leaks in Legacy C++ Applications with Valgrind

Legacy C++ services don't crash — they slowly bleed memory until someone restarts them at 3 AM. If you've inherited a 20‑year‑old codebase with mysterious memory growth, this guide is for you. You can't fix a leak if you can't reproduce it. This is your complete, production‑focused Valgrind investigation playbook.

The Workflow: Step 1 — Reproduce the Leak. Step 2 — Static Analysis. Step 3 — Compile for Valgrind. Step 4 — Run Valgrind. Step 5 — Understand Valgrind's Leak Types. Step 6 — Capture the Stack Trace. Step 7 — Optional Regression Test. Plus a Quick Reference, a Real‑World Example, and the Golden Rule.

[Leak Trigger] → [Static Analysis] → [Compile Debug Build] → [Run Valgrind] → [Interpret Leak Types] → [Stack Trace] → [Regression Test]

You cannot find a leak if you cannot trigger it. Use the following script to track your application's total memory use:

PID=$(pgrep your_service)
while true; do
  echo "$(date): $(pmap $PID | grep total | awk '{print $2}')"
  sleep 60
done

Interpretation: linear growth indicates a per‑operation leak; step‑function growth points to a specific trigger; no growth means your hypothesis is wrong.

Your goal: reproduce the leak in under 10 minutes. Why? Valgrind slows execution by 20–50×, so a 10‑minute trigger becomes 3–8 hours and a 1‑hour trigger becomes 2–5 days. Why this matters: if your trigger is too slow, Valgrind becomes unusable.

Before running anything, let the compiler find the obvious issues:

scan-build make

Look for "Memory leak" warnings (ignore "Potential leak").

clang-tidy legacy_file.cpp \
  --checks='-*,clang-analyzer-*,cppcoreguidelines-owning-memory'

This finds new without delete, malloc without free, and raw owning pointers. It misses cycles, third‑party leaks, and runtime‑dependent leaks. Why this matters: static analysis gives you free wins before you even run the program.

Valgrind is useless without debug symbols, so the first thing to do is compile the whole application with debug flags.
g++ -g3 -O0 -fno-omit-frame-pointer -o your_service your_service.cpp

Flags: -g3 for full debug info, -O0 for clean stack frames, -fno-omit-frame-pointer for reliable backtraces. Why this matters: without debug symbols, Valgrind can't show you file/line numbers.

Run only the trigger you identified in Step 1:

valgrind --leak-check=full \
  --show-leak-kinds=definite,indirect \
  --track-origins=yes \
  --log-file=valgrind_out.txt \
  ./your_service --run-trigger

Use vgdb to inspect leaks mid‑run:

valgrind --vgdb=yes --vgdb-error=0 --leak-check=full ./your_service

Then:

vgdb leak_check full definite indirect

Why this matters: you don't need to wait hours — you can inspect leaks while the program is running.

After the run, Valgrind writes a report about lost memory to valgrind_out.txt. Example summary:

definitely lost: 1,024 bytes
indirectly lost: 6,144 bytes
possibly lost: 0 bytes
still reachable: 45,000 bytes

Valgrind classifies lost memory into the following types, and the type determines your action: definitely lost is a real leak, fix it first; indirectly lost is a child of a lost block, fix the parent; possibly lost suggests pointer arithmetic or corruption, investigate; still reachable covers globals and statics, ignore it unless it grows.

To check whether "still reachable" memory grows, use Massif:

valgrind --tool=massif ./your_trigger
ms_print massif.out

Why this matters: "still reachable" is not a leak — unless it grows.

A real leak report looks like the following. With debug symbols, the stack trace gives you the exact source file and line number where the memory was allocated. To fix the leak, you need to find out why that memory was never released — for example, delete is only called on one execution path, and your trigger activates a different path.
1,024 bytes in 1 blocks are definitely lost
   at operator new
   by DatabaseConnection::ExecuteQuery (db_connection.cpp:67)
   by CustomerLoader::FetchCustomer (customer_loader.cpp:89)

Extract only leak blocks:

grep -A10 "definitely lost" valgrind_out.txt

Why this matters: The stack trace is the map that leads you to the leak.

A regression test is useful when multiple developers touch the code:

TEST(LeakTest, ConfirmLeakExists) {
  size_t before = get_current_rss();
  for (int i = 0; i < 100; i++) {
    suspect_function();
  }
  size_t after = get_current_rss();
  EXPECT_LT((after - before) / 100, 1024);
}

Why this matters: Regression tests prevent old leaks from returning.

Quick reference:

Basic leak check: valgrind --leak-check=full ./binary
Only real leaks: --show-leak-kinds=definite,indirect
Save output: --log-file=leak.log
Check running service: vgdb leak_check full definite indirect
Heap profiling: valgrind --tool=massif
Extract leak: grep -A10 "definitely lost"

Imagine a legacy service that loads customers from a database and caches them.

// customer_loader.h
struct Customer {
  int id;
  std::string name;
};

class CustomerRepository {
public:
  Customer* LoadCustomer(int id);
};

// customer_loader.cpp
#include "customer_loader.h"
#include "db_connection.h"

Customer* CustomerRepository::LoadCustomer(int id) {
  DatabaseConnection* conn = DatabaseConnection::Get();  // singleton
  ResultSet* rs = conn->ExecuteQuery("SELECT id, name FROM customers WHERE id = " + std::to_string(id));
  if (!rs->Next()) {
    return nullptr;
  }
  Customer* c = new Customer{};
  c->id = rs->GetInt(0);
  c->name = rs->GetString(1);
  // BUG: ResultSet is never deleted
  // delete rs;  // missing
  return c;  // caller owns Customer*
}

Caller code:

void ProcessRequest(int customerId) {
  CustomerRepository repo;
  Customer* c = repo.LoadCustomer(customerId);
  if (!c) {
    return;
  }
  // ... use c ...
  delete c;  // correct
}

At first glance, this looks “fine” because Customer is deleted. But ResultSet is leaked on every call.
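The regression test above calls get_current_rss(), which the article leaves undefined. Here is one possible Linux-only implementation, reading resident pages from /proc/self/statm (this helper is an illustration, not part of the article's codebase):

```cpp
#include <cstddef>
#include <fstream>
#include <unistd.h>

// Returns the current resident set size in bytes by reading /proc/self/statm.
// The second field of statm is the number of resident pages; multiply by the
// system page size to get bytes. Linux-only sketch.
size_t get_current_rss() {
    std::ifstream statm("/proc/self/statm");
    size_t total_pages = 0, resident_pages = 0;
    statm >> total_pages >> resident_pages;
    return resident_pages * static_cast<size_t>(sysconf(_SC_PAGESIZE));
}
```

On non-Linux platforms you would swap in the native API (e.g. task_info on macOS or GetProcessMemoryInfo on Windows); the test itself stays unchanged.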
You run your request handler under Valgrind:

valgrind --leak-check=full \
  --show-leak-kinds=definite,indirect \
  --track-origins=yes \
  --log-file=valgrind_leak.log \
  ./service --handle-request 42

Relevant part of the report:

==12345== 128 bytes in 1 blocks are definitely lost in loss record 3 of 5
==12345==    at 0x4C2F1A3: operator new(unsigned long) (vg_replace_malloc.c:422)
==12345==    by 0x401F8B: ResultSet::ResultSet(DBHandle*) (result_set.cpp:27)
==12345==    by 0x4023D1: DatabaseConnection::ExecuteQuery(std::string const&) (db_connection.cpp:88)
==12345==    by 0x4039A4: CustomerRepository::LoadCustomer(int) (customer_loader.cpp:11)
==12345==    by 0x40412F: ProcessRequest(int) (request_handler.cpp:25)
==12345==    by 0x4043C9: main (main.cpp:17)

Key points: “128 bytes in 1 blocks are definitely lost” is a real leak; the allocation happens in ResultSet::ResultSet; the call chain leads to CustomerRepository::LoadCustomer. You don’t need to know ResultSet internals—only that you allocated it and never freed it.

Customer* CustomerRepository::LoadCustomer(int id) {
  DatabaseConnection* conn = DatabaseConnection::Get();
  ResultSet* rs = conn->ExecuteQuery("SELECT id, name FROM customers WHERE id = " + std::to_string(id));
  if (!rs->Next()) {
    delete rs;  // ✅ free on early return
    return nullptr;
  }
  Customer* c = new Customer{};
  c->id = rs->GetInt(0);
  c->name = rs->GetString(1);
  delete rs;  // ✅ free after use
  return c;
}

Re‑run Valgrind:

==12345== HEAP SUMMARY:
==12345==     in use at exit: 0 bytes in 0 blocks
==12345==   total heap usage: 1,234 allocs, 1,234 frees, 98,765 bytes allocated
==12345==
==12345== All heap blocks were freed -- no leaks are possible

Never start fixing until you can reproduce the leak in under 10 minutes. The trigger is your truth. The stack trace is your map.
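As a side note, the explicit delete calls can be replaced with RAII so that every exit path frees the ResultSet automatically. A sketch using std::unique_ptr, with a stub ResultSet standing in for the article's real class:

```cpp
#include <memory>
#include <string>

// Stand-in for the article's ResultSet; only the interface matters here.
struct ResultSet {
    bool Next() { return true; }
    int GetInt(int) { return 42; }
    std::string GetString(int) { return "alice"; }
};

struct Customer { int id; std::string name; };

Customer* LoadCustomer(ResultSet* raw) {
    // unique_ptr deletes the ResultSet on every path out of this function,
    // including the early return, without any explicit delete.
    std::unique_ptr<ResultSet> rs(raw);
    if (!rs->Next()) return nullptr;
    auto c = std::make_unique<Customer>();
    c->id = rs->GetInt(0);
    c->name = rs->GetString(1);
    return c.release();  // caller still owns Customer*, as in the original
}
```

The early return needs no special handling: unique_ptr's destructor runs on every path, which is exactly the property the original bug violated.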

Dev.to (C/C++)
~8 min read · May 6, 2026

Designing an E2E-Encrypted Terminal Chat in C++17: SRP-6a, HKDF, and a Relay-Blind Server

There is a class of security properties that most hobby chat implementations simply skip: the server should not be able to read your messages, and authentication should not require trusting the server with a password hash. cmd_chat is a deliberately minimal C++17 implementation that takes both of these seriously — using SRP-6a, HKDF-SHA256, and a Fernet-compatible AEAD scheme — without hiding the mechanics behind a TLS library. This post is about the design decisions, the trade-offs, and the places where I deliberately kept the implementation simple in ways that a production system would not. Goals: The server relays ciphertext it cannot decrypt. No key material touches the server after authentication. Authentication is mutual and zero-knowledge. Neither side learns the other's secret; both sides prove they share it. The crypto stack is auditable in a single afternoon. No opaque abstractions. Cross-platform: Windows (Winsock2), Linux, macOS — same source tree, CI on all three. Non-goals (explicitly deferred): Perfect forward secrecy. All session confidentiality is tied to the room password. An ephemeral ECDH layer per connection would fix this; it is on the roadmap. TLS transport. The current framing is JSON lines over raw TCP — entirely appropriate for a trusted network or a demo, not appropriate for public deployment without wrapping in TLS. Persistent storage. The message store is in-memory. Replacing it with SQLite is a small mechanical change that would add no insight here. cmd_chat_cpp/ ├── CMakeLists.txt ├── client/ ← TCP connect, SRP handshake, send/recv threads, UI ├── server/ ← accept loop, per-client threads, in-memory stores, broadcast └── common/ ← crypto.hpp, base64.hpp, json_io.hpp, uuid.hpp (header-only) Transport: Newline-delimited JSON (NDJSON) over raw TCP. One nlohmann::json object per line. The framing is intentionally simple — any JSON parser and nc can participate, which matters for debugging and future language interoperability. 
Server threading model: One std::thread per accepted connection, detached. Shared state (MessageStore, UserSessionStore, ConnectionManager, SRPAuthManager) is guarded by per-object mutexes. This is the classic thread-per-connection model: straightforward to reason about, does not scale to thousands of concurrent clients, and is entirely appropriate for the stated scope. Client threading model: Main thread owns stdin and the send path. A dedicated receive thread runs recv_loop() and updates the display. The two threads share no mutable state beyond the socket handle, which is safe after connection establishment. Client Server | | |--- SRP step 1: username + A ------>| |<-- SRP step 2: salt + B -----------| |--- SRP step 3: client proof M ---->| |<-- SRP step 4: server proof H_AMK -| | | | (both sides now hold session_key) | | | | room_key = HKDF(session_key, | | room_salt, | | "room_key") | | | |--- Fernet(room_key, plaintext) --->| ← server sees only opaque base64 |<-- Fernet(room_key, plaintext) ----| SRP is a PAKE: it gives you mutual authentication and a shared session key, and the wire messages are computationally indistinguishable from random to a passive observer who does not know the password. The server stores a verifier v = g^x mod N (where x = H(salt | password)), never the password itself. The handshake produces session_key = H(A | B | S) independently on both sides, where S is the shared premaster secret. Neither x nor S is ever transmitted. I used csrp with SRP_NG_2048 and SRP_SHA256. The server creates the verifier once at startup: srp_create_salted_verification_key( SRP_SHA256, SRP_NG_2048, "chat", reinterpret_cast<const unsigned char*>(password.data()), password.size(), &bytes_s, &len_s, &bytes_v, &len_v, nullptr, nullptr); One deliberate simplification: all clients authenticate as the identity "chat". The username is a display name only, not a separate credential. 
This means the SRP verifier is shared across all clients — the password is the room credential, not a per-user one. That is a group chat model, not a user account model. After SRP, every client that authenticated with the same password holds the same session_key. HKDF turns that into a deterministic, domain-separated encryption key: room_key = HKDF-SHA256(ikm=session_key, salt=room_salt, info="room_key", len=32) room_salt is 16 bytes of RAND_bytes generated at server startup and transmitted during the auth handshake. The info parameter provides domain separation — if you later derive a MAC key or a different-purpose key from the same IKM, use a different info string and you get an independent key with no relation to room_key. The implementation in common/crypto.hpp covers both the OpenSSL 3 EVP_KDF API and the legacy EVP_PKEY_derive path, since both are in active use in the wild: std::vector<uint8_t> hkdf_sha256( const std::vector<uint8_t>& ikm, const std::vector<uint8_t>& salt, const std::string& info, size_t out_len); Each message is encrypted into a Fernet token. The layout is: token = version(1B 0x80) | timestamp(8B big-endian) | IV(16B) | ciphertext | HMAC-SHA256(32B) The HMAC covers everything from version through the end of ciphertext. This is encrypt-then-MAC — the MAC is over the ciphertext, not the plaintext, which is what you want for a padding-oracle-resistant scheme. AES-128-CBC with a fresh RAND_bytes IV per message. The server stores and rebroadcasts the base64-encoded token unchanged. It has no key. It cannot decrypt, forge, or modify a message without the HMAC check failing on the receiving client. 
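Because every field except the ciphertext has a fixed size, a token can be taken apart with plain offset arithmetic. A hypothetical sketch that validates the layout and slices out the fields (no decryption or HMAC verification, which a real receiver must of course perform):

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

// Field offsets from the Fernet layout described above:
// version(1) | timestamp(8, big-endian) | IV(16) | ciphertext | HMAC(32)
struct FernetParts {
    uint64_t timestamp;
    std::vector<uint8_t> iv;
    std::vector<uint8_t> ciphertext;
    std::vector<uint8_t> hmac;
};

std::optional<FernetParts> split_token(const std::vector<uint8_t>& t) {
    constexpr size_t kHeader = 1 + 8 + 16;  // version + timestamp + IV
    constexpr size_t kMac = 32;
    if (t.size() < kHeader + kMac || t[0] != 0x80) return std::nullopt;

    FernetParts p;
    p.timestamp = 0;
    for (int i = 0; i < 8; i++)             // big-endian 64-bit timestamp
        p.timestamp = (p.timestamp << 8) | t[1 + i];
    p.iv.assign(t.begin() + 9, t.begin() + kHeader);
    p.ciphertext.assign(t.begin() + kHeader, t.end() - kMac);
    p.hmac.assign(t.end() - kMac, t.end());
    return p;
}
```

Note the HMAC is the last 32 bytes, so the ciphertext length is whatever remains between the 25-byte header and the trailing MAC.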
std::string fernet_encrypt(const std::vector<uint8_t>& key, const std::string& plaintext); std::string fernet_decrypt(const std::vector<uint8_t>& key, const std::string& token); One honest note: AES-128-CBC is not the current best practice for new designs — AES-256-GCM gives you authenticated encryption natively, without a separate HMAC, and eliminates the IV-reuse-is-catastrophic property of CBC. I used CBC + HMAC to match the Fernet specification precisely and to keep the construction transparent. For a production system, reach for GCM or ChaCha20-Poly1305. Authentication phase (four round trips): {"cmd": "srp_init", "username": "alice", "A": "<base64>"} {"user_id": "<uuid>", "B": "<base64>", "salt": "<hex>", "room_salt": "<base64>"} {"cmd": "srp_verify", "user_id": "<uuid>", "M": "<base64>"} {"H_AMK": "<base64>", "session_key": "<base64>"} Chat phase: {"type": "init", "messages": [...], "users": ["alice", "bob"]} {"type": "message", "text": "<fernet-token>"} {"type": "message", "username": "alice", "text": "<fernet-token>", "timestamp": "..."} {"type": "user_joined", "username": "bob"} {"type": "user_left", "username": "bob"} The json_io.hpp helpers handle framing — send appends \n, recv reads until \n: bool send_json(SOCKET sock, const nlohmann::json& j); std::optional<nlohmann::json> recv_json(SOCKET sock); std::optional on recv_json is deliberate — nullopt signals EOF or a parse error, which the caller uses to terminate the connection cleanly rather than entering an invalid state. No forward secrecy. If the room password is ever compromised, all past session keys (which are derived from the password via SRP) can be recomputed and all past messages can be decrypted — assuming an adversary recorded the traffic. An ephemeral ECDH exchange per connection would bound the blast radius to that session. Replay is possible at the SRP layer. The current implementation does not validate that the SRP ephemeral values A and B are fresh across reconnections. 
A full implementation would include a session nonce in the proof. This is a known omission. session_key is transmitted but not used for chat crypto. After SRP, the server sends session_key to the client for potential per-session keying. In the current implementation, encrypt_text and decrypt_text use room_key (derived from the password directly), not from session_key. This is documented as a known gap between the README security diagram and the actual implementation — aligning them is a clean next step. Thread-per-connection does not scale. For the stated use case (small team, trusted network) this is fine. For anything beyond ~100 concurrent connections, io_uring or epoll/kqueue with an event loop would be the right move. All C++ dependencies except OpenSSL are fetched at configure time via CMake FetchContent: FetchContent_Declare(nlohmann_json URL https://github.com/nlohmann/json/releases/download/v3.11.3/json.tar.xz) FetchContent_Declare(csrp GIT_REPOSITORY https://github.com/cocagne/csrp.git GIT_TAG 15d6bd7) csrp has no CMakeLists.txt of its own, so it is built manually as a static library after FetchContent_Populate: add_library(csrp STATIC ${csrp_SOURCE_DIR}/srp.c) target_link_libraries(csrp PUBLIC OpenSSL::Crypto) OpenSSL is the only system dependency. CI builds on GitHub Actions cover Windows (Chocolatey), Ubuntu (libssl-dev), and macOS (Homebrew openssl@3). cmake -S . -B build -DCMAKE_BUILD_TYPE=Release cmake --build build -j$(nproc) ./build/cmd_chat_server serve 0.0.0.0 9000 --password roomsecret ./build/cmd_chat_client connect 127.0.0.1 9000 alice roomsecret AES-256-GCM instead of AES-128-CBC + HMAC. Simpler construction, no padding oracle surface, authenticated natively by the AEAD mode. Ephemeral ECDH per session (X25519) layered on top of SRP to achieve forward secrecy. TLS for the transport layer. 
The current raw TCP is fine for a controlled environment; wrapping with OpenSSL::SSL (or just boringssl) would take the transport threat model off the table. Per-user credentials. The current single-verifier model makes sense for a shared room but not for a multi-room or multi-tenant system. io_uring-based event loop on Linux to replace thread-per-connection for anything that needs to scale. GitHub: double-k-3033/cmd-chat-cpp The implementation is intentionally small — around 600 lines across all source files — so the full crypto flow is traceable from a single reading session. If you are evaluating PAKE schemes, building a custom secure channel in C++, or just want a working reference for HKDF + Fernet without an opaque wrapper library, it should be useful.

Dev.to (C/C++)
~2 min read · May 6, 2026

Quickshell: Build Your Own Desktop on Linux

Instead of relying on ready-made solutions (Waybar, Polybar, ...), you create your own. Quickshell is a modern toolkit built with C++ for creating desktop interface components — bars, widgets, lock screens, launchers, and even complete environments — using QtQuick + QML. It is not a "bar program". It is also not a complete, ready-made desktop. It is a foundation for building a custom desktop, running alongside a compositor like Hyprland, Sway, or i3. In practice, it replaces several pieces: status bar, notifications, widgets, lockscreen, display manager, system controls.

Quickshell uses QtQuick (UI), QML (configuration/programming), and hot reload (save → instant update).

Simple example (bar):

PanelWindow {
    anchors {
        top: true
        left: true
        right: true
    }
    implicitHeight: 30

    Text {
        anchors.centerIn: parent
        text: "hello world"
    }
}

One of its strengths is that it comes already integrated with the system: Wayland + X11 (windowing), Hyprland, i3, Sway (workspaces), PipeWire (audio), BlueZ (Bluetooth), UPower (battery), MPRIS (media players), and the standard system tray. This eliminates a lot of boilerplate.

Arch Linux / EndeavourOS / Manjaro:

yay -S quickshell    # or: paru -S quickshell

Or build from scratch on any system. Dependencies:

sudo apt install cmake ninja-build qt6-base-dev qt6-declarative-dev \
  qt6-wayland wayland-protocols libpipewire-0.3-dev \
  libdbus-1-dev libxkbcommon-dev

Clone:

git clone https://github.com/quickshell-mirror/quickshell.git
cd quickshell

Build:

cmake -B build -G Ninja
cmake --build build

Install:

sudo cmake --install build

Run:

quickshell

Configuration: ~/.config/quickshell/main.qml

Minimal example:

import QtQuick
import Quickshell

PanelWindow {
    anchors.top: true
    anchors.left: true
    anchors.right: true
    implicitHeight: 30

    Text {
        anchors.centerIn: parent
        text: "Quickshell is working"
    }
}

Works best on Wayland (Hyprland, Sway, etc.). May be limited on X11. Still in development, so bugs are normal. No config = blank screen. For more information, visit the repository.

Dev.to (C/C++)
~2 min read · May 6, 2026

Quickshell: Build Your Own Desktop on Linux

Instead of relying on ready-made solutions (Waybar, Polybar, ...), you create your own. Quickshell is a modern toolkit built with C++ for creating desktop interface components — bars, widgets, lock screens, launchers, and even complete environments — using QtQuick + QML. It is not a "bar program". It is also not a complete, ready-made desktop. It is a foundation for building a custom desktop, running alongside a compositor like Hyprland, Sway, or i3. In practice, it replaces several pieces: status bar, notifications, widgets, lockscreen, display manager, system controls.

Quickshell uses QtQuick (UI), QML (configuration/programming), and hot reload (save → instant update).

Simple example (bar):

PanelWindow {
    anchors {
        top: true
        left: true
        right: true
    }
    implicitHeight: 30

    Text {
        anchors.centerIn: parent
        text: "hello world"
    }
}

One of its strengths is that it comes already integrated with the system: Wayland + X11 (windowing), Hyprland, i3, Sway (workspaces), PipeWire (audio), BlueZ (Bluetooth), UPower (battery), MPRIS (media players), and the standard system tray. This eliminates a lot of boilerplate.

Arch Linux / EndeavourOS / Manjaro:

yay -S quickshell    # or: paru -S quickshell

Or build from scratch on any system. Dependencies:

sudo apt install cmake ninja-build qt6-base-dev qt6-declarative-dev \
  qt6-wayland wayland-protocols libpipewire-0.3-dev \
  libdbus-1-dev libxkbcommon-dev

Clone:

git clone https://github.com/quickshell-mirror/quickshell.git
cd quickshell

Build:

cmake -B build -G Ninja
cmake --build build

Install:

sudo cmake --install build

Run:

quickshell

Configuration: ~/.config/quickshell/main.qml

Minimal example:

import QtQuick
import Quickshell

PanelWindow {
    anchors.top: true
    anchors.left: true
    anchors.right: true
    implicitHeight: 30

    Text {
        anchors.centerIn: parent
        text: "Quickshell is working"
    }
}

Works best on Wayland (Hyprland, Sway, etc.). May be limited on X11. Still in development, so bugs are normal. No config = blank screen. For more information, visit the repository. Learn Qt: https://terminalroot.com.br/qt · Learn Complete C++: https://terminalroot.com.br/promo


Dev.to (C/C++)
~3 min read · May 6, 2026

How I load an exe directly into memory without touching disk — manual PE mapping

most people think running an exe means writing it to disk first. it doesn't. as part of building TinyLoad, a Windows PE packer, I had to write a PE loader that maps an executable directly into memory and runs it without ever creating a file. here's how it works. PE (Portable Executable) is the format Windows uses for .exe and .dll files. it's basically a structured blob with a header describing how to load it, followed by sections containing code, data, resources etc. to run a PE file manually you have to do what the Windows loader does — but yourself, in memory. every PE starts with a DOS header, then an NT header. the NT header tells you everything you need: SizeOfImage — how much memory to allocate ImageBase — where the linker expected the binary to live AddressOfEntryPoint — where to jump to start execution SizeOfHeaders — how much of the front to copy as-is IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)data.data(); IMAGE_NT_HEADERS64* nt = (IMAGE_NT_HEADERS64*)(data.data() + dos->e_lfanew); allocate a block of memory the size of the image, then copy the headers in. after that, iterate the section table and copy each section to its virtual address: void* base = VirtualAlloc(NULL, nt->OptionalHeader.SizeOfImage, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE); memcpy(base, data.data(), nt->OptionalHeader.SizeOfHeaders); IMAGE_SECTION_HEADER* sect = IMAGE_FIRST_SECTION(nt); for (int i = 0; i < nt->FileHeader.NumberOfSections; i++) { if (sect[i].SizeOfRawData > 0) memcpy((BYTE*)base + sect[i].VirtualAddress, data.data() + sect[i].PointerToRawData, sect[i].SizeOfRawData); } the linker assumed the binary would load at ImageBase. if it lands somewhere else (which it usually does since ASLR), every absolute address in the binary is wrong by delta = actual_base - preferred_base. 
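a subtlety worth noting: delta is held in an unsigned size_t even when the actual base lands below the preferred base. the math still works because unsigned subtraction and addition are exact modulo 2^64. a small self-contained sketch (not TinyLoad code) demonstrating the identity:

```cpp
#include <cstdint>

// Apply a base-relocation delta to one stored absolute address.
// delta may represent a negative displacement, but because unsigned
// arithmetic wraps modulo 2^64, stored + delta still yields the
// correct rebased address in every case.
uint64_t relocate(uint64_t stored, uint64_t actual_base, uint64_t preferred_base) {
    uint64_t delta = actual_base - preferred_base;  // wraps if actual < preferred
    return stored + delta;                          // wraps back: net result exact
}
```

this is why the loader code never needs a signed delta or a special case for "image moved down" vs "image moved up".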
the relocation table tells you exactly which addresses need fixing: size_t delta = (size_t)base - nt->OptionalHeader.ImageBase; if (delta != 0) { auto* rel = (IMAGE_BASE_RELOCATION*)((BYTE*)base + relDir->VirtualAddress); while (rel->VirtualAddress > 0) { WORD* list = (WORD*)(rel + 1); DWORD count = (rel->SizeOfBlock - sizeof(IMAGE_BASE_RELOCATION)) / sizeof(WORD); for (DWORD i = 0; i < count; i++) { if ((list[i] >> 12) == IMAGE_REL_BASED_DIR64) { size_t* p = (size_t*)((BYTE*)base + rel->VirtualAddress + (list[i] & 0xFFF)); *p += delta; } } rel = (IMAGE_BASE_RELOCATION*)((BYTE*)rel + rel->SizeOfBlock); } } the import directory tells you which DLLs the binary needs and which functions to load from each. iterate the import descriptors, load each DLL, resolve each function by name or ordinal, and write the function addresses into the import address table: auto* imp = (IMAGE_IMPORT_DESCRIPTOR*)((BYTE*)base + impDir->VirtualAddress); while (imp->Name) { HMODULE mod = LoadLibraryA((char*)((BYTE*)base + imp->Name)); auto* thunk = (IMAGE_THUNK_DATA64*)((BYTE*)base + imp->FirstThunk); auto* orig = (IMAGE_THUNK_DATA64*)((BYTE*)base + imp->OriginalFirstThunk); while (orig->u1.AddressOfData) { if (IMAGE_SNAP_BY_ORDINAL64(orig->u1.Ordinal)) { thunk->u1.Function = (size_t)GetProcAddress(mod, (char*)(orig->u1.Ordinal & 0xFFFF)); } else { auto* name = (IMAGE_IMPORT_BY_NAME*)((BYTE*)base + orig->u1.AddressOfData); thunk->u1.Function = (size_t)GetProcAddress(mod, name->Name); } thunk++; orig++; } imp++; } using EntryPoint = void(WINAPI*)(); EntryPoint entry = (EntryPoint)((BYTE*)base + nt->OptionalHeader.AddressOfEntryPoint); entry(); that's it. the exe runs directly from the allocated memory block, no file on disk, no Windows loader involved. TinyLoad packs your exe with LZ77 compression and VM encryption. when the packed stub runs, it decrypts and decompresses the original exe in memory, then calls this loader directly. 
the original exe never exists as a file — it goes straight from encrypted bytes to running process. full source (single .cpp file, no deps): https://github.com/iamsopotatoe-coder/TinyLoad

Dev.to (C/C++)
~5 min read · May 6, 2026

Mastering State in Modern C++: Making It Explicit

Passing state as data in the functional core–imperative shell In an earlier post, we saw that the functional core – imperative shell pattern reduces complexity by centralizing state mutation in the shell, turning hidden state dependencies into explicit ones. The underlying idea is simple: keep state out of the business logic. But putting this into practice is not: how can the business logic still evolve state that lives elsewhere—and why does this make dependencies explicit? Without a clear model at the code level, state easily becomes an implicit dependency again—undermining the whole design. The functional core – imperative shell pattern separates business logic from side effects. State handling is just another side effect. For state, this means that although the business logic in the core drives changes, it does not persist or mutate state. This is resolved by letting state live in the shell, while the core receives it as input and returns updates. So, state evolution follows a clear pattern: persisted in the shell: the shell holds the current state and passes it as data into a core function driven by the core: the core function computes and returns an updated state mutated in the shell: the shell applies the returned state This way, the business logic still determines how state evolves—but without persisting or mutating it. Let’s dive into an example, to see how state flows between shell and core Here is a concrete example from my funkysnakes github project that illustrates the pattern clearly. The project implements the actor-based functional core–imperative shell architecture introduced in an earlier post. In this example, the snakes are part of the game state, and moving them means evolving that state. The GameEngineActor, as shell, holds the GameState struct that aggregates the relevant sub-states: // Shell: where state is persisted class GameEngineActor : public Actor<GameEngine> { ... 
struct GameState { PerPlayerSnakes snakes; FoodItems food_items; Board board; }; GameState state_; }; The core provides the pure function moveSnakes depending on the sub-states snakes, board, and food_items. It advances the snakes by one step, returning updated snakes while board and food_items are only read: // Core: where state evolution is driven PerPlayerSnakes moveSnakes(PerPlayerSnakes snakes, const Board& board, const FoodItems& food_items); Finally, moveSnakes is called by the shell within the game loop of the GameEngineActor: // Shell: where state is mutated state_.snakes = moveSnakes(state_.snakes, state_.board, state_.food_items); This example maps to the functional core–imperative shell design: the core is the pure function moveSnakes and the data structures it operates on the shell is the GameEngineActor, which holds and mutates the state both connect at the function call, where the core’s result is applied to the state A key detail is how state is perceived differently. In the shell, GameState persists across calls and is mutated over time—this is what makes it state. In the core, however, there is no notion of state—only data passed in and returned. This is exactly what allows pure functions to drive state changes without mutating state themselves. Looking at the following benefits through the lens of dependencies reveals why they arise: One key aspect is transparency. All state appears at the top level, making it obvious which state exists. State changes are explicit and easy to follow, rather than scattered deep inside objects as is often the case in nested OOP designs. What makes this possible is that state dependencies are no longer hidden, but explicit. There's also high flexibility in which state can be processed by which function. Here the snakes sub-state is mutated, but depends on the board and the currently existing food_items. Whatever data a pure function needs can simply be passed in by the shell. 
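The whole pattern can be condensed into a self-contained toy sketch (stand-in types, not the funkysnakes code): the shell owns the state, the core is a pure function over data, and the two meet only at the call site:

```cpp
#include <vector>

// Core: a pure function. It sees only data in, data out, with no notion
// of persistent state.
std::vector<int> moveSnake(std::vector<int> snake, int step) {
    for (int& segment : snake) segment += step;  // advance every segment
    return snake;                                // updated copy; caller state untouched
}

// Shell: owns the state and applies the core's result. This is the only
// place where mutation happens.
class GameShell {
public:
    void tick() { snake_ = moveSnake(snake_, /*step=*/1); }
    const std::vector<int>& snake() const { return snake_; }
private:
    std::vector<int> snake_{0, 1, 2};
};
```

Testing the core now needs no setup through an API: pass in whatever state you want to exercise, e.g. moveSnake({5, 6}, 2) returns {7, 8}.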
This flexibility comes from decoupling logic from stored state and connecting both only through data passed to the function, keeping dependencies simple and explicit. And another great benefit of this design is that it improves testability significantly. As pointed out in the intro post, testing stateful code is often complex. To test a specific behavior, you first need to get your software into the right state. This means using the regular API, test-specific APIs, or mocks. This changes when the business logic no longer depends on hidden internal state, but only on explicit input. With these dependencies exposed, you can simply pass in whatever state you need, making testing straightforward. In conclusion, all of these benefits stem from the same shift: state dependencies become explicit instead of hidden. So far, we looked at state that has meaning at the domain level: snakes, food items, and the board. This kind of state belongs to the game model, so it makes sense that the game engine shell holds it explicitly and passes it into the core. But not all state should be understood at that level. Some state only exists to support a specific module—parser state, cache state, or other implementation details. Exposing all of that at the domain level would clutter the shell with details it should not need to understand—blurring the domain model and making the system harder to reason about. So the next question is how to reintroduce encapsulation without giving up the functional mechanics we established here. In the upcoming post, we will look at how to handle such module-internal state while keeping state evolution explicit and dependencies under control. This post is created with AI assistance for brainstorming and improving formulation. Original and canonical source: https://github.com/mahush/funkyposts (v01)

Dev.to (C/C++)
~8 min readMay 6, 2026

10 best places to learn C++ in 2026 (I wish I knew earlier)

If you’re searching for the best places to learn C++ in 2026, you’re already taking a serious step toward becoming a strong systems-level engineer. C++ is not just another programming language; it is one of the few that gives you fine-grained control over memory, performance, and system architecture. From operating systems and game engines to embedded systems and high-frequency trading platforms, C++ continues to power some of the most performance-critical applications in the world. The challenge, however, is that C++ is not easy to learn casually. It requires discipline, structured learning, and consistent hands-on practice. That is why choosing the best places to learn C++ in 2026 is so important. The right platform can help you understand complex concepts faster and build real-world skills that are highly valued in the industry.

C++ remains highly relevant in 2026 because it is deeply embedded in industries that require speed, efficiency, and low-level system control. It is widely used for building operating systems, browsers, real-time applications, and high-performance backend systems. These use cases demand precise memory management and optimization, which C++ provides better than most modern languages.

Another reason for its continued importance is its evolution. Modern C++ standards such as C++17, C++20, and C++23 have introduced features that make the language safer and more expressive. Despite these improvements, companies still expect developers to understand core concepts like pointers, memory allocation, and object lifetime. This combination of modern features and low-level control makes C++ a powerful and future-proof skill.

When evaluating the best places to learn C++ in 2026, it is important to prioritize platforms that offer hands-on learning. A good platform should allow you to write and execute code directly, helping you understand how concepts behave in real scenarios. 
This is especially critical for C++, where understanding memory and execution flow is essential. You should also look for platforms that explain low-level concepts clearly, including stack versus heap memory, compilation processes, and object lifetimes. Additionally, platforms that include real-world projects and updated content aligned with modern C++ standards will help you stay relevant. Finally, access to exercises, algorithms, and system design topics ensures that your learning goes beyond syntax and prepares you for professional work.

| Platform | Best For | Learning Style | Pricing |
| --- | --- | --- | --- |
| Coursera | Academic learning | Video + Projects | Paid/Free |
| Udemy | Flexible learning | Video-based | Budget-friendly |
| Educative.io | Hands-on coding | Interactive | Paid |
| Codecademy | Beginners | Interactive | Freemium |
| freeCodeCamp | Free practice | Interactive | Free |
| Pluralsight | Advanced developers | Video + Assessments | Paid |
| LinkedIn Learning | Quick professional skills | Short videos | Paid |
| Bootcamps | Career switchers | Intensive | Expensive |
| YouTube | Supplementary learning | Video | Free |
| Khan Academy | Absolute beginners | Guided lessons | Free |

Coursera is one of the best places to learn C++ in 2026 if you prefer a structured, academic-style learning experience. The platform offers courses from top universities that cover everything from basic syntax to advanced topics like data structures, algorithms, and object-oriented programming. This makes it a strong choice for learners who want a comprehensive and well-organized curriculum. It is particularly useful for those preparing for fields like robotics, embedded systems, or game development. The combination of theory and guided assignments ensures a solid understanding of both concepts and practical applications.

Udemy provides a flexible and affordable way to learn C++, offering a wide range of courses for beginners and advanced learners alike. 
You can find courses covering everything from basic syntax to advanced topics such as memory management, template programming, and modern C++ features. The platform works best for learners who prefer self-paced study and enjoy exploring different teaching styles. With the right course selection, Udemy can provide both foundational knowledge and practical experience.

Educative.io stands out as one of the most practical options among the best places to learn C++ in 2026. Its interactive, in-browser coding environment allows you to practice concepts in real time, which is essential for mastering a language as complex as C++. Instead of passively watching videos, you actively engage with the material. The platform provides structured learning paths that cover modern C++ standards, data structures, algorithms, and system design. It also offers clear explanations of complex topics such as pointers and memory management, making it an excellent choice for serious learners.

Codecademy is a beginner-friendly platform that simplifies the process of learning C++. Its interactive lessons break down concepts into manageable steps, helping you build confidence gradually. The platform’s in-browser coding environment ensures that you are consistently practicing. This makes it a great starting point for those who are new to programming or need a gentle introduction to C++ fundamentals.

freeCodeCamp is one of the most accessible options among the best places to learn C++ in 2026, especially for learners on a budget. While it is not as C++-focused as some platforms, it provides valuable practice through coding challenges and algorithm exercises. It is particularly useful for self-motivated learners who want to strengthen their problem-solving skills. The platform can serve as a strong supplement to more structured learning environments.

Pluralsight is designed for developers who want to deepen their expertise and move into professional roles. 
It offers advanced courses on topics such as concurrency, memory management, and performance optimization. These topics are essential for building efficient and scalable applications. The platform also includes skill assessments that help you evaluate your progress and identify areas for improvement. It is an excellent choice for intermediate and advanced learners.

LinkedIn Learning is ideal for professionals who want to learn C++ in a structured yet time-efficient manner. The courses are short, practical, and easy to integrate into a busy schedule. Additionally, certifications are linked to your LinkedIn profile, enhancing your professional presence. While it may not replace a comprehensive learning platform, it is highly effective for targeted skill development.

Bootcamps provide an immersive and intensive learning experience, making them one of the fastest ways to gain job-ready skills. Some specialized bootcamps focus on C++ for areas like embedded systems and game development. They often include mentorship, real-world projects, and career support. Although bootcamps require a significant investment, they are ideal for learners who want a structured and accelerated path into the industry.

YouTube serves as a valuable supplementary resource for learning C++. It offers tutorials and explanations of complex topics such as pointers, recursion, and memory management. This makes it useful for reinforcing concepts and gaining additional insights. However, it should not be relied upon as a primary learning platform due to its lack of structure.

Khan Academy is a great starting point for beginners who need to understand programming fundamentals before tackling C++. It focuses on logic, control flow, and problem-solving, which are essential skills for learning any programming language. Although it does not provide advanced C++ content, it builds the foundation needed for more specialized platforms. 
Choosing among the best places to learn C++ in 2026 depends on your learning style, experience level, and career goals. Beginners should focus on platforms that provide structured and interactive learning environments, while more experienced developers should look for platforms that offer deeper technical content. Your learning style also plays a significant role. If you prefer hands-on practice, interactive platforms like Educative.io and Codecademy are ideal. If you prefer structured instruction, platforms like Coursera and Udemy may be more suitable. Aligning your platform choice with your goals will help you learn more efficiently.

| Stage | Focus Area | Key Topics Covered |
| --- | --- | --- |
| 1 | Fundamentals | Variables, loops, pointers, OOP |
| 2 | Memory concepts | Stack vs heap, allocation, RAII |
| 3 | STL | Vectors, maps, iterators, algorithms |
| 4 | Projects | Tools, simulators, basic engines |
| 5 | Advanced topics | Concurrency, C++20, performance optimization |
| 6 | Interview preparation | Algorithms, system design |

A strong approach begins with mastering the fundamentals of C++, including variables, control structures, and object-oriented programming. Once you are comfortable, you should focus on understanding memory concepts such as stack versus heap and resource management. As you progress, learning the Standard Template Library (STL) will significantly improve your productivity. Building real-world projects and exploring advanced topics like concurrency and performance optimization will prepare you for professional roles. Finally, consistent practice and interview preparation will help you transition into a successful engineering career.

Finding the best places to learn C++ in 2026 is not just about selecting a platform, but about choosing the right approach to learning. C++ is a demanding language, but it offers unmatched control and performance for those who master it. If you stay consistent, focus on hands-on practice, and build real-world projects, you will develop skills that are highly valued in the industry. 
With the right learning environment and dedication, C++ can open doors to some of the most challenging and rewarding roles in software engineering.

Dev.to (C/C++)
~11 min readMay 6, 2026

CLAUDE.md for Modern C++: 12 Rules That Stop AI from Writing 1998-Style C++

You ask Claude to "add a Subscription service that calls Stripe" inside your C++ codebase, and you get back something that compiles cleanly and is still wrong:

- A function returning Subscription* whose ownership convention is "trust me, you delete it."
- Three new/delete pairs in the happy path because that's what the model saw on Stack Overflow circa 2009.
- A `using namespace std;` at the top of a header, leaking into every consumer.
- A std::mutex locked with m.lock() / m.unlock() by hand, guaranteed to stay locked on the next thrown exception.
- A C-style cast (MyType*)ptr that would be static_cast if the model knew you were past 2003.
- A class with seven uninitialized member variables, default-constructed into UB.

The model isn't lazy. It's been trained on 25 years of C++ code, the median of which is C++98 with <vector>. A CLAUDE.md at the root of your project drags it forward to where you actually live: C++17 with selective C++20.

Here are 12 rules I drop into every Modern C++ project. Each one closes a class of bug AI assistants generate by default.

Rule 1: No new / delete in Application Code

Why: Every new paired with a hand-written delete is a leak waiting on the next thrown exception. AI tools default to manual allocation because that's what the average training-set C++ does. The fix is structural: ownership lives in a smart pointer, period.

Bad:

```cpp
class Service {
 public:
  Service() : client_(new HttpClient(/*...*/)) {}
  ~Service() { delete client_; }  // and now it's not copy-safe
 private:
  HttpClient* client_;
};
```

Good:

```cpp
class Service {
 public:
  Service() : client_(std::make_unique<HttpClient>(/*...*/)) {}
  // destructor, copy/move semantics: defaulted correctly by the compiler
 private:
  std::unique_ptr<HttpClient> client_;
};
```

Rule for CLAUDE.md: No raw new / delete / malloc / free in application code. Use std::make_unique<T>(args...) and std::make_shared<T>(args...). Owning raw pointers (T* that the caller must delete) are forbidden across API boundaries. Return std::unique_ptr<T>, return by value, or hand out a non-owning view.

Rule 2: RAII Instead of Init() / Cleanup() Pairs

Why: Resources that aren't memory (file handles, sockets, mutexes, GPU buffers, OS handles) are leaked exactly as easily as memory. AI assistants love Init() / Cleanup() patterns because they translate cleanly from imperative pseudocode. The C++ idiom is the opposite: the destructor runs on every exit path, including thrown exceptions.

Bad:

```cpp
File f;
f.Open("config.toml");
auto data = parse(f.Read());  // throws → f.Close() never called
f.Close();
```

Good:

```cpp
class File {
 public:
  explicit File(const std::string& path)
      : fd_(::open(path.c_str(), O_RDONLY)) {
    if (fd_ < 0)
      throw std::system_error(errno, std::generic_category(), path);
  }
  ~File() {
    if (fd_ >= 0) ::close(fd_);
  }
  File(const File&) = delete;
  File& operator=(const File&) = delete;
  File(File&& other) noexcept : fd_(std::exchange(other.fd_, -1)) {}
  File& operator=(File&& other) noexcept {
    if (this != &other) {
      if (fd_ >= 0) ::close(fd_);
      fd_ = std::exchange(other.fd_, -1);
    }
    return *this;
  }
  // ...
 private:
  int fd_ = -1;
};
```

Rule for CLAUDE.md: Every resource (file, socket, mutex, GL handle, GPU buffer, OS handle) is wrapped in an RAII type whose destructor releases it. No Init() / Cleanup() pairs that callers must remember to invoke. Wrappers delete copy and explicitly = default the move pair.

Rule 3: std::mutex Is Always Held by lock_guard / scoped_lock

Why: Manual mutex.lock() / mutex.unlock() is broken the moment any code between them throws, and "any code" in C++ includes most allocations. AI-generated concurrency code is full of this pattern. RAII fixes it for free.

Bad:

```cpp
void Cache::insert(Key k, Value v) {
  mu_.lock();
  map_[k] = std::move(v);  // bad_alloc here → mutex stays locked
  mu_.unlock();
}
```

Good:

```cpp
void Cache::insert(Key k, Value v) {
  std::scoped_lock lock(mu_);  // released on every exit path
  map_[k] = std::move(v);
}
```

Rule for CLAUDE.md: std::mutex is held via std::lock_guard / std::scoped_lock, never by hand. 
Hold locks for the minimum span. Never call user code (callbacks, virtual methods) while a lock is held; ordering inversions are silent and brutal. For multi-mutex critical sections, use std::scoped_lock(m1, m2, ...).

Rule 4: noexcept Move, = delete Copy When Owning

Why: A move constructor that isn't noexcept is silently downgraded to a copy by the standard library. std::vector<T>::push_back will copy your "movable" type on every reallocation if T's move isn't noexcept. AI assistants forget noexcept constantly because the language doesn't force them to.

Bad:

```cpp
class Buffer {
 public:
  Buffer(Buffer&& other)  // not noexcept!
      : data_(other.data_), size_(other.size_) {
    other.data_ = nullptr;
    other.size_ = 0;
  }
  // with a copy constructor, deep copies happen silently on reallocation
  char* data_;
  std::size_t size_;
};
```

Good:

```cpp
class Buffer {
 public:
  Buffer() = default;
  Buffer(const Buffer&) = delete;             // no implicit deep copy
  Buffer& operator=(const Buffer&) = delete;
  Buffer(Buffer&&) noexcept = default;        // explicit, fast move
  Buffer& operator=(Buffer&&) noexcept = default;
  ~Buffer() = default;                        // unique_ptr handles it
 private:
  std::unique_ptr<char[]> data_;
  std::size_t size_ = 0;
};
```

Rule for CLAUDE.md: Move constructor and move-assignment are noexcept; the standard library only uses moves when they are. A non-noexcept move silently becomes a copy. Types that own resources delete the copy pair unless deep copy is genuinely cheap. Default the move pair (= default); don't hand-roll if a member-by-member move works.

Rule 5: const Correctness Is Not Optional

Why: Every method that doesn't mutate state should say so. AI-generated classes treat const as decoration; a method that "looks read-only" but isn't marked const infects every caller, since const references can't call non-const methods.

Bad:

```cpp
class Polygon {
 public:
  double area() {  // missing const
    return /* ... */;
  }
};

void render(const Polygon& p) {
  auto a = p.area();  // compile error — method isn't const
}
```

Good:

```cpp
class Polygon {
 public:
  double area() const {  // marked const
    return /* ... */;
  }
};
```

Rule for CLAUDE.md: Every method that doesn't mutate observable state is marked const. Every parameter the function won't modify is const T& (or T for trivially copyable). Member fields that genuinely don't change after construction are const, or are exposed only via const accessors.

Rule 6: Take Sink Parameters by Value and std::move In

Why: The constructor pattern most AI assistants generate, Foo(const std::string& name) : name_(name) {}, costs a copy from every rvalue caller. Take by value and std::move into the member: the compiler picks the right move/copy at the call site, and you have one overload.

Bad:

```cpp
class User {
 public:
  User(const std::string& name, const std::string& email)
      : name_(name), email_(email) {}  // always copies
 private:
  std::string name_;
  std::string email_;
};

User u(get_name(), get_email());  // two copies of temporaries
```

Good:

```cpp
class User {
 public:
  User(std::string name, std::string email)
      : name_(std::move(name)), email_(std::move(email)) {}
 private:
  std::string name_;
  std::string email_;
};

User u(get_name(), get_email());  // moves from temporaries
User u2(name_var, email_var);     // copies once, moves into member
```

Rule for CLAUDE.md: Sink parameters (functions that store the value) take by value and std::move. Read-only parameters take const T& (or T for trivially copyable / string_view / span). Don't write four overloads (const&, &&, etc.); by-value-and-move handles all callers.

Rule 7: std::string_view and std::span<T> for Read-Only Views

Why: Functions that take const std::string& can't accept a const char* literal at the call site without constructing a temporary std::string. std::string_view accepts both for free. The same logic applies to arrays vs std::vector<T> vs std::array<T, N>: std::span<T> accepts all of them.

Bad:

```cpp
bool starts_with(const std::string& s, const std::string& prefix) { /* ... */ }

starts_with("/api/v1/users", "/api/");  // two allocations
```

Good:

```cpp
bool starts_with(std::string_view s, std::string_view prefix) {
  return s.size() >= prefix.size() && s.substr(0, prefix.size()) == prefix;
}

starts_with("/api/v1/users", "/api/");  // zero allocations

double sum(std::span<const double> values) {
  return std::accumulate(values.begin(), values.end(), 0.0);
}

std::vector<double> v = {/* ... */};
double arr[] = {1.0, 2.0, 3.0};
sum(v);    // works
sum(arr);  // works
```

Rule for CLAUDE.md: Read-only string parameters take std::string_view, not const std::string&. Read-only array parameters take std::span<const T> (C++20), not const std::vector<T>&. Never store a string_view or span past the lifetime of its source; they're non-owning.

Rule 8: No using namespace std; in Headers, Ever

Why: A using namespace std; in a header pollutes every translation unit that includes that header, directly or transitively. Name lookup ambiguities surface in code that doesn't even mention std, and the diagnostic points somewhere unrelated.

Bad:

```cpp
// renderer.hpp
#include <vector>
#include <string>
using namespace std;  // poisons every consumer
class Renderer { /* ... */ };
```

Good:

```cpp
// renderer.hpp — fully qualify in headers
#include <string>
#include <vector>
class Renderer {
 public:
  void draw(std::span<const Mesh> meshes);
};

// renderer.cpp — narrow `using` in implementation files is fine
using std::vector, std::string;
```

Rule for CLAUDE.md: No `using namespace std;` at file scope, ever, and especially not in headers. Narrow using-declarations (`using std::vector;`) inside .cpp files are acceptable. In headers, always fully qualify (std::vector<T>, std::string_view).

Rule 9: Concepts and if constexpr Replace SFINAE

Why: SFINAE-based templates (std::enable_if, void_t tricks) produce diagnostics that scroll for pages and are unreadable. C++20 concepts produce errors that read like English. 
Bad:

```cpp
template <typename T, typename = std::enable_if_t<std::is_integral_v<T>>>
T half(T x) { return x / 2; }

// Error message on a non-integral T: "no matching function for call to half<...>"
// followed by 40 lines of substitution failure
```

Good (C++20):

```cpp
template <std::integral T>
T half(T x) { return x / 2; }

// Error on a non-integral T:
// "constraints not satisfied for `half<std::string>`:
//  the constraint `std::integral<std::string>` was not satisfied"
```

Rule for CLAUDE.md: Every template parameter with any requirement is constrained: std::integral, std::ranges::range, or a custom concept defined in the header. Replace std::enable_if SFINAE with concepts as you touch the code. Use if constexpr (C++17) for compile-time branching, not tag dispatch.

Rule 10: Every Member Variable Has a Default Initializer

Why: An uninitialized scalar member is undefined behavior on first read. AI-generated classes leave int count; and bool ready; to "be set in the constructor body", and then they're not, on the path the model didn't think about.

Bad:

```cpp
class Counter {
 public:
  Counter() {}                // count_ uninitialized → UB
  void tick() { ++count_; }   // reading uninitialized int
 private:
  int count_;
  bool active_;
};
```

Good:

```cpp
class Counter {
 public:
  Counter() = default;
  void tick() { ++count_; }
 private:
  int count_ = 0;  // default member initializers
  bool active_ = false;
  std::string name_{};
  std::vector<int> events_{};
};
```

Rule for CLAUDE.md: Every member variable has a default initializer at its declaration. Constructor bodies should be empty or near-empty; initialize in the member-initializer list, not by assignment in the body. Uninitialized scalar reads are UB and silent. Brace-init defaults are free.

Rule 11: No C-Style Casts

Why: A C-style cast (T)x silently does whatever it takes: static_cast, const_cast, reinterpret_cast, or all three at once. The verbose forms are searchable, intent-revealing, and rejected by the compiler when they're wrong.

Bad:

```cpp
void* raw = some_api();
MyType* p = (MyType*)raw;             // is this a static_cast or reinterpret_cast?
const auto& s = (std::string&)other;  // const_cast hidden in plain sight
```

Good:

```cpp
auto* p = static_cast<MyType*>(raw);                    // safe pointer conversion
const auto* derived = dynamic_cast<const Derived*>(b);  // checked downcast
auto* bytes = reinterpret_cast<std::byte*>(buffer);     // intent-flagged
auto& mutable_s = const_cast<std::string&>(other);      // const_cast is loud — use sparingly
```

Rule for CLAUDE.md: C-style casts (T)x are forbidden. Use static_cast for safe conversions, dynamic_cast for checked downcasts, const_cast only at well-documented boundaries, reinterpret_cast for intentional bit-level reinterpretation. Each one is searchable in review.

Rule 12: CMake: target_*, Out-of-Source, No include_directories

Why: include_directories and link_libraries are directory-scoped and global; they leak into every target defined in or below that CMakeLists.txt. AI-generated CMake is full of them because they look like "the simple version." The target_* family is per-target, scoped (PRIVATE / PUBLIC / INTERFACE), and exported correctly by install.

Bad:

```cmake
# CMakeLists.txt — global, leaks everywhere
include_directories(${CMAKE_SOURCE_DIR}/include)
link_libraries(fmt::fmt)
add_library(engine src/engine.cpp)
add_library(physics src/physics.cpp)  # silently links fmt::fmt too
```

Good:

```cmake
add_library(engine src/engine.cpp)
target_include_directories(engine
  PUBLIC
    $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/include>
    $<INSTALL_INTERFACE:include>
  PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/src)
target_link_libraries(engine PUBLIC fmt::fmt)
target_compile_features(engine PUBLIC cxx_std_20)
target_compile_options(engine PRIVATE -Wall -Wextra -Wpedantic -Werror)
```

Rule for CLAUDE.md: Use target_include_directories, target_link_libraries, target_compile_options, never include_directories / link_libraries (directory-scoped, leak everywhere). PRIVATE for impl, PUBLIC for API, INTERFACE for header-only deps. Out-of-source builds only (build/). cmake_minimum_required(VERSION 3.21) at minimum. Sanitizers wired via option(ENABLE_ASAN "..." OFF); Debug builds default ON in dev.

Every rule above traces to a real production bug from an AI-generated PR:

- A delete that ran twice on a hot path because move semantics weren't thought through.
- A std::mutex that stayed locked for the rest of the process when an allocation threw inside a critical section.
- An uninitialized bool active_ whose value depended on whatever was on the stack: green in debug, red in release.
- A (MyType*)voidptr that reinterpret_cast-ed into a different type because the layouts coincidentally matched on x86_64 and didn't on ARM.

You can keep catching these in review forever. Or you can write a CLAUDE.md, drop it at the repo root, and stop seeing 80% of them. The 12 rules above are a starting point; the full pack has 50+ production-tested rules covering Modern C++, Rust, Go, TypeScript, React, Vue, Django, FastAPI, Postgres, Kubernetes, Docker, and more.

Free C++ gist with all 12 rules → https://gist.github.com/oliviacraft/1f74c314f1f2f7b47f2bddf236977dcb
Full CLAUDE.md Rules Pack → https://oliviacraftlat.gumroad.com/l/skdgt
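The ENABLE_ASAN option mentioned in the CMake rule can be sketched as follows. This is a minimal illustration, not the article's exact configuration: the target name `engine` is carried over from the example above, and the `-fsanitize=address` flags assume a GCC or Clang toolchain.

```cmake
# Opt-in AddressSanitizer wiring, per the rule above.
option(ENABLE_ASAN "Build with AddressSanitizer" OFF)

# Default the option ON for Debug builds during development.
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
  set(ENABLE_ASAN ON)
endif()

if(ENABLE_ASAN)
  target_compile_options(engine PRIVATE
    -fsanitize=address -fno-omit-frame-pointer)
  target_link_options(engine PRIVATE -fsanitize=address)
endif()
```

Keeping the sanitizer behind an option means release pipelines stay clean while local Debug builds catch use-after-free and leak bugs by default.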