Signal Hub

React Native news and articles

Dev.to (React Native)
~6 min read · May 6, 2026

React Native offline: SQLite migrations without pain

- I ship schema changes in an offline-first Expo app.
- I run migrations inside a single SQLite transaction.
- I use PRAGMA user_version as my only "migration table".
- I added a tiny verifier to catch broken migrations fast.

I'm building a React Native fitness app. Expo. SQLite. Offline-first. Workouts must log without internet. Always. Then the real problem hits: schema changes. I started with "I'll just add a column". Brutal. I tried keeping migrations in my head. That lasted 2 days. What finally worked: PRAGMA user_version + transactional migrations + a startup verifier.

I don't keep a migrations table. SQLite already has it: PRAGMA user_version. So my app has a single source of truth: the current schema version in code, the current schema version in the database, and a deterministic path forward. Here's the core helper I run at app startup.

```ts
// db/migrate.ts
import * as SQLite from 'expo-sqlite';

export const SCHEMA_VERSION = 4;

export async function getUserVersion(db: SQLite.SQLiteDatabase) {
  const row = await db.getFirstAsync<{ user_version: number }>(
    'PRAGMA user_version;'
  );
  return row?.user_version ?? 0;
}

export async function setUserVersion(db: SQLite.SQLiteDatabase, v: number) {
  // PRAGMA doesn't support bindings here.
  await db.execAsync(`PRAGMA user_version = ${v};`);
}

export async function migrateIfNeeded(db: SQLite.SQLiteDatabase) {
  const from = await getUserVersion(db);
  if (from === SCHEMA_VERSION) return;
  if (from > SCHEMA_VERSION) {
    throw new Error(`DB version ${from} > app ${SCHEMA_VERSION}`);
  }

  await db.execAsync('BEGIN;');
  try {
    for (let v = from + 1; v <= SCHEMA_VERSION; v++) {
      await runMigration(db, v);
      await setUserVersion(db, v);
    }
    await db.execAsync('COMMIT;');
  } catch (e) {
    await db.execAsync('ROLLBACK;');
    throw e;
  }
}

async function runMigration(db: SQLite.SQLiteDatabase, toVersion: number) {
  switch (toVersion) {
    case 1:
      await db.execAsync(`
        CREATE TABLE IF NOT EXISTS workouts (
          id TEXT PRIMARY KEY NOT NULL,
          started_at INTEGER NOT NULL,
          ended_at INTEGER
        );
      `);
      return;
    case 2:
      await db.execAsync(`
        CREATE TABLE IF NOT EXISTS sets (
          id TEXT PRIMARY KEY NOT NULL,
          workout_id TEXT NOT NULL,
          exercise_id TEXT NOT NULL,
          reps INTEGER NOT NULL,
          weight_kg REAL NOT NULL,
          created_at INTEGER NOT NULL
        );
      `);
      await db.execAsync('CREATE INDEX IF NOT EXISTS idx_sets_workout ON sets(workout_id);');
      return;
    case 3:
      // Add RPE. Nullable first. Backfill later.
      await db.execAsync('ALTER TABLE sets ADD COLUMN rpe INTEGER;');
      return;
    case 4:
      // New index for my "last set per exercise" screen.
      await db.execAsync('CREATE INDEX IF NOT EXISTS idx_sets_exercise_created ON sets(exercise_id, created_at);');
      return;
    default:
      throw new Error(`Missing migration for version ${toVersion}`);
  }
}
```

One thing that bit me: ALTER TABLE ... ADD COLUMN is forgiving, but not free. If the new column needs NOT NULL or a default, you're in table-rebuild territory. SQLite will happily let you ship a schema that "kinda works". So I verify after migrating. This caught a real bug: I renamed weight to weight_kg in code, but never migrated old installs.

```ts
// db/verify.ts
import * as SQLite from 'expo-sqlite';
import { SCHEMA_VERSION, getUserVersion } from './migrate';

type Col = { name: string };

export async function verifySchema(db: SQLite.SQLiteDatabase) {
  const v = await getUserVersion(db);
  if (v !== SCHEMA_VERSION) {
    throw new Error(`verifySchema: expected v${SCHEMA_VERSION}, got v${v}`);
  }

  const cols = await db.getAllAsync<Col>('PRAGMA table_info(sets);');
  const names = new Set(cols.map(c => c.name));

  // Guard the query paths my app uses every session.
  for (const required of ['id', 'workout_id', 'exercise_id', 'reps', 'weight_kg', 'created_at', 'rpe']) {
    if (!names.has(required)) {
      throw new Error(`verifySchema: missing sets.${required}`);
    }
  }

  // Also verify my hot index exists.
  const idx = await db.getAllAsync<{ name: string }>("PRAGMA index_list('sets');");
  const idxNames = new Set(idx.map(i => i.name));
  if (!idxNames.has('idx_sets_exercise_created')) {
    throw new Error('verifySchema: missing idx_sets_exercise_created');
  }
}
```

Yes, it's a little paranoid. If verification throws, I show a blocking screen with a support code.

At some point you'll want NOT NULL, real defaults, dropping columns, or changing types. SQLite doesn't support most of that directly. I messed this up once and duplicated rows. This is the rebuild pattern I use now.

```ts
// db/migrations/rebuildSets.ts
import * as SQLite from 'expo-sqlite';

export async function rebuildSetsWithDefaults(db: SQLite.SQLiteDatabase) {
  // 1) New table with the final shape.
  await db.execAsync(`
    CREATE TABLE sets_new (
      id TEXT PRIMARY KEY NOT NULL,
      workout_id TEXT NOT NULL,
      exercise_id TEXT NOT NULL,
      reps INTEGER NOT NULL,
      weight_kg REAL NOT NULL,
      rpe INTEGER NOT NULL DEFAULT 0,
      created_at INTEGER NOT NULL
    );
  `);

  // 2) Copy data over. COALESCE handles old NULL rpe.
  await db.execAsync(`
    INSERT INTO sets_new (id, workout_id, exercise_id, reps, weight_kg, rpe, created_at)
    SELECT id, workout_id, exercise_id, reps, weight_kg, COALESCE(rpe, 0), created_at
    FROM sets;
  `);

  // 3) Swap.
  await db.execAsync('DROP TABLE sets;');
  await db.execAsync('ALTER TABLE sets_new RENAME TO sets;');

  // 4) Recreate indexes.
  await db.execAsync('CREATE INDEX IF NOT EXISTS idx_sets_workout ON sets(workout_id);');
  await db.execAsync('CREATE INDEX IF NOT EXISTS idx_sets_exercise_created ON sets(exercise_id, created_at);');
}
```

I only do this inside the migration transaction. And yeah, on huge tables this can be slow.

My first attempt ran migrations after the UI mounted. The app rendered screens that queried the DB while migrations were running. The result: SQLiteException: no such table: sets, and once, a silent empty list because my query failed and I swallowed the error. Now I block rendering until:

- the DB opens
- migrations run
- verification passes

```tsx
// App.tsx
import React from 'react';
import * as SQLite from 'expo-sqlite';
import { migrateIfNeeded } from './db/migrate';
import { verifySchema } from './db/verify';

export default function App() {
  const [ready, setReady] = React.useState(false);
  const [err, setErr] = React.useState<string | null>(null);

  React.useEffect(() => {
    let cancelled = false;
    (async () => {
      try {
        const db = await SQLite.openDatabaseAsync('gym.db');
        await migrateIfNeeded(db);
        await verifySchema(db);
        if (!cancelled) setReady(true);
      } catch (e: any) {
        if (!cancelled) setErr(e?.message ?? String(e));
      }
    })();
    return () => { cancelled = true; };
  }, []);

  if (err) return <Text>DB init failed: {err}</Text>;
  if (!ready) return <Text>Opening database…</Text>;
  return <MainApp />;
}

function Text(props: any) {
  // Replace with RN Text in a real app.
  return <>{props.children}</>;
}

function MainApp() {
  return <Text>App ready</Text>;
}
```

In my real project I pass db via context (a minimal sketch follows after the results below). I shipped 4 schema versions across 3 physical devices (Pixel 7, iPhone 13, a cheap Android 11 burner). Before this setup, upgrading from my old build failed on 2 of those 3 devices with no such column: sets.rpe. With user_version + transactional migrations + verification, I did 27 cold starts in a row without a single migration-related crash. The biggest win wasn't performance.
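The context wiring isn't shown in the post, so here is a minimal sketch of what passing db via React context could look like, assuming the same expo-sqlite API as above (DbProvider and useDb are illustrative names, not the author's):

```tsx
// db/DbContext.tsx (hypothetical wiring, not from the original post)
import React from 'react';
import * as SQLite from 'expo-sqlite';

const DbContext = React.createContext<SQLite.SQLiteDatabase | null>(null);

export function DbProvider(props: { db: SQLite.SQLiteDatabase; children: React.ReactNode }) {
  return <DbContext.Provider value={props.db}>{props.children}</DbContext.Provider>;
}

export function useDb(): SQLite.SQLiteDatabase {
  const db = React.useContext(DbContext);
  // Because App blocks rendering until migrations finish, this should never fire.
  if (!db) throw new Error('useDb must be used inside DbProvider after DB init');
  return db;
}
```

In App.tsx above, you would keep the opened db in state and render <DbProvider db={db}><MainApp /></DbProvider> once ready is true; screens then call useDb() instead of opening their own connection.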
The takeaways:

- Use PRAGMA user_version for schema state. One number. No drama.
- Wrap every migration in BEGIN/COMMIT. Always roll back on error.
- Add columns nullable first. Rebuild later if you need constraints.
- Verify the schema after migrating with PRAGMA table_info and PRAGMA index_list.
- Block UI until migrations + verification finish, or you'll chase race-condition ghosts.

I'm still deciding how aggressive to be on verification failures. If you ship offline-first SQLite apps, do you ever auto-reset the local DB on schema mismatch, or do you always force a manual recovery path?
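For what it's worth, one possible answer to that closing question is a guarded auto-reset. This is purely a sketch of that option, not the author's recommendation; it assumes expo-sqlite's deleteDatabaseAsync:

```ts
// db/recover.ts (hypothetical auto-reset fallback, not from the original post)
import * as SQLite from 'expo-sqlite';
import { migrateIfNeeded } from './migrate';
import { verifySchema } from './verify';

export async function openVerifiedDb(): Promise<SQLite.SQLiteDatabase> {
  try {
    const db = await SQLite.openDatabaseAsync('gym.db');
    await migrateIfNeeded(db);
    await verifySchema(db);
    return db;
  } catch {
    // Destructive: only acceptable if local data can be re-synced from a server.
    await SQLite.deleteDatabaseAsync('gym.db');
    const db = await SQLite.openDatabaseAsync('gym.db');
    await migrateIfNeeded(db);
    await verifySchema(db);
    return db;
  }
}
```

The trade-off is plain: for offline-first data that exists nowhere else, auto-reset means data loss, which is exactly the tension behind the question.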

Dev.to (React Native)
~5 min read · May 6, 2026

React Native Testing in 2026: Jest, Detox, and Maestro Compared

The React Native testing consensus has fractured. For years it was "Jest for units, Detox for E2E, that's the answer." In 2026, Maestro has crossed 10,000 GitHub stars, Detox holds at 11,800, and most production teams now run a layered combination of all three. Here's a practical, opinionated breakdown of when to use each — based on actually shipping React Native apps, not just reading the docs.

| Tool | What it's for | Setup time | Flakiness |
|---|---|---|---|
| Jest | Unit + component tests | Built-in | <0.1% |
| Detox | Gray-box E2E with native synchronization | 1-3 days | <2% |
| Maestro | Black-box E2E with YAML | 15 minutes | <1% |

If you only read this far: start with Jest + Maestro. Add Detox only when you feel the specific pain it solves.

Jest plus React Native Testing Library is the universally accepted baseline. Jest 30 (late 2025) brought meaningful performance improvements, and RNTL 12.x now plays nicely with the New Architecture. Use it for:

- Pure logic (reducers, hooks, utils)
- Component render tests with testID and accessibility queries
- Mocking native modules without touching a simulator

The ceiling is anything involving the actual rendering pipeline, real native modules, or multi-screen flows.

Detox's superpower is gray-box synchronization — it hooks into your app's internals and waits for the JavaScript thread, network calls, animations, and timers to genuinely idle before firing the next action. That's how Wix runs thousands of Detox tests with sub-2% flakiness. The catch:

```
# What setting up Detox actually involves
1. Install detox CLI + detox-cli global
2. Add detox config to package.json (testRunner, apps, devices, configurations)
3. Create separate Detox build for iOS (Podfile, AppDelegate patches)
4. Same for Android (build.gradle changes)
5. Wire up Jest-based test runner with Detox globals
6. Configure CI with macOS for iOS + Android emulator hardware accel
```

Worth it for native-heavy apps with a dedicated platform team. Painful for two-person teams.

A complete login E2E test in Maestro:

```yaml
appId: com.mycompany.myapp
---
- launchApp
- tapOn: "Email"
- inputText: "test@example.com"
- tapOn: "Password"
- inputText: "supersecret"
- tapOn: "Log in"
- assertVisible: "Welcome back"
```

That's the entire file. One CLI install, point at your built app, maestro test login.yaml. Done. Why teams are picking it up:

- Cross-platform by default — same YAML runs on iOS and Android
- Built-in flakiness tolerance (auto-retries, waits for content)
- Maestro Studio for record-and-playback
- Maestro Cloud for parallel real-device runs

The trade-off: it's not a great fit for "verify this specific selector returns the right value when the network is offline." That's still Jest's job.

A sane layered strategy:

- Jest → every commit, runs in seconds
- React Native Testing Library → component tests for happy + worst-case paths
- Maestro → every PR, covers your top 5-15 user flows
- Detox (optional) → nightly or pre-release, native-heavy flows only

Most mobile teams should ignore the classic 70/20/10 testing pyramid. UI bugs are what users actually see. A 50/20/30 split usually serves better.

A category that didn't exist three years ago: testing React Native code your team didn't write. Tools like RapidNative generate full Expo apps from a prompt or screenshot. The code is real and exportable, but you didn't internalize the structure as you went. Maestro is a near-perfect fit here. Because the tests are written from the user's perspective ("tap Login, type email, assert Welcome back"), they don't depend on the internal implementation of components you didn't write yourself. You can write a robust smoke test in 5 minutes via Maestro Studio.

A workflow that works:

1. Export the project, run npm install && npx expo start
2. Write one Maestro test for the most critical user flow
3. After the next AI-driven iteration, re-run Maestro — anything that breaks is a real regression
4. Layer Jest tests on the parts you start modifying by hand

The short version:

- Solo or small team, JS-heavy app → Jest + Maestro
- Large team, native modules, custom animations → Jest + Detox
- Both? Critical-path safety net + broad smoke coverage → All three
- Testing an AI-generated app you'll iterate on → Jest + Maestro, lean on Maestro Studio

The teams I see struggle most are the ones that pick Detox because it sounds serious, then never finish the setup. A working Maestro suite catching 80% of regressions beats a half-configured Detox setup catching none. If you're spending more time configuring testing than writing tests, that's a signal — pick the lighter tool, ship, then upgrade when you feel real pain.

What's your current React Native testing stack? Curious how many people have made the Detox → Maestro switch.
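For reference, the Jest + RNTL baseline the article recommends looks roughly like this in practice. A minimal sketch; the component, testID, and copy are invented for illustration:

```tsx
// LoginForm.test.tsx (illustrative only; LoginForm and its testIDs are made up)
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react-native';
import { LoginForm } from './LoginForm';

test('shows a validation error for an empty email', () => {
  render(<LoginForm onSubmit={jest.fn()} />);

  // Query by testID for interactive elements, by text for user-visible copy.
  fireEvent.press(screen.getByTestId('submit-button'));

  // getByText throws if the element is absent, so this doubles as the assertion.
  expect(screen.getByText('Email is required')).toBeTruthy();
});
```

Tests like this run on every commit in seconds, which is why they sit at the bottom of the layered strategy above.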

Dev.to (React Native)
~2 min read · May 6, 2026

I built my own IAP backend instead of using RevenueCat — what 3 weeks of pain taught me

I'm shipping a subscription-based React Native app and went through the classic build-vs-buy decision. To be clear — RevenueCat is good. For a lot of apps it's the right call. But the revenue share scales with you: 1% after $2.5K MRR is fair pricing, but it's a surface I want to own for the lifetime of the product, not rent. And my subscription state lives in their DB. I still need to mirror "user X is subscribed" into my own Postgres to join with the rest of my data, which means I'm running a webhook handler from them either way. Felt like I was paying to add a hop. So I started writing it myself. Here's where the time actually went.

On Apple's side, you don't just trust the JWT: you walk the x5c certificate chain in the JWT header back to Apple's root. On Google's side, an OAuth2 service account is fine. The non-obvious bit: use purchases.subscriptionsv2.get — it returns a subscriptionState enum directly, instead of making you reconstruct state from expiryTimeMillis + cancelReason. Just read the enum.

The lifecycle is where it got nasty. Apple's DID_FAIL_TO_RENEW with subtype GRACE_PERIOD vs GRACE_PERIOD_EXPIRED. Google's IN_GRACE_PERIOD, ON_HOLD, SUBSCRIPTION_PAUSED. I needed a single active: boolean for my app, mapped from all of those states. Google auto-refunds any purchase you don't acknowledgePurchase within three days. Apple's App Store Server Notifications V2 are reliable but not guaranteed, so don't treat them as your only source of truth — back them with status checks, and on Google treat subscriptionsv2.get as the source of truth.

Once it was working in production, none of the above was app-specific, so I extracted it: github.com/jeonghwanko/onesub. One line: app.use(createOneSubMiddleware(config)); MIT licensed. Pluggable subscription store (PostgreSQL built-in). There's a React Native client (useOneSub() hook + paywall component), but the server works with any client.

What it doesn't do: no analytics dashboard yet. RevenueCat's actual moat is cohort retention / LTV / experiments, not the receipt validation. There's a self-hosted Docker dashboard but it's operational (active counts, failed webhooks) — not cohort analysis. No hosted version; you run your own server. If "I want to ship an MVP without running infra" is the goal, RevenueCat still wins. Apple Family Sharing and Promotional Offers aren't implemented yet.

An MCP server is bundled — point Claude Code or Cursor at it and you can say "add a monthly subscription to this Expo app" and it generates the App Store Connect product, the Play Console product, and the client integration. Not the main feature, but it's the part that surprised me with how much friction it removed. 296+ tests, including multi-notification e2e scenarios for the lifecycle stuff above. That's where most of the bugs live.

If you've shipped IAP yourself in RN — what edge case tripped you up? Repo: github.com/jeonghwanko/onesub — MIT licensed. Issues and PRs welcome.
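To make the lifecycle mapping concrete, here is a rough sketch of the kind of normalization described above. It is my own illustration, not onesub's actual API or type names:

```ts
// Illustrative only: a union of Apple/Google states and a conservative "active" mapping.
type NormalizedState =
  | 'ACTIVE'
  | 'IN_GRACE_PERIOD' // still entitled: payment failed, but Apple/Google grant access
  | 'ON_HOLD'         // Google: not entitled while billing is retried
  | 'PAUSED'          // Google: user paused the subscription, not entitled
  | 'EXPIRED'
  | 'REVOKED';

function isActive(state: NormalizedState): boolean {
  // Grace period keeps access; on-hold and paused do not.
  switch (state) {
    case 'ACTIVE':
    case 'IN_GRACE_PERIOD':
      return true;
    default:
      return false;
  }
}
```

The hard part isn't this function; it's deciding, per store, which raw notification and subtype lands in which bucket, and covering those transitions with the multi-notification e2e tests the author mentions.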

Dev.to (React Native)
~17 min read · May 6, 2026

How I stopped Claude Code from hallucinating 42% of my React Code

TL;DR I tracked 6 months of my own AI coding sessions in React Native. In my logs, 42% of AI-generated diffs contained at least one hallucinated import, fake API, or duplicate component. Token costs were the second tax: re-loading project context every session cost roughly $135/month per developer at the model pricing I was using. Better prompts didn't fix either problem. The AI didn't need smarter instructions: it needed memory and a map. I built U-AMOS (Universal AI Memory Operating System): a 3-tier memory bank, a context map, a rule priority system that splits "what to do" from "how to do it," a 7-point anti-hallucination checklist, and a plan/act workflow that runs before any code is generated. After deploying U-AMOS across my own projects over a 3-month tracking period: hallucinations dropped from 42% to 3%, token costs dropped from $180/month to $18/month, and feature velocity increased roughly 5x. These are my internal numbers; I'll note where external research reports similar magnitudes. The framework is open and documented. U-AMOS 2.0 also ships pre-configured inside AI Mobile Launcher for anyone who doesn't want to build it from scratch.

Everything in this article that is quantified — the 42%, the $135/month, the 91% reduction — comes from 6 months of my own session logs across my React Native projects. I tracked hallucinations manually, counted tokens via API usage dashboards, and measured debugging time against my own estimates. These are not controlled experiments. What I can say is that the direction of the results matches what external research is starting to report. Memory-system papers are showing 40–60% accuracy improvements and 60–90% token reductions when you introduce structured memory into LLM workflows. Mem0's Claude Code integration reports roughly 90% lower token usage with persistent memory vs full-context prompting. The order of magnitude is consistent. The exact numbers are mine.

It was a Tuesday in October. I was building a feature for my app and asked Claude Code to add Redux Toolkit state to manage user accounts. It generated something that looked correct. I committed it. Twenty minutes later, the build failed. The AI had imported useRouter from next/router. In a React Native project. That hook doesn't exist on mobile. It was a 30-second fix, but it wasn't the first time. It was the fourth time that week. I started keeping a log. Every wrong thing the AI generated, I wrote down. After a month, I had the data from my own sessions:

- 42% of AI-generated diffs had at least one hallucinated import, function, or component
- 25% of the components it created already existed in the codebase under a different name
- I was spending roughly 4 hours a week debugging things the AI had invented

I was using Cursor much more than Claude at the time, so Cursor's analytics dashboard let me confirm some of this. The frustrating part was that I knew the AI wasn't getting worse. I was paying for the best models. The prompts were detailed. The context windows were huge. The problem wasn't the model. The problem was that I was treating it like a senior developer when it was behaving like a junior with no memory of the project and no map of the codebase. I had experimented before with adding rules and a memory bank, but there were always issues with grasping the whole context, and I had to keep reminding it far too often. While I was tracking hallucinations, I also started tracking token usage. The numbers were uncomfortable.
Every session, I was loading the same context: project structure, architecture decisions, naming conventions, what components already existed. The AI had no memory between sessions, so I kept re-explaining everything. Worse, when I didn't re-explain, the AI would explore: running directory listings, opening files at random, building up its own picture of the codebase by trial and error. That exploration is where the worst of the token bleeding happens. Asking "where is the authentication logic?" can trigger 25,000 tokens of blind navigation through folders before the AI finds it. The math, at the model pricing I was using at the time:

- Session 1: Re-load + explore project structure → 50,000 tokens
- Session 2: Re-load + explore project structure → 50,000 tokens
- Session 3: Re-load + explore project structure → 50,000 tokens
- Daily total: 150,000 tokens
- Monthly cost: ~$135/month per developer (based on ~$30 per million tokens, prompt + completion)

That's the invisible tax. Even when the AI was generating correct code, I was paying to give it the same context every time, plus paying for it to wander around the repo finding things it should already know about. I remember creating one file, architecture.md, where I put the kind of context I was giving every session, and then a review_best_practices.md with rules for the mistakes it kept repeating. Then came the Claude Code best-practices phase. I tried the obvious approaches first: longer CLAUDE.md files, more detailed system prompts, better instructions on what to remember. None of it worked sustainably. The AI would hold context for a session or two, then drift. Because the problem wasn't the prompt. It was the architecture.

The shift came when I stopped thinking of AI as a developer and started thinking of it as a system that needed memory built for it, and a map handed to it. I remember watching an interview with Thomas Dohmke, where he said one of the best practices is to treat it as a colleague, not a tool. A junior dev with no memory of your project would also generate hallucinated imports. Would also recreate components that already existed. Would also waste hours wandering through unfamiliar code looking for the right file. The AI wasn't broken. The relationship was broken. I was asking it to behave like it had context it didn't have.

A lot of content I've seen treats this as a prompting problem. Write a better system prompt. Use a longer context window. Be more specific in your instructions. My experience, and increasingly what I see from teams who've shipped real production AI-assisted codebases, is that prompts plateau. Durable context compounds. The teams getting consistent AI output aren't writing better prompts: they're building memory systems that load the right context at the right time and update automatically when something changes. You can read more about prompt engineering here: Essential Guide of Prompt Engineering for Software Engineers.

That's what I built. I called it U-AMOS. U-AMOS (Universal AI Memory Operating System) is a framework for managing AI-assisted development. It has five components, each solving a specific failure mode I'd logged.
```
┌──────────────────────┐
│     Memory Bank      │
│ (Cold / Warm / Hot)  │
└─────────┬────────────┘
          ↓
┌──────────────────────┐
│     Context Map      │
│   (Index / Lookup)   │
└─────────┬────────────┘
          ↓
┌──────────────────────┐
│      Plan Mode       │
│  (before execution)  │
└─────────┬────────────┘
          ↓
┌──────────────────────┐
│   Validation Layer   │
│ (7-point checklist)  │
└─────────┬────────────┘
          ↓
┌──────────────────────┐
│   Code Generation    │
└─────────┬────────────┘
          ↓
┌──────────────────────┐
│   Progress Logging   │
│  (.memory updates)   │
└─────────┬────────────┘
          ↓
└──────→ FEEDBACK LOOP ──────┘
```

Not all context is equally important for every task. So I tiered it.

Cold tier (project identity — loads rarely, ~10% of sessions):
- 00-description.md — what we're building, in 500 words
- 01-brief.md — non-negotiable constraints
- 10-product.md — feature specs

Warm tier (architecture — loads on demand, ~30% of sessions):
- 20-system.md — how the system works
- 30-tech.md — stack and dependencies
- 60-decisions.md — why we chose what we chose
- 70-knowledge.md — lessons learned

Hot tier (current state — loads every session, 100%):
- 40-active.md — what we're working on right now (max 500 words)
- 50-progress.md — what shipped recently

The hot tier is small (~2,000 tokens) and always loads. The warm tier loads when the task touches architecture (~5,000 tokens). The cold tier almost never loads during development — it's the onboarding layer. A new developer (or a new AI agent starting a session) reads the cold tier once and understands the project without hunting through the entire repo. The result: 2,000–10,000 tokens per session instead of 50,000. That assumes you're maintaining the files actively — see the hygiene section below.

The context map is the piece that does the most work for the lowest cost. context_map.md is a single 500-token lookup file at the root of the project. It indexes everything: every feature, every service, every core UI component, with the entry path next to each one.

```markdown
# Context Map

## Features (14)
| Feature    | Entry Point                      | Purpose         |
|------------|----------------------------------|-----------------|
| auth       | src/features/auth/index.ts       | Authentication  |
| onboarding | src/features/onboarding/index.ts | User onboarding |
| todos      | src/features/todos/index.ts      | Todo management |

## Services (15)
| Service   | Path                           | Responsibility     |
|-----------|--------------------------------|--------------------|
| logger    | src/services/logging/logger.ts | Centralized logs   |
| analytics | src/services/analytics/...     | Firebase analytics |

## UI Components (40+)
| Category | Components                     |
|----------|--------------------------------|
| Buttons  | Button, IconButton, FAB        |
| Forms    | Input, ControlledInput, Switch |
```

When the AI starts a session and needs to know "where does authentication live?", it reads one 500-token file instead of running directory listings, opening five files to compare them, and burning 25,000 tokens building its own mental model of the repo. In my own logs, this single file removed roughly 60% of the per-session token consumption that wasn't already covered by the memory bank. The math: 500 tokens replaces 25,000. That's a 50x reduction on the most expensive part of every session: discovery.
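As a sketch of how the tier selection could be automated (my own illustration under the file layout above, not a published U-AMOS component), a session bootstrap might assemble context like this:

```ts
// loadContext.ts (hypothetical helper; file names mirror the article's layout)
import { readFileSync } from 'node:fs';
import { join } from 'node:path';

const TIERS = {
  hot: ['40-active.md', '50-progress.md'], // every session
  warm: ['20-system.md', '30-tech.md', '60-decisions.md', '70-knowledge.md'], // architecture work
  cold: ['00-description.md', '01-brief.md', '10-product.md'], // onboarding only
} as const;

export function loadContext(
  projectRoot: string,
  opts: { touchesArchitecture: boolean; onboarding: boolean },
): string {
  const memoryFiles = [
    ...TIERS.hot,
    ...(opts.touchesArchitecture ? TIERS.warm : []),
    ...(opts.onboarding ? TIERS.cold : []),
  ].map(f => join(projectRoot, '.memory', f));

  // The context map always rides along, so discovery never needs exploration.
  const files = [join(projectRoot, 'context_map.md'), ...memoryFiles];
  return files.map(f => readFileSync(f, 'utf8')).join('\n\n---\n\n');
}
```

The point of the sketch is the budget: hot tier plus context map every session, warm tier only when the task warrants it, cold tier almost never.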
The same logic applies to coding rules.

Critical rules (always load, ~4,000 tokens):
- Meta-rules and session protocol
- Anti-hallucination checklist
- Common violations (no inline styles, no console.log, no hardcoded strings, no API keys)

Important rules (task-specific, ~2,000 tokens each):
- Design system patterns: load if working on UI
- State management rules: load if working on state
- i18n patterns: load if adding translations
- Navigation patterns: load if adding routes

Recommended rules (load if relevant):
- Performance optimizations
- Testing patterns
- Security and platform-specific privacy rules

The other architectural distinction that mattered: I separated generators from rules. They look similar but they solve different problems. Generators answer what to do: step-by-step implementation guides for recurring tasks ("add a new language," "add a new screen," "add a paywall"). They're workflow documents — copy this template, register here, run this script. I explain them at https://aimobilelauncher.com/, where you can check the code for the different generators. Rules answer how to do it well: code quality patterns and constraints — this is what good styling looks like; this is what the wrong import path looks like. When you mix the two, when your "how to add a language" doc also tries to explain every i18n best practice, the AI gets overwhelmed and follows neither cleanly. Splitting them means the AI reads the generator to know the steps, then reads the matching rule pack to write the code correctly. Two clean reads. No drift.

This is a philosophical point, but it's the reason U-AMOS rules actually work. Most rule documents read like this: "Use proper styling conventions. Avoid inline styles where possible." Rules in U-AMOS read like this:

```markdown
## Styling

### ❌ WRONG — inline styles
<View style={{ marginTop: 20, padding: 16 }}>

### ✅ CORRECT — Restyle props
<Box marginTop="xl" padding="lg"/>

### Exception: unsupported properties
<Box marginTop="xl" style={{ opacity: 0.5 }}>
(opacity is not a Restyle prop, inline is acceptable here)
```

LLMs don't generalize abstract principles well. They pattern-match. If you show them what wrong looks like next to what right looks like, they reliably produce the right pattern. If you tell them to "follow good practices," they produce whatever the training data nudged them toward last time. Every rule pack in U-AMOS is built this way: ❌ wrong → ✅ correct → exception (if any). No paragraphs of theory. No abstract guidelines. Just visual diffs. This is the single biggest determinant of whether a rule actually changes the AI's output or gets ignored.

Before any code is generated, the AI runs the 7-point checklist:

1. Does the file I'm editing exist?
2. Did I check the component inventory before creating something new?
3. Did I check the service registry?
4. Is the import path correct?
5. Does the function I'm calling actually exist in that file?
6. Am I using the project's i18n pattern, not hardcoded strings?
7. Am I using the project's logger, not console.log?

If any answer is no, the AI stops and verifies before continuing. The first week I deployed this, my hallucination rate in my own sessions dropped from 42% to under 5%. Not because the model improved. Because I made verification mandatory before generation. Each of these rules is manually crafted; a sketch of automating part of the checklist follows below.
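Checklist items 1 and 4 are mechanically checkable. As a rough sketch (again my own illustration, not a U-AMOS component), a pre-generation hook could verify that every relative import in a proposed file actually resolves:

```ts
// checkImports.ts (hypothetical validator for relative import paths)
import { existsSync } from 'node:fs';
import { dirname, resolve } from 'node:path';

const EXTS = ['', '.ts', '.tsx', '/index.ts', '/index.tsx'];

export function findBrokenImports(filePath: string, source: string): string[] {
  const broken: string[] = [];
  // Naive regex: good enough to flag hallucinated relative paths.
  const re = /from\s+['"](\.{1,2}\/[^'"]+)['"]/g;
  for (const match of source.matchAll(re)) {
    const spec = match[1];
    const base = resolve(dirname(filePath), spec);
    if (!EXTS.some(ext => existsSync(base + ext))) broken.push(spec);
  }
  return broken;
}
```

Anything this flags is either a hallucinated path or a missing file; both are worth stopping on before generation continues.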
This is the piece I added after the initial U-AMOS deployment, and it might be the highest-leverage addition: plan/act mode. Before touching more than one file, the AI must:

1. Read .memory/40-active.md (current focus)
2. Draft an implementation plan in plain markdown
3. Wait for my confirmation
4. Execute only after approval
5. Log what it actually shipped back into .memory/50-progress.md

This sounds slow. It's actually faster, because you catch architectural mistakes at the plan stage instead of the debugging stage. Tweag's Agentic Coding Handbook and Lullabot's memory bank guide both document the same pattern. It's becoming standard practice in teams using agentic coding seriously.

I tracked the same metrics for 3 months after deploying U-AMOS across my own projects:

- Hallucinations (from my logs): 42% → 3% (93% reduction)
- Tokens per session (average): 48,000 → 4,200 (91% reduction)
- Token cost (at my model tier): ~$180/month → ~$18/month
- Time debugging AI errors: 4 hours/week → 20 minutes/week
- Duplicate components created: 23 in the 3 months before → 0 in the 3 months after
- Feature velocity: roughly 5x faster on features I tracked end-to-end

I also started tracking which rule packs loaded most often and which hallucination types were still slipping through. That observability layer is what tells you where the system needs a new rule file vs where the AI needs better examples.

The mistake I see in most memory bank setups is treating the files as append-only. They're not. They need pruning. My current hygiene routine:

- 40-active.md updates at the start of every work session (what's the actual focus today)
- 50-progress.md gets a new entry after every shipped feature; old entries archive monthly
- 70-knowledge.md gets pruned weekly; if a lesson is now in a rule file, it gets removed from the knowledge doc
- 20-system.md only updates when architecture actually changes
- If the AI proposes changes to any memory file, it does it as a plan diff I review; it never writes to memory silently

There's one more file that prevents documentation rot: updated_rules.md. It's a changelog for rule exceptions. When the team makes a real exception to a rule — for example, "we never use inline styles, EXCEPT for the opacity prop because Restyle doesn't support it" — that exception goes in updated_rules.md with a date and a reason. Not into the main rule file.

```markdown
# Updated Rules (Living Document)

## 2025-12-20 — Inline styles exception
**Original rule**: NO inline styles ever
**Updated rule**: NO inline styles EXCEPT for single properties not supported by Restyle (opacity)
**Why**: Restyle doesn't support the opacity prop
**Example**: ✅ <Box marginTop="xl" style={{ opacity: 0.5 }} />
```

Why this matters: rules become outdated quickly, and rewriting them every time creates drift. The living rules file lets the AI always check the latest guidance without losing the original logic. Exceptions are explicit and dated. Historical context is preserved. The main rule files stay clean. The 2,000–10,000 token figure holds only if you maintain all of this. If you let the files grow unchecked, you'll hit 50,000 tokens again within two months. The context window isn't the bottleneck: your maintenance habits are.

This isn't a finished system. Several things still fail or are incomplete:

- Long sessions. Context degrades over multi-hour conversations. I re-attach memory bank files every 30–40 messages. A better solution is probably an MCP server that handles re-injection automatically, but I haven't built it.
- Performance edge cases. The AI generates working code that sometimes re-renders too aggressively. Architecture rules help, but don't eliminate this.
I'm fixing this by creating performance rules for Expo apps. I'm using the official guidance from Expo, but it isn't enough on its own; with this project architecture it needs a lot of fixes and improvement.

- Cross-project memory. U-AMOS handles per-project memory. The next layer — preferences and patterns that follow you across every project you touch — is what tools like Mem0's MCP integration and Claude Code's own auto-memory system are starting to solve. If you find yourself re-teaching the same conventions in every new repo, cross-project memory is the fix. I'm watching this space closely.
- Prompt initialization. I have created a prompt initialization for the system and tested it on some of my projects; it was successful. It doesn't carry many rules yet, but you can customize that part. You can check it here: link

Thanks for reading Code Meet AI: Stay relevant in the AI era! Subscribe for free to receive new posts and support my work.

U-AMOS didn't emerge from a vacuum. These are the guides I've found most aligned with the same pattern:

- Tweag's Agentic Coding Handbook: memory bank system and plan/act mode, well documented
- Mem0's Claude Code integration: if you want cross-project memory on top of U-AMOS, this is the current best path
- Anthropic's Claude Code best practices: the official guidance on CLAUDE.md structure, memory, and tool use

The pattern is converging across all of these: structured memory, tiered loading, mandatory verification before generation, plan-before-execute. U-AMOS is my implementation of that pattern for React Native specifically, with the anti-hallucination rules, the context map, and the mobile-specific constraints built in.

I built AI Mobile Launcher as the productized version of U-AMOS for React Native. It ships with:

- The full 9-file memory bank, pre-structured for a new project
- A pre-built context map of every feature, service, and UI component
- All critical, important, and recommended rule packs — written as visual diffs, not paragraphs
- The split between generators (workflows) and rules (patterns), already in place
- Pre-built component and service inventories
- Cursor and Claude Code entry points configured with plan/act mode
- Generators for common features (onboarding, paywalls, i18n, design system)
- The 7-point anti-hallucination checklist, embedded in every entry point
- A starter updated_rules.md ready for your first exception

The Lite tier is free on GitHub. U-AMOS 2.0 ships fully configured in the Starter tier. If you're starting a new React Native project and want the memory system running from day one without the setup work, that's the fastest path: aimobilelauncher.com

If you're adding U-AMOS to an existing project, the steps above are enough to get started. The framework isn't magic — it's the result of 6 months of failed sessions, logged and analyzed, until the AI stopped fighting me and started shipping with me.

The content I see most often on AI coding frames this as a prompting problem. Use a better system prompt. Be more specific. Add more examples to your instructions. My experience over 6 months of tracking my own sessions is that prompts hit a ceiling. Once you've written a clear, specific prompt, the next 10 iterations give you marginal gains. Memory and structure compound differently: every lesson added to the memory bank improves every future session. Every entry in the context map saves another exploration loop. Every rule written as a visual diff prevents an entire category of hallucination permanently. The AI isn't a developer you prompt. It's a system you build context for.
Build the memory. Hand it the map. Show it what wrong looks like next to what right looks like. Stop paying to re-explain the same architecture every day. U-AMOS is how I did it. The principles work without my specific files. The files work better with the principles. Either way: fix the memory and the map first, then build the product. I write Code Meet AI weekly — AI in mobile development, real tradeoffs, what’s actually working in production. Next issue: agent-first mobile architecture and why most “AI features” in apps are just bolted-on chatbots pretending to be product. → https://codemeetai.substack.com/

Dev.to (React Native)
~6 min read · May 6, 2026

E2E Testing React Native with Maestro: A Practical Guide

Originally published on PEAKIQ. Source: https://www.peakiq.in/blog/e2e-testing-your-react-native-app-with-maestro

Writing end-to-end tests for mobile apps has always been painful. Appium needs drivers. Detox needs native build hooks. Everything breaks when your UI shifts by 4 pixels. Maestro changes that. It talks to your app the same way a user does — through the accessibility layer — and writes tests in plain YAML. This guide covers everything: installation, writing your first flow, handling iOS quirks, and plugging Maestro into your CI pipeline. Before we dive in, here's why Maestro stands out against the traditional tools.

| Feature | Maestro | Detox | Appium |
|---|---|---|---|
| Setup complexity | Minimal | High | Very High |
| Language | YAML | JS/TS | JS/Python/Java |
| Platform support | iOS + Android | iOS + Android | iOS + Android |
| Zero app changes | ✅ | ❌ | ❌ |
| Flakiness handling | Built-in auto-wait | Manual sleeps | Manual sleeps |
| Expo support | ✅ First-class | Limited | Limited |

Info: Maestro operates entirely at the accessibility layer. It doesn't touch your JavaScript source, doesn't need npm packages inside your app, and tests the final compiled binary — just like a real user would use it.

Prerequisites:

- Node.js ≥ 18
- React Native CLI or Expo project
- For iOS: Xcode + Simulator running
- For Android: Android Studio + AVD Manager with an emulator running

Install Maestro:

```bash
curl -Ls "https://get.maestro.mobile.dev" | bash
```

Maestro is compatible with macOS, Windows, and Linux. Verify the install:

```bash
maestro --version
```

Start a device. iOS (macOS only):

```bash
open -a Simulator
```

Android:

```bash
emulator -avd Pixel_6_API_33
```

Run your app:

```bash
# React Native CLI
npx react-native run-ios
npx react-native run-android

# Expo
npx expo start --ios
npx expo start --android
```

Create a .maestro/ folder at the root of your project. This is where all your test flows live.

```
your-app/
├── .maestro/
│   ├── sign-in-flow.yaml
│   └── onboarding-flow.yaml
├── src/
└── package.json
```

Here's a minimal flow that launches your app and taps a button:

```yaml
# .maestro/sign-in-flow.yaml
appId: com.yourapp.bundle
---
- launchApp
- tapOn: "Sign In"
- inputText:
    text: "hello@example.com"
    label: "Email"
- inputText:
    text: "supersecret"
    label: "Password"
- tapOn: "Continue"
- assertVisible: "Welcome back"
```

Run it:

```bash
maestro test .maestro/sign-in-flow.yaml
```

That's it. No compilation step, no linking, no driver setup.

Maestro gives you two clean ways to target elements in your React Native app. If your component renders visible text, Maestro can find it directly:

```yaml
- tapOn: "Add to Cart"
- assertVisible: "Item added"
```

For components without unique visible text (icons, image buttons), add a testID:

```tsx
<TouchableOpacity testID="submit-button" onPress={handleSubmit}>
  <Icon name="arrow-right" />
</TouchableOpacity>
```

Then in your flow:

```yaml
- tapOn:
    id: "submit-button"
```

Tip: Prefer testID over text matching for interactive elements. It decouples your tests from copy changes and internationalization.

Text input requires two steps: focus the field, then type. Your React Native component:

```tsx
<TextInput
  testID="email-input"
  placeholder="Enter your email"
  onChangeText={setEmail}
  keyboardType="email-address"
/>
```

Your Maestro flow:

```yaml
- tapOn:
    id: "email-input"
- inputText: "hello@maestro.dev"
- hideKeyboard
```

Maestro ships with a concise set of assertions out of the box.

```yaml
# Assert an element is visible
- assertVisible: "Dashboard"

# Assert by testID
- assertVisible:
    id: "user-avatar"

# Assert something is NOT on screen
- assertNotVisible: "Loading..."

# Assert text content
- assertVisible:
    text: "You have 3 notifications"
```

On iOS, React Native sometimes "swallows" touch events when components are deeply nested.
You'll hit this when trying to tap something inside a Pressable or TouchableOpacity that's inside another tappable container. The fix — in your React Native component:

```tsx
// ❌ Before — inner Text is unreachable
<TouchableOpacity onPress={handlePress}>
  <View>
    <Text>Tap me</Text>
  </View>
</TouchableOpacity>

// ✅ After — disable accessibility on outer, enable on inner
<TouchableOpacity accessible={false} onPress={handlePress}>
  <View>
    <Text accessible={true} testID="tap-target">Tap me</Text>
  </View>
</TouchableOpacity>
```

Your flow stays clean:

```yaml
- tapOn:
    id: "tap-target"
```

Avoid duplicating flows for different test scenarios. Use Maestro's parameter support:

```yaml
# .maestro/sign-in-flow.yaml
appId: com.yourapp.bundle
---
- launchApp
- tapOn: "Email"
- inputText: ${EMAIL}
- tapOn: "Password"
- inputText: ${PASSWORD}
- tapOn: "Sign In"
- assertVisible: "Dashboard"
```

Run with parameters:

```bash
maestro test .maestro/sign-in-flow.yaml \
  -e EMAIL=test@example.com \
  -e PASSWORD=mypassword
```

This is especially powerful for testing multiple user roles or locales without duplicating flow files.

Run every flow in your .maestro/ directory with a single command:

```bash
maestro test .maestro/
```

Maestro will execute all .yaml files sequentially and report results for each one.

```yaml
# .github/workflows/e2e.yml
name: E2E Tests
on:
  push:
    branches: [main]
  pull_request:

jobs:
  e2e-android:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up JDK
        uses: actions/setup-java@v4
        with:
          distribution: "temurin"
          java-version: "17"
      - name: Install Maestro
        run: curl -Ls "https://get.maestro.mobile.dev" | bash
      - name: Build APK
        run: cd android && ./gradlew assembleDebug
      - name: Start emulator
        uses: reactivecircus/android-emulator-runner@v2
        with:
          api-level: 33
          script: |
            adb install android/app/build/outputs/apk/debug/app-debug.apk
            maestro test .maestro/
```

For parallel execution across real devices, push to Maestro Cloud:

```bash
maestro cloud \
  --apiKey $MAESTRO_API_KEY \
  android/app/build/outputs/apk/debug/app-debug.apk \
  .maestro/
```

This gives you a visual dashboard of historical runs, device logs, and screenshots on failure.

Here's a minimal React Native screen and its corresponding Maestro flow. App.tsx:

```tsx
import React, { useState } from "react";
import { SafeAreaView, Button, Text, TextInput, StyleSheet } from "react-native";

export default function App() {
  const [taps, setTaps] = useState(0);
  const [text, setText] = useState("");

  return (
    <SafeAreaView style={styles.container}>
      <Button title="Add one" onPress={() => setTaps(taps + 1)} />
      <Button title="Add ten" testID="add_ten" onPress={() => setTaps(taps + 10)} />
      <Text>Number of taps: {taps}</Text>
      <TextInput
        testID="text_input"
        placeholder="Change me!"
        onChangeText={setText}
        style={styles.input}
      />
      <Text>You typed: {text}</Text>
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: { flex: 1, alignItems: "center", justifyContent: "center" },
  input: { borderWidth: 1, width: 200, padding: 8, marginTop: 12 },
});
```

.maestro/counter-flow.yaml:

```yaml
appId: com.myapp
---
- launchApp
- tapOn: "Add one"
- tapOn:
    id: "add_ten"
- assertVisible: "Number of taps: 11"
- tapOn: "Change me!"
- inputText: "Hello, Maestro!"
- assertVisible: "You typed: Hello, Maestro!"
```
Best practices:

- Always use testID for anything interactive. Text changes; testID doesn't (unless you change it deliberately).
- Don't use sleep. Maestro automatically waits for elements to appear. Explicit sleeps are a code smell — if you think you need one, something else is wrong.
- Keep flows small and composable. One flow = one user journey. A sign-in flow shouldn't also test the profile page.
- Reset app state between runs. Use clearState: true in launchApp to ensure a clean slate:

```yaml
- launchApp:
    clearState: true
```

- Use runFlow for shared setup:

```yaml
# .maestro/helpers/login.yaml
---
- tapOn: "Email"
- inputText: ${EMAIL}
- tapOn: "Sign In"
```

```yaml
# Your main flow
---
- launchApp
- runFlow: helpers/login.yaml
- assertVisible: "Dashboard"
```

Maestro removes the friction from mobile E2E testing. No native bridges, no custom drivers, no flaky sleeps — just YAML that reads like a QA script and runs like a charm. With full support for both Expo and React Native CLI, built-in handling of timing and async UI, and a one-line CI setup, there's very little reason not to add Maestro to your project today. Get started:

```bash
curl -Ls "https://get.maestro.mobile.dev" | bash
maestro test .maestro/your-first-flow.yaml
```

Happy testing.

Resources: Maestro Official Docs · React Native + Maestro Quickstart · Expo + EAS + Maestro Guide · Maestro GitHub
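One small pattern that pairs well with the "always use testID" advice, sketched here as a suggestion rather than something from the original guide: centralize testIDs in one module so component authors and flow authors share a single vocabulary.

```tsx
// testIDs.ts (hypothetical helper; keeps testID strings consistent across screens)
export const TestIDs = {
  submitButton: 'submit-button',
  emailInput: 'email-input',
  addTen: 'add_ten',
} as const;

// Usage in a component:
// <TouchableOpacity testID={TestIDs.submitButton} onPress={handleSubmit} />
```

The YAML flows still reference the raw strings, but a rename now starts from one file instead of a grep across the whole app.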

Dev.to (React Native)
~2 min read · May 5, 2026

React Native: Build Native Mobile Apps with One Codebase

Originally published on PEAKIQ. Source: https://www.peakiq.in/technology/software-development/react-native

React Native is an open-source mobile application framework developed and maintained by Meta (formerly Facebook). It enables developers to build truly native mobile applications using JavaScript and React, while sharing most of the code across platforms. Unlike hybrid approaches, React Native renders real native UI components instead of web views. This results in near-native performance, smooth animations, and a native look and feel on both iOS and Android.

Key strengths:

- Build apps for iOS and Android from a single codebase — write once, deploy everywhere, with platform-specific tweaks where needed.
- Powered by real native components, not web views. Expect smooth 60fps animations and near-native speed out of the box.
- Uses familiar concepts like components, hooks, and state management — no new paradigm to learn if you already know React.
- Instantly see changes during development without losing app state. Faster iteration, shorter feedback loops.
- A thriving community backs thousands of third-party libraries, making it easy to find solutions for almost any use case.
- Easily bridge to platform-specific native code when you need full access to device hardware or OS APIs.

| Use Case | Description |
|---|---|
| Cross-platform mobile apps | One codebase targets both iOS and Android |
| MVPs & startup products | Ship faster, validate ideas sooner |
| Enterprise-grade apps | Scalable architecture, maintainable codebases |
| Social & messaging platforms | Real-time interactions with rich, responsive UI |
| E-commerce applications | Smooth, performant shopping experiences |
| Fintech applications | Secure, high-performance financial apps |

React Native reduces development time and cost by enabling a shared codebase across platforms. With its native performance, modern React architecture, and massive ecosystem, it is a reliable choice for building scalable and maintainable mobile applications.

| Metric | Benefit |
|---|---|
| ~50% cost saved | One team, one codebase for two platforms |
| 2-in-1 deployment | iOS + Android from a single build pipeline |
| 60fps animations | Native rendering engine, no web view overhead |

Bottom line: If your team knows React and you need to ship on mobile, React Native is the most productive path — without sacrificing performance or user experience.

Dev.to (React Native)
~14 min read · May 4, 2026

Setting up MSW v2 in React Native

Why MSW over manual mocks

Most React Native projects mock their API layer with jest.fn(). You mock fetch or your Axios instance, define what it returns, and test against that. It works. Until it doesn't. The problem: you're testing your code's interaction with a mock, not with an HTTP layer. If your API client changes how it constructs URLs, adds headers, or handles retries, the mock doesn't catch the regression. This matters even more if you're validating responses at runtime with something like Zod, because you want the validation layer to run against real response shapes, not hand-crafted mock objects. The mock always returns what you told it to return, regardless of what the code actually sent.

Mock Service Worker (MSW) intercepts requests at the network level. Your code makes real HTTP calls. MSW catches them before they leave the process and returns your mock responses. Everything between your component and the network is exercised: the Redux thunk, the Axios interceptors, the error handling, the response parsing.

💡 The key difference: manual mocks replace your code. MSW replaces the network. Your code runs exactly as it would in production, up to the point where the request would leave the device.

The setup below was written against:

- React Native 0.74+ with the default react-native Jest preset
- TypeScript with the standard RN Babel config
- Redux Toolkit (the custom render wrapper assumes this)
- Node 18 or later (Node 20 recommended)

If you're on an older RN version, an Expo Jest preset, or no Redux, the concepts still apply but a few snippets will need adjustment. MSW v2 runs in Jest tests via the Node.js server. The browser service worker isn't relevant for mobile, so ignore everything in the MSW docs about service-worker registration.

```bash
yarn add -D msw node-fetch@2 web-streams-polyfill
```

msw is the obvious one. node-fetch and web-streams-polyfill are the polyfills MSW v2 needs in the React Native Jest environment, which I'll wire up in the next step.

💡 Why pin node-fetch@2? node-fetch v3+ is ESM-only and won't load through require() in a CommonJS Jest setup file. Either pin to v2 (what this post does), or migrate the polyfills file to ESM. v2 is the lower-friction path on a default React Native Jest preset.

💡 Don't trust posts that say "no polyfills required". MSW v2 is built on the Fetch API and Web Streams. Some Node + Jest combinations have these globals; the React Native Jest preset doesn't. Without the polyfills you'll see ReferenceError: Response is not defined or TextEncoder is not defined the first time MSW tries to construct a response.

Create jest.polyfills.cjs at the project root. It must be .cjs (not .ts) because Jest loads it before the TypeScript transformer is set up:

```js
/**
 * MSW polyfills for React Native.
 * Required for Mock Service Worker v2 in Jest tests.
 */

// TextEncoder / TextDecoder
const { TextEncoder, TextDecoder } = require('util');
global.TextEncoder = TextEncoder;
global.TextDecoder = TextDecoder;

// Fetch API
if (!global.fetch) {
  global.fetch = require('node-fetch');
  global.Headers = require('node-fetch').Headers;
  global.Request = require('node-fetch').Request;
  global.Response = require('node-fetch').Response;
}

// ReadableStream (for response streaming)
if (!global.ReadableStream) {
  try {
    const { ReadableStream } = require('web-streams-polyfill');
    global.ReadableStream = ReadableStream;
  } catch {
    // web-streams-polyfill is optional for older MSW v2
  }
}
```

This file runs before the test framework loads, so beforeAll, jest, etc. aren't available here.
It's purely for setting up globals. Wire the polyfills file and a separate setup file into jest.config.cjs:

```js
module.exports = {
  preset: 'react-native',
  testEnvironment: 'node',
  setupFiles: ['<rootDir>/jest.polyfills.cjs'],
  setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'],
  transformIgnorePatterns: [
    // The default RN preset ignores most of node_modules; MSW needs to be transformed.
    'node_modules/(?!(react-native|@react-native|msw|until-async)/)',
  ],
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json', 'node'],
};
```

Two keys do the work:

| Key | When it runs | Use for |
|---|---|---|
| setupFiles | Before the Jest framework is installed | Polyfills, global variables, anything that doesn't need jest/expect |
| setupFilesAfterEnv | After Jest framework, before each test file | beforeAll/afterEach hooks, MSW server lifecycle, custom matchers |

The transformIgnorePatterns line is the other gotcha: the default RN preset skips transforming node_modules, but MSW ships modern syntax that Jest can't run as-is. Add msw|until-async to the allow-list or you'll see SyntaxError: Cannot use import statement outside a module from inside node_modules/msw/.

Create src/test-utils/msw/server.ts:

```ts
import { setupServer } from 'msw/node';
import { handlers } from './handlers';

/**
 * MSW server for Jest. Started/stopped in jest.setup.ts.
 * Use `server.use(...errorHandlers)` to override per test.
 */
export const server = setupServer(...handlers);
```

The server takes your default handlers (success responses) and intercepts matching requests. In jest.setup.ts (which Jest loads via setupFilesAfterEnv), start the server before tests, reset between tests, close after:

```ts
import '@testing-library/jest-native/extend-expect';
import { server } from './src/test-utils/msw/server';

// MSW server lifecycle
beforeAll(() => server.listen({ onUnhandledRequest: 'warn' }));
afterEach(() => server.resetHandlers());
afterAll(() => server.close());
```

| Hook | What it does |
|---|---|
| beforeAll | Starts the server before any test runs |
| afterEach | Resets handlers to defaults between tests (so one test's overrides don't leak) |
| afterAll | Shuts down the server after all tests complete |

The onUnhandledRequest: 'warn' option logs a warning if your code makes a request no handler matches. In CI, switch this to 'error' so missed handlers fail the build:

```ts
const onUnhandledRequest = process.env.CI ? 'error' : 'warn';
beforeAll(() => server.listen({ onUnhandledRequest }));
```

💡 If your tests use fake timers, flush pending timers in afterEach before resetting handlers. Otherwise an animation timer scheduled inside a component can fire after the next test starts and trigger spurious failures.

Each handler is a function that matches a request method and URL, and returns a response. A basic handler for a REST API:

```ts
import { http, HttpResponse } from 'msw';

const BASE_URL = 'https://api.example.com';

export const handlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.json([
      { id: 1, name: 'Item One' },
      { id: 2, name: 'Item Two' },
    ]);
  }),

  http.get(`${BASE_URL}/items/:id`, ({ params }) => {
    const { id } = params;
    return HttpResponse.json({ id: Number(id), name: `Item ${id}` });
  }),

  http.post(`${BASE_URL}/items`, async ({ request }) => {
    const body = await request.json();
    return HttpResponse.json({ id: 3, ...body }, { status: 201 });
  }),
];
```

Key things to notice:
- ✅ http.get, http.post, etc. match the HTTP method
- ✅ URL params (:id) are extracted automatically
- ✅ Request body is available via request.json()
- ✅ HttpResponse.json() returns typed JSON responses with status codes

Inline response objects work for a sketch. They don't work in a real codebase: the same shapes show up in handlers, in component tests, and in Storybook stories, and you don't want to maintain three copies. Pull the fixture data into its own file:

```ts
// src/test-utils/msw/mockData.ts
export const mockItems = [
  { id: 1, name: 'Item One', createdAt: '2026-01-01T00:00:00Z' },
  { id: 2, name: 'Item Two', createdAt: '2026-01-02T00:00:00Z' },
];

export const mockProfile = {
  id: 'user_1',
  name: 'Warren de Leon',
  email: 'hi@example.com',
};
```

Handlers then read from mockData:

```ts
import { http, HttpResponse } from 'msw';
import { mockItems, mockProfile } from './mockData';

export const handlers = [
  http.get(`${BASE_URL}/items`, () => HttpResponse.json(mockItems)),
  http.get(`${BASE_URL}/me`, () => HttpResponse.json(mockProfile)),
];
```

Same fixtures get reused in component tests where you bypass MSW and pass data directly. One source of truth.

Default success handlers are the starting point. But real apps need to handle failures too. This is where most MSW setups stop. Don't stop here. The bugs that actually reach production aren't the happy-path failures. They're the awkward ones: the 401 that comes back mid-session because a token expired five minutes ago, the 429 from a burst of refresh attempts after a brief network blip, the 422 with a different validation shape than your form expects, the 408 that should have been a retry but wasn't. None of those get caught if your error coverage is "what if the API returns 500?". I create separate handler sets for every error scenario the app needs to handle:

```ts
// Success (default)
export const handlers = [...apiHandlers, ...authHandlers];

// Server errors
export const errorHandlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.json(
      { message: 'Internal server error' },
      { status: 500 }
    );
  }),
];

// Unauthorized (expired token)
export const unauthorizedHandlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.json(
      { error: 'invalid_token', message: 'Token has expired' },
      { status: 401 }
    );
  }),
];

// Rate limiting
export const rateLimitHandlers = [
  http.post(`${BASE_URL}/auth/token`, () => {
    return HttpResponse.json(
      { error: 'too_many_requests', message: 'Try again in 60 seconds' },
      { status: 429, headers: { 'Retry-After': '60' } }
    );
  }),
];

// Timeout (never resolves within the test's patience)
export const timeoutHandlers = [
  http.get(`${BASE_URL}/items`, async () => {
    await new Promise(resolve => setTimeout(resolve, 60000));
    return HttpResponse.json({}, { status: 408 });
  }),
];

// Offline (network failure)
export const offlineHandlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.error();
  }),
];
```

In my project, I have 11 handler sets:

| Handler set | Status | What it tests |
|---|---|---|
| handlers | 200 | Default success responses |
| errorHandlers | 500 | Server error handling |
| unauthorizedHandlers | 401 | Expired/invalid token flows |
| forbiddenHandlers | 403 | Banned/suspended accounts |
| conflictHandlers | 409 | Duplicate registration |
| validationErrorHandlers | 422 | Form validation errors |
| rateLimitHandlers | 429 | Rate limiting with Retry-After |
| emailNotConfirmedHandlers | 400 | Email verification required |
| storageErrorHandlers | 413/404 | File upload/delete errors |
| timeoutHandlers | 408 | Network timeout simulation |
| offlineHandlers | Error | Complete network failure |

Each set is exported and can be swapped in per test.
Default success handlers are the starting point. But real apps need to handle failures too. This is where most MSW setups stop. Don't stop here.

The bugs that actually reach production aren't the happy-path failures. They're the awkward ones: the 401 that comes back mid-session because a token expired five minutes ago, the 429 from a burst of refresh attempts after a brief network blip, the 422 with a different validation shape than your form expects, the 408 that should have been a retry but wasn't. None of those get caught if your error coverage is "what if the API returns 500?".

I create separate handler sets for every error scenario the app needs to handle:

// Success (default)
export const handlers = [...apiHandlers, ...authHandlers];

// Server errors
export const errorHandlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.json(
      { message: 'Internal server error' },
      { status: 500 }
    );
  }),
];

// Unauthorized (expired token)
export const unauthorizedHandlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.json(
      { error: 'invalid_token', message: 'Token has expired' },
      { status: 401 }
    );
  }),
];

// Rate limiting
export const rateLimitHandlers = [
  http.post(`${BASE_URL}/auth/token`, () => {
    return HttpResponse.json(
      { error: 'too_many_requests', message: 'Try again in 60 seconds' },
      { status: 429, headers: { 'Retry-After': '60' } }
    );
  }),
];

// Timeout (effectively never resolves — the client's timeout fires first)
export const timeoutHandlers = [
  http.get(`${BASE_URL}/items`, async () => {
    await new Promise(resolve => setTimeout(resolve, 60000));
    return HttpResponse.json({}, { status: 408 });
  }),
];

// Offline (network failure)
export const offlineHandlers = [
  http.get(`${BASE_URL}/items`, () => {
    return HttpResponse.error();
  }),
];

In my project, I have 11 handler sets:

- handlers (200) — default success responses
- errorHandlers (500) — server error handling
- unauthorizedHandlers (401) — expired/invalid token flows
- forbiddenHandlers (403) — banned/suspended accounts
- conflictHandlers (409) — duplicate registration
- validationErrorHandlers (422) — form validation errors
- rateLimitHandlers (429) — rate limiting with Retry-After
- emailNotConfirmedHandlers (400) — email verification required
- storageErrorHandlers (413/404) — file upload/delete errors
- timeoutHandlers (408) — network timeout simulation
- offlineHandlers (network error) — complete network failure

Each set is exported and can be swapped in per test.

💡 Tip: The timeout handler uses await new Promise(resolve => setTimeout(resolve, 60000)) to simulate a request that never completes. Your code's request timeout will fire first, exercising the timeout-handling path.

The default handlers run automatically (registered in setupServer). To test error scenarios, override them per test:

import { server } from '@app/test-utils/msw/server';
import { errorHandlers, unauthorizedHandlers } from '@app/test-utils/msw/handlers';

describe('API error handling', () => {
  it('shows error message on server failure', async () => {
    server.use(...errorHandlers);
    // Render component, trigger fetch, assert error UI
  });

  it('redirects to login on 401', async () => {
    server.use(...unauthorizedHandlers);
    // Render component, trigger fetch, assert redirect
  });

  // No cleanup needed — afterEach in jest.setup resets handlers
});

The spread (...errorHandlers) replaces matching handlers; non-matching handlers from the default set stay active. After the test, server.resetHandlers() restores the defaults.

MSW works best with a real Redux store, not a mocked one. The whole point is to test the actual integration: component → Redux thunk → HTTP request → MSW intercept → response → state update → UI update.

// src/test-utils/renderWithProviders.tsx
import React from 'react';
import { Provider } from 'react-redux';
import { combineReducers, configureStore } from '@reduxjs/toolkit';
import type { RenderOptions } from '@testing-library/react-native';
import { render } from '@testing-library/react-native';

import { itemsReducer } from '@app/features/Items';
import { authReducer } from '@app/features/Auth';

const rootReducer = combineReducers({
  items: itemsReducer,
  auth: authReducer,
});

type RootState = ReturnType<typeof rootReducer>;

function createTestStore(preloadedState?: Partial<RootState>) {
  return configureStore({
    reducer: rootReducer,
    preloadedState,
    middleware: getDefaultMiddleware =>
      getDefaultMiddleware({
        serializableCheck: false,
        immutableCheck: false,
      }),
  });
}

type AppStore = ReturnType<typeof createTestStore>;

interface ExtendedRenderOptions extends Omit<RenderOptions, 'wrapper'> {
  preloadedState?: Partial<RootState>;
  store?: AppStore;
}

export function renderWithProviders(
  ui: React.ReactElement,
  { preloadedState, store, ...options }: ExtendedRenderOptions = {},
) {
  const createdStore = store ?? createTestStore(preloadedState);

  const Wrapper = ({ children }: { children: React.ReactNode }) => (
    <Provider store={createdStore}>{children}</Provider>
  );

  return {
    store: createdStore,
    ...render(ui, { wrapper: Wrapper, ...options }),
  };
}

That covers Redux. Real apps usually need more: i18n, navigation, theming, toast/notification context. The wrapper is the right place to compose all of them. Add providers around {children}:

const Wrapper = ({ children }: { children: React.ReactNode }) => (
  <Provider store={createdStore}>
    <I18nextProvider i18n={i18n}>
      <ThemeProvider>
        <ToastProvider>
          {children}
        </ToastProvider>
      </ThemeProvider>
    </I18nextProvider>
  </Provider>
);

If a screen uses react-navigation, wrap it in a NavigationContainer and an in-memory navigator for the test. The principle is the same: every provider that wraps your app in App.tsx should wrap your component in renderWithProviders. Anything you forget is a difference between the test environment and runtime, and those differences are where flaky tests live.
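For navigation, a minimal sketch — assuming @react-navigation/native and @react-navigation/native-stack are installed; renderScreen is a name invented for illustration:

import React from 'react';
import { NavigationContainer } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';
import { renderWithProviders } from '@app/test-utils';

const Stack = createNativeStackNavigator();

// Mounts a single screen inside an in-memory navigator so hooks like
// useNavigation() and useRoute() behave as they do at runtime.
function renderScreen(ui: React.ReactElement) {
  return renderWithProviders(
    <NavigationContainer>
      <Stack.Navigator>
        <Stack.Screen name="Test">{() => ui}</Stack.Screen>
      </Stack.Navigator>
    </NavigationContainer>,
  );
}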
Now your tests render with a real store, dispatch real thunks, and MSW handles the network:

it('loads and displays items', async () => {
  // Default handlers return the success response
  const { getByText } = renderWithProviders(<ItemList />);

  await waitFor(() => {
    expect(getByText('Item One')).toBeTruthy();
  });
});

it('shows error state on failure', async () => {
  server.use(...errorHandlers);

  const { getByText } = renderWithProviders(<ItemList />);

  await waitFor(() => {
    expect(getByText('Something went wrong')).toBeTruthy();
  });
});

No manual mocking of dispatch, selectors, or fetch. The entire stack is real except the network.

Sometimes you need a one-off response that doesn't fit any handler set. Define it inline:

it('handles unexpected response shape', async () => {
  server.use(
    http.get('https://api.example.com/items', () => {
      return HttpResponse.json({ unexpected: 'shape' });
    })
  );

  // Test that the code handles malformed responses gracefully
});

This is useful for edge cases like malformed JSON, missing fields, or unexpected status codes that don't warrant a full handler set.

With everything wired up, a single test file run looks like this:

yarn jest src/features/Items/__tests__/ItemList.rntl.tsx

PASS src/features/Items/__tests__/ItemList.rntl.tsx
  ItemList
    ✓ loads and displays items (218 ms)
    ✓ shows error state on failure (94 ms)
    ✓ redirects to login on 401 (102 ms)
    ✓ surfaces rate-limit message (89 ms)

Test Suites: 1 passed, 1 total
Tests:       4 passed, 4 total

A few things that will bite you sooner or later:

- If you see a warning like [MSW] Warning: captured a request without a matching request handler, that's onUnhandledRequest: 'warn' doing its job. Either add a handler for the URL or fix the request your code is making.
- If the suite hangs and never finishes, MSW is usually waiting on a request that never resolves. Most often this is a timeoutHandlers set that uses setTimeout(..., 60000) while the test environment still has real timers. Switch to fake timers in that test (jest.useFakeTimers() then jest.advanceTimersByTime(...)) or shorten the simulated delay — see the sketch after this list.
- Handlers are matched in order. If two handlers match the same request, the first one wins. When you server.use(...overrides), the overrides are prepended, so they take priority over the defaults.
- HttpResponse.error() simulates a network failure, not an HTTP error. The request never gets a response. Use it for offline/no-network scenarios. For HTTP errors (500, 401, etc.), use HttpResponse.json() with a status code.
- Async handlers need await. If your handler reads the request body (request.json()), the handler function must be async. Forgetting this causes the handler to return the wrong thing instead of a response.
- Unhandled requests are silent by default. Always use onUnhandledRequest: 'warn' (or 'error' in CI) to catch missing handlers. A silent unhandled request means your test passes for the wrong reason.
- Response is not defined / TextEncoder is not defined means the polyfills file isn't loading. Check that setupFiles: ['<rootDir>/jest.polyfills.cjs'] is in your Jest config, that the file extension is .cjs (not .ts), and that the file path is correct relative to rootDir.
- SyntaxError: Cannot use import statement outside a module from node_modules/msw/ means MSW isn't being transformed. Add msw|until-async to the allow-list inside transformIgnorePatterns.
- Trailing slashes matter. http.get('/api/items') does not match a request to /api/items/. Match exactly what your code sends, or use a path pattern (http.get('/api/items*', ...)).
- Tests pass locally and fail in CI. Usually that's onUnhandledRequest: 'error' catching a request you didn't realise your code was making in the CI environment (often analytics or crash reporting). Either add a handler for it or strip those calls in test mode.
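Here's what that fake-timer escape hatch can look like, continuing the test file above. This is a sketch with two assumptions about your code, not about the MSW API: the client aborts requests after 30 seconds, and the error UI says 'Request timed out'. It also assumes a waitFor implementation that is fake-timer aware (recent React Native Testing Library versions are):

it('surfaces a timeout error without real waiting', async () => {
  jest.useFakeTimers();
  server.use(...timeoutHandlers);

  const { getByText } = renderWithProviders(<ItemList />);

  // Fast-forward past the client's (assumed) 30 s request timeout;
  // the handler's 60 s setTimeout never gets a chance to resolve.
  jest.advanceTimersByTime(30_000);

  await waitFor(() => expect(getByText('Request timed out')).toBeTruthy());
  jest.useRealTimers();
});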
How the files fit together:

project-root/
  jest.config.cjs              # Jest config (preset, setupFiles, setupFilesAfterEnv)
  jest.polyfills.cjs           # TextEncoder, fetch, ReadableStream globals
  jest.setup.ts                # Server lifecycle, custom matchers, global mocks
  src/
    test-utils/
      msw/
        handlers.ts            # All handler sets (success, error, 401, etc.)
        server.ts              # setupServer with default handlers
        mockData.ts            # Fixture data used by handlers
      renderWithProviders.tsx  # Custom render with real store + providers
      index.ts                 # Barrel export

The barrel export (index.ts) lets tests import common utilities from one place. For specific handler sets, import directly from the handlers file:

import { server, renderWithProviders } from '@app/test-utils';
import { errorHandlers, unauthorizedHandlers } from '@app/test-utils/msw/handlers';

Is all this worth it? Yes. The setup is about 30 minutes. After that, every new test is simpler than the manual-mock equivalent. You write server.use(...errorHandlers) instead of jest.fn().mockRejectedValue(new Error('Network error')). The handlers are reusable across every test file. And you're testing real integration behaviour, not mock behaviour.

The 11 handler sets in my project cover every error path the app handles. When I add a new API endpoint, I add handlers for it once, and every test that touches that endpoint gets correct mocking for free.

The same handler-set approach also pairs well with E2E tests, where Detox + Cucumber drives the user flows and a separate runtime-mocking layer controls the API responses — but those are topics for later posts.

If writing the next test is harder than skipping it, your test infrastructure is the problem.

The code examples in this post are from rn-warrendeleon, my personal React Native project. The full MSW setup, handler sets, and custom render wrapper are all in the repo.

Dev.to (React Native)
~3 min read · May 4, 2026

Building Seamless OTP Authentication in React Native: A Complete Guide to react-native-otp-auto-verify

Why This Topic Matters

OTP (One-Time Password) verification is a critical security feature in modern mobile applications. Whether you're building a fintech app, a healthcare platform, or any service requiring user authentication, implementing OTP verification efficiently can be the difference between a smooth user experience and frustrated users abandoning your app. The react-native-otp-auto-verify package solves a real pain point: automating OTP detection and verification without requiring manual user input, which is especially valuable for reducing friction in authentication flows.

- Automatic OTP Detection: The package automatically reads incoming SMS messages containing OTP codes, eliminating the need for users to manually copy and paste.
- Cross-Platform Compatibility: Automatic SMS reading is an Android capability (via Google's SMS Retriever/User Consent APIs). iOS does not allow apps to read SMS, so on iOS OTP entry relies on the system's autofill suggestion from the keyboard (textContentType="oneTimeCode") rather than in-app detection.
- Developer-Friendly API: Simple, intuitive methods that integrate smoothly into existing React Native projects.
- Security-First Design: Handles sensitive data appropriately, without storing or logging OTP values unnecessarily.

Begin by installing the package from npm:

npm install react-native-otp-auto-verify
# or
yarn add react-native-otp-auto-verify

For React Native projects using Expo, you may need expo-dev-client or a bare workflow depending on your setup, since the package ships native code.

Here's a practical example of integrating OTP auto-verification into an authentication flow:

import { RNOtpVerify } from 'react-native-otp-auto-verify';

const handleOtpVerification = async () => {
  try {
    const message = await RNOtpVerify.getOtp();

    // Extract the OTP from the message; guard against a message with no match
    const otp = message.match(/\d{6}/)?.[0];
    if (!otp) throw new Error('No OTP found in message');

    console.log('OTP detected:', otp);

    // Verify with your backend
    verifyOtpWithBackend(otp);
  } catch (error) {
    console.error('OTP verification failed:', error);
  }
};

1. Automatic SMS Reading: The package listens for incoming SMS messages and extracts OTP codes automatically.
2. Timeout Handling: Implement proper timeout mechanisms to prevent indefinite waiting states (see the sketch after this list).
3. Error Management: Graceful error handling for scenarios where SMS permissions are denied or messages don't arrive.
4. User Permissions: Proper handling of Android permission requests for SMS access.
5. Integration with UI: Seamlessly connect OTP verification with loading states, error messages, and success callbacks.
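Points 2 and 3 in sketch form. The hook name, the 30-second budget, and the six-digit regex are illustrative choices, not part of the package API:

import { useState } from 'react';
import { RNOtpVerify } from 'react-native-otp-auto-verify';

export function useOtpAutoRead(timeoutMs = 30_000) {
  const [manualEntry, setManualEntry] = useState(false);

  const listenForOtp = async (): Promise<string | null> => {
    try {
      // Race the SMS listener against a timeout so the UI never waits forever
      const message = await Promise.race([
        RNOtpVerify.getOtp(),
        new Promise<never>((_, reject) =>
          setTimeout(() => reject(new Error('OTP timeout')), timeoutMs),
        ),
      ]);
      return String(message).match(/\d{6}/)?.[0] ?? null;
    } catch {
      // Timed out, permission denied, or no SMS — fall back to manual input
      setManualEntry(true);
      return null;
    }
  };

  return { manualEntry, listenForOtp };
}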
Real-world use cases:

- E-commerce applications: verify user phone numbers during account creation
- Banking & fintech: secure transaction verification with OTP
- Healthcare apps: patient identity verification
- Social platforms: account security and two-factor authentication

Common challenges and how to handle them:

- Challenge: users not receiving SMS messages. Solution: implement a fallback mechanism with a manual OTP input field.
- Challenge: permission denials on Android. Solution: request permissions gracefully and give users a clear explanation of why they're needed.
- Challenge: OTP timeout issues. Solution: set reasonable timeout durations and let users request a new code.

Best practices:

✅ Always request permissions explicitly before attempting to read SMS
✅ Implement timeout mechanisms to prevent indefinite loading states
✅ Provide fallback options for manual OTP entry
✅ Test thoroughly on both iOS and Android devices
✅ Handle edge cases like multiple OTP messages arriving in quick succession
✅ Secure your implementation by validating the OTP on the backend

While other solutions exist, react-native-otp-auto-verify stands out because it requires minimal configuration, has active maintenance and community support, provides good documentation, and works reliably across Android versions (with the iOS autofill caveat noted above).

Implementing react-native-otp-auto-verify significantly improves the user experience by removing friction from the authentication process. The package is production-ready and used by many React Native developers building secure applications. Check out the GitHub repository for the latest updates and the npm package for detailed documentation.

Pro Tip: Combine this with proper backend validation and rate limiting to create a robust, secure authentication system that users will appreciate! 🚀

Dev.to (React Native)
~4 min read · May 4, 2026

Securing Your React Native App with FreeRasp: A Practical Implementation Guide

Introduction

Building a mobile application that handles sensitive financial data — crypto transactions, KYC verification, gift cards — means security is not an afterthought. It is a core deliverable. During the development of a cross-platform fintech application, one of the non-negotiables on the security checklist was runtime application self-protection (RASP). After evaluating our options, we integrated FreeRasp — an open-source mobile security library — into our React Native codebase. In this article, I'll walk you through what FreeRasp is, why we chose it, how we integrated it into a React Native project, and the key lessons learned along the way.

FreeRasp is an open-source RASP (Runtime Application Self-Protection) SDK maintained by Talsec. It provides real-time threat detection for mobile applications by monitoring the environment in which your app is running. It detects threats such as:

- Rooted or jailbroken devices — devices where OS security boundaries have been removed
- Debugger attachment — active reverse-engineering attempts
- Emulator detection — the app running in an emulated environment (common in fraud scenarios)
- Tampering / repackaging — modified APKs or IPAs being distributed
- Unofficial stores — the app installed from an untrusted source
- Hook frameworks — tools like Frida being used to intercept app behaviour
- Overlay attacks — malicious apps drawing over your UI to steal input

For a fintech app handling real user funds and identity documents, these are not theoretical threats.

We considered a few options:

- FreeRasp — open source, React Native support, active maintenance, no per-user cost; the trade-off is per-platform configuration
- Appdome — comprehensive and no-code; expensive, with vendor lock-in
- Custom RASP — full control; enormous dev effort and hard to maintain

FreeRasp hit the sweet spot for our stage: production-grade security without the enterprise price tag.

npm install freerasp-react-native
# or
yarn add freerasp-react-native

For iOS, run pod install afterwards:

cd ios && pod install

FreeRasp is configured once — typically in your app's entry point or a dedicated security module. Here is the configuration structure we used:

import { useFreeRasp, setThreatListeners } from 'freerasp-react-native';

const config = {
  androidConfig: {
    packageName: 'com.yourapp.package',
    certificateHashes: ['your-certificate-hash'],
    supportedAlternativeStores: [],
  },
  iosConfig: {
    appBundleId: 'com.yourapp.bundle',
    appTeamId: 'YOUR_TEAM_ID',
  },
  watcherMail: 'security@yourcompany.com',
  isProd: true,
};

Important: the certificateHashes field must match the SHA-256 hash of your app's signing certificate. A mismatch will trigger a tamper alert even in your own build.

This is where the real implementation decision lies. FreeRasp detects threats and calls your handlers — but what you do with that information is entirely your responsibility.
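For context before the handlers: a minimal sketch of the wiring, assuming the useFreeRasp(config, actions) hook signature from the library's README. RootNavigator and the ./security/* paths are placeholders; the actions object we actually used follows below.

// App.tsx — registration sketch
import React from 'react';
import { useFreeRasp } from 'freerasp-react-native';
import { config } from './security/config';
import { actions } from './security/actions';
import { RootNavigator } from './navigation'; // placeholder

export default function App() {
  // Registers threat reactions once at startup; FreeRasp invokes the
  // matching action whenever it detects a threat at runtime.
  useFreeRasp(config, actions);
  return <RootNavigator />;
}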
const actions = {
  // Critical threats — terminate the session or block access
  rootDetected: () => handleCriticalThreat('root'),
  debugDetected: () => handleCriticalThreat('debug'),
  hookDetected: () => handleCriticalThreat('hook'),
  tamperDetected: () => handleCriticalThreat('tamper'),

  // High severity — warn and limit functionality
  emulatorDetected: () => handleHighThreat('emulator'),

  // Medium severity — log and monitor
  unofficialStoreDetected: () => handleMediumThreat('unofficialStore'),
  deviceBindingDetected: () => handleMediumThreat('deviceBinding'),

  // Informational
  passcodeNotSet: () => promptUserToSetPasscode(),
};

const handleCriticalThreat = (threatType) => {
  // Log the event to your backend
  logSecurityEvent(threatType, 'critical');

  // Clear sensitive data from state
  clearSensitiveData();

  // Navigate to a blocked screen
  navigationRef.current?.navigate('SecurityBlock', { reason: threatType });
};

We categorised threats into three tiers and responded accordingly:

- Tier 1 — block immediately (root, hook, tamper, debug in prod)
- Tier 2 — degrade gracefully (emulator, unofficial store)
- Tier 3 — log and monitor (passcode not set, device binding)

Key lessons learned:

1. Test certificate hashes early.
2. Don't block users silently.
3. False positives are real.
4. Combine runtime detection with backend validation.
5. isProd matters — set isProd: true in your production builds. FreeRasp behaves differently in dev mode and will not enforce certain checks. We added this as a required CI check before any release.

Integrating FreeRasp into our React Native fintech application significantly raised our security baseline without adding meaningful friction to the user experience. For any app handling financial data, identity documents, or crypto assets, runtime protection is no longer optional. The library is actively maintained, well-documented, and free — which makes it an easy recommendation for any React Native team working in regulated domains.

If you have questions about the implementation or want to discuss your own RASP strategy, feel free to reach out or leave a comment below.

Nathaniel Toju is a cross-platform mobile and backend engineer based in Lagos, Nigeria, building fintech and healthcare products. He specialises in React Native, NestJS, and secure application architecture.