Signal Hub

Angular news and articles


Dev.to (Angular)
~7 min read · May 6, 2026

The Frankenstein Meeting Room: How to Stitch Angular, React, and Svelte Into One App

Part 1 of a series. The build follows in subsequent posts.

Three frontend frameworks in the same business domain is the rule, not the exception. One team adopted Angular years ago. Another fell in love with React. The M&A team brought a Vue app along. The standard answer is Rewrite — years, millions, often failing. There is another answer: let them live together. This post walks through the architectural spec for a small but real demo that does exactly that — Angular, React, and Svelte inside a single workspace, sharing one business context. Call it Frankenstein-Driven Architecture.

Heterogeneity in enterprise frontends isn't a temporary mess to be cleaned up. It's a permanent condition. Acquisitions bring new stacks. Teams pick what they know. Industry tides shift; once-favored frameworks fall out of fashion long before the apps written in them stop earning money. The rewrite-first culture treats this as a problem to be eliminated. Two years, ten engineers, one framework to rule them all. By the time the rewrite ships, the dominant framework has changed again, the original team has left, and the business questions whether any of it was worth doing.

The alternative is to design with the heterogeneity instead of against it. Stop asking "how do we make everything one framework?" and start asking "how do we let multiple frameworks share one product?"

The whole spec rests on one sentence: Remote owns capability. Host owns business context and persistence. Each remote is responsible for what it does best — drawing, diagramming, reporting, scheduling. The host is responsible for what the business is about — meetings, customers, orders, claims. Remotes do not own state. They do not own routing. They do not own the user. They render a capability when handed a context, and they emit changes when the user does something. In this demo, the Angular host owns the meeting context. The React remote owns whiteboarding via Excalidraw. The Svelte remote owns diagram editing via Mermaid. The principle scales: replace whiteboard with reporting, diagrams with scheduling, and the structure stays the same.

When the host runs Angular and a remote runs React, there is no shared component model, no shared hook system, no shared reactivity. There is no React-component-inside-Angular-template trick that survives contact with reality. So each remote is a complete, self-contained application — an island. What it exposes to the host is not a React component or a Svelte component, but a Custom Element that wraps the entire app and boots it on mount.

class WhiteboardRemote extends HTMLElement {
  connectedCallback() {
    this.root = createRoot(this);
    this.root.render(<App ... />);
  }
  disconnectedCallback() {
    this.root?.unmount();
  }
}
customElements.define('whiteboard-remote', WhiteboardRemote);

The host then consumes the remote like any other DOM element:

<whiteboard-remote></whiteboard-remote>

Web Components are the boundary because Web Components are a browser standard. Angular, React, Svelte, Vue — all four know how to render and listen to a Custom Element. The browser, not the framework, owns the integration contract.

If the host is the only orchestrator, communication runs through one channel: an event bus. No initial state via attributes. No properties that have to be set before mount. Remotes are dumb on mount — they know nothing until the bus tells them. Four events cover the entire cross-framework communication:

context:request — Remote → Host, "I just mounted, what's the current context?"
event:selected — Host → Remotes, "the user is now looking at meeting X, here's its data"
drawing:changed — React → Host, "the whiteboard changed, here's the new payload"
diagram:changed — Svelte → Host, "the diagram changed, here's the new source"

The bus itself is fifteen lines of TypeScript wrapping a globalThis-pinned EventTarget. No library. No broker. The wrapper provides typed emit and on so neither end has to remember which payload belongs to which event. The flow for the most important interaction — the user clicks a meeting in the calendar and both remotes update — is one round-trip: the host is the hub, the remotes are spokes. Spokes never talk to each other directly.
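The spec's bus.ts has the canonical version; as a rough sketch of the shape such a wrapper can take (the event map below is illustrative, with payload types assumed rather than taken from the spec):

type BusEvents = {
  'context:request': void;
  'event:selected': { meetingId: string };
  'drawing:changed': { payload: unknown };
  'diagram:changed': { source: string };
};

// Pin a single EventTarget on globalThis so host and remotes share one
// instance even though they are built and loaded as separate bundles.
const target: EventTarget = ((globalThis as any).__bus ??= new EventTarget());

export function emit<K extends keyof BusEvents>(type: K, detail: BusEvents[K]) {
  target.dispatchEvent(new CustomEvent(type, { detail }));
}

export function on<K extends keyof BusEvents>(
  type: K,
  handler: (detail: BusEvents[K]) => void,
): () => void {
  const listener = (e: Event) => handler((e as CustomEvent).detail);
  target.addEventListener(type, listener);
  return () => target.removeEventListener(type, listener);
}

Pinning the EventTarget on globalThis matters because the host and each remote are separate bundles; a plain module-level singleton would give every bundle its own private bus.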
The plumbing that lets the host actually load the React and Svelte bundles at runtime is Native Federation v4 — Manfred Steyer's framework-agnostic, ESM- and import-map-native successor to Webpack Module Federation. Two adapters do the work. The Angular adapter (@angular-architects/native-federation-v4) wires the host: a dynamic-host schematic generates a two-phase bootstrap (init federation first, bootstrap Angular second), a federation.manifest.json listing remote URLs, and a builder that splits shared dependencies into separate chunks. The esbuild adapter (@softarc/native-federation-esbuild) builds the remotes: a small build.mjs script drives runEsBuildBuilder and produces a remoteEntry.json plus its bundle. No Vite involved — the official remote adapter is esbuild-based and framework-agnostic.

The runtime is the Orchestrator (@softarc/native-federation-orchestrator), v4's recommended replacement for the classic runtime. It does semver-aware version resolution for shared dependencies, caches remoteEntry.json data across reloads, and handles share scopes for multi-team setups.
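For orientation, the host's federation.manifest.json is just a map from remote names to remoteEntry.json URLs; the names and ports below are assumptions for illustration, not taken from the spec:

{
  "whiteboard": "http://localhost:4201/remoteEntry.json",
  "diagram": "http://localhost:4202/remoteEntry.json"
}

Because the manifest is plain JSON loaded at startup, operations can repoint a remote to a new URL without rebuilding the host.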
The same machinery is what makes this pattern interesting for migration. A team running an Angular monolith can carve a new feature out as a federated remote in any framework — React, Svelte, whatever the team picks — without touching the existing app. Old code keeps shipping, new capabilities arrive as islands. There is no all-or-nothing rewrite gate.

The demo is deliberately small. A meeting room app where the user picks a meeting from a calendar, and the meeting opens with two artifacts side by side: a whiteboard sketch (React + Excalidraw) and a sequence diagram (Svelte + Mermaid). Both are real, iconic open-source applications, embedded as full islands. Three columns. The Angular calendar (Schedule-X) on the left. The two remotes stacked in the middle. Meeting details and a live event-bus log on the right. Click a meeting, both remotes load that meeting's data. Draw on the whiteboard, the host persists. Switch to a different meeting, both remotes follow the context. Open DevTools and the Network tab shows three frameworks loaded — Angular, React, Svelte — talking through one event bus.

The full spec is in the repo: https://github.com/lutzleonhardt/FrankensteinMeetingRoom/blob/main/specs/SPEC.md. Read it if you want the actual Meeting type, the MeetingService skeleton with stale-update guards, the bus.ts wrapper, the federation.config.mjs for both remotes, the workspace layout, and the milestones the build will follow.

A spec rarely arrives clean on the first pass. Two corrections from this one are worth sharing because they were genuine surprises during the design conversation, not lessons from a textbook.

The "Vite Adapter" doesn't exist. Going in, the assumption was that Vite-based remotes were the standard path — Vite is everyone's modern build tool, after all. Reading the actual Native Federation docs revealed that the official adapter is @softarc/native-federation-esbuild. Vite is not officially supported. The adapter is framework-agnostic and runs from a hand-written build.mjs, which initially feels backward but turns out to be cleaner: no Vite-Federation interop magic, no plugin ecosystem assumptions, just esbuild plus your framework's source-transform plugin.

One channel, not two. The first instinct was to send initial meeting data via Custom Element properties (the standard Web Components idiom) and use the bus only for ongoing changes. Two channels, two mental models, two places to look when something doesn't render. The spec collapsed this into a single channel: the bus carries everything, including the initial context that a freshly-mounted remote requests via context:request. The remotes become dumber, the architecture clearer, and the workshop pitch tightens to one line: "the only thing crossing the framework boundary is a bus event."

This post is part 1. The repo will host the spec and the build, milestone by milestone:

M1 — Workspace scaffolded, Angular host with empty federation manifest running on :4200
M2 — Calendar, meeting service with persistence, three-column layout, event-bus log
M3 — React Whiteboard remote — first federation stitching live
M4 — Svelte Mermaid remote — both remotes federated, the money-shot becomes recordable
M5 — Polish, README, optional CRUD niceties

Each milestone produces a usable artifact you can stop at and demo. The next post will follow M1 + M2 — the host shell, why the two-phase bootstrap matters, and what "the host is also a remoteEntry.json" actually means.

Repo: https://github.com/lutzleonhardt/FrankensteinMeetingRoom
Spec: https://github.com/lutzleonhardt/FrankensteinMeetingRoom/blob/main/specs/SPEC.md

If your enterprise frontend looks more like a museum than a monolith, this is the pattern that makes that a feature, not a problem.

Dev.to (Angular)
~5 min read · May 5, 2026

Ng-News 26/13: @Service, Native Federation, Angular Feature Lifecycles

Angular 22's proposed @Service decorator and the likely stabilization of the resource API family. Also in brief: Angular feature lifecycle stages, Native Federation v4 changes, and fresh Oxc Angular compiler benchmark context.

Angular 22 preview: @Service and resource APIs

The release of Angular 22 is getting closer, and we are getting more and more information about the changes. For example, we will get a new decorator for services, called @Service. What's the difference compared with the existing @Injectable decorator? By default, @Service will automatically provide the service in root, and you cannot use constructor-based dependency injection anymore, but have to go via the inject function. Another PR, which is still open, says the resource family, including httpResource but also rxResource, will leave the experimental stage and become stable. So the developer preview gets skipped. One also has to say that we have had the resource APIs for almost 1.5 years now, so it is about time.

#68195 crisbeto posted on Apr 14, 2026

These changes introduce the new @Service decorator which is a more ergonomic alternative to @Injectable. The reason we're adding a new decorator is that @Injectable has been around since the beginning of Angular and it has a lot of baggage that adds unnecessary overhead for users that generally want to define a singleton service, available in their entire app. The key differences between @Service and @Injectable are:

@Service is providedIn: 'root' by default. You can opt into providing the service yourself by setting autoProvided: false on it.
@Service doesn't allow constructor-based injection, only the inject function.
@Service doesn't support the complex type signature of @Injectable (useClass, useValue etc.). Instead it supports a single factory function.

Example:

import {inject, Service} from '@angular/core';
import {HttpClient} from '@angular/common/http';
import {AuthService} from './auth';

@Service()
export class PostService {
  private readonly httpClient = inject(HttpClient);
  private readonly authService = inject(AuthService);

  getUserPosts() {
    return this.httpClient.get('/api/posts/' + this.authService.userId);
  }
}

View on GitHub

#68253 JeanMeche posted on Apr 16, 2026

The time has come. Note: #67382 introduced a breaking change where you could notice some subtle timing change on how value is set when using rxResource or a stream on a resource.

View on GitHub
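If you haven't used the resource APIs yet, a minimal httpResource example shows why stabilization matters; this sketch assumes a hypothetical /api/posts endpoint and Post type, and provideHttpClient() in the app config:

import { Component } from '@angular/core';
import { httpResource } from '@angular/common/http';

interface Post {
  id: number;
  title: string;
}

@Component({
  selector: 'app-posts',
  template: `
    @if (posts.isLoading()) { <p>Loading…</p> }
    @if (posts.value(); as list) { <p>{{ list.length }} posts loaded</p> }
  `,
})
export class PostsComponent {
  // One declarative line replaces the fetch-subscribe-store boilerplate;
  // value(), isLoading() and error() are all exposed as signals.
  readonly posts = httpResource<Post[]>(() => '/api/posts');
}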
Alejandro Cuba Ruiz wrote on the ng-conf Medium publication about what he calls the five lifecycle stages of an Angular feature: experimental, developer preview, stable, deprecated, and removed. The article is not a secret internal document; it lines up with what we occasionally also hear from the Angular team. Experimental means the API can move under minors or patches, so the author argues to keep it out of production and to experiment in isolation. Developer preview is "we like the design, show us real-world pain" - safer than experimental, still not semver-frozen. Stable is the contract: breaking changes belong in majors, with migrations when possible. Deprecated gives you the familiar two-major-version runway, and removed is the point where the code is actually gone. He also mentions tooling outside the official docs, for example a "Can I Use Angular Features" grid from Gerome Grignon, which can be handy if you support more than one Angular version. So it's quite a sound guide to help you decide if and when you want to use a certain feature.

The 5 lifecycle stages of an Angular feature

Native Federation is, next to Module Federation, one of the most used libraries when it comes to building microfrontends in Angular. Native Federation version 4 was released and brings a bunch of updates, the most visible one being a website with a lot of documentation. The GitHub repository was also moved to a new organization, the code is now split into multiple repositories, and an orchestrator that manages the microfrontends takes (or will take) over the old runtime.

In a former episode of ng-news, we've already covered Void Zero's experiments on writing the Angular compiler in the Oxidation Compiler, which is built on Rust. There is now an official article where they publish official numbers and also give some background information on their development process, which is of course heavily AI-based. They describe on the order of two months of work, mainly with Claude Code and Codex, under the steering of experienced engineers - not "the model wrote a compiler overnight," but agents doing large-scale porting and review work once the architecture and guardrails exist. The headline numbers are eye-catching: they report on the order of six times faster compiles than the Angular CLI. As always, be careful with those numbers, but they could be reasonable, especially since they omitted some heavy tasks like template type-checking. They also admit that the missing type-checking is a significant performance booster. The compiler is available as a Vite plugin, @oxc-angular/vite. VoidZero also writes that the repository will not be maintained long term. In parallel, they note that the Angular team has its own experiments with Oxc.

voidzero-dev/oxc-angular-compiler

Dev.to (Angular)
~12 min read · May 4, 2026

Build a Streaming Gemini Chat in Angular with Signals — Then Ship It on Cloud Run

If you have built a chat UI for a large language model in the last two years, you probably reached for RxJS, an OnPush component, an async pipe, and a BehaviorSubject per piece of state. It worked, but it was a lot of plumbing for what is fundamentally a very simple shape: one string that grows over time. Angular Signals collapse that plumbing into a single primitive. And it turns out that streaming Gemini responses with Signals is one of the cleanest, most satisfying pieces of code you can write in modern Angular today.

In this tutorial we will build a working Google AI chat component, in roughly one hundred lines, that streams tokens from Gemini in real time, supports a stop button, and feels native on desktop and mobile. Then we will ship it safely on Cloud Run with a thin proxy, so you can drop a live, embedded demo into your post.

A streaming LLM response is, mechanically, a sequence of small text deltas arriving over a fetch stream. Old-school Angular handled this with Subjects, async pipes, and a lot of trust that change detection would do the right thing. Signals reframe the problem. A signal<string>('') is just a value that you call .update() on. Each update notifies only the views that read that signal, and Angular 20 with zoneless change detection skips the whole-tree dirty check entirely. That means you can call .update() thirty times a second from inside a for await loop and your UI will not break a sweat.

There is also a smaller, ergonomic win. With Signals the rendering rule is "whatever the signal is at this instant." Streaming chat is a value that is visibly mid-update, and Signals give you the perfect vocabulary for that — the in-flight token buffer is just another signal, alongside the committed message history.

What we are building: a single-page Angular app with one component. You type a question, hit send, and watch Gemini's answer stream in word by word. There is a stop button that cancels the stream, a running history of messages, and that is it. We will use Angular 20 standalone components, Signals, the new control flow (@for, @if), and the official @google/genai SDK. You can find the finished repo on GitHub at the link at the bottom of this post.

You will need Node 20 or newer, the Angular CLI (npm i -g @angular/cli), and a Gemini API key from Google AI Studio. The free tier is more than enough to follow along.

A note on the API key, because this matters: in the local version we read the key from an environment variable that gets bundled into the client. That is fine for local exploration. It is not fine for production. Anything in your bundle is visible to anyone who opens DevTools. We will fix this in the deploy section by adding a small proxy on Cloud Run — the key stays on the server, and the Angular code barely changes.

Spin up a new Angular project with the CLI:

ng new gemini-stream --standalone --routing=false --style=css --skip-tests
cd gemini-stream
npm i @google/genai

Open src/environments/environment.ts (create it if the CLI did not) and add your key:

export const environment = {
  geminiApiKey: 'YOUR_AI_STUDIO_KEY_HERE',
};

Add the same file under environment.development.ts if you use a separate dev environment, and make sure .gitignore keeps these out of source control if you put a real key in.

In src/app/app.config.ts, opt into zoneless change detection. By Angular 20 this is a stable provider, and it gives you the per-signal update path that makes streaming feel snappy:

import { ApplicationConfig, provideZonelessChangeDetection } from '@angular/core';

export const appConfig: ApplicationConfig = {
  providers: [provideZonelessChangeDetection()],
};

That is the entire setup. On to the interesting bits.

Create src/app/gemini.service.ts. The job of this service is small: take a chat history, return an async iterable of text deltas, and let the caller stop early.

import { Injectable } from '@angular/core';
import { GoogleGenAI } from '@google/genai';
import { environment } from '../environments/environment';

export type ChatRole = 'user' | 'model';

export interface ChatMessage {
  role: ChatRole;
  content: string;
}

@Injectable({ providedIn: 'root' })
export class GeminiService {
  private ai = new GoogleGenAI({ apiKey: environment.geminiApiKey });

  async *stream(
    history: ChatMessage[],
    shouldStop: () => boolean = () => false,
  ): AsyncGenerator<string> {
    const response = await this.ai.models.generateContentStream({
      model: 'gemini-2.5-flash',
      contents: history.map((m) => ({
        role: m.role,
        parts: [{ text: m.content }],
      })),
    });

    for await (const chunk of response) {
      if (shouldStop()) return;
      const text = chunk.text;
      if (text) yield text;
    }
  }
}

Three things worth pointing out here. First, generateContentStream returns an async iterable of chunks. Each chunk has a text getter that gives you the new tokens for that step. That is all the SDK asks of you. Second, we accept a shouldStop predicate instead of an AbortController. This keeps cancellation logic on our side, where it composes nicely with Signals — the predicate is going to read a signal, and the moment the user clicks Stop, the next iteration of the loop bails out. Third, the service yields strings, not chunks. By the time anything else in the app sees a delta, it is already plain text. That keeps our chat component free of any SDK-specific types.

Now the chat component. Create src/app/chat.component.ts and start with the state. The whole point of this article is in this section, so read it slowly.

import {
  ChangeDetectionStrategy, Component, computed, effect, inject,
  signal, viewChild, ElementRef,
} from '@angular/core';
import { GeminiService, ChatMessage } from './gemini.service';

@Component({
  selector: 'app-chat',
  standalone: true,
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<!-- coming up next -->`,
  styles: [`/* coming up next */`],
})
export class ChatComponent {
  private gemini = inject(GeminiService);

  readonly messages = signal<ChatMessage[]>([]);
  readonly draft = signal('');
  readonly streaming = signal('');
  readonly isStreaming = signal(false);
  readonly stopRequested = signal(false);

  readonly canSend = computed(
    () => this.draft().trim().length > 0 && !this.isStreaming(),
  );

  private scroller = viewChild<ElementRef<HTMLDivElement>>('scroller');

  constructor() {
    effect(() => {
      // Read the streaming buffer and message count to re-trigger on every update,
      // then scroll to the bottom on the next animation frame.
      this.streaming();
      this.messages().length;
      const el = this.scroller()?.nativeElement;
      if (el) requestAnimationFrame(() => (el.scrollTop = el.scrollHeight));
    });
  }

  async send() {
    if (!this.canSend()) return;

    const userMessage: ChatMessage = { role: 'user', content: this.draft().trim() };
    this.messages.update((m) => [...m, userMessage]);
    this.draft.set('');
    this.streaming.set('');
    this.isStreaming.set(true);
    this.stopRequested.set(false);

    try {
      for await (const delta of this.gemini.stream(
        this.messages(),
        () => this.stopRequested(),
      )) {
        this.streaming.update((s) => s + delta);
      }
    } catch (err) {
      this.streaming.update((s) => s + `\n\n_Error: ${(err as Error).message}_`);
    } finally {
      const final = this.streaming();
      if (final) {
        this.messages.update((m) => [...m, { role: 'model', content: final }]);
      }
      this.streaming.set('');
      this.isStreaming.set(false);
    }
  }

  stop() {
    this.stopRequested.set(true);
  }
}

Five signals carry the entire state of the chat. messages is the committed history. draft is what is in the textarea. streaming is the buffer for the in-flight assistant reply, separate from the history so we can render it differently. isStreaming and stopRequested are the control flags.

Notice that canSend is a computed. We never write to it, we never subscribe to it; we just read it from the template and Angular figures out when it changes. That single line replaces the form-validation observable boilerplate you might be used to.

The effect is doing the auto-scroll. By reading streaming() and messages().length inside the effect, we tell Angular: "rerun me whenever either of these changes." Then we scroll the chat container to the bottom on the next frame. This is the kind of small DOM concern that used to require AfterViewChecked and a flag; here it is six lines.

The send method is where streaming meets state. We push the user message, clear the buffer, then iterate over the service's async generator and call .update() on the streaming signal for each delta. When the loop ends (or the user hits Stop, which makes shouldStop return true on the next iteration), we commit whatever was in the buffer to the message history and reset.

Replace the placeholder template and styles in the same file:
template: `
  <div class="shell">
    <div class="scroller" #scroller>
      @for (m of messages(); track $index) {
        <div class="msg {{ m.role }}">{{ m.content }}</div>
      }
      @if (isStreaming() && streaming()) {
        <div class="msg model streaming">{{ streaming() }}<span class="cursor"></span></div>
      }
    </div>
    <form class="composer" (submit)="$event.preventDefault(); send()">
      <textarea
        rows="2"
        placeholder="Ask Gemini something..."
        [value]="draft()"
        (input)="draft.set($any($event.target).value)"
        (keydown.enter)="$event.preventDefault(); send()"
      ></textarea>
      @if (isStreaming()) {
        <button type="button" (click)="stop()">Stop</button>
      } @else {
        <button type="submit" [disabled]="!canSend()">Send</button>
      }
    </form>
  </div>
`,
styles: [`
  .shell { display: flex; flex-direction: column; height: 100dvh; max-width: 720px; margin: 0 auto; font-family: system-ui, sans-serif; }
  .scroller { flex: 1; overflow-y: auto; padding: 1rem; display: flex; flex-direction: column; gap: 0.75rem; }
  .msg { padding: 0.75rem 1rem; border-radius: 12px; white-space: pre-wrap; line-height: 1.5; max-width: 85%; }
  .msg.user { align-self: flex-end; background: #4285f4; color: white; }
  .msg.model { align-self: flex-start; background: #f1f3f4; color: #202124; }
  .cursor { display: inline-block; width: 0.5ch; background: currentColor; margin-left: 2px; animation: blink 1s steps(1) infinite; }
  @keyframes blink { 50% { opacity: 0; } }
  .composer { display: flex; gap: 0.5rem; padding: 1rem; border-top: 1px solid #eee; }
  textarea { flex: 1; resize: none; padding: 0.75rem; border-radius: 12px; border: 1px solid #ddd; font: inherit; }
  button { padding: 0 1.25rem; border-radius: 12px; border: none; background: #4285f4; color: white; font-weight: 600; cursor: pointer; }
  button:disabled { opacity: 0.5; cursor: not-allowed; }
`]

The new control flow (@for, @if, @else) makes this template read like a small story: render every committed message, then render the in-flight reply if there is one, then show Send or Stop based on whether we are mid-stream. The blinking cursor on the streaming bubble is a tiny detail that makes the whole thing feel alive.

Wire the component into src/app/app.component.ts as the only thing rendered, run ng serve, and you should have a working streaming chat at http://localhost:4200.

The local app calls Gemini directly with a key in the bundle. To ship it safely we need two small moves: a tiny server proxy that holds the key, and Cloud Run to host both the proxy and the static Angular build. Create server/index.ts at the project root:

import express from 'express';
import { GoogleGenAI } from '@google/genai';

const app = express();
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY! });

app.use(express.json({ limit: '4mb' }));
app.use(express.static('dist/gemini-stream/browser'));

app.post('/api/stream', async (req, res) => {
  res.setHeader('Content-Type', 'text/plain; charset=utf-8');
  res.setHeader('Transfer-Encoding', 'chunked');

  const stream = await ai.models.generateContentStream({
    model: 'gemini-2.5-flash',
    contents: req.body.contents,
  });

  for await (const chunk of stream) {
    if (chunk.text) res.write(chunk.text);
  }
  res.end();
});

app.listen(process.env.PORT || 8080);

Update gemini.service.ts to read from the proxy with fetch instead of calling the SDK in the browser. The SDK and the API key never leave the server:

async *stream(history: ChatMessage[], shouldStop = () => false) {
  const res = await fetch('/api/stream', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: history.map((m) => ({ role: m.role, parts: [{ text: m.content }] })),
    }),
  });

  const reader = res.body!.pipeThrough(new TextDecoderStream()).getReader();
  while (true) {
    if (shouldStop()) {
      reader.cancel();
      return;
    }
    const { value, done } = await reader.read();
    if (done) return;
    if (value) yield value;
  }
}

This is the part I love about the Signals architecture: the component code does not change at all. The signals do not care that the bytes are coming from a Cloud Run service now instead of the SDK. Same loop, same streaming.update() call.

Add a Dockerfile at the project root:

FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npx tsc -p server

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/server/dist ./server
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package*.json ./
ENV NODE_ENV=production
CMD ["node", "server/index.js"]

Then ship it with one command — Cloud Run will build the container from source for you:

gcloud run deploy gemini-stream \
  --source . \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars GEMINI_API_KEY=YOUR_AI_STUDIO_KEY

You will get back a URL like https://gemini-stream-xxxxxx.us-central1.run.app. Test it in the browser, confirm the chat works end to end, and you are done. The fun part: dev.to has a first-class Cloud Run embed, so here you go.

The whole thing — service, component, template, styles — comes in just over a hundred lines. Compare that to an equivalent app two years ago and you will notice what is missing: there is no Subject, no BehaviorSubject, no async pipe, no OnPush boilerplate that you have to think about, no manual subscription cleanup. Signals plus the new control flow plus zoneless change detection is genuinely a different programming model, and streaming AI is the application that shows it off best.

A couple of small things to try next, in roughly increasing order of effort:

Add a systemInstruction to the generateContentStream call to give your model a persona. The SDK accepts it alongside contents on the proxy side (a sketch follows this list).
Switch from text-only input to multimodal: drop an image into the chat and forward it from the proxy as a parts entry of { inlineData: { mimeType, data } }. Gemini handles the rest.
Prefer Firebase to Cloud Run? Firebase AI Logic gives you the same proxy pattern with less infra — install firebase and @firebase/ai, and the SDK shape stays almost identical. You give up the dev.to Cloud Run embed, but the Angular code is unchanged.
Try the same UI against Chrome's Built-in AI (Gemini Nano running on-device, no key, no network). The Prompt API has its own streaming primitive that drops into the same Signal-based shell with almost no changes, and you get an offline-capable chat for free.
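As a sketch of the first suggestion, on the proxy side (the persona string is made up, and where systemInstruction sits varies by SDK version; in @google/genai it goes under config):

// server/index.ts, inside the /api/stream handler
const stream = await ai.models.generateContentStream({
  model: 'gemini-2.5-flash',
  contents: req.body.contents,
  config: {
    // Hypothetical persona; you could also read this from the request body.
    systemInstruction: 'You are a concise assistant. Answer in plain text.',
  },
});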
If you take one thing away from this post, let it be that Signals were designed for values that change a lot, and an LLM stream is the canonical example of a value that changes a lot. The pieces fit so cleanly that the resulting code reads more like a description of the UI than like a program.

Repo: https://github.com/TomWebwalker/gemini-stream-angular

If you build something with this, drop a link in the comments — I would love to see what people make of it.

Dev.to (Angular)
~5 min read · May 4, 2026

Angular: Better Loading Indicator Directive With CDK

In my last post I demonstrated how to make an Angular loading indicator without the CDK. In this post I will show you how to make a loading indicator using the CDK that does not rely solely on observables: it supports promises and resources as well as observables.

First, let's create the component that will simply contain the backdrop and indicator.

loader-component.ts

import { Component, ChangeDetectionStrategy } from '@angular/core';

@Component({
  selector: 'app-loader',
  templateUrl: './loader-component.html',
  styleUrls: ['./loader-component.scss'],
  changeDetection: ChangeDetectionStrategy.OnPush
})
export class LoaderComponent {}

loader-component.html

<div class="backdrop">
  <div class="lds-ripple">
    <div></div>
    <div></div>
  </div>
</div>

loader-component.scss

.backdrop {
  width: 100%;
  height: 100%;
  display: flex;
  align-items: center;
  justify-content: center;
  position: absolute;
  background: rgba(160, 160, 160, 0.3);
}

.lds-ripple {
  display: inline-block;
  position: relative;
  width: 80px;
  height: 80px;
}

.lds-ripple div {
  position: absolute;
  border: 4px solid black;
  opacity: 1;
  border-radius: 50%;
  animation: lds-ripple 1s cubic-bezier(0, 0.2, 0.8, 1) infinite;
}

.lds-ripple div:nth-child(2) {
  animation-delay: -0.5s;
}

@keyframes lds-ripple {
  0% {
    top: 36px;
    left: 36px;
    width: 0;
    height: 0;
    opacity: 1;
  }
  100% {
    top: 0;
    left: 0;
    width: 72px;
    height: 72px;
    opacity: 0;
  }
}

Now we need a directive that accepts a loader parameter that is either an Observable, Promise, or Resource, and displays an overlay containing the LoaderComponent positioned over the host element while the work is in flight.

loader.ts

import {
  Directive, input, effect, Resource, inject, ElementRef,
  OnInit, signal, OnDestroy
} from '@angular/core';
import { Observable, Subscription } from 'rxjs';
import { OverlayRef, Overlay } from '@angular/cdk/overlay';
import { ComponentPortal } from '@angular/cdk/portal';
import { LoaderComponent } from './loader-component/loader-component';

@Directive({
  selector: '[loader]'
})
export class Loader implements OnInit, OnDestroy {
  loader = input<Observable<any> | Promise<any> | Resource<any> | null>(null);
  observer: IntersectionObserver | null = null;

  private host = inject<ElementRef<HTMLElement>>(ElementRef).nativeElement;
  private subscription: Subscription | null = null;
  private isVisible = signal(false);
  private overlay = inject(Overlay);
  private overlayRef: OverlayRef | null = null;

  private isAsync() {
    return this.loader() instanceof Observable || this.loader() instanceof Promise;
  }

  constructor() {
    effect(() => {
      const loader = this.loader();
      const isVisible = this.isVisible();
      this.subscription?.unsubscribe();
      if (loader) {
        if (isVisible) {
          if (!this.isAsync()) {
            let resource = loader as Resource<any>;
            if (resource.isLoading()) {
              this.showLoader();
            } else {
              this.hideLoader();
            }
          } else if (loader instanceof Observable && !this.subscription) {
            this.showLoader();
            this.subscription = loader.subscribe({
              next: () => this.hideLoader(),
              error: () => this.hideLoader(),
              complete: () => this.hideLoader()
            });
          } else {
            this.showLoader();
            (loader as Promise<any>).then(() => this.hideLoader(), () => this.hideLoader());
          }
        } else {
          this.hideLoader();
        }
      } else {
        this.subscription = null;
        this.hideLoader();
      }
    });
  }

  ngOnInit() {
    this.observer = new IntersectionObserver(([entry]) => {
      // checkVisibility covers the case where the element is scrolled out of view but still rendered
      this.isVisible.set(entry.isIntersecting || entry.target.checkVisibility());
    }, {
      root: null,
      rootMargin: '0px',
      threshold: 0.1
    });
    this.observer.observe(this.host);
  }

  ngOnDestroy() {
    this.hideLoader();
    this.observer?.disconnect();
    this.observer = null;
  }

  private showLoader() {
    if (!this.overlayRef) {
      const positionStrategy = this.overlay
        .position()
        .flexibleConnectedTo(this.host)
        .withPositions([{
          originX: 'start',
          originY: 'top',
          overlayX: 'start',
          overlayY: 'top'
        }])
        .withFlexibleDimensions(false)
        .withPush(false)
        .withViewportMargin(8);

      this.overlayRef = this.overlay.create({
        positionStrategy,
        scrollStrategy: this.overlay.scrollStrategies.reposition(),
        width: this.host.offsetWidth,
        height: this.host.offsetHeight,
      });
      this.overlayRef.attach(new ComponentPortal(LoaderComponent));
    }
  }

  private hideLoader() {
    this.subscription?.unsubscribe();
    this.subscription = null;
    this.overlayRef?.dispose();
    this.overlayRef = null;
  }
}

The constructor creates an effect that runs any time the value of loader changes or the visibility of the element changes. If the loader is a Resource, it also runs every time its isLoading signal changes. The effect first unsubscribes from the previous observable subscription, if there is one, so you don't end up with a rogue subscription in case the loader's value changed. If loader is null, it calls hideLoader to ensure the indicator is hidden in case it was shown when the value changed to null.

In the effect, it checks whether the host element is visible, and if it is not, hideLoader is called. If it is visible, it checks what type the loader is and executes one of the following actions based on the type. If it is a resource, it shows the loader while its isLoading signal is true and hides it when it is false. If it is an observable, it calls showLoader, subscribes, and calls hideLoader on next, error, and complete. If it is a promise, it calls showLoader, then calls hideLoader when the promise resolves or rejects.

ngOnInit creates an IntersectionObserver that watches the host element's visibility. It fires every time the element's visibility in the viewport changes.

showLoader creates an overlay that is positioned at the top left corner of the host element and sized to the host element's width and height. Then it creates a component portal constructed with LoaderComponent and attaches it to the overlay. You need to set the width and height of your loader component to 100% so it stretches over the overlay.

hideLoader unsubscribes from the observable's subscription if there is one and tears down the overlay.

ngOnDestroy calls hideLoader and tears down the intersection observer.

A cold observable is an observable that gets executed every time it gets subscribed to, like the one from Angular's HttpClient or the of operator from RxJS. If you pass a cold observable to the Loader directive, the work will execute twice: once for the directive's subscription and once for your own. To prevent this behavior, pipe your observable through shareReplay:

let source = this.http.get<ResponseType>('https://api.your-site.com/your/url').pipe(shareReplay(1));
this.loader = source;
source.subscribe({
  next: (res) => {
    // ... do stuff ...
  }
});
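For completeness, here is what applying the directive might look like in a host component (the User type, endpoint, and users$ stream are illustrative assumptions, not code from the post):

import { Component, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { shareReplay } from 'rxjs';
import { Loader } from './loader';

interface User { id: number; name: string; }

@Component({
  selector: 'app-users',
  imports: [Loader],
  template: `
    <!-- The ripple overlay covers this section while the request is in flight -->
    <section [loader]="users$">
      ...
    </section>
  `,
})
export class UsersComponent {
  // shareReplay(1) keeps the directive's subscription from triggering a second request
  users$ = inject(HttpClient).get<User[]>('/api/users').pipe(shareReplay(1));
}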

Dev.to (Angular)
~4 min read · May 4, 2026

We Rewrote Our Angular 18 App in React 20 and Increased Developer Velocity by 40%

Last quarter, our engineering team made the bold call to rewrite our 3-year-old Angular 18 production application in React 20. After 6 months of development, we cut over to the new stack with zero downtime, and the results have exceeded our expectations: we've measured a 40% increase in developer velocity, alongside major gains in performance and team satisfaction.

Our Angular 18 app had grown to over 400 components, with a mix of NgRx for state management, custom RxJS operators, and a complex dependency injection tree. While Angular served us well early on, we hit several scaling pain points:

Build times ballooned to 8+ minutes for full production builds, slowing CI/CD pipelines and developer feedback loops.
New hires took 3+ weeks to ramp up on Angular's opinionated structure, RxJS reactive patterns, and Ivy compiler quirks.
Bundle sizes grew to 2.1MB for our main entry point, leading to slow initial load times for users on slower networks.
Custom directive and pipe maintenance became a bottleneck, with tight coupling to Angular's internal APIs.

We evaluated several options, including upgrading to Angular 19, but ultimately chose React 20 for its flexible, unopinionated structure, mature ecosystem, and new React 20 features including concurrent rendering, Server Components, and improved Suspense support. We opted for a phased full rewrite rather than incremental micro frontend adoption, as our app's tight coupling made incremental changes risky. Our 6-month timeline broke into three phases.

First, we mapped all 400+ Angular components to React equivalents, documented state management flows, and identified 12 custom Angular directives that needed React porting. We standardized on TypeScript strict mode for both stacks, which minimized type migration work. We also selected supporting libraries: Redux Toolkit for state management (replacing NgRx), React Router 7 for routing, and React Testing Library for unit tests.

Second, we built React equivalents of all core components first, including our design system, auth flows, and dashboard framework. We used custom codemods to auto-convert 60% of simple Angular components to React, then manually ported complex logic. To avoid disrupting ongoing feature work, we kept the Angular app in maintenance mode, with new features built in React behind feature flags.

Finally, we ran parallel load tests on both apps, fixed 14 critical bugs in the React build, and used a weighted traffic shift (10% → 50% → 100%) to cut over with zero downtime. We kept the Angular codebase in read-only mode for 2 weeks post-cutover as a fallback, but never needed it.

We measured developer velocity using three core metrics: story points completed per sprint, PR cycle time, and build time. The results after 3 months of running on React 20:

Developer velocity increased by 40%: our team of 12 engineers went from completing 85 story points per sprint to 119.
PR cycle time dropped from 3.2 days to 1.4 days, thanks to simpler component logic and faster test runs.
Full production build times fell from 8.1 minutes to 2.3 minutes, a 71% reduction.
New hire ramp-up time decreased from 3 weeks to 1 week, as React's simpler mental model and larger community resources eased onboarding.
Main bundle size shrank by 32% to 1.4MB, cutting initial load time by 27% for our global user base.

Rewriting a production app is never without risk. Here are our key takeaways:

Define clear success metrics upfront: we tied the rewrite to velocity and performance goals, which helped secure stakeholder buy-in and kept the team aligned.
React 20's Server Components delivered unexpected wins: we eliminated client-side rendering for 60% of our static pages, improving SEO and reducing client-side JS load.
Don't underestimate state management migration: porting NgRx flows to Redux Toolkit took 3x longer than we estimated, as we had to unwind complex RxJS observable chains.
Concurrent rendering improved UX for heavy interactions: our data table with 10k+ rows now renders without blocking the main thread, a common pain point in our Angular build.

The rewrite was a significant investment, but the 40% velocity gain has already paid back the development time in 4 months. Our team reports higher job satisfaction, we're shipping features faster, and our users are getting a faster, more reliable app. If your team is hitting similar scaling pain points with Angular, React 20 is a compelling alternative worth evaluating.

Dev.to (Angular)
~9 min read · May 4, 2026

White Labeling in Angular: One Codebase, Multiple Clients

White labeling is more common than you might think. When developing software, you often need to deploy the same application for multiple clients, each requiring their own customization: unique color palettes, logos, or specific variants for a link. Without a proper strategy, you might be tempted to simply clone the existing repository and implement client-specific changes on demand. However, this approach has a major drawback: maintenance hell. Every time a feature is added, or a bug is fixed, you must manually propagate that change across every single clone. This might be manageable for two or three instances, but it quickly becomes impossible to maintain once you reach a dozen clients or face a complex architectural shift.

In this article, we will explore how to create a white-labelled Angular app that supports multiple targets efficiently. We will start by defining white labeling, then discuss how to design an Angular app around this concept, and finally, how to leverage Angular's tools to achieve this in a scalable way.

If you're not familiar with the term, white labeling is about creating a generic solution that you can publish multiple times under different brands. In practice, you build your product once, then deploy it many times with different logos, colors or text. Think of it like a soda sold under different packaging: the liquid remains the same, but the product appears distinct. The main benefit of this approach is efficiency: you develop and maintain one application instead of many, which drastically reduces the cost of onboarding new clients or updating existing ones. Even more important: you definitely don't want the assets of one brand leaking into the build of another. Any software that needs to serve multiple brands from a single codebase can benefit from this: mobile apps, web services, desktop applications. The principle is technology-agnostic.

Here is our next successful SaaS we will be adapting as a white-labelled app. For now it's rather empty, but we can already identify three main areas we will want to customize: the stylesheet (what if our brand uses a specific color?), the assets (a brand will likely have its own logo), and configuration keys such as the brand name.

In this case, let's work towards setting up a build tailored for Angular. Ideally, we would have a centralized place where all our build-specific files would live, something like:

.
├── public/      ← default assets
│   ├── favicon.ico
│   └── main.png
├── src/         ← core application
└── targets/     ← defines all the brands
    └── angular/ ← contains the angular build specific files

Our goal is to run this:

pnpm build --configuration angular

This should generate a bundle that contains the core of the application, but with everything defined in targets/angular/ overriding the default content. Since that will impact the build step, our journey will start in the angular.json file, by defining the new angular target:

{
  ...
  "projects": {
    "<project-name>": {
      "architect": {
        "build": {
          "builder": "@angular/build:application",
          "options": { ... },
          "configurations": {
            "production": { ... },
            "development": { ... },
            "angular": { }
          }
        }
      }
    }
  }
}

That alone won't be enough, but we can already execute the command without an error.

The fastest way to recognize a brand at a glance is the logo, and Angular has a great one. Following our initial goal, we will create it in targets/angular/assets/main.png. Using the same name across targets is a good convention that makes it easy to identify which file is used where without too many variants everywhere.

.
├── public/
│   ├── favicon.ico
│   └── main.png
├── src/
└── targets/
    └── angular/
        └── assets/
            └── main.png

By using the same name, we can take advantage of how the Angular architect handles the assets. In angular.json, the assets array defines a set of entries that will be copied to the output directory. If two entries produce a file with the same name, the last one overrides the first. In our case, we can instruct it to first copy the content of public/ with all default assets for our app, and then copy the content of targets/angular/assets to override those defaults:

{
  "build": {
    "configurations": {
      "angular": {
        "assets": [
          { "glob": "**/*", "input": "public" },
          { "glob": "**/*", "input": "targets/angular/assets" }
        ]
      }
    }
  }
}

With that change, rebuilding our app outputs a new bundle that uses the assets we defined. Great! But now the text feels a bit off, so let's fix that.

I'm not very good with design and CSS, but that plain black text alongside the Angular logo looks a bit aggressive; it would be nice to have some purple in there. In our example app, we have the following styles:

/* 📂 theme.css */
:root {
  --brand-primary: black;
}

/* 📂 styles.css */
html {
  color: var(--brand-primary);
}

These styles are referenced in angular.json under architect.build.options:

{
  "build": {
    "options": {
      ...
      "styles": ["src/styles.css", "src/theme.css"]
    }
  }
}

For our angular configuration, we do not want to change the base styles.css, which will probably handle a lot of customization, specifics, and maybe even TailwindCSS layers. However, overriding the CSS variables is something we can do without causing any harm. For that, let's first design our own theme in targets/angular/styles/theme.css:

.
├── public/
├── src/
└── targets/
    └── angular/
        ├── assets/
        │   └── main.png
        └── styles/
            └── theme.css

We can now add some purple in there:

:root {
  --brand-primary: purple;
}

Now that everything is in place, we can simply instruct Angular to use this theme file when building the angular configuration:

{
  "build": {
    "configurations": {
      "angular": {
        "assets": [...],
        "styles": ["src/styles.css", "targets/angular/styles/theme.css"]
      }
    }
  }
}

Let's rebuild our app, and marvel at our design decision. The design looks fine (right?) but the content is not quite there yet: we are definitely not publishing for white label, yet the title says otherwise. Let's look at how the component is defined:

// 📂 src/app/app.ts
import { ChangeDetectionStrategy, Component } from '@angular/core';
import { environment } from '../environment';

@Component({
  selector: 'app-root',
  template: `
    <h1>Welcome to {{ targetName }}</h1>
    <img src="main.png" alt="brand logo" height="80px" width="auto" />
  `,
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class App {
  protected readonly targetName = environment.targetName;
}

The name is defined neither by the CSS nor by the assets, but by a value read from an environment.ts file:

// 📂 environment.ts
export const environment = {
  targetName: 'white label',
};

For this use case, we can take advantage of another option of the Angular architect's target: fileReplacements. This property allows you to define a collection of file replacements, each mapping a source path to the file that should replace it. In our case, let's replace this environment file with our own:

.
├── public/
│   ├── favicon.ico
│   └── main.png
├── src/
└── targets/
    └── angular/
        ├── assets/
        │   └── main.png
        ├── overrides/
        │   └── environment.ts
        └── styles/
            └── theme.css

// 📂 targets/angular/overrides/environment.ts
export const environment = {
  targetName: 'Angular',
};

We can then instruct Angular to use this file instead of the default one:

{
  "build": {
    "configurations": {
      "angular": {
        "assets": [...],
        "styles": ["src/styles.css", "targets/angular/styles/theme.css"],
        "fileReplacements": [
          {
            "replace": "src/environment.ts",
            "with": "targets/angular/overrides/environment.ts"
          }
        ]
      }
    }
  }
}

Finally, rebuilding our app one more time will generate the output we were after.

📝 Note: fileReplacements is an array too, so you can swap as many files as you need.

That's it! After a few steps we successfully adapted the core application for a specific build target without touching the application itself. Let's ensure everything is working by adding a new target:

.
├── public/
│   ├── favicon.ico
│   └── main.png
├── src/
└── targets/
    ├── angular/
    │   ├── assets/
    │   │   └── main.png
    │   ├── overrides/
    │   │   └── environment.ts
    │   └── styles/
    │       └── theme.css
    └── red-corp/
        ├── assets/
        │   └── main.png
        ├── overrides/
        │   └── environment.ts
        └── styles/
            └── theme.css

And creating its associated build target:

{
  "build": {
    "configurations": {
      "angular": {
        "assets": [...],
        "styles": ["src/styles.css", "targets/angular/styles/theme.css"],
        "fileReplacements": [...]
      },
+     "red-corp": {
+       "assets": [...],
+       "styles": ["src/styles.css", "targets/red-corp/styles/theme.css"],
+       "fileReplacements": [
+         {
+           "replace": "src/environment.ts",
+           "with": "targets/red-corp/overrides/environment.ts"
+         }
+       ]
+     }
    }
  }
}

Finally, we can run pnpm ng b --configuration red-corp, which will generate a build for the new brand. Congrats!

Despite working well, the addition of a new target can be a bit tedious and error-prone if done manually. Fortunately, Angular has a concept designed for that exact purpose: schematics. "A schematic is a template-based code generator that supports complex logic. It is a set of instructions for transforming a software project by generating or modifying code. Schematics are packaged into collections and installed with npm." That sounds like something we are already doing: creating a bunch of folders and authoring angular.json. Since this is not the focus of the article, I won't dive into the implementation details here, but you are welcome to browse the article example's sources to see how it's done. Adding a new target is now a breeze.

In this article, we explored how to build a white-labelled Angular application that supports multiple clients from a single codebase. We started by defining what white labeling is and why a naive approach leads to maintenance problems at scale. We then worked through a concrete example, progressively introducing Angular's build configuration to override assets, styles and TypeScript files on a per-target basis. Finally, we saw how schematics can remove the remaining manual steps, making the addition of a new client a matter of running a single command.

The result is a setup where the core application remains untouched regardless of how many brands you support, and where onboarding a new client is just a matter of dropping files into a new folder and wiring up a configuration. If you would like to play around with the example, feel free to browse the sources on GitHub!

Photo by Andrzej Gdula on Unsplash

Dev.to (Angular)
~5 min read · May 4, 2026

Step-by-Step: Build a Progressive Web App with Angular 18 and Workbox 7.0

Progressive Web Apps (PWAs) combine the reach of web apps with the functionality of native apps, including offline access, push notifications, and installability. Angular 18 streamlines PWA development, and pairing it with Workbox 7.0 gives you granular control over service worker caching and behavior. This guide walks you through building a production-ready PWA from scratch.

Prerequisites:

Node.js 18.19+ installed (required for Angular 18)
Angular CLI 18+ installed globally: npm install -g @angular/cli@18
Basic familiarity with Angular components, services, and CLI commands
Workbox CLI 7.0 (we'll install this later)

Open your terminal and run the following command to scaffold a new Angular project:

ng new angular-pwa-demo --routing --style=css
cd angular-pwa-demo

This creates a project with routing enabled and CSS for styling. You can adjust the style flag to use SCSS or another preprocessor if preferred.

First, add the official Angular PWA package, which integrates service worker support with Angular's build pipeline:

ng add @angular/pwa@18

This command automatically configures your project to use Angular's service worker, adds a default ngsw-config.json (the configuration file for Angular's own service worker, not a Workbox file), and updates your angular.json to generate service worker assets during build.

Next, install the Workbox 7.0 modules and the Workbox CLI as dev dependencies to customize caching beyond what ngsw-config.json offers:

npm install workbox-precaching@7 workbox-routing@7 workbox-strategies@7 workbox-cli@7 --save-dev

Verify the installed Workbox versions by checking your package.json to ensure they're 7.0.x.

Open the ngsw-config.json file in your project root and define your caching strategies:

{
  "index": "/index.html",
  "assetGroups": [
    {
      "name": "app",
      "installMode": "prefetch",
      "resources": {
        "files": ["/favicon.ico", "/index.html", "/*.css", "/*.js"]
      }
    },
    {
      "name": "assets",
      "installMode": "lazy",
      "updateMode": "prefetch",
      "resources": {
        "files": [
          "/assets/**",
          "/*.(eot|svg|cur|jpg|jpeg|png|apng|webp|avif|gif|otf|ttf|woff|woff2|ani)"
        ]
      }
    }
  ],
  "dataGroups": [
    {
      "name": "api-freshness",
      "urls": ["/api/**"],
      "cacheConfig": {
        "strategy": "freshness",
        "maxSize": 100,
        "maxAge": "1h",
        "timeout": "5s"
      }
    }
  ]
}

This configuration prefetches core app assets, lazily loads and caches static assets, and uses a freshness strategy for API calls (serves cached data if the network times out after 5 seconds).

If you need Workbox 7.0 features not covered by ngsw-config.json, create a custom service worker file. First, create a src/custom-sw.js file:

import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';

// Precache the assets listed in the manifest injected at build time
precacheAndRoute(self.__WB_MANIFEST);

// Cache images with a stale-while-revalidate strategy
registerRoute(
  ({ request }) => request.destination === 'image',
  new StaleWhileRevalidate({
    cacheName: 'image-cache',
  })
);

If you stay with the Angular service worker, make sure angular.json keeps it enabled in the architect.build.options section:

"serviceWorker": true,
"ngswConfigPath": "ngsw-config.json"

Note: the Angular builder has no customServiceWorker option, and Angular's service worker and a custom Workbox worker are separate mechanisms, so pick one per app. If you go the Workbox route, compile and register custom-sw.js yourself (one approach is sketched below); refer to the official Angular service worker documentation for compatibility details.
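One way to wire up the custom-worker route, sketched under the assumption that you register the worker manually; the file names below are assumptions:

// workbox-config.js, consumed by `npx workbox injectManifest workbox-config.js`
module.exports = {
  // Source worker containing the self.__WB_MANIFEST placeholder
  swSrc: 'src/custom-sw.js',
  // Where the processed worker lands, next to the built app
  swDest: 'dist/angular-pwa-demo/browser/custom-sw.js',
  // Files from the build output to include in the precache manifest
  globDirectory: 'dist/angular-pwa-demo/browser',
  globPatterns: ['**/*.{html,js,css,ico,png,svg,woff2}'],
};

Because custom-sw.js uses ES module imports, you would still bundle it (for example with esbuild) around the manifest injection step; workbox-cli's generateSW mode avoids that step entirely if you don't need custom logic. Once built, register the worker yourself with navigator.serviceWorker.register('custom-sw.js').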
To test offline support, add a simple component that fetches data from an API. Generate a new component:

ng generate component offline-demo

Update the component's TypeScript file to fetch data from a public API (e.g., JSONPlaceholder); the snippet below also needs provideHttpClient() registered in your app config:

import { Component, OnInit } from '@angular/core';
import { AsyncPipe } from '@angular/common';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

interface Post {
  title: string;
  body: string;
}

@Component({
  selector: 'app-offline-demo',
  standalone: true,
  imports: [AsyncPipe],
  template: `
    @if (data$ | async; as data) {
      <h3>Post Title: {{ data.title }}</h3>
      <p>{{ data.body }}</p>
    } @else {
      <p>Loading...</p>
    }
  `
})
export class OfflineDemoComponent implements OnInit {
  data$!: Observable<Post>;

  constructor(private http: HttpClient) {}

  ngOnInit() {
    this.data$ = this.http.get<Post>('https://jsonplaceholder.typicode.com/posts/1');
  }
}

Add this component to your app's routes to access it via a URL.

First, build your Angular project for production:

ng build

This generates a dist/angular-pwa-demo folder with your production-ready app and service worker files. To test the PWA, serve the build output using a local HTTP server (required for service workers, as they don't run on the file:// protocol):

npx http-server dist/angular-pwa-demo/browser -p 8080

Open Chrome and navigate to http://localhost:8080. Open Chrome DevTools, go to the Application tab, then Service Workers. You should see your service worker registered and activated.

Test offline support: go to the Network tab, check "Offline", then refresh the page. Your app should still load, and the offline-demo component should display cached API data (if you visited the route while online first).

Run a Lighthouse audit (Lighthouse tab in DevTools) to verify PWA compliance: you should see checks for service worker registration, offline support, and installability pass.

PWAs require HTTPS to work in production (except on localhost). Deploy your build output to a static hosting provider that supports HTTPS, such as Firebase Hosting, Netlify, or Vercel. Most providers automatically handle service worker headers, but verify that ngsw.json and ngsw-worker.js (or your custom service worker) are served correctly.

You've now built a fully functional PWA with Angular 18 and Workbox 7.0, with offline support, caching strategies, and PWA compliance. You can extend this further by adding push notifications, background sync, or custom install prompts. Workbox 7.0's extensive plugin ecosystem and Angular's built-in PWA tooling make it easy to scale your PWA as your app grows.

Dev.to (Angular)
~10 min readMay 3, 2026

Your Abstractions Are Lying to You | Every File You Create Is a Debt

There's a book from 2018 called A Philosophy of Software Design. John Ousterhout wrote it. He spent his career building operating systems and distributed systems — Tcl, RAMCloud, Raft consensus. Not exactly frontend territory. But somewhere in that book, he describes a failure mode that I've seen in more Angular codebases than I can count. He calls it a shallow module. Every abstraction you create — a service, a directive, a component — has two dimensions. Interface: the surface exposed to the caller. How much does someone need to know to use this thing? Implementation: the work hidden inside. How much complexity does it actually absorb? A deep module has a narrow interface and a powerful implementation. You interact with it through a small surface, and it does a lot for you on the other side of that surface. A shallow module has an interface that costs roughly as much as what it gives back. You have to know a lot to use it, and it doesn't hide much in return. Ousterhout's canonical example is Unix file I/O. Five calls — open, read, write, close, seek — hide decades of complexity: file systems, permissions, buffering, disk abstraction, OS scheduling. You don't know any of that exists. You just call read. That's a deep module. Now think about the last service you wrote. Backend engineers are often forced into depth because they're hiding genuinely complex things — databases, networks, distributed state, file systems. The complexity exists before they write a single line of abstraction. Frontend developers are different. Nobody forced you to create that service. Nobody required that directive. You chose to add a layer. Which means the question of whether it justifies its existence is sharper — and the failure mode of shallow abstraction is far more common. You can write an entire Angular application full of files that follow every convention, satisfy every linter, pass every code review — and still have an app where every abstraction is lying to you. Where every layer pretends to be hiding something but is really just renaming what's below it. Let me show you what that looks like. And what the alternative looks like. Here's the shallow version. You've seen it. You've probably written it. @Injectable({ providedIn: 'root' }) export class UserService { constructor(private http: HttpClient) {} getUser(id: string) { return this.http.get(`/api/users/${id}`); } updateUser(id: string, payload: Partial<User>) { return this.http.put(`/api/users/${id}`, payload); } } Looks clean. But now look at the component that uses it. this.userService.getUser(id).subscribe({ next: (user) => { this.user = user; this.loading = false; }, error: (err) => { this.loading = false; this.error = 'Something went wrong'; console.error(err); } }); The component is managing loading state, error state, type casting, and retry logic. The service renamed http.get and nothing else. Every component that calls this service will write this same ceremony independently. That's a shallow service. The interface costs you an import and a constructor injection. The implementation gives you almost nothing in return. Now here's what a service looks like when it earns its existence. 
@Injectable({ providedIn: 'root' }) export class UserService { private http = inject(HttpClient); private cache = new Map<string, User>(); getUser(id: string) { // rxResource wraps the HTTP call and gives you .value(), .isLoading(), // and .error() as signals — no manual state management needed return rxResource({ loader: () => { if (this.cache.has(id)) { return of(this.cache.get(id)!); } return this.http.get<User>(`/api/users/${id}`).pipe( retry(2), tap(user => this.cache.set(id, user)) ); }, defaultValue: null, }); } updateUser(id: string, payload: Partial<User>) { return rxResource({ loader: () => this.http.put<User>(`/api/users/${id}`, payload).pipe( tap(updated => this.cache.set(id, updated)) ), defaultValue: null, }); } } The caller — a standalone component, no async pipe, no subscription: @Component({ selector: 'app-user-profile', template: ` @if (userResource.isLoading()) { <p>Loading...</p> } @if (userResource.error()) { <p>{{ normalizeError(userResource.error()) }}</p> } @if (userResource.value(); as user) { <h2>{{ user.name }}</h2> <p>{{ user.email }}</p> } ` }) export class UserProfileComponent { private userService = inject(UserService); private userId = inject(ActivatedRoute).snapshot.params['id']; userResource = this.userService.getUser(this.userId); normalizeError(err: unknown): string { if (err instanceof HttpErrorResponse) { if (err.status === 404) return 'User not found.'; if (err.status === 403) return 'You don\'t have permission to do this.'; if (err.status >= 500) return 'Server error. Please try again later.'; } return 'An unexpected error occurred.'; } } The component doesn't subscribe. Doesn't manage loading state. Doesn't catch errors manually. rxResource gives it .isLoading(), .value(), and .error() as signals — and Angular's new control flow (@if) reacts to them directly. Notice that normalizeError lives in the component here — and that's intentional. The service absorbs infrastructure complexity: retries, caching, HTTP mechanics. Error presentation — what a 404 means to the user — is a UI concern. Each layer owns what belongs to it. When the backend team changes their error format, you fix it in one place. Every component heals automatically. That's what depth looks like. The interface is a single method call. The implementation absorbs retries, caching, Observable-to-Signal bridging, and loading state — none of which the caller ever sees. The shallow version: @Directive({ selector: '[appHighlight]' }) export class HighlightDirective { constructor(private el: ElementRef) {} @HostListener('mouseenter') onMouseEnter() { this.el.nativeElement.classList.add('highlighted'); } @HostListener('mouseleave') onMouseLeave() { this.el.nativeElement.classList.remove('highlighted'); } } A directive that toggles one class on hover. You could replace this with a CSS :hover rule. It exists as a file, needs to be declared, imported, and understood — for two lines of functionality. Shallow. A directive becomes deep when it hides something the caller genuinely cannot do cheaply: event coordination, DOM lifecycle, cleanup contracts, timing logic. Here's a real example — click-outside detection. 
@Directive({ selector: '[appClickOutside]', standalone: true }) export class ClickOutsideDirective { readonly clickOutside = output<void>(); readonly enabled = input<boolean>(true); private el = inject(ElementRef); private renderer = inject(Renderer2); private destroyRef = inject(DestroyRef); constructor() { // setTimeout defers listener attachment past the click that opened this element const timer = setTimeout(() => { const unlisten = this.renderer.listen('document', 'click', (event: MouseEvent) => { if (!this.enabled()) return; const clickedInside = this.el.nativeElement.contains(event.target); if (!clickedInside) this.clickOutside.emit(); }); // DestroyRef replaces ngOnDestroy — cleanup is co-located with setup this.destroyRef.onDestroy(() => unlisten()); }); // Clean up the timer itself if directive is destroyed before it fires this.destroyRef.onDestroy(() => clearTimeout(timer)); } } Usage: <div class="dropdown" appClickOutside (clickOutside)="closeDropdown()"> ... </div> The component writing this has no idea about: The setTimeout trick that prevents the originating click from immediately closing the element it just opened The fact that Renderer2.listen returns an unlisten function that must be called manually The DestroyRef cleanup that prevents memory leaks and stale listeners — and that the timer itself needs clearing if the directive is destroyed before it fires The contains() check that correctly handles clicks on child elements The fact that enabled is a signal — this.enabled() — so it reads the latest value reactively inside the listener Every component that needs click-outside behavior gets all of that for free. Without this directive, each one would reimplement it — and probably miss the setTimeout trick, skip the cleanup, or break on nested clicks. Remove the directive and the caller gets substantially more complex. That's depth. The smart/dumb component pattern is one of the most useful ideas in frontend architecture. It's also one of the most commonly misapplied ones. The failure mode: the smart component passes so many @Input() properties down that the dumb component becomes a post box — receiving state it didn't produce, relaying events it didn't originate, and hiding nothing. // The dumb component that isn't doing anything dumb @Component({ selector: 'app-user-profile-card', template: `...` }) export class UserProfileCardComponent { @Input() userId!: string; @Input() user: User | null = null; @Input() loading = false; @Input() error: string | null = null; @Input() isCurrentUser = false; @Input() canEdit = false; @Input() editMode = false; @Output() edit = new EventEmitter<void>(); @Output() save = new EventEmitter<Partial<User>>(); @Output() cancel = new EventEmitter<void>(); } Seven inputs. Three outputs. What does this component own? Nothing. It's a template with plumbing. Remove it and paste the template into the smart component — nothing meaningful changes. The editMode flag is the clearest symptom. That's pure UI state. It has nothing to do with routing, services, or the outside world. The smart component has no business owning it. But because it's passed down, the smart component now manages three methods — onEdit, onCancel, onSave — that exist purely to shuttle state back up from a component that should have owned it locally. Here's what the split looks like when the depth is in the right place. 
Smart component: @Component({ selector: 'app-user-profile-page', standalone: true, imports: [UserProfileCardComponent], template: ` <app-user-profile-card [userResource]="userResource" [permissions]="permissions()" (save)="onSave($event)" /> ` }) export class UserProfilePageComponent { private route = inject(ActivatedRoute); private userService = inject(UserService); private authService = inject(AuthService); private userId = this.route.snapshot.params['id']; // rxResource gives the card .isLoading(), .value(), .error() as signals userResource = this.userService.getUser(this.userId); // permissions is a signal — no async pipe, no subscription permissions = this.authService.permissionsFor(this.userId); onSave(payload: Partial<User>) { this.userService.updateUser(this.userId, payload); } } Dumb component: @Component({ selector: 'app-user-profile-card', standalone: true, imports: [UserEditFormComponent], template: ` @if (userResource().isLoading()) { <p>Loading...</p> } @if (userResource().error()) { <p>Something went wrong.</p> } @if (userResource().value(); as user) { <h2>{{ user.name }}</h2> <p>{{ user.email }}</p> @if (permissions()?.canEdit && !editMode()) { <button (click)="editMode.set(true)">Edit</button> } @if (editMode()) { <app-user-edit-form [user]="user" (save)="handleSave($event)" (cancel)="editMode.set(false)" /> } } `}) export class UserProfileCardComponent { // userResource is an input signal holding a ResourceRef, so it is called once to unwrap readonly userResource = input.required<ResourceRef<User>>(); readonly permissions = input<{ isCurrentUser: boolean; canEdit: boolean } | null>(null); readonly save = output<Partial<User>>(); // editMode is local signal state — the smart component never touches it editMode = signal(false); handleSave(payload: Partial<User>) { this.save.emit(payload); this.editMode.set(false); } } Seven inputs became two. Three outputs became one. editMode moved to where it belongs — inside the component whose job is to manage that interaction. The dumb component got simpler at the interface but richer in behavior. It now owns its local state, manages its own transitions, and cleans up after itself. Remove it now and the smart component would have to absorb all of that. Which means it was actually hiding something. Every time you create an abstraction, ask one question: If I delete this, does the caller get meaningfully more complex? If yes — the abstraction was earning its depth. Keep it. If no — it was a layer without a reason. The complexity didn't get hidden. It got deferred upward, to be rediscovered by every caller, independently, forever. This isn't about how many files you have or how well you follow conventions. A codebase can be perfectly structured and still full of abstractions that are lying about how much work they're doing. Ousterhout wrote about this in the context of operating systems. But the failure mode is identical in a five-file Angular feature module. The scale is different. The pattern is the same. This post is part of an ongoing series called Don't Fight the Framework, Embrace It — about what it actually means to understand Angular at the level where you stop guessing and start knowing. Those posts covered the rendering engine, Ivy's locality principle, LView/TView internals, and how change detection actually works under the hood. This one is different. It's not about Angular internals. It's about a design principle old enough to predate Angular by decades — and why it applies to your codebase more precisely than most Angular-specific advice does. The internals tell you how the framework works. This tells you whether what you're building on top of it is worth building at all.

Dev.to (Angular)
~8 min readMay 2, 2026

Why I Built ng-prism — An Angular-Native Storybook Alternative With Zero Story Files

TL;DR: ng-prism lets you showcase Angular components by adding a single decorator to the component class itself. No story files, no parallel file tree, no framework mismatch. Just Angular. If you've ever maintained a Storybook setup for an Angular component library, you know the drill: for every component you write, you also write a .stories.ts file. Then you keep both in sync. Then someone renames an input, the stories break silently, and nobody notices until the designer opens Storybook two sprints later. Storybook is a fantastic tool — but it was born in the React ecosystem. Angular support has always been a second-class citizen. The CSF format doesn't feel natural in Angular. The iframe rendering breaks MatDialog, CDK overlays, and portals. The webpack/Vite configuration is yet another build system you have to understand alongside the Angular CLI. I wanted something different. Something that feels like Angular because it is Angular. ng-prism is a lightweight component showcase tool built from the ground up for Angular. The core idea is radical in its simplicity: you annotate your component with a @Showcase decorator, and ng-prism discovers it at build time via the TypeScript Compiler API. No story files. No parallel file tree. The documentation lives where the code lives. import { Showcase } from '@ng-prism/core'; import { Component, input, output } from '@angular/core'; @Showcase({ title: 'Button', category: 'Atoms', description: 'The primary action button.', variants: [ { name: 'Primary', inputs: { label: 'Save', variant: 'primary' } }, { name: 'Danger', inputs: { label: 'Delete', variant: 'danger' } }, { name: 'Ghost', inputs: { label: 'Cancel', variant: 'ghost' } } ], }) @Component({ selector: 'lib-button', standalone: true, template: `<button [class]="variant()">{{ label() }}</button>` }) export class ButtonComponent { label = input.required<string>(); variant = input<'primary' | 'danger' | 'ghost'>('primary'); clicked = output<void>(); } That's it. Run ng run my-lib:prism, and you get a fully interactive styleguide with variant tabs, a live controls panel, event logging, and code snippets — all extracted from your actual component at build time. ng-prism doesn't guess your component's API. It reads it. At build time, a custom Angular Builder kicks off a pipeline: TypeScript Compiler API scanner — Parses your library's entry point, walks the AST, and extracts every component annotated with @Showcase. It reads input() and output() signal declarations, infers types, detects defaults, and builds a complete component manifest. Plugin hooks — Registered plugins can enrich the scanned data (e.g., extracting JSDoc comments, injecting Figma URLs from metadata). Runtime manifest generation — The pipeline produces a TypeScript file with real import statements pointing to your actual component classes. No JSON serialization, no runtime reflection. Angular Dev Server — The builder delegates to @angular-devkit/architect, so you get the Angular dev server you already know — HMR, source maps, the works. The result: your styleguide is a regular Angular app. No iframe. Components render in the same document context. MatDialog? Works. CDK Overlay? Works. CSS custom properties from a parent theme? Inherited naturally. ng-prism was built for Angular 21+ and the signal API. It understands input(), input.required(), and output() natively. 
The controls panel automatically generates the right control widget for each input type: string → text field boolean → toggle number → number input Union types like 'primary' | 'danger' → dropdown Complex types → JSON editor When you change a value in the controls panel, it flows through Angular's signal system. There's no @Input() decorator support — and that's by design. If you're starting a new component library in 2026, you should be using signals. Accessibility isn't a plugin in ng-prism — it's a core feature. The built-in A11y panel provides four perspectives on every component: Violations — axe-core audit with a visual score ring, sorted by impact severity Keyboard Navigation — Tab order visualization with overlay indicators ARIA Tree — The accessibility tree as the browser sees it Screen Reader Simulation — Step through your component the way a screen reader would, with play/pause navigation Switch to "Screen Reader" perspective in the toolbar, and the canvas dims while SR annotations overlay your component. It's a first-class development tool, not an afterthought. ng-prism follows a Vite-style plugin model. A plugin is a plain object with optional hooks for both build time and runtime: import type { NgPrismPlugin } from '@ng-prism/core/plugin'; export function myPlugin(): NgPrismPlugin { return { name: 'my-plugin', // Build-time: enrich scanned component data onComponentScanned(component) { component.meta = { ...component.meta, myData: extractSomething(component) }; }, // Runtime: add a panel to the UI panels: [{ id: 'my-panel', label: 'My Panel', loadComponent: () => import('./my-panel.component.js').then(m => m.MyPanelComponent), position: 'bottom', placement: 'addon', }], }; } @ng-prism/plugin-jsdoc — Extracts JSDoc comments at build time and generates structured API documentation, including parameter tables @ng-prism/plugin-figma — Embeds Figma designs as interactive iframes to enable direct visual comparison with components @ng-prism/plugin-box-model — Overlays CSS box model dimensions directly on rendered components for layout inspection @ng-prism/plugin-perf — Profiles initial render and re-render performance using the browser Performance API @ng-prism/plugin-coverage — Displays per-component test coverage based on Istanbul/V8 reports Plugins lazy-load their components, so they don't bloat your initial bundle. ng add @ng-prism/core The schematic asks which library you want to showcase, creates a prism app project, wires up the builder targets in angular.json, and generates a config file. Then: ng run my-lib:prism Your styleguide is running at localhost:4200. For the config, you get a typed prism.config.ts: import { defineConfig } from '@ng-prism/core'; import { jsDocPlugin } from '@ng-prism/plugin-jsdoc'; import { figmaPlugin } from '@ng-prism/plugin-figma'; export default defineConfig({ plugins: [ jsDocPlugin(), figmaPlugin() ], theme: { '--prism-primary': '#6366f1', '--prism-primary-from': '#6366f1', '--prism-primary-to': '#8b5cf6' // ... } }); The serve builder watches your library sources and config file. Change a component, add a @Showcase decorator, modify an input — the manifest regenerates, and the Angular dev server picks up the change. No restart needed. Not everything fits neatly into a per-component showcase. Sometimes you need a "Patterns" page showing how multiple components compose, or a color token overview. ng-prism supports Component Pages — free-form Angular components registered alongside your showcased components. 
They're defined in your main.ts via providePrism(): import { PrismShellComponent, providePrism, componentPage } from '@ng-prism/core'; import { ButtonPatternsPageComponent } from './pages/button-patterns.page.js'; bootstrapApplication(PrismShellComponent, { providers: [ providePrism(manifest, config, { componentPages: [ componentPage({ title: 'Button Patterns', category: 'Atoms', component: ButtonPatternsPageComponent, }), ], }), ], }); For static pages that don't need Angular components, use Custom Pages directly in prism.config.ts: export default defineConfig({ pages: [ { type: 'custom', title: 'Changelog', category: 'Meta', data: { version: '2.1.0' } }, ], }); Both appear in the sidebar navigation alongside regular components. Real-world components use <ng-content>. ng-prism handles this with a content property on variants: @Showcase({ title: 'Card', variants: [{ name: 'With Header', content: { '[card-header]': '<h3>Title</h3>', 'default': '<p>Body content</p>' } }] }) Need to showcase a directive instead of a component? Use the host property to specify what element it attaches to — either a plain HTML string or another Angular component. Storybook is mature, battle-tested, and has a massive ecosystem. If it works for your team, keep using it. But if you've felt the friction of: Maintaining a parallel .stories.ts file tree that drifts out of sync Fighting iframe restrictions when your component uses overlays or portals Configuring a separate build system (webpack/Vite) alongside the Angular CLI Wrapping Angular-specific patterns (dependency injection, signals) in framework-agnostic abstractions …then ng-prism might be worth a look. It doesn't try to be framework-agnostic. It's Angular, all the way down. ng-prism is open source under the MIT license and follows Angular's versioning: @ng-prism/core@21.x targets Angular 21. The current release is v21.6.1. Why already v21.6.1? Because we already use it in my company for our component library, so it's already battle-tested. What's coming next: More official plugins CI integration — Manifest validation in your pipeline (e.g. ensuring every public component has a @Showcase decorator) Design token documentation — Automatic token overview extracted from SCSS / CSS custom properties Export — Share your component catalog as a PDF or static HTML page Get Started 🔗 Demo 📖 Docs 📦 npm install @ng-prism/core / ng-prism ng-prism Lightweight, Angular-native component showcase tool. Annotate components with @Showcase — no separate story files needed. Live Demo · Documentation Features Zero-config discovery — TypeScript Compiler API scans your library at build time Signal-native — works with input() / output() signals Directive support — showcase directives with configurable host elements Plugin architecture — JSDoc, A11y, Figma, Performance, Box Model, Coverage Live Controls — auto-generated input controls with type-aware editors Code Snippets — live-updating Angular template snippets per variant Component Pages — free-form demo pages for complex components Deep-linking — URL state sync for sharing specific component/variant/view Themeable — full CSS custom property system, replaceable UI sections Quick Start 1. Install npm install @ng-prism/core 2. 
Add @Showcase to a component import { Component, input, output } from '@angular/core'; import { Showcase } from '@ng-prism/core'; @Showcase({ title: 'Button', category: 'Atoms', description … View on GitHub If you're building an Angular component library and want your showcase to feel like Angular — give ng-prism a try. Star the repo if you find it useful, and feel free to open issues or contribute plugins.
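One thing the post describes but doesn't show is the directive case via the host property. Here is a rough sketch of what that could look like, extrapolated from the component examples above — the exact @Showcase option shape for directives is an assumption, not confirmed API:

import { Directive, input } from '@angular/core';
import { Showcase } from '@ng-prism/core';

@Showcase({
  title: 'Tooltip',
  category: 'Directives',
  // host: the element the directive attaches to in the preview canvas
  host: '<button type="button">Hover me</button>',
  variants: [
    { name: 'Default', inputs: { appTooltip: 'Saved!' } },
  ],
})
@Directive({ selector: '[appTooltip]', standalone: true })
export class TooltipDirective {
  appTooltip = input.required<string>();
}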

Dev.to (Angular)
~3 min readMay 2, 2026

Why You Should Avoid Angular 18 for New Projects in 2026: A Developer Survey

As we move into 2026, the front-end framework landscape continues to evolve rapidly. A recent global survey of 2,500+ developers conducted in Q4 2025 sheds light on shifting sentiments toward Angular 18, released in mid-2024. While Angular remains a staple in enterprise environments, the survey data reveals compelling reasons why teams should think twice before adopting Angular 18 for new projects this year. The survey, which polled front-end developers, tech leads, and engineering managers across 40+ countries, found that only 18% of respondents would recommend Angular 18 for new projects in 2026, down from 34% for Angular 17 in 2025. Below are the top-cited pain points: 62% of survey respondents highlighted Angular's persistently steep learning curve as a major barrier. Unlike React or Vue, which have more approachable entry points, Angular 18's reliance on TypeScript, RxJS, and a complex dependency injection system requires significant upskilling. Compounding this, 47% of hiring managers reported difficulty finding junior to mid-level Angular developers in 2025, a gap expected to widen in 2026 as more talent shifts to newer, simpler frameworks. Angular 18's ecosystem has struggled to keep pace with competitors. 58% of developers noted that popular third-party libraries (e.g., state management, UI component kits) either lack Angular 18 compatibility or receive slower updates than their React/Vue counterparts. The survey also found that 41% of teams encountered breaking changes when upgrading from Angular 17 to 18, with 22% reporting critical production issues post-upgrade. Despite Google's efforts to optimize Angular's performance, 53% of respondents reported larger bundle sizes for Angular 18 apps compared to equivalent React or Vue apps. For performance-critical applications (e.g., e-commerce, SaaS dashboards), 49% of developers noted slower initial load times, a key metric for user retention. While Angular 18 introduced partial hydration improvements, 61% of survey participants said these updates fell short of matching the performance of frameworks like Astro or Svelte. Google's shifting priorities for Angular have sown doubt among developers. 39% of respondents cited uncertainty around Angular's roadmap, particularly after Google's increased focus on Flutter and internal AI tools in 2025. While Angular has a 6-month release cycle and an 18-month LTS window, 34% of enterprise developers expressed concern that LTS support for Angular 18 may be cut short if Google pivots resources away from the framework. The survey found that 72% of teams moving away from Angular 18 are adopting one of three alternatives: React 19+: Preferred for its flexible ecosystem, large talent pool, and strong support for server components and concurrent rendering. Vue 4: Gaining traction for its gentle learning curve, incremental adoption model, and improved TypeScript support. Svelte 5: Rising in popularity for its compile-time approach, tiny bundle sizes, and exceptional performance for lightweight applications. The survey notes that Angular 18 still makes sense for two specific use cases: (1) large enterprise teams already deeply invested in the Angular ecosystem, where migration costs outweigh the benefits of switching, and (2) projects requiring tight integration with Google Cloud or Firebase services, where Angular's first-party tooling provides an advantage.
For all other new projects, 82% of survey respondents recommend evaluating alternatives first. While Angular 18 is not a "bad" framework, the 2026 developer survey makes clear that it is no longer the default choice for new projects. With talent shortages, ecosystem lag, and stronger alternatives available, teams building new applications in 2026 are better served by more flexible, performant frameworks. As one survey respondent put it: "Angular was great for 2020, but the landscape has moved on. Don’t get left behind."

Dev.to (Angular)
~5 min readMay 2, 2026

The CLAUDE.md Rules Every Angular Developer Needs in 2026

The first time I let Claude Code touch a clean Angular 19 codebase without a CLAUDE.md at the repo root, the assistant scaffolded a feature using @NgModule, *ngIf, decorator-based @Input() properties, and a BehaviorSubject field for state. Tests passed. Compiler was green. It was the wrong Angular. The training data for every frontier model is a decade of Angular evolution stacked on top of itself: NgModule apps, the standalone preview, signal-based authoring, the new control flow, the zoneless story. Without explicit guidance, an AI picks whichever pattern was statistically loudest in its training set. A CLAUDE.md file at the repo root collapses that probability cloud into one shape — your shape. Here are the seven rules I port across every Angular 17+ project I touch in 2026. Full template here: CLAUDE.md Angular Edition gist. NgModule Components, directives, pipes are standalone. NEVER create a new @NgModule. Bootstrap with bootstrapApplication(AppComponent, { providers: [...] }). If a task touches a legacy NgModule-registered component, migrate it to standalone in the same PR. Why this matters for Angular: the model defaults to whatever 80% of its training data shows, which is the NgModule era. Naming the migration policy ("touch it, migrate it") prevents a long tail of mixed-paradigm PRs. Lazy routes use loadChildren: () => import('./feature/feature.routes').then(m => m.FEATURE_ROUTES) — never the legacy string-based loadChildren: 'path#Module' syntax. @if, @for, @switch Templates use @if, @for (with `track`), @switch, and @empty. NEVER mix *ngIf / *ngFor / *ngSwitch with the new syntax in one template. Why this matters for Angular: the new control flow is compile-time optimized, narrows types correctly inside @if (user(); as user), and produces smaller bundles by dropping NgIf / NgForOf from CommonModule. @for requires a track expression at compile time — without it Angular throws, which is the right default. AI assistants that haven't seen track enough will fall back to *ngFor="let u of users" and silently regress your performance. BehaviorSubject last Component state is signal(). Derived state is computed(). Side effects are effect(). Inputs are input() / input.required(). Outputs are output(). Two-way binding is model(). NEVER use a private BehaviorSubject + asObservable() getter for component-local state. Why this matters for Angular: signals are the language now. The BehaviorSubject field with a public asObservable() getter was the workaround for the years before Angular had primitive reactivity. Without this rule the model will generate that pattern by default because it dominates the training data. input.required<string>() also catches the missing-input bug at compile time, which @Input() name!: string never catches — the field just stays silently undefined at runtime when the parent forgets to bind it. Dumb components: take input(), emit output(), zero injected services, ChangeDetectionStrategy.OnPush, droppable into Storybook with no DI mocks. Smart components: inject services with inject(), orchestrate state, pass data down to dumb components. Folder convention: pages/*.page.ts (smart) and components/*.component.ts (dumb). Why this matters for Angular: AI assistants love to "just inject the service here" because it minimizes lines added. That puts HTTP calls into a dumb card component and makes it untestable in isolation. The folder naming (*.page.ts vs *.component.ts) is a contract that's enforced by code review and visible in every import statement. inject() and providedIn: 'root' Use inject(UsersService) in constructors and factory functions.
@Injectable({ providedIn: 'root' }) for singleton services so they tree-shake. NEVER list singletons in AppComponent's providers array. NEVER `new UserService()` in business code. Use takeUntilDestroyed(inject(DestroyRef)) to clean up subscriptions tied to component lifecycle. Why this matters for Angular: providedIn: 'root' is tree-shakable; listing services in AppComponent.providers defeats it and quietly bloats the initial bundle. takeUntilDestroyed() replaces the old private destroy$ = new Subject<void>(); ngOnDestroy() { this.destroy$.next() } boilerplate that an AI will keep generating because it dominates Stack Overflow answers from 2019–2022. NonNullableFormBuilder Forms are reactive and typed via NonNullableFormBuilder. NEVER use template-driven [(ngModel)] forms beyond a 5-line prototype. NEVER use the legacy FormBuilder — its types include null for every field. Why this matters for Angular: typed forms are the difference between form.value.email: string (with NonNullableFormBuilder) and form.value.email: string | null | undefined (with FormBuilder). The latter forces ! non-null assertions across every form handler downstream, which compounds into a typing mess. The AI will pick the legacy FormBuilder because it's the most-documented variant on the web — explicit in CLAUDE.md, explicit in code review. Every interactive element has a discernible name (aria-label or visible text). Buttons are <button type="button">, never <div (click)>. Form labels are <label for> + <input id>; placeholder is NOT a label. axe-core runs in e2e CI and fails the build on critical violations. Color contrast checked at design time, verified in CI. Why this matters for Angular: AI assistants generate <div (click)="onClick()"> constantly because it works in dev, looks fine, and passes type checking. It fails for keyboard users (divs aren't focusable), for screen reader users (not announced as actionable), and silently lowers your accessibility score until someone runs Lighthouse. Putting axe-core in the e2e pipeline means a regression breaks the build, not Q4 OKRs. Drop the Angular CLAUDE.md template at the root of your Angular 17+ project, commit it, and watch the AI's suggestions converge on the codebase you actually want. The full Rules Pack covering 27+ stacks (Next.js, Vue 3, Svelte, FastAPI, Django, Go, Rust, NestJS, Angular, and more) lives at oliviacraftlat.gumroad.com/l/skdgt. The signal here isn't "AI is bad at Angular." It's that AI is excellent at Angular when you tell it which Angular you mean. The rule is the configuration; without it, the assistant guesses, and statistical guesses average to 2018.
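As a closing illustration, several of these rules compose naturally in a single component — a minimal sketch (the component and field names are illustrative, not from the rules pack):

import { ChangeDetectionStrategy, Component, inject, output } from '@angular/core';
import { takeUntilDestroyed } from '@angular/core/rxjs-interop';
import { NonNullableFormBuilder, ReactiveFormsModule, Validators } from '@angular/forms';

@Component({
  selector: 'app-signup-form',
  standalone: true,
  imports: [ReactiveFormsModule],
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <form [formGroup]="form" (ngSubmit)="submit()">
      <label for="email">Email</label>
      <input id="email" type="email" formControlName="email" />
      @if (form.controls.email.invalid && form.controls.email.touched) {
        <p>Please enter a valid email address.</p>
      }
      <button type="submit" [disabled]="form.invalid">Sign up</button>
    </form>
  `,
})
export class SignupFormComponent {
  private fb = inject(NonNullableFormBuilder);

  submitted = output<{ email: string }>(); // signal output, not EventEmitter

  // NonNullableFormBuilder: getRawValue() is typed { email: string } — no nulls
  form = this.fb.group({
    email: ['', [Validators.required, Validators.email]],
  });

  constructor() {
    // takeUntilDestroyed ties the subscription to the component lifecycle —
    // no destroy$ Subject, no ngOnDestroy
    this.form.valueChanges
      .pipe(takeUntilDestroyed())
      .subscribe((draft) => console.debug('signup draft', draft));
  }

  submit() {
    if (this.form.valid) {
      this.submitted.emit(this.form.getRawValue());
    }
  }
}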

Dev.to (Angular)
~23 min readMay 2, 2026

Building Dynamic Audio with Emotion & Pace: Gemini 3.1 Flash TTS, Angular & Firebase Cloud Functions [GDE]

Google released the Gemini 3.1 Flash TTS Preview model for AI audio generation in the Gemini API, Gemini in Vertex AI, and Gemini AI Studio. This model introduces a new Audio tags feature to exhibit expressive human emotion, pace, and style. This application explores Firebase AI Logic to analyze an uploaded image and generate recommendations, a description, alternative tags, and an obscure fact. The obscure fact is sent to a Firebase Cloud Function to generate audio using a Gemini TTS model. The Cloud Function returns the stream to an Angular application that converts it to a Blob URL object. An audio player sets the URL as its source, and users can click the Play button to play the stream. In this blog post, I migrate my application to use the Gemini 3.1 Flash TTS Preview model and create a signal form in Angular to input a scene, emotion, and pace. Then, the Angular application provides the form values and the obscure fact to the Firebase Cloud Function to generate an expressive voice using the GenAI TypeScript SDK. The technical stack of the project: Angular 21: The latest version as of May 2026. Node.js LTS: The LTS version as of May 2026. Firebase Remote Config: To manage dynamic parameters. Firebase Cloud Functions: To generate an expressive human voice when called by the frontend. Firebase Local Emulator Suite: To test the functions locally at http://localhost:5001. Gemini in Vertex AI: To generate expressive audio with the Gemini TTS model. The public Google AI Studio API is restricted in my region (Hong Kong). However, Vertex AI (Google Cloud) offers enterprise access that works reliably here, so I chose Vertex AI for this demo. npm i -g firebase-tools Install firebase-tools globally using npm. firebase logout firebase login Log out of Firebase and log in again to perform proper Firebase authentication. firebase init Execute firebase init and follow the prompts to set up Firebase Cloud Functions, the Firebase Local Emulator Suite, Firebase Cloud Storage, and Firebase Remote Config. If you have an existing project or multiple projects, you can specify the project ID on the command line. firebase init --project <PROJECT_ID> In both cases, the Firebase CLI automatically installs the firebase-admin and firebase-functions dependencies. After completing the setup steps, the Firebase tools generate the functions emulator, functions, a storage rules file, remote config templates, and configuration files such as .firebaserc and firebase.json. Angular dependency npm i firebase The Angular application requires the firebase dependency to initialize a Firebase app, load remote config, and invoke the Firebase Cloud Functions that generate the audio. Firebase dependencies npm i @cfworker/json-schema @google/genai @modelcontextprotocol/sdk Install the above dependencies to access Gemini in Vertex AI. @google/genai depends on @cfworker/json-schema and @modelcontextprotocol/sdk. Without these, the Cloud Functions cannot start. With our project configured, let's look at how the frontend and backend communicate. A user uploads an image in an Angular application and prompts the Gemini 3.1 Flash Lite Preview model to generate a few recommendations for improving the image, a description, and alternative tags. The user also uses the same model and the Google Search tool to find an obscure fact related to the image. A user inputs a scene, an emotion, and a pace in an experimental signal form.
When a user clicks the generate audio button, the Angular application sends the form values and the obscure fact to the Firebase Cloud Function to generate an expressive voice using the GenAI TypeScript SDK and the Gemini 3.1 Flash TTS Preview model. The model accepts only text inputs and generates audio outputs. The context window is 32K tokens. (The TTS models do not support real-time streaming via the Live API; the chunked generateContentStream output used later in this post is a different mechanism.) The supported languages can be found at https://ai.google.dev/gemini-api/docs/speech-generation#languages. My mother tongue, Cantonese, is currently unsupported. Defining the environment variables in the Firebase project ensures the functions know the region of the Google Cloud project, the Firebase Cloud Function location, and the required TTS model. .env.example GOOGLE_CLOUD_LOCATION="global" GOOGLE_FUNCTION_LOCATION="asia-east2" GEMINI_TTS_MODEL_NAME="gemini-3.1-flash-tts-preview" WHITELIST="http://localhost:4200" REFERER="http://localhost:4200/" Variable Description GOOGLE_CLOUD_LOCATION The region of the Google Cloud project. I chose global so that the Firebase project has access to the newest Gemini 3.1 Flash TTS preview model. GOOGLE_FUNCTION_LOCATION The region of the Firebase Cloud Functions. I chose asia-east2 because this is the region where I live. WHITELIST Requests must come from http://localhost:4200. REFERER Requests originate from http://localhost:4200/. http://localhost:4200 is the host and port number of my local Angular application. Before the Cloud Function proceeds with any AI calls, it is critical to ensure that all necessary environment variables are present. I implemented an AUDIO_CONFIG IIFE (Immediately Invoked Function Expression) to validate environment variables like the TTS model name, Google Cloud project ID, and location. import { HttpsError } from "firebase-functions/v2/https"; import * as logger from "firebase-functions/logger"; export function validate(value: string | undefined, fieldName: string, missingKeys: string[]) { const err = `${fieldName} is missing.`; if (!value) { logger.error(err); missingKeys.push(fieldName); return ""; } return value; } export const AUDIO_CONFIG = (() => { logger.info("AUDIO_CONFIG initialization: Loading environment variables and validating configuration..."); const env = process.env; const missingKeys: string[] = []; const location = validate(env.GOOGLE_CLOUD_LOCATION, "Vertex Location", missingKeys); const model = validate(env.GEMINI_TTS_MODEL_NAME, "Gemini TTS Model Name", missingKeys); const project = validate(env.GCLOUD_PROJECT, "Google Cloud Project", missingKeys); if (missingKeys.length > 0) { throw new HttpsError("failed-precondition", `Missing environment variables: ${missingKeys.join(", ")}`); } return { genAIOptions: { project, location, vertexai: true, }, model, }; })(); I am using Node 24 as of May 2026. Since Node 20.12, we can use the built-in process.loadEnvFile function that loads environment variables from the .env file. In env.ts, the try-catch block attempts to load the environment variables from the .env file. try { process.loadEnvFile(); } catch { // Ignore error if .env file is not found (e.g., in production where env vars are set by the platform) } In src/index.ts, the first statement imports env.ts before importing other files and libraries. import "./env"; ... other import statements ... If you are using a Node version that does not support process.loadEnvFile, the alternative is to install dotenv to load the environment variables. npm i dotenv import dotenv from "dotenv"; dotenv.config(); Firebase provides the GCLOUD_PROJECT variable, so it is not defined in the .env file.
When the missingKeys array is not empty, AUDIO_CONFIG throws an error that lists all the missing variable names. If the validation is successful, the genAIOptions and model are returned. The genAIOptions object is used to initialize the GoogleGenAI client, and model is the selected TTS model name. The Cloud Function sanitizes the scene and transcript before composing the audio prompt. The sanitizeScene function escapes each newline character ('\n') in the scene as a literal '\\n'. The newline character creates a blank line and often signals the end of a block. The escaping effectively flattens the scene into one continuous line of data, so the LLM's Markdown parser recognizes it as a single, safe paragraph. The sanitization also removes all Markdown headers that are injected into the scene. function sanitizeScene(text: string): string { return (text || "").trim().replace(/\r?\n/g, "\\n").replace(/^[#\s]+/gm, ""); } The sanitizeTranscript function strips all Markdown headers and triple quotes that are injected into the transcript. function sanitizeTranscript(text: string): string { return (text || "").trim().replace(/^#+/gm, "").replace(/"""/g, '"'); } The AudioPrompt type encapsulates the scene, emotion, pace, transcript, and voice option used to set the location, audio tags, text, and persona of the audio. export type AudioPrompt = { scene: string; emotion: string; pace: string; transcript: string; voiceOption: string; } The SCENE_DICTIONARY is an array of scenes. When the user does not provide a scene, a scene is randomly selected from the array. export const SCENE_DICTIONARY = [ "A dimly lit, dusty library filled with ancient leather-bound books.\n" + "The air is thick with history. A scholarly archivist is leaning closely into a warm, vintage ribbon microphone.\n" + "They speak with an infectious, hushed intensity, eager to share a forgotten secret they just uncovered in a decaying manuscript.", "It is 10:00 PM in a glass-walled studio overlooking the moonlit London skyline, but inside, it is blindingly bright.\n" + "The red 'ON AIR' tally light is blazing. The speaker is standing up, bouncing on the balls of their heels to the rhythm of a thumping backing track.\n" + "It is a chaotic, caffeine-fueled cockpit designed to wake up an entire nation.", "A meticulously sound-treated bedroom in a suburban home.\n" + "The space is deadened by plush velvet curtains and a heavy rug, creating an intimate, close-up acoustic environment.\n" + "The speaker delivers the information like a trusted friend sharing an inside joke.", "A high-tech, minimalist laboratory humming with servers.\n" + "Crisp, clean acoustics reflect off glass and steel.\n" + "A brilliant but eccentric scientist is pacing back and forth, speaking rapidly and enthusiastically into a headset microphone, excited to explain a complex phenomenon.", ]; I define a buildAudioPrompt function to construct the advanced audio prompt. When an emotion is defined, its tag is [<emotion>]; when a pace is defined, its tag is [<pace>]. The combined audio tag is [<emotion>] [<pace>] followed by a space, to create a proper token boundary. The insertAudioTagsToTranscript function uses a regular expression to split the transcript into sentences, inserts the combined audio tag before each sentence, and then joins them with an empty string. The buildAudioPrompt function concatenates the scene and the expressive transcript into a string before returning it.
import { SCENE_DICTIONARY } from './constants/scenes.const'; import { AudioPrompt } from './types/audio-prompt.type'; function makeTag(value: string) { const trimmedValue = value.trim(); return trimmedValue ? `[${trimmedValue}] ` : ""; } function insertAudioTagsToTranscript({ transcript, pace, emotion }: AudioPrompt): string { const audioTags = `${makeTag(emotion)}${makeTag(pace)}`; const cleanedTranscript = sanitizeTranscript(transcript); const parts = cleanedTranscript.split(/(?<!\b(?:Mr|Mrs|Ms|Dr|St|i\.e|e\.g))([.!?\n\r]+[”"’']*\s*)/); return parts .map((text, i, arr) => { if (i % 2 !== 0) { return ""; // Skip delimiters, they are appended to the text blocks } const delimiter = arr[i + 1] || ""; return text.trim() ? `${audioTags}${text.trim()}${delimiter}` : delimiter; }) .join(""); } export function buildAudioPrompt(data: AudioPrompt): string { const randomIndex = Math.floor(Math.random() * SCENE_DICTIONARY.length); const selectedScene = SCENE_DICTIONARY[randomIndex]; const trimmedScene = (data.scene || "").trim() || selectedScene; const escapedScene = sanitizeScene(trimmedScene); const transcript = insertAudioTagsToTranscript(data); return `## Scene: ${escapedScene} ## Transcript: """ ${transcript} """ `; } The output of the prompt looks like: ## Scene: <scene> ## Transcript: [<emotion>] [<pace>] <sentence 1>[<emotion>] [<pace>] <sentence 2>...[<emotion>] [<pace>] <sentence N> The createVoiceConfig function constructs an instance of GenerateContentConfig that makes the model output speech narrated by the given voice name. import { GenerateContentConfig } from "@google/genai"; export function createVoiceConfig(voiceName = "Kore"): GenerateContentConfig { return { responseModalities: ["AUDIO"], speechConfig: { voiceConfig: { prebuiltVoiceConfig: { voiceName, }, }, }, }; } const splitList = (whitelist?: string) => (whitelist || "").split(",").map((origin) => origin.trim()).filter(Boolean); export const whitelist = splitList(process.env.WHITELIST); export const cors = whitelist.length > 0 ? whitelist : true; export const refererList = splitList(process.env.REFERER); All Cloud Functions enforce App Check, CORS, and a timeout period of 600 seconds. If WHITELIST is unspecified, CORS defaults to true. That is acceptable in a demo environment, but in production configure CORS with a specific domain or false to prevent unauthorized access. The readFact cloud function delegates to readFactStreamFunction when isStreaming is true. Otherwise, it delegates to readFactFunction. The readFactFunction function returns a Promise<string> that resolves to the encoded base64 string. The readFactStreamFunction function returns a Promise<number[] | undefined> that represents a buffer of WAV header bytes. import { onCall } from "firebase-functions/v2/https"; import { cors } from "../auth"; import { buildAudioPrompt } from './audio-prompt'; import { readFactFunction, readFactStreamFunction } from "./read-fact"; import { createVoiceConfig } from './voice-config'; const options = { cors, enforceAppCheck: true, timeoutSeconds: 600, }; export const readFact = onCall(options, (request, response) => { const { data, acceptsStreaming } = request; const isStreaming = acceptsStreaming && !!response; const prompt = buildAudioPrompt(data); const voiceOption = createVoiceConfig(data.voiceOption); return isStreaming ? readFactStreamFunction(prompt, voiceOption, response) : readFactFunction(prompt, voiceOption); }); The withAIAudio function is a higher-order function that calls the callback to generate an audio stream.
async function withAIAudio(callback: (ai: GoogleGenAI, model: string) => Promise<string | number[] | undefined>) { try { const variables = AUDIO_CONFIG; if (!variables) { return ""; } const { genAIOptions, model } = variables; const ai = new GoogleGenAI(genAIOptions); return await callback(ai, model); } catch (e) { if (e instanceof HttpsError) { throw e; } throw new HttpsError("internal", "An internal error occurred while setting up the AI client.", { originalError: (e as Error).message, }); } } generateAudio is a callback function that uses the Gemini 3.1 Flash TTS Preview model to generate a response. getBase64DataUrl invokes extractInlineAudioData to extract the raw data and the mime type from the response. The encodeBase64String function first converts the raw data to WAV format, then encodes it to base64 format, and finally returns the base64 string. The createAudioParams function constructs the request parameters with the Gemini TTS model, the audio prompt, and the speech configuration. async function generateAudio(aiTTS: AIAudio, prompt: string, voiceOption: GenerateContentConfig) { try { const { ai, model } = aiTTS; const response = await ai.models.generateContent(createAudioParams(model, prompt, voiceOption)); return getBase64DataUrl(response); } catch (error) { console.error(error); throw error; } } function createAudioParams(model: string, prompt: string, config?: GenerateContentConfig) { return { model, contents: [ { role: "user", parts: [ { text: prompt, }, ], }, ], config, }; } function extractInlineAudioData(response: GenerateContentResponse): { rawData: string | undefined; mimeType: string | undefined; } { const { data: rawData, mimeType } = response.candidates?.[0]?.content?.parts?.[0]?.inlineData ?? {}; return { rawData, mimeType }; } function getBase64DataUrl(response: GenerateContentResponse) { const { rawData, mimeType } = extractInlineAudioData(response); if (!rawData || !mimeType) { throw new Error("Audio generation failed: No audio data received."); } return encodeBase64String({ rawData, mimeType }); } export function encodeBase64String({ rawData, mimeType }: RawAudioData) { const wavBuffer = convertToWav(rawData, mimeType); const base64Data = wavBuffer.toString("base64"); return `data:audio/wav;base64,${base64Data}`; } generateAudioStream is a callback function that uses the Gemini 3.1 Flash TTS Preview model to stream a list of audio chunks. The chunks are iterated so that each chunk is passed to the extractInlineAudioData function to extract the raw data and the mime type. The function converts the chunk's raw data into a buffer and sends it to the client; the byte length accumulates to determine the total size of all chunks. After all the chunks are sent to the client, the createWavHeader function uses the total byte length and the audio options to construct a WAV header and returns it.
async function generateAudioStream( aiTTS: AIAudio, prompt: string, voiceOption: GenerateContentConfig, response: CallableResponse<unknown>, ): Promise<number[] | undefined> { try { const { ai, model } = aiTTS; const chunks = await ai.models.generateContentStream(createAudioParams(model, prompt, voiceOption)); let byteLength = 0; let options: WavConversionOptions | undefined = undefined; for await (const chunk of chunks) { const { rawData, mimeType } = extractInlineAudioData(chunk); if (!options && mimeType) { options = parseMimeType(mimeType); response.sendChunk({ type: "metadata", payload: { sampleRate: options.sampleRate, }, }); } if (rawData && mimeType) { const buffer = Buffer.from(rawData, "base64"); byteLength = byteLength + buffer.length; response.sendChunk({ type: "data", payload: { buffer, }, }); } } if (options && byteLength > 0) { const header = createWavHeader(byteLength, options); return [...header]; } return undefined; } catch (error) { console.error(error); throw error; } } The readFactFunction invokes the withAIAudio higher-order function to generate a base64-encoded string. The readFactStreamFunction function calls the withAIAudio higher-order function to write chunks to the response body and send them to the client. Then, the generateAudioStream function returns the bytes of the WAV header. export async function readFactFunction(prompt: string, voiceOption: GenerateContentConfig) { return withAIAudio((ai, model) => generateAudio({ ai, model }, prompt, voiceOption)); } export async function readFactStreamFunction(prompt: string, voiceOption: GenerateContentConfig, response: CallableResponse<unknown>) { return withAIAudio((ai, model) => generateAudioStream({ ai, model }, prompt, voiceOption, response)); }
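The parseMimeType and createWavHeader helpers are referenced but never shown. For context, here is a minimal sketch of what createWavHeader could look like for raw PCM data — the WavConversionOptions shape is inferred from the calls above, so treat the details as an assumption:

// Builds the 44-byte RIFF/WAVE header for raw PCM data of a known length.
interface WavConversionOptions {
  numChannels: number;
  sampleRate: number;
  bitsPerSample: number;
}

export function createWavHeader(dataLength: number, options: WavConversionOptions): Buffer {
  const { numChannels, sampleRate, bitsPerSample } = options;
  const byteRate = (sampleRate * numChannels * bitsPerSample) / 8;
  const blockAlign = (numChannels * bitsPerSample) / 8;
  const header = Buffer.alloc(44);
  header.write('RIFF', 0);
  header.writeUInt32LE(36 + dataLength, 4); // total file size minus the first 8 bytes
  header.write('WAVE', 8);
  header.write('fmt ', 12);
  header.writeUInt32LE(16, 16);  // fmt chunk size for PCM
  header.writeUInt16LE(1, 20);   // audio format 1 = uncompressed PCM
  header.writeUInt16LE(numChannels, 22);
  header.writeUInt32LE(sampleRate, 24);
  header.writeUInt32LE(byteRate, 28);
  header.writeUInt16LE(blockAlign, 32);
  header.writeUInt16LE(bitsPerSample, 34);
  header.write('data', 36);
  header.writeUInt32LE(dataLength, 40);
  return header;
}

Prepending this header to the concatenated PCM chunks is what turns the streamed bytes into a playable audio/wav blob on the client.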
In the bootstrapFirebase process, the application calls connectFunctionsEmulator to link to the Cloud Functions running at http://localhost:5001. The port number defaulted to 5001 when firebase init was executed. function connectEmulators(functions: Functions, remoteConfig: RemoteConfig) { if (location.hostname === 'localhost') { const host = getValue(remoteConfig, 'functionEmulatorHost').asString(); const port = getValue(remoteConfig, 'functionEmulatorPort').asNumber(); connectFunctionsEmulator(functions, host, port); } } loadFirebaseConfig is a helper function that makes a request to the Cloud Function to obtain the Firebase app configuration and the reCAPTCHA site key. { "getFirebaseConfigUrl": "http://127.0.0.1:5001/vertexai-firebase-6a64f/us-central1/getFirebaseConfig" } export type FirebaseConfigResponse = { app: FirebaseOptions; recaptchaSiteKey: string } import { HttpClient } from '@angular/common/http'; import { inject } from '@angular/core'; import { catchError, lastValueFrom, throwError } from 'rxjs'; import config from '../../public/config.json'; import { FirebaseConfigResponse } from './ai/types/firebase-config.type'; async function loadFirebaseConfig() { const httpService = inject(HttpClient); const firebaseConfig$ = httpService.get<FirebaseConfigResponse>(config.getFirebaseConfigUrl) .pipe(catchError((e) => throwError(() => e))); return lastValueFrom(firebaseConfig$); } The bootstrapFirebase function initializes the FirebaseApp and App Check, loads the Firebase remote configuration and cloud functions, and stores them in the config service for later use. export async function bootstrapFirebase() { try { const configService = inject(ConfigService); const firebaseConfig = await loadFirebaseConfig(); const { app, recaptchaSiteKey } = firebaseConfig; const firebaseApp = initializeApp(app); const remoteConfig = await fetchRemoteConfig(firebaseApp); initializeAppCheck(firebaseApp, { provider: new ReCaptchaEnterpriseProvider(recaptchaSiteKey), isTokenAutoRefreshEnabled: true, }); const functionRegion = getValue(remoteConfig, 'functionRegion').asString(); const functions = getFunctions(firebaseApp, functionRegion); connectEmulators(functions, remoteConfig); configService.loadConfig(firebaseApp, remoteConfig, functions); } catch (err) { console.error(err); } } The AppConfig remains unchanged. import { ApplicationConfig, provideAppInitializer } from '@angular/core'; import { bootstrapFirebase } from './app.bootstrap'; export const appConfig: ApplicationConfig = { providers: [ provideAppInitializer(async () => bootstrapFirebase()), ] }; I create an AudioTagsComponent and a new signal form to input the scene, emotion, pace, and voice name in the Angular frontend.
I created an AudioTagsComponent and a new signal form to input the scene, emotion, pace, and voice name in the Angular frontend.

<div>
  <h3>
    <span class="text-xl">🎙️</span> Customize Audio Generation
  </h3>
  <div class="grid grid-cols-1 md:grid-cols-2 gap-4">
    <!-- Scene -->
    <div class="flex flex-col gap-1.5 md:col-span-2">
      <label for="scene">Scene Description</label>
      <textarea id="scene" [formField]="audioPromptForm.scene"></textarea>
    </div>

    <!-- Emotion -->
    <div class="flex flex-col gap-1.5">
      <label for="emotion">Vocal Emotion</label>
      <input type="text" id="emotion" [formField]="audioPromptForm.emotion" placeholder="e.g., panicked, whispers" />
    </div>

    <!-- Pace -->
    <div class="flex flex-col gap-1.5">
      <label for="pace">Speaking Pace</label>
      <input type="text" id="pace" [formField]="audioPromptForm.pace" placeholder="e.g., very slow, rapid" />
    </div>

    <!-- Voice Option -->
    <div class="flex flex-col gap-1.5 md:col-span-2">
      <label for="voiceOption">AI Voice Model</label>
      <select id="voiceOption" [formField]="audioPromptForm.voiceOption">
        <option value="" disabled selected>Select a voice...</option>
        @for (option of sortedVoiceOptions(); track option.name) {
          <option [value]="option.name" class="bg-slate-800">{{ option.label }}</option>
        }
      </select>
    </div>
  </div>
</div>

import { ChangeDetectionStrategy, Component, computed, signal } from '@angular/core';
import { form, FormField } from '@angular/forms/signals';
import { VOICE_OPTIONS } from './constants/voice-options.const';
import { AudioPromptData } from './types/audio-prompt-data.type';

@Component({
  selector: 'app-audio-tags',
  imports: [FormField],
  templateUrl: './audio-tags.component.html',
  changeDetection: ChangeDetectionStrategy.OnPush,
})
export class AudioTagsComponent {
  #audioPromptModel = signal<AudioPromptData>({
    scene: 'A news anchor reading the news in a busy newsroom',
    emotion: 'professional, slightly serious',
    pace: 'moderate, clear enunciation',
    voiceOption: 'Kore'
  });

  audioPromptForm = form(this.#audioPromptModel);

  sortedVoiceOptions = computed(() => {
    // copy before sorting so the shared VOICE_OPTIONS constant is not mutated
    const sortedList = [...VOICE_OPTIONS].sort((a, b) => a.name.localeCompare(b.name));
    return sortedList.map(option => ({
      name: option.name,
      label: `${option.name} - ${option.description}`
    }));
  });

  audioPromptModel = this.#audioPromptModel.asReadonly();
}
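The AudioPromptData type and the VOICE_OPTIONS constant are not shown in the post; the shapes below are assumptions inferred from how AudioTagsComponent consumes them:

// Inferred from the signal above: three free-text fields plus the selected voice name.
export type AudioPromptData = {
  scene: string;
  emotion: string;
  pace: string;
  voiceOption: string;
};

// Illustrative entries only; the real list of prebuilt voices is longer.
export const VOICE_OPTIONS: { name: string; description: string }[] = [
  { name: 'Kore', description: 'Firm' },   // the component's default voice
  { name: 'Puck', description: 'Upbeat' },
];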
<div class="w-full mt-6"> <app-audio-tags #audioTags /> <h3>A surprising or obscure fact about the tags</h3> @if (interestingFact()) { <p>{{ interestingFact() }}</p> <app-error-display [error]="ttsError()" /> <app-text-to-speech [isLoadingSync]="isLoadingSync()" [isLoadingStream]="isLoadingStream()" [isLoadingWebAudio]="isLoadingWebAudio()" [audioUrl]="audioUrl()" (generateSpeech)="generateSpeech({ mode: $event, audioTags: audioTags.audioPromptModel() })" [playbackRate]="playbackRate()" /> } @else { <p>The tag(s) does not have any interesting or obscure fact.</p> } </div> import { AudioPromptData } from './audio-prompt-data.type'; import { GenerateSpeechMode } from '../../generate-audio.util'; export type ModeWithAudioTags = { mode: GenerateSpeechMode; audioTags: AudioPromptData; }; export type AudioPrompt = { scene: string; emotion: string; pace: string; transcript: string; voiceOption: string; }; The generateSpeech method uses the fact and audioTags to contruct an instance of AudioPrompt. When the mode is stream, the SpeechService calls generateAudioBlobURL to use the audioPrompt to construct a blob URL. When the mode is sync, the SpeechService calls generateAudio to use the audioPrompt to generate an encoded base64 string. When the mode is web_audio_api, the AudioPlayerService calls playStream to stream the audio. import { SpeechService } from '@/ai/services/speech.service'; import { AudioPrompt } from '@/ai/types/audio-prompt.type'; import { ChangeDetectionStrategy, Component, inject, input, OnDestroy, signal } from '@angular/core'; import { revokeBlobURL } from '../blob.util'; import { AudioTagsComponent } from './audio-tags/audio-tags.component'; import { ModeWithAudioTags } from './audio-tags/types/mode-audio-tags.type'; import { generateSpeechHelper, streamSpeechWithWebAudio, ttsError } from './generate-audio.util'; import { AudioPlayerService } from './services/audio-player.service'; @Component({ selector: 'app-obscure-fact', templateUrl: './obscure-fact.component.html', imports: [ TextToSpeechComponent, ], changeDetection: ChangeDetectionStrategy.OnPush, }) export class ObscureFactComponent implements OnDestroy { interestingFact = input<string | undefined>(undefined); speechService = inject(SpeechService); audioPlayerService = inject(AudioPlayerService); isLoadingSync = signal(false); isLoadingStream = signal(false); isLoadingWebAudio = signal(false); audioUrl = signal<string | undefined>(undefined); ttsError = ttsError; async generateSpeech({ mode, audioTags }: ModeWithAudioTags) { const fact = this.interestingFact(); if (fact) { revokeBlobURL(this.audioUrl); this.audioUrl.set(undefined); const audioPrompt = { ...audioTags, transcript: fact, }; if (mode === 'sync' || mode === 'stream') { const loadingSignal = mode === 'stream' ? this.isLoadingStream : this.isLoadingSync; const speechFn = (audioPrompt: AudioPrompt) => mode === 'stream' ? this.speechService.generateAudioBlobURL(audioPrompt) : this.speechService.generateAudio(audioPrompt); await generateSpeechHelper(audioPrompt, loadingSignal, this.audioUrl, speechFn); } else if (mode === 'web_audio_api') { await streamSpeechWithWebAudio( audioPrompt, this.isLoadingWebAudio, (audioPrompt: AudioPrompt) => this.audioPlayerService.playStream(audioPrompt)); } } } ngOnDestroy(): void { revokeBlobURL(this.audioUrl); } } The SpeechService has a generateAudio method that calls the readFact cloud function to obtain the encoded base64 string. 
The SpeechService has a generateAudio method that calls the readFact Cloud Function to obtain the base64-encoded string. Similarly, the service has a generateAudioBlobURL method that streams the chunks into a buffer and prepends the WAV header. The constructBlobURL helper creates a blob URL from the BlobPart array.

export function constructBlobURL(parts: BlobPart[]) {
  return URL.createObjectURL(new Blob(parts, { type: 'audio/wav' }));
}

import { AudioPrompt } from '@/ai/types/audio-prompt.type';
import { constructBlobURL } from '@/photo-panel/blob.util';
import { inject, Injectable } from '@angular/core';
import { Functions, httpsCallable } from 'firebase/functions';
import { StreamMessage } from '../types/stream-message.type';
import { ConfigService } from './config.service';

@Injectable({ providedIn: 'root' })
export class SpeechService {
  private configService = inject(ConfigService);

  private get functions(): Functions {
    if (!this.configService.functions) {
      throw new Error('Firebase Functions has not been initialized.');
    }
    return this.configService.functions;
  }

  async generateAudio(audioPrompt: AudioPrompt) {
    const readFactFunction = httpsCallable<AudioPrompt, string>(
      this.functions,
      'textToAudio-readFact'
    );
    const { data: audioUri } = await readFactFunction(audioPrompt);
    return audioUri;
  }

  async generateAudioStream(audioPrompt: AudioPrompt) {
    // The same callable serves both modes; invoking it via .stream() asks the
    // backend for chunked responses instead of a single result.
    const readFactStreamFunction = httpsCallable<AudioPrompt, number[] | undefined, StreamMessage>(
      this.functions,
      'textToAudio-readFact'
    );
    return readFactStreamFunction.stream(audioPrompt);
  }

  async generateAudioBlobURL(audioPrompt: AudioPrompt) {
    const { stream, data } = await this.generateAudioStream(audioPrompt);
    const audioParts: BlobPart[] = [];

    for await (const audioChunk of stream) {
      if (audioChunk && audioChunk.type === 'data') {
        // A Node Buffer serializes over the wire as { type: 'Buffer', data: number[] }.
        audioParts.push(new Uint8Array(audioChunk.payload.buffer.data));
      }
    }

    // The callable's final result is the WAV header, computed after streaming ends.
    const wavHeader = await data;
    if (wavHeader && wavHeader.length) {
      audioParts.unshift(new Uint8Array(wavHeader));
    }

    return constructBlobURL(audioParts);
  }
}

Similar to SpeechService.generateAudioBlobURL, the playStream method of AudioPlayerService also calls generateAudioStream to get a stream of chunks, but it plays each of them immediately.

import { SpeechService } from '@/ai/services/speech.service';
import { AudioPrompt } from '@/ai/types/audio-prompt.type';
import { inject, Injectable, OnDestroy, signal } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class AudioPlayerService implements OnDestroy {
  private speechService = inject(SpeechService);

  async playStream(audioPrompt: AudioPrompt) {
    const { stream } = await this.speechService.generateAudioStream(audioPrompt);
    for await (const audioChunk of stream) {
      // ... process each chunk ...
    }
  }

  ngOnDestroy(): void {
    // ... release resources to prevent memory leak ...
  }
}

This is the end of the walkthrough for the demo. You should now be able to input different combinations of scene, emotion, and pace to create a unique personality that speaks the given text in an audio clip.

The examples in Gemini AI Studio and Vertex AI Studio use static audio tags and transcripts, and they worked correctly for me. When I applied dynamic audio tags and transcripts in the demo, however, the Gemini 3.1 TTS Flash Preview model ignored the audio tags. The issue was resolved after hours of debugging with the Gemini CLI. Here are the caveats and lessons learned:

The Token Boundary Trap. The code originally concatenated tags and transcript without a space (for example, "[giggle][slow]Before"). The LLM tokenizer failed to recognize the instruction to change the behavior and pace of the audio. My fix was to insert a space between the tags and the transcript: "[giggle] [slow] Before".
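A minimal sketch of the fix, using a hypothetical buildSpokenLine helper (the name is mine) that joins the tags and the transcript with single spaces so the tokenizer sees each tag as a separate token:

// Hypothetical helper illustrating the space-separated concatenation.
function buildSpokenLine(tags: string[], transcript: string): string {
  return [...tags.map((tag) => `[${tag}]`), transcript].join(' ');
}

console.log(buildSpokenLine(['giggle', 'slow'], 'Before'));
// "[giggle] [slow] Before" instead of the broken "[giggle][slow]Before"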
Sanitize inputs before injecting them into the prompt template. The sanitize functions remove Markdown headers (#) and triple quotes from the scene and transcript. The cleansed scene and transcript are then injected into the prompt template to construct the final audio prompt. (A sketch of the sanitize-and-escape step appears at the end of this post.)

The LLM does not understand idioms. I typed "at a snail's pace" in the signal form, which inserted "[at a snail's pace]" before the line. However, the model vocalized the tag literally, and no pace change occurred.

"Repetitive Weighting" is a real strategy. If standard tags like [slow] and [fast] are not dramatic enough, prepend "very" to the pace to heighten the effect. It was evident when [very, very, very slow] generated a longer audio clip than [slow].

Replace newline characters (\n) with \\n to flatten the lines into a single paragraph. Once the scene and transcript are cleansed and escaped, they are injected into the prompt template while the structure is preserved for the LLM parser.

Conclusion

The integration of text-to-speech with Firebase's serverless scalability empowers Angular applications to generate audio in real time. The Angular application neither requires the genai dependency nor stores the Vertex AI environment variables in a .env file. Instead, the client calls the Cloud Functions to perform the text-to-speech tasks and produce an audio stream. The Cloud Functions receive arguments from the client and execute a TTS operation that either returns the entire audio as a base64-encoded string or streams the audio bytes in chunks. During local development, the Firebase Emulator serves the functions at http://localhost:5001 instead of the ones deployed on the Cloud Run platform, which saves cost.

Try cloning the GitHub repository, uploading an image to generate an obscure fact, and using the Gemini 3.1 Flash TTS preview model to speak it with the specified scene, emotion, and pace.

Demo
GitHub Repo
Firebase Cloud Functions
Connect to the Cloud Functions Emulator
Audio Tags
Advanced Audio Prompting
Prompting Strategies
Previous Post about Gemini 2.5 Flash TTS, Angular and Firebase
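As referenced in the lessons above, here is a minimal sketch of the sanitize-and-escape step. The function names sanitizeText and escapeNewlines are mine; the post does not show the real implementation:

// Strips Markdown header markers and triple quotes, per the sanitize lesson.
export function sanitizeText(text: string): string {
  return text
    .replace(/^#+\s*/gm, '')      // drop Markdown header markers (#, ##, ...)
    .replace(/```|"""|'''/g, '')  // drop triple quotes and code fences
    .trim();
}

// Escapes newlines so multi-line input collapses into a single prompt line.
export function escapeNewlines(text: string): string {
  return text.replace(/\n/g, '\\n');
}

// Usage: cleanse, escape, then inject into the prompt template.
const scene = escapeNewlines(sanitizeText('# Scene\nA news anchor in a busy newsroom'));
// -> 'Scene\\nA news anchor in a busy newsroom'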