Async generators in JavaScript provide an elegant way to handle Server-Sent Events (SSE), especially when dealing with token-by-token or chunked data streams from LLMs.
Let's explore how they help you handle real-time streaming with ease.
🔥 What is SSE?
SSE (Server-Sent Events) lets the server push text messages to the browser over a single long-lived HTTP connection. Each message is one or more `data:` lines, and a blank line marks the end of an event:

```
data: Hello

data: world!
```
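For context, here is a minimal sketch of what the producing side can look like, using Node's built-in `http` module (the port, route, and messages are made up for illustration):

```js
// Minimal SSE endpoint sketch using Node's built-in http module.
const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });

  // Each event is one or more "data:" lines followed by a blank line.
  res.write('data: Hello\n\n');
  res.write('data: world!\n\n');

  // Keep pushing a message every second until the client disconnects.
  const timer = setInterval(() => res.write(`data: tick ${Date.now()}\n\n`), 1000);
  req.on('close', () => clearInterval(timer));
}).listen(3000);
```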
SSE is great for:
- Real-time logs
- Chat apps
- Streaming LLM responses
Enter Async Generators
An `async function*` lets you:
- Read from a stream incrementally
- Use `await` while iterating
- Yield chunks as soon as they arrive
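As a toy illustration before the SSE-specific version (the values and delay are made up):

```js
// A toy async generator: awaits inside the loop, yields each value as soon as it is ready.
async function* countSlowly() {
  for (let i = 1; i <= 3; i++) {
    await new Promise((resolve) => setTimeout(resolve, 500)); // pretend we're waiting on I/O
    yield i;
  }
}

// Works at the top level of an ES module, or inside an async function.
for await (const n of countSlowly()) {
  console.log(n); // prints 1, 2, 3, one at a time
}
```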
✨ Example: Consuming SSE with an Async Generator
```js
async function* parseSSE(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;

    // Decode the incoming bytes and buffer them, since a chunk can end mid-line.
    buffer += decoder.decode(value, { stream: true });

    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the trailing partial line for the next read

    for (const line of lines) {
      if (line.startsWith('data: ')) {
        yield line.slice(6); // strip the "data: " prefix and yield the payload
      }
    }
  }
}
```
Usage:
```js
const response = await fetch('/api/stream', {
  method: 'POST',
  headers: { Accept: 'text/event-stream' },
});

for await (const chunk of parseSSE(response.body)) {
  console.log('Chunk:', chunk);
}
```
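Many LLM streaming APIs (OpenAI-style, for example) send one JSON object per `data:` line and finish with a `[DONE]` sentinel. A sketch of what the consuming loop can look like in that case; the field names are assumptions and vary by provider:

```js
// Assumes each SSE data payload is a JSON chunk; adjust field names for your provider.
let fullText = '';

for await (const chunk of parseSSE(response.body)) {
  if (chunk === '[DONE]') break; // common end-of-stream sentinel

  const parsed = JSON.parse(chunk);
  const token = parsed.choices?.[0]?.delta?.content ?? ''; // OpenAI-style shape (assumption)

  fullText += token;
  process.stdout.write(token); // render tokens as they arrive
}
```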
Benefits
- ✅ Clean syntax with `for await...of`
- ✅ Supports POST & custom headers (good for LLM APIs)
- ✅ Works in Node.js (native `fetch` since v18; use a polyfill on older versions)
- ✅ Custom parsing logic (see the extended sketch below)
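If you need more of the SSE format (named `event:` fields, multi-line `data:`, events separated by blank lines), the parser can be extended. A sketch under those assumptions, not a full spec implementation:

```js
// Groups "data:" lines into events and dispatches on the blank line, per the SSE format.
async function* parseSSEEvents(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = '';
  let event = { event: 'message', data: [] };

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    const lines = buffer.split('\n');
    buffer = lines.pop(); // keep the trailing partial line

    for (const raw of lines) {
      const line = raw.replace(/\r$/, ''); // tolerate CRLF line endings
      if (line === '') {
        // A blank line ends the current event.
        if (event.data.length) yield { event: event.event, data: event.data.join('\n') };
        event = { event: 'message', data: [] };
      } else if (line.startsWith('event:')) {
        event.event = line.slice(6).trim();
      } else if (line.startsWith('data:')) {
        event.data.push(line.slice(5).trimStart());
      }
    }
  }
}
```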
Summary
Async generators give you streaming superpowers with SSE:
- Handle LLM output in real-time
- Decode and react to each message chunk
- Maintain control over request method, headers, and stream parsing
Perfect for modern AI-powered apps using LLM APIs that stream!