RPC for Service (and other types of) Workers -- move that heavy computation off of your UI thread!
It also allows `localStorage` to be accessed within the worker code.

Install the package:

```sh
npm add swarpc
```
Also add a Standard-Schema-compliant validation library of your choosing:

```sh
# For example
npm add arktype
```
If you want to use the latest commit instead of a published version, you can either use the Git URL:

```sh
npm add git+https://github.com/gwennlbh/swarpc.git
```
Or clone the repository and point to the local directory (very useful to hack on sw&rpc while testing out your changes on a more substantial project):

```sh
mkdir -p vendored
git clone https://github.com/gwennlbh/swarpc.git vendored/swarpc
npm add file:vendored/swarpc
```
This works because `dist/` is committed to the repository (and kept up to date by a CI workflow).
We use ArkType in the following examples but, as stated above, any validation library is a-okay (provided that it is Standard Schema v1-compliant). Start by declaring your procedures in a shared module:
```ts
import type { ProceduresMap } from "swarpc";
import { type } from "arktype";

export const procedures = {
  searchIMDb: {
    // Input for the procedure
    input: type({ query: "string", "pageSize?": "number" }),
    // Shape of the progress updates sent while the procedure is running --
    // long computations are a first-class concern here. This pairs well
    // with, for example, the fetch-progress NPM package.
    progress: type({ transferred: "number", total: "number" }),
    // Output of a successful procedure call
    success: type({
      id: "string",
      primary_title: "string",
      genres: "string[]",
    }).array(),
  },
} as const satisfies ProceduresMap;
```
In your worker file:
```ts
import fetchProgress from "fetch-progress"
import { Server } from "swarpc"
import { procedures } from "./procedures.js"

// 1. Give yourself a server instance
const swarpc = Server(procedures)

// 2. Implement your procedures
swarpc.searchIMDb(async ({ query, pageSize = 10 }, onProgress) => {
  const queryParams = new URLSearchParams({
    page_size: pageSize.toString(),
    query,
  })

  return fetch(`https://rest.imdbapi.dev/v2/search/titles?${queryParams}`)
    .then(fetchProgress({ onProgress }))
    .then((response) => response.json())
    .then(({ titles }) => titles)
})

// ...

// 3. Start the event listener
swarpc.start(self)
```
Here's a Svelte example!
```svelte
<script>
  import { Client } from "swarpc"
  import { procedures } from "./procedures.js"

  const swarpc = Client(procedures)

  let query = $state("")
  let results = $state([])
  let progress = $state(0)
</script>

<search>
  <input type="text" bind:value={query} placeholder="Search IMDb" />
  <button
    onclick={async () => {
      results = await swarpc.searchIMDb({ query }, (p) => {
        progress = p.transferred / p.total
      })
    }}
  >
    Search
  </button>
</search>

{#if progress > 0 && progress < 1}
  <progress value={progress} max="1"></progress>
{/if}

<ul>
  {#each results as { id, primary_title, genres } (id)}
    <li>{primary_title} - {genres.join(", ")}</li>
  {/each}
</ul>
```
If you use SvelteKit, just name your service worker file `src/service-worker.ts`.
If you use any other (meta) framework, please contribute usage documentation here :)
Web Workers are preferred over service workers for heavy computations, since you can run multiple instances of them (see Configure parallelism).
If you use Vite, you can import files as Web Worker classes:

```ts
import { Client } from "swarpc";
import { procedures } from "$lib/off-thread/procedures.ts";
import OffThreadWorker from "$lib/off-thread/worker.ts?worker";

const client = Client(procedures, {
  worker: OffThreadWorker, // don't instantiate the class, sw&rpc does it
});
```
By default, when a worker is passed to the Client's options, the client will automatically spin up navigator.hardwareConcurrency worker instances and distribute requests among them. You can customize this behavior by setting the Client:options.nodes option to control the number of nodes (worker instances).
When Client:options.worker is not set, the client will use the Service worker (and thus only a single instance).
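To get a feel for what "distributing requests among nodes" means, here is a minimal round-robin dispatcher sketch. It is purely illustrative: `makeDispatcher` and `pickNode` are hypothetical names, and swarpc's actual scheduling strategy may differ.

```typescript
// Illustrative round-robin dispatch over a fixed pool of node IDs.
// Each call picks the next node in line, wrapping back to 0.
function makeDispatcher(nodeCount: number) {
  let next = 0;
  return function pickNode(): number {
    const node = next;
    next = (next + 1) % nodeCount;
    return node;
  };
}

const pickNode = makeDispatcher(4);
// Successive requests cycle through nodes 0, 1, 2, 3, 0, ...
```

With `nodes: 4`, a scheme like this keeps all four worker instances busy without any single one becoming a bottleneck.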
Use Client#(method name).broadcast to send the same request to all nodes at once. This method returns a Promise that resolves to an array of PromiseSettledResult (with an additional property, node, the ID of the node the request was sent to), one per node the request was sent to.
For example:
```ts
const client = Client(procedures, {
  worker: MyWorker,
  nodes: 4,
});

for (const result of await client.initDB.broadcast("localhost:5432")) {
  if (result.status === "rejected") {
    console.error(
      `Could not initialize database on node ${result.node}`,
      result.reason,
    );
  }
}
```
You also have a very convenient way to aggregate the results of all nodes, if you don't need to handle errors in a fine-grained way:
```ts
const userbase = await client.tableSize.broadcast
  .orThrow("users")
  .then((counts) => sum(counts))
  .catch((e) => {
    // e is an AggregateError with every failing node's error
    console.error("Could not get total user count:", e);
  });
```
Otherwise, you have access to a handful of convenience properties on the returned array, to help you narrow down what happened on each node:
```ts
async function userbase() {
  const counts = await client.tableSize.broadcast("users");

  if (counts.ko) {
    throw new Error(
      `All nodes failed to get table size: ${counts.failureSummary}`,
    );
  }

  return {
    exact: counts.ok,
    count:
      sum(counts.successes) +
      average(counts.successes) * counts.failures.length,
  };
}
```
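The `sum` and `average` helpers used in the broadcast examples are not part of swarpc; they are assumed to be small utilities you define yourself, along these lines:

```typescript
// Minimal numeric helpers assumed by the broadcast examples above.
function sum(numbers: number[]): number {
  return numbers.reduce((total, n) => total + n, 0);
}

function average(numbers: number[]): number {
  if (numbers.length === 0) return 0;
  return sum(numbers) / numbers.length;
}
```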
To make your procedures meaningfully cancelable, you have to make use of the AbortSignal API. This is passed as a third argument when implementing your procedures:
```ts
server.searchIMDb(async ({ query }, onProgress, { abortSignal }) => {
  // If you're doing heavy computation without fetch:
  // use `abortSignal?.throwIfAborted()` within hot loops and at key points
  for (...) {
    abortSignal?.throwIfAborted();
    ...
  }

  // When using fetch:
  await fetch(..., { signal: abortSignal })
})
```
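Outside of swarpc, this is just the standard `AbortController`/`AbortSignal` pattern. A self-contained sketch of cooperative cancellation in a hot loop (`crunch` is a made-up example function):

```typescript
// The loop checks the signal on each iteration and bails out with
// the abort reason as soon as the signal is aborted.
function crunch(iterations: number, signal?: AbortSignal): number {
  let acc = 0;
  for (let i = 0; i < iterations; i++) {
    signal?.throwIfAborted(); // throws signal.reason if aborted
    acc += i;
  }
  return acc;
}

const controller = new AbortController();
controller.abort(new Error("took too long"));
try {
  crunch(1_000_000, controller.signal);
} catch (error) {
  // error is the abort reason passed to controller.abort()
}
```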
Instead of calling `await client.myProcedure()` directly, call `client.myProcedure.cancelable()`. You'll get back an object with:

- `async cancel(reason)`: a function to cancel the request
- `request`: a Promise that resolves to the result of the procedure call. `await` it to wait for the request to finish.

Example:
```ts
// Normal call:
const result = await swarpc.searchIMDb({ query });

// Cancelable call:
const { request, cancel } = swarpc.searchIMDb.cancelable({ query });
setTimeout(() => cancel().then(() => console.warn("Took too long!!")), 5_000);
await request;
```
The "once" mode allows you to automatically cancel any previous ongoing call before running a new one. This is useful for scenarios like search-as-you-type, where you only care about the latest request.
Cancel any previous call of the same method:
```ts
// If any previous call of searchIMDb is ongoing, it gets cancelled beforehand
const result = await swarpc.searchIMDb.once({ query });
```
Cancel any previous call of the same method with the same key:
```ts
// If any previous call of searchIMDb with "foo" as the key is ongoing,
// it gets cancelled beforehand
const result = await swarpc.searchIMDb.onceBy("foo", { query });
```
This allows multiple concurrent calls with different keys:
```ts
// These two calls can run concurrently
const result1 = await swarpc.searchIMDb.onceBy("search-bar", {
  query: "action",
});
const result2 = await swarpc.searchIMDb.onceBy("sidebar", { query: "comedy" });
```
Cancel any ongoing call with the same global key, across all methods:
```ts
// Any call from ANY procedure with "global-search" key gets cancelled beforehand
const result = await swarpc.onceBy("global-search").searchIMDb({ query });
```
This is useful when you want to ensure only one operation of a certain type is running at a time, regardless of which procedure is being called.
You can combine "once" mode with broadcasting as well, just use .broadcast.once or .broadcast.onceBy instead of .once or .onceBy:
```ts
// Load the inference model on all nodes. If we call this again before the
// previous model finishes loading, the previous load requests get cancelled.
await swarpc.loadInferenceModel.broadcast.once({ url });
```
## localStorage for the Server to access

You might call third-party code that accesses `localStorage` from within your procedures.
Some workers don't have access to the browser's `localStorage`, so you'd get an error.

You can work around this by telling swarpc which `localStorage` items to define on the Server, and it'll create a polyfilled `localStorage` with your data.
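Conceptually, such a polyfill is just an in-memory object exposing the `Storage` interface, seeded with the entries you pass. A minimal sketch (hypothetical, not swarpc's actual implementation):

```typescript
// Minimal in-memory stand-in for localStorage, seeded from a plain object.
function makeLocalStoragePolyfill(seed: Record<string, string>) {
  const store = new Map(Object.entries(seed));
  return {
    getItem: (key: string) => store.get(key) ?? null,
    setItem: (key: string, value: string) => void store.set(key, value),
    removeItem: (key: string) => void store.delete(key),
    clear: () => store.clear(),
    get length() {
      return store.size;
    },
    key: (index: number) => [...store.keys()][index] ?? null,
  };
}

const polyfill = makeLocalStoragePolyfill({ PARAGLIDE_LOCALE: "en" });
polyfill.getItem("PARAGLIDE_LOCALE"); // "en"
```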
An example use case is using Paraglide, an i18n library, with the localStorage strategy:
```ts
// In the client
import { getLocale } from "./paraglide/runtime.js";

const swarpc = Client(procedures, {
  localStorage: {
    PARAGLIDE_LOCALE: getLocale(),
  },
});

await swarpc.myProcedure(1, 0);
```
```ts
// In the server
import { m } from "./paraglide/runtime.js";

const swarpc = Server(procedures);

swarpc.myProcedure(async (a, b) => {
  if (b === 0) throw new Error(m.cannot_divide_by_zero());
  return a / b;
});
```