Switching to a WebAssembly Runtime

Sebastian
Tue Sep 09 2025 ~8 minute read

TrailBase has been embedding a V8 JavaScript runtime for the last 10 months, allowing users to implement custom HTTP and job handlers. During this time we’ve experienced several issues and limitations - we’re therefore excited to announce that TrailBase is adopting wasmtime as a WebAssembly (WASM) runtime.

This is - and likely will remain - the biggest user-facing change to TrailBase 🙏. To ease migration, the plan is for releases v0.17 and v0.18 to be transitional, i.e. to support both runtimes. First, v0.17 makes the new WASM runtime available, allowing us to collect early feedback and address issues. We don’t expect major changes to the guest APIs. We put a lot of effort into making the first release usable and all examples have already been migrated. If everything goes to plan 🤞, v0.18 will then mark V8 as deprecated, to be removed in subsequent releases.

In the following, we’ll touch a bit more on the rationale, the opportunities, and what to watch out for.

Rationale

Before getting into the benefits of the new runtime, let’s quickly touch on some of the issues we had.

The V8 JavaScript engine is an amazing piece of engineering. However, it was never designed as an embeddable or backend-first solution. Its primary target remains the Chrome browser. Third-party vendors like Node.js and Deno have taken it upon themselves to extend V8 with APIs for accessing the file-system, sockets, etc. These extensions are themselves extensive and have sprawling dependencies. They’re also primarily designed to serve their own ecosystem rather than be embedded elsewhere. For reference, ~70% of the trail binary is the JavaScript runtime, even though we only link a subset of the Node.js APIs. Despite being well written, this is a huge chunk of unsafe code with a heightened level of scrutiny on it due to its pivotal role in browsers.

In practice, the current JS runtime isn’t serving anyone particularly well:

  • it’s not Node.js compatible,
  • it inflates binary size and the security surface,
  • newer Deno versions bundle a stale SQLite, causing linker issues, and
  • it prevents us from building “truly” static binaries with MUSL1.

By comparison, the new WASM runtime is a lot simpler, safer, and yet highly performant. It also supports WASI2, which greatly eases embedding and allows us to support guests in multiple languages.

Opportunities

Finally, let’s talk about some of the immediate and future benefits we can expect…

Rigorous State Isolation

V8’s isolates and JIT are expensive, thus they’re typically re-used across requests, opening the gates for accidental state sharing. There are specialized JIT-free “edge” runtimes like LLRT that specifically address this issue at the expense of performance/throughput. Wasmtime, on the other hand, makes it cheap and easy to spawn fully isolated instances per request3. We expect this to provide immediate safety benefits for users.
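
To illustrate why per-request instantiation is viable, here is a minimal, hypothetical host-side sketch using wasmtime’s core-module API (TrailBase’s actual integration targets the component model and differs): the module is compiled once, while each incoming request gets its own Store and Instance and therefore its own guest state.

use wasmtime::{Engine, Linker, Module, Store};

// Hypothetical per-request dispatch: compilation happens once up-front,
// while every request gets a fresh Store/Instance and thus fresh guest state.
fn handle_request(engine: &Engine, module: &Module, input: i32) -> anyhow::Result<i32> {
  // A new Store per request: nothing can leak over from previous requests.
  let mut store = Store::new(engine, ());
  let instance = Linker::new(engine).instantiate(&mut store, module)?;

  // "handle" is an illustrative export name, not part of TrailBase's guest API.
  let handler = instance.get_typed_func::<i32, i32>(&mut store, "handle")?;
  handler.call(&mut store, input)
}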

Performance

This one is a bit more mixed; however, combined with an efficient guest environment (e.g. Rust, C++, …), wasmtime can outperform V8 by a factor of almost 4. On the flip side, JS guests will be slower. More on that regression below.

Flexible Guest Language Choice

Many languages support compilation to WASM. This gives users more freedom in choosing and customizing their server-side environment. For now, TrailBase supports JS/TS and Rust out-of-the-box. We have plans to support more in the future. Independently, and maybe more importantly, WASI makes it straightforward to support custom guests.

Moreover, different endpoints can be implemented in different WASM components and thus different languages, allowing you to optimize performance as you go. For example, an expensive, high-QPS endpoint could be rewritten in Rust, which could yield 10x-100x performance gains.

Better I/O Sandboxing

Previously, isolates were given untethered I/O access, which, together with a very dynamic guest language, can pose a security risk. I/O is now limited to read-only file access under an explicitly provided sandbox root (--runtime-root-fs). We’re planning to extend I/O capabilities over time on a per-need basis. If you’re missing anything, let us know.
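
To make the sandboxing more concrete, here is a hedged, guest-side Rust sketch. It assumes the directory passed via --runtime-root-fs is exposed to the guest through WASI preopens and that a config.json happens to live there; both the file name and the exact mount point are illustrative assumptions, not documented behavior.

use std::fs;

// Illustrative only: "config.json" and its location under the sandbox root
// (--runtime-root-fs) are assumptions made for this sketch.
fn read_config() -> std::io::Result<String> {
  // Read-only access to files under the sandbox root works as usual.
  let contents = fs::read_to_string("config.json")?;

  // A write such as fs::write("config.json", "...") should be rejected,
  // since the sandbox root is exposed read-only.
  Ok(contents)
}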

The new integration also fixes timers, such as setTimeout and setInterval, which were unreliable in our previous V8 integration.

Less Code

As mentioned before, switching off V8 removes a lot of high-scrutiny code, cuts binary size roughly in half and lets us build truly static binaries with MUSL.

Rethinking Composition & Licensing Model

The increased flexibility and performance provided by the new WASM runtime opens up a path to making it the singular entry-point for extending TrailBase (including SQLite extensions), as opposed to framework use-cases. In turn, this may allow us to adopt a more popular copyleft license without imposing obligations on your business-logic.

Regressions

While WASM can be handily faster than V8 - as seen above - JavaScript in particular loads and runs significantly slower than on a highly specialized and optimized runtime like V8. What may feel like a step backward from a JS-centric point of view also provides opportunities to optimize individual endpoints in different languages based on specific needs.

In practice, JS is probably one of the least-efficient compile-to-WASM languages. Instead of emitting WebAssembly directly, current build flows bundle an interpreter like SpiderMonkey to accommodate JS’ dynamic nature, e.g. eval('/* ... */')4.

Concretely, using JCO with SpiderMonkey and weval is about as fast as Goja - PocketBase’s JS interpreter - but about 40x slower than V8. On the other hand, Rust compiled to WASM is almost 4x faster than V8, with more predictable latency and a lower resource footprint. A benefit of the new runtime integration is that different endpoints can be implemented in different languages, providing extra flexibility and potential to optimize.

Migration

The first difference you’ll encounter is the need for a build-step: JS/TS -> WASM5. For now, we recommend copying the template in examples/wasm-guest-ts or examples/wasm-guest-js, depending on whether you prefer TypeScript or JavaScript. You can then simply run pnpm install && pnpm build to build the WASM component. For TrailBase to pick up your *.wasm components, they need to be placed in <traildepot>/wasm/.

The second big difference you’ll notice right away is that the APIs for registering endpoints had to change to work in the context of the short-lived and isolated runtime instances. Previously we were relying on global state for routing. To avoid re-initialization on every request and for consistency with guest languages that do not support eager initialization of globals, we’re now using module exports.

TypeScript endpoint before:

addRoute(
  "GET",
  "/test",
  stringHandler(async (req: StringRequestType) => {
    const uri: ParsedPath = parsePath(req.uri);
    const table = uri.query.get("table");
    if (table) {
      const rows = await query(`SELECT COUNT(*) FROM "${table}"`, []);
      return `entries: ${rows[0][0]}`;
    }
    return `test: ${req.uri}`;
  }),
);

and after:

export default defineConfig({
  httpHandlers: [
    HttpHandler.get("/test", async (req: Request): Promise<string> => {
      const table = req.getQueryParam("table");
      if (table) {
        const rows = await query(`SELECT COUNT(*) FROM "${table}"`, []);
        return `entries: ${rows[0][0]}`;
      }
      return `test: ${req.url()}`;
    }),
  ],
});

Alternatively in Rust:

use trailbase_wasm::db::{query, Value};
use trailbase_wasm::http::{HttpError, HttpRoute, StatusCode, routing};
use trailbase_wasm::{Guest, export};

struct Endpoints;

impl Guest for Endpoints {
  fn http_handlers() -> Vec<HttpRoute> {
    return vec![
      routing::get("/test", async |req| {
        let Some(table) = req.query_param("table") else {
          return Ok(format!("test: {:?}", req.url()));
        };

        let rows = query(format!("SELECT COUNT(*) FROM '{table}'"), [])
          .await
          .map_err(|err| HttpError::message(StatusCode::INTERNAL_SERVER_ERROR, err))?;

        return Ok(format!("entries: {:?}", rows[0][0]));
      }),
    ];
  }
}

export!(Endpoints);

See /examples/wasm-guest-(js|rust|ts) for further examples while we continue to improve the documentation.

Next-Steps

First and foremost, we’d love to hear from you. We’d like to make the transition as smooth as possible and the WASM runtime best-in-class 🙏.

Once the new runtime integration has seen more mileage and unforeseen surprises have been worked out, we’d like to sunset V8 expeditiously. This will provide immediate benefits in terms of portability, security, build-times and binary sizes.

From that point on we plan to invest heavily into making the integration the best we can. With the previous V8 integration, given its rough edges and unstable APIs, we were hesitant and limited that effort. The plan is to support a wider range of extension points and guest languages, thus covering more use-cases and making TrailBase suitable for a wider range of developers. If you think there’s any language that would be particularly valuable, e.g. due to its unique and important ecosystem, let us know. When WASIp3 becomes available, hopefully in the near future, we’re also planning to transparently upgrade, making asynchronous interactions between host and guest more of a first-class citizen.

Thank you for making it this far and your time 🙏.


Footnotes

  1. GLIBC static binaries aren’t really static.

  2. A system to express cross-component interfaces for WASM in a language-agnostic manner. It’s like gRPC but over FFI, i.e. no I/O, thus allowing both synchronous and asynchronous interactions.

  3. For state sharing between requests you should rely on SQLite or the KVStore. Note that even with long-lived V8 isolates this was effectively already the case, since state was only shared coincidentally within the same isolate, i.e. subsequent requests may or may not have been able to see that state.

  4. That said, bundling the interpreter with static input unlocks some optimizations, e.g. Futamura projection using weval.

  5. A long-standing feature request for us has been to support hot-restart when components change. We’re still planning to get there. A separate watcher process would re-build the WASM component and signal the trail binary to reload it.