
Record APIs

TrailBase’s RESTful CRUD Record APIs allow you to access your TABLEs and VIEWs in an easy and type-safe manner.

  • Tables and views need to be STRICTly [1] typed to guarantee type-safety all the way from your records, via JSON schema, to your client-side language bindings [2].
  • They need to have a sequential primary key column to allow for stable sorting and thus efficient cursor-based pagination. Either an explicit INTEGER or UUIDv7 PRIMARY KEY column will do; such a column may also be a FOREIGN KEY.
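
For illustration, a minimal table satisfying both requirements could look like the following sketch (the column names are chosen to match the movie-listing example further down and are not prescribed by TrailBase):

-- A STRICT table with an INTEGER primary key, which provides the
-- stable ordering needed for cursor-based pagination.
CREATE TABLE movies (
  id          INTEGER PRIMARY KEY,
  title       TEXT NOT NULL,
  rank        INTEGER NOT NULL DEFAULT 0,
  watch_time  INTEGER NOT NULL,
  description TEXT
) STRICT;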

Configuration

Record APIs can be configured via the admin dashboard or in TrailBase’s configuration file. Not all features are exposed via the UI yet, e.g. exposing the same table or view multiple times as different API endpoints. By editing the configuration file directly you can set up as many endpoints as you like, which allows for some extra flexibility around permissions and visibility.

An example API setup for managing user avatars:

record_apis: [
  {
    name: "_user_avatar"
    table_name: "_user_avatar"
    conflict_resolution: REPLACE
    autofill_missing_user_id_columns: true
    acl_world: [READ]
    acl_authenticated: [CREATE, READ, UPDATE, DELETE]
    create_access_rule: "_REQ_.user IS NULL OR _REQ_.user = _USER_.id"
    update_access_rule: "_ROW_.user = _USER_.id"
    delete_access_rule: "_ROW_.user = _USER_.id"
  }
]

A quick explanation:

  • The name needs to be unique. It’s what is used to access the API via http://<host>/api/records/v1/<name>.
  • table_name references the table or view that is being exposed.
  • conflict_resolution declares what should happen if a newly created record is conflicting with an existing one.
  • autofill_missing_user_id_columns lets you omit fields for columns with a foreign key relationship to _user(id). Missing fields will then be filled with the id of the authenticated user. In most cases this should probably be off; it is only useful if you cannot explicitly provide the user id yourself, e.g. in a static HTML form.
  • acl_world and acl_authenticated define that anyone can read avatars but only authenticated users can modify them. The access_rules further narrow mutations to records where the user column (or the request field for insertions) matches the authenticated user. In other words, user X cannot modify user Y’s avatar.

Permissions

Access can be controlled through a combination of a simple ACL-based system (a matrix of who can do what) and custom SQL access rules of the form f(req, user, row) -> bool. Generally, the ACLs are checked first and the access rules are then evaluated when present.

For example, to validate that the requestor provided a secret key and is member of a group “mygroup”:

(_REQ_.secret = 'foo' AND EXISTS(
  SELECT 1 FROM groups
  WHERE
    groups.member = _USER_.id
    AND groups.name = 'mygroup'
))

  • _REQ_ is an injected sub-query containing the request fields. It is available in access rules for CREATE and UPDATE operations.
  • Similarly, _ROW_ is a sub-query of the target record. It is available in access rules for READ, UPDATE, and DELETE operations.
  • Lastly, _USER_.id references the id of the currently authenticated user, or NULL if the requestor is not authenticated.

Independently, you can use VIEWs to filter which rows and columns of your TABLEs should be accessible.

Building Access Groups and Capabilities

As hinted at by the example above, the SQL access rules can be used to build higher-level access protection such as group ACLs or capabilities. What makes the most sense in your case is very application-dependent. The <repo>/examples/blog example uses an “editor” group to control who can write blog posts.
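
To make this concrete, a group-membership table compatible with the “mygroup” rule from the Permissions section above might look like the following sketch (the table layout and the BLOB-typed user id are assumptions, not a schema prescribed by TrailBase):

-- Illustrative group-membership table: one row per (group, member) pair.
CREATE TABLE groups (
  id     INTEGER PRIMARY KEY,
  name   TEXT NOT NULL,
  member BLOB NOT NULL REFERENCES _user(id)
) STRICT;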

Somewhat on a tangent and pretty meta, group and capability tables can themselves be exposed via Record APIs. This can be used to programmatically manage permissions, e.g. for building a moderation dashboard. When exposing authorization primitives, make sure the permissions are appropriately tight to avoid permission escalations.

Write-only columns

Columns with names starting with an underscore can be written on insert or update but are hidden on reads. This is meant as a convenient convention for internal data fields, e.g. hiding the record owner in an otherwise public data set or hiding a user’s internal credit rating from their profile. A similar effect could otherwise be achieved by only allowing create/update operations on the table and exposing a subset of columns through a readable VIEW. However, this would lead to different API endpoints for reads vs. creates/updates.
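
A minimal sketch of such a table (the table and column names are illustrative): the _owner column can be set when creating or updating a record but, per the convention above, is hidden from reads.

CREATE TABLE articles (
  id     INTEGER PRIMARY KEY,
  body   TEXT NOT NULL,
  -- Write-only by convention due to the leading underscore.
  _owner BLOB REFERENCES _user(id)
) STRICT;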

Access

After setting up your API, TrailBase will expose the following main endpoints [3]:

  • Create: POST /api/records/v1/<api_name>
  • Read: GET /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>
  • Update: PATCH /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>
  • Delete: DELETE /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>
  • List: GET /api/records/v1/<api_name>?<params>
  • Change Subscriptions:
    GET /api/records/v1/<api_name>/subscribe/[*|<url-safe_b64_uuid_or_int>]
  • Schema: GET /api/records/v1/<api_name>/schema

All of the endpoints accept requests that are JSON encoded, url-encoded, or multipart/form-data encoded, which makes them accessible via rich client-side applications, progressive web apps, and static HTML forms alike.
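
Since plain JSON is accepted, the endpoints can also be called without the client library. The following sketch posts to the create endpoint of the simple_strict_table API used in the examples below; the bearer-token Authorization header is an assumption and may need to be adapted to your auth setup:

// Hypothetical helper: create a record via a plain fetch call.
export const createViaFetch = async (
  site: string,
  authToken: string,
): Promise<Response> =>
  await fetch(`${site}/api/records/v1/simple_strict_table`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // Assumption: bearer-token auth.
      Authorization: `Bearer ${authToken}`,
    },
    body: JSON.stringify({ text_not_null: "test" }),
  });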

Create

The create endpoint lets you insert new records and potentially overwrite existing ones, depending on the configured conflict resolution strategy.

import { Client } from "trailbase";

export const create = async (client: Client): Promise<string | number> =>
  await client.records("simple_strict_table").create({ text_not_null: "test" });

Read

The read endpoint lets you read specific records given their id.

import { Client } from "trailbase";

export const read = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").read(id);

Update

The update endpoint lets you modify, i.e. partially update, existing records given their id.

import { Client } from "trailbase";

export const update = async (client: Client, id: string | number, record: object) =>
  await client.records("simple_strict_table").update(id, record);

Delete

The delete endpoint lets you remove a record given its id.

import { Client } from "trailbase";

export const remove = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").delete(id);

List: Filter, Sort and Paginate

Using the GET /api/records/v1/<api_name>?<params> endpoint and given sufficient permissions, one can query records based on the given read_access_rule and query parameters.

Parameters:

  • Pagination can be controlled via the following query parameters:
    • limit=N, with a built-in hard limit of 1024 to avoid abuse.
    • cursor=<primary key> to offset into results.
    • count=true will yield a total_count of records in the result. This can be used together with limit and cursor to build pagination UIs.
  • Ordering can be controlled using the order=[[+-]?<column_name>]+ parameter, e.g. order=created,-rank, which sorts records first by their created column in ascending order (same as "+") and then by their rank column in descending order due to the minus prefix.
  • Filtering can be controlled by passing one or more <column_name>[op]=<value> parameters. For example, revenue[gt]=0 would list records with a positive revenue only. Supported operators are:
    • equal: the empty operator, e.g. ?success=TRUE.
    • not|ne: not equal
    • gte: greater-than-equal
    • gt: greater-than
    • lte: less-than-equal
    • lt: less-than
    • like: SQL LIKE operator
    • re: SQL REGEXP operator
  • Parent records, i.e. records pointed to by foreign key columns, can be expanded using the ?expand=<col0>,<col1> parameter, if the respective columns were allow-listed in the API configuration.

For example, to query the top-3 ranked movies with a watch time below 2 hours and “love” in their description:

import { Client, type ListResponse } from "trailbase";

export const list = async (client: Client): Promise<ListResponse<object>> =>
  await client.records("movies").list({
    pagination: {
      limit: 3,
    },
    order: ["rank"],
    filters: ["watch_time[lt]=120", "description[like]=%love%"],
  });
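
Pagination itself can be driven entirely through the query parameters listed above. The following raw-fetch sketch (a hypothetical helper, not part of the client library) requests one page of three movies together with a total count, passing a cursor (a primary key to offset into the results) when paging beyond the first page:

// Hypothetical helper: fetch one page of the movies API using the
// limit/cursor/count/order query parameters described above.
export const listPage = async (
  site: string,
  cursor?: string,
): Promise<Response> => {
  const params = new URLSearchParams({
    limit: "3",
    count: "true",
    order: "rank",
  });
  if (cursor !== undefined) {
    // cursor=<primary key> offsets into the results.
    params.set("cursor", cursor);
  }
  return await fetch(`${site}/api/records/v1/movies?${params}`);
};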

Subscribe

The streaming subscribe endpoint lets you listen for changes to the table backing an API or to specific records given their ids. Change events can be insertions, updates, and deletions.

import { Client } from "trailbase";

export const subscribe = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").subscribe(id);

export const subscribeAll = async (client: Client) =>
  await client.records("simple_strict_table").subscribe("*");

Schema

The schema endpoint allows reading an API’s JSON schema definition. This can be useful for driving external code generation or for introspection in general.
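
For example, a raw-fetch sketch for reading the schema of the simple_strict_table API used above:

// Fetch the JSON schema of a Record API, e.g. to drive code generation.
export const fetchSchema = async (site: string): Promise<object> => {
  const response = await fetch(
    `${site}/api/records/v1/simple_strict_table/schema`,
  );
  return await response.json();
};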

File Uploads

Record APIs can also support file uploads and downloads. There’s some special handling in place so that only metadata is stored in the underlying table while the actual files are kept in an object store.

By adding a TEXT column with a CHECK(jsonschema('std.FileUpload')) constraint to your table definition, you instruct TrailBase to store file metadata as defined by the “std.FileUpload” JSON schema while keeping the contents in a separate object store. Files can then be uploaded by sending their contents as part of your JSON requests or multipart/form-data POST requests. Downloading files is slightly different, since reading the column through record APIs will only yield the metadata. There’s a dedicated GET API endpoint for file downloads:

/api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>/file/<column_name>
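
For example, a client-side download could look like the following sketch (the _user_avatar API is taken from the configuration example above; the avatar column name is illustrative):

// Hypothetical helper: download the contents of a file column via the
// dedicated file endpoint.
export const downloadAvatar = async (
  site: string,
  recordId: string,
): Promise<Blob> => {
  const response = await fetch(
    `${site}/api/records/v1/_user_avatar/${recordId}/file/avatar`,
  );
  return await response.blob();
};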

S3 Integration

By default, TrailBase keeps the object store on the local file system under <data-dir>/uploads. Alternatively, an S3 bucket can be configured via the configuration file; this is not yet accessible through the admin dashboard. If you need support for other storage backends, let us know.

Custom JSON Schemas

Akin to std.FileUpload above, you can register your own nested JSON schemas to be used in column CHECKs. For now, the dashboard only allows viewing all registered JSON schemas; however, you can register schemas via the configuration file:

schemas: [
  {
    name: "simple_schema"
    schema:
      '{'
      '  "type": "object",'
      '  "properties": {'
      '    "name": { "type": "string" },'
      '    "obj": { "type": "object" }'
      '  },'
      '  "required": ["name"]'
      '}'
  }
]

Once registered, schemas can be added as column constraints:

CREATE TABLE test (
  simple TEXT CHECK(jsonschema('simple_schema')),
  -- ...
) STRICT;

When generating new client-side bindings for a table or view with such nested schemas, the nested definitions will be included, ensuring type-safety all the way to the client-side APIs.


Footnotes

  1. By default, SQLite tables are not strictly typed; column types merely express type affinities. Unless a table is explicitly created as STRICT, its columns can store values of any type.

  2. Views are trickier to type strictly, since they are the result of an arbitrary SELECT statement. TrailBase parses the CREATE VIEW statement and only allows Record APIs on top of a conservative subset for which it can infer the column types. Over time, TrailBase will be able to support larger subsets. Let us know if you have provably strictly-typed queries that you think should be supported but aren’t.

  3. There are also a few other endpoints, e.g. for downloading files, which are described further down in the docs.