Record APIs
TrailBase’s RESTful CRUD Record APIs allow you to access your `TABLE`s and `VIEW`s in an easy and type-safe manner.
- Tables and views need to be `STRICT`ly¹ typed to guarantee type-safety all the way from your records, via JSON schema, to your client-side language bindings².
- They need to have a sequential primary key column to allow for stable sorting and thus efficient cursor-based pagination. Either an explicit `INTEGER` or UUIDv7 `PRIMARY KEY` will do, including `FOREIGN KEY` columns (see the example below).
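For illustration, a minimal sketch of a table satisfying both requirements; the `movies` schema is hypothetical and merely mirrors the columns used in the list example further down:

```sql
-- A STRICT table with a sequential INTEGER primary key,
-- making it eligible for exposure as a Record API.
CREATE TABLE movies (
    id          INTEGER PRIMARY KEY,
    title       TEXT NOT NULL,
    rank        INTEGER NOT NULL,
    watch_time  INTEGER NOT NULL,  -- minutes
    description TEXT
) STRICT;
```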
Configuration
Record APIs can be configured via the admin dashboard or in TrailBase’s configuration file. Not all features are exposed via the UI yet, e.g. exposing the same table or view multiple times as different API endpoints. By editing the configuration file directly, you can set up as many APIs as you like, allowing for some extra flexibility around permissions and visibility.
An example API setup for managing user avatars:
```
record_apis: [
  {
    name: "_user_avatar"
    table_name: "_user_avatar"
    conflict_resolution: REPLACE
    autofill_missing_user_id_columns: true
    acl_world: [READ]
    acl_authenticated: [CREATE, READ, UPDATE, DELETE]
    create_access_rule: "_REQ_.user IS NULL OR _REQ_.user = _USER_.id"
    update_access_rule: "_ROW_.user = _USER_.id"
    delete_access_rule: "_ROW_.user = _USER_.id"
  }
]
```
A quick explanation:
- The `name` needs to be unique. It is what is used to access the API via `http://<host>/api/records/v1/<name>`.
- `table_name` references the table or view that is being exposed.
- `conflict_resolution` declares what should happen if a newly created record conflicts with an existing one.
- `autofill_missing_user_id_columns` lets you omit fields for columns with a foreign key relationship to `_user(id)`. The field will then be filled in based on the credentials of the authenticated user. In most cases this should probably be off; it is only useful if you cannot explicitly provide the user id yourself, e.g. in a static HTML form.
- `acl_world` and `acl_authenticated` define that anyone can read avatars but only authenticated users can modify them.
- The following `access_rules` further narrow mutations to records where the `user` column (or request field for insertions) matches the authenticated user. In other words, user X cannot modify user Y’s avatar.
Permissions
Access can be controlled through a combination of a simple ACL-based system (a
matrix of who and what) and custom SQL access rules of the nature
`f(req, user, row) -> bool`.
Generally, the ACLs are checked first and then the access rules are evaluated
when present.
For example, to validate that the requestor provided a secret key and is a member of a group “mygroup”:
```sql
(_REQ_.secret = 'foo' AND EXISTS(
  SELECT 1 FROM groups
  WHERE
    groups.member = _USER_.id
    AND groups.name = 'mygroup'
))
```
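Note that the rule presumes a user-defined `groups` table. A minimal sketch of what such a table could look like (the layout and the `BLOB` user id type are assumptions for this example, not something TrailBase prescribes):

```sql
-- Hypothetical group-membership table referenced by the access rule above.
CREATE TABLE groups (
    id     INTEGER PRIMARY KEY,
    name   TEXT NOT NULL,
    -- Assumes user ids are stored as BLOB UUIDs referencing `_user(id)`.
    member BLOB NOT NULL REFERENCES _user(id)
) STRICT;
```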
- `_REQ_` is an injected sub-query containing the request fields. It is available in access rules for `CREATE` and `UPDATE` operations.
- Similarly, `_ROW_` is a sub-query of the target record. It is available in access rules for `READ`, `UPDATE`, and `DELETE` operations.
- Lastly, `_USER_.id` references the id of the currently authenticated user and `NULL` otherwise.
Independently, you can use `VIEW`s to filter which rows and columns of your `TABLE`s should be accessible.
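For example, a sketch of a view exposing only a subset of the rows and columns of the hypothetical `movies` table from above:

```sql
-- Only short movies are visible and only a subset of columns is exposed.
CREATE VIEW short_movies AS
SELECT id, title, rank, watch_time
FROM movies
WHERE watch_time < 120;
```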
Building Access Groups and Capabilities
As hinted at by the example above, the SQL access rules can be used to
build higher-level access protection such as group ACLs or capabilities.
What makes the most sense in your case is very application dependent.
The `<repo>/examples/blog` has an “editor” group to control who can write blog
posts.
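For illustration, a create access rule restricting writes to members of a hypothetical “editors” group, reusing the `groups` table sketched above, could look roughly like:

```sql
-- Only members of the 'editors' group may create records.
EXISTS(
  SELECT 1 FROM groups
  WHERE
    groups.member = _USER_.id
    AND groups.name = 'editors'
)
```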
Somewhat on a tangent and pretty meta, group and capability tables can themselves be exposed via Record APIs. This can be used to programmatically manage permissions, e.g. for building a moderation dashboard. When exposing authorization primitives, make sure the permissions are appropriately tight to avoid permission escalations.
Write-only columns
Columns with names starting with an underscore can be written on insert or
update but are hidden on reads. This is meant as a convenient convention to
allow for internal data fields, e.g. hiding the record owner in an otherwise public
data set or hiding a user’s internal credit rating from their profile. A
similar effect could otherwise be achieved by only allowing create/update
operations on the table and exposing a subset of columns through a readable
`VIEW`. However, this would lead to different API endpoints for reads vs
creates/updates.
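A minimal sketch of such a table, with hypothetical names, where the owner column is hidden from readers:

```sql
-- `_owner` can be set on insert/update but is omitted from read and list responses.
CREATE TABLE public_posts (
    id     INTEGER PRIMARY KEY,
    body   TEXT NOT NULL,
    -- Assumes user ids are BLOB UUIDs referencing `_user(id)`.
    _owner BLOB REFERENCES _user(id)
) STRICT;
```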
Access
After setting up your API, TrailBase will expose the following main endpoints³:

- Create: `POST /api/records/v1/<api_name>`
- Read: `GET /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>`
- Update: `PATCH /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>`
- Delete: `DELETE /api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>`
- List: `GET /api/records/v1/<api_name>?<params>`
- Change Subscriptions: `GET /api/records/v1/<api_name>/subscribe/[*|<url-safe_b64_uuid_or_int>]`
- Schema: `GET /api/records/v1/<api_name>/schema`
All of the endpoints accept requests that are JSON encoded, url-encoded, or
`multipart/form-data` encoded, which makes them accessible via rich client-side
applications, progressive web apps, and static HTML forms alike.
Create
The create endpoint lets you insert new records and, depending on the configured conflict resolution strategy, potentially overwrite existing ones.
```typescript
import { Client } from "trailbase";

export const create = async (client: Client): Promise<string | number> =>
  await client.records("simple_strict_table").create({ text_not_null: "test" });
```

```dart
import 'package:trailbase/trailbase.dart';

Future<RecordId> create(Client client) async =>
    await client.records('simple_strict_table').create({'text_not_null': 'test'});
```

```csharp
using TrailBase;
using System.Text.Json.Nodes;

public partial class Examples {
  public static async Task<RecordId> Create(Client client) =>
    await client.Records("simple_strict_table").Create(
      new JsonObject { ["text_not_null"] = "test" });
}
```

```rust
use trailbase_client::Client;

pub async fn create(client: &Client) -> anyhow::Result<String> {
  Ok(
    client
      .records("simple_strict_table")
      .create(serde_json::json!({
        "text_not_null": "test",
      }))
      .await?,
  )
}
```

```bash
curl \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${AUTH_TOKEN}" \
  --request POST \
  --data '{"text_not_null": "test"}' \
  http://localhost:4000/api/records/v1/simple_strict_table
```
Read
The read endpoint lets you read specific records given their id.
```typescript
import { Client } from "trailbase";

export const read = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").read(id);
```

```dart
import 'package:trailbase/trailbase.dart';

Future<Map<String, dynamic>> read(Client client, RecordId id) async =>
    await client.records('simple_strict_table').read(id);
```

```csharp
using TrailBase;
using System.Text.Json.Nodes;

public partial class Examples {
  public static async Task<JsonNode?> Read(Client client, RecordId id) =>
    await client.Records("simple_strict_table").Read<JsonNode>(id);
}
```

```rust
use trailbase_client::{Client, RecordId};

pub async fn read(client: &Client, id: impl RecordId<'_>) -> anyhow::Result<serde_json::Value> {
  Ok(client.records("simple_strict_table").read(id).await?)
}
```

```bash
curl \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${AUTH_TOKEN}" \
  http://localhost:4000/api/records/v1/simple_strict_table/${RECORD_ID}
```
Update
The update endpoint lets you modify, i.e. partially update, existing records given their id.
```typescript
import { Client } from "trailbase";

export const update = async (client: Client, id: string | number, record: object) =>
  await client.records("simple_strict_table").update(id, record);
```

```dart
import 'package:trailbase/trailbase.dart';

Future<void> update(Client client, RecordId id) async =>
    await client
        .records('simple_strict_table')
        .update(id, {'text_not_null': 'updated'});
```

```csharp
using TrailBase;
using System.Text.Json.Nodes;

public partial class Examples {
  public static async Task Update(Client client, RecordId id) =>
    await client.Records("simple_strict_table").Update(
      id, new JsonObject { ["text_not_null"] = "updated" });
}
```

```rust
use trailbase_client::{Client, RecordId};

pub async fn update(client: &Client, id: impl RecordId<'_>) -> anyhow::Result<()> {
  Ok(
    client
      .records("simple_strict_table")
      .update(
        id,
        serde_json::json!({
          "text_not_null": "updated",
        }),
      )
      .await?,
  )
}
```

```bash
curl \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${AUTH_TOKEN}" \
  --request PATCH \
  --data '{"text_not_null": "updated"}' \
  http://localhost:4000/api/records/v1/simple_strict_table/${RECORD_ID}
```
Delete
The delete endpoint lets you remove a record given its id.

```typescript
import { Client } from "trailbase";

export const remove = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").delete(id);
```

```dart
import 'package:trailbase/trailbase.dart';

Future<void> delete(Client client, RecordId id) async =>
    await client.records('simple_strict_table').delete(id);
```

```csharp
using TrailBase;

public partial class Examples {
  public static async Task Delete(Client client, RecordId id) =>
    await client.Records("simple_strict_table").Delete(id);
}
```

```rust
use trailbase_client::{Client, RecordId};

pub async fn delete(client: &Client, id: impl RecordId<'_>) -> anyhow::Result<()> {
  Ok(client.records("simple_strict_table").delete(id).await?)
}
```

```bash
curl \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${AUTH_TOKEN}" \
  --request DELETE \
  http://localhost:4000/api/records/v1/simple_strict_table/${RECORD_ID}
```
List: Filter, Sort and Paginate
Using the `GET /api/records/v1/<api_name>?<params>` endpoint and given
sufficient permissions, one can query records based on the given `read_access_rule`
and query parameters.
Parameters:
- Pagination can be controlled via the following query parameters:
  - `limit=N`, with a built-in hard limit of 1024 to avoid abuse.
  - `cursor=<primary key>` to offset into the results.
  - `count=true` will yield a `total_count` of records in the result. This can be used together with `limit` and `cursor` to build pagination UIs.
- Ordering can be controlled using the `order=[[+-]?<column_name>]+` parameter, e.g. `order=created,-rank`, which sorts records based on their `created` column in ascending order first (same as ”+”) and subsequently in descending order by their `rank` column due to the minus prefix.
- Filtering can be controlled by passing one or more `<column_name>[op]=<value>` parameters. For example, `revenue[gt]=0` would list only records with a positive `revenue`. Supported operators are:
  - equal is the empty operator, e.g. `?success=TRUE`.
  - `not`|`ne`: not equal
  - `gte`: greater-than-equal
  - `gt`: greater-than
  - `lte`: less-than-equal
  - `lt`: less-than
  - `like`: SQL `LIKE` operator
  - `re`: SQL `REGEXP` operator
- Parent records, i.e. records pointed to by foreign key columns, can be expanded using the `?expand=<col0>,<col1>,...` parameter, if the respective columns were allow-listed in the API configuration.
For example, to query the top-3 ranked movies with a watch time below 2 hours and “love” in their description:
```typescript
import { Client, type ListResponse } from "trailbase";

export const list = async (client: Client): Promise<ListResponse<object>> =>
  await client.records("movies").list({
    pagination: {
      limit: 3,
    },
    order: ["rank"],
    filters: ["watch_time[lt]=120", "description[like]=%love%"],
  });
```

```dart
import 'package:trailbase/trailbase.dart';

Future<ListResponse> list(Client client) async =>
    await client.records('movies').list(
      pagination: Pagination(limit: 3),
      order: ['rank'],
      filters: ['watch_time[lt]=120', 'description[like]=%love%'],
    );
```

```csharp
using TrailBase;
using System.Text.Json.Nodes;

public partial class Examples {
  public static async Task<ListResponse<JsonObject>> List(Client client) =>
    await client.Records("movies").List(
      pagination: new Pagination(limit: 3),
      order: ["rank"],
      filters: ["watch_time[lt]=120", "description[like]=%love%"]);
}
```

```rust
use trailbase_client::{Client, ListArguments, ListResponse, Pagination};

pub async fn list(client: &Client) -> anyhow::Result<ListResponse<serde_json::Value>> {
  Ok(
    client
      .records("movies")
      .list(ListArguments {
        pagination: Pagination {
          limit: Some(3),
          cursor: None,
        },
        order: Some(&["rank"]),
        filters: Some(&["watch_time[lt]=120", "description[like]=%love%"]),
        ..Default::default()
      })
      .await?,
  )
}
```

```bash
curl --globoff \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer ${AUTH_TOKEN}" \
  --request GET \
  'http://localhost:4000/api/records/v1/movies?limit=3&order=rank&watch_time[lt]=120&description[like]=%love%'
```
Subscribe
The streaming subscribe endpoint lets you listen for changes to the tables backing an API or to specific records given their id. Change events can be insertions, updates, and deletions.
```typescript
import { Client } from "trailbase";

export const subscribe = async (client: Client, id: string | number) =>
  await client.records("simple_strict_table").subscribe(id);

export const subscribeAll = async (client: Client) =>
  await client.records("simple_strict_table").subscribe("*");
```

```dart
import 'dart:async';
import 'package:trailbase/trailbase.dart';

Future<Stream<Event>> subscribe(Client client, RecordId id) async =>
    await client.records('simple_strict_table').subscribe(id);

Future<Stream<Event>> subscribeAll(Client client) async =>
    await client.records('simple_strict_table').subscribeAll();
```

```csharp
using TrailBase;

public partial class Examples {
  public static async Task<IAsyncEnumerable<Event>> Subscribe(Client client, RecordId id) =>
    await client.Records("simple_strict_table").Subscribe(id);

  public static async Task<IAsyncEnumerable<Event>> SubscribeAll(Client client) =>
    await client.Records("simple_strict_table").SubscribeAll();
}
```

```rust
use trailbase_client::{Client, DbEvent, RecordId, Stream};

pub async fn subscribe(
  client: &Client,
  id: impl RecordId<'_>,
) -> anyhow::Result<impl Stream<Item = DbEvent>> {
  Ok(client.records("simple_strict_table").subscribe(id).await?)
}

pub async fn subscribe_all(client: &Client) -> anyhow::Result<impl Stream<Item = DbEvent>> {
  Ok(client.records("simple_strict_table").subscribe("*").await?)
}
```
Schema
The schema endpoint allows for reading the API’s JSON schema definition. This can be useful for driving external code generation or for introspection in general.
File Uploads
Record APIs can also support file uploads and downloads. There’s some special handling in place so that only metadata is stored in the underlying table while the actual files are kept in an object store.
By adding a `TEXT` column with a `CHECK(jsonschema('std.FileUpload'))`
constraint to your table definition, you instruct TrailBase to store file
metadata as defined by the “std.FileUpload” JSON schema while keeping the contents
in a separate object store.
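A minimal sketch with hypothetical table and column names:

```sql
-- `avatar` holds only the file's metadata validated against the `std.FileUpload`
-- JSON schema; the actual contents are kept in the configured object store.
CREATE TABLE profiles (
    id     INTEGER PRIMARY KEY,
    avatar TEXT CHECK(jsonschema('std.FileUpload'))
) STRICT;
```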
Files can then be uploaded by sending their contents as part of your JSON
requests or `multipart/form-data` POST requests.
Downloading files is slightly different, since reading the column through
record APIs will only yield the metadata. There’s a dedicated GET API endpoint
for file downloads:

`/api/records/v1/<api_name>/<url-safe_b64_uuid_or_int>/file/<column_name>`
S3 Integration
By default, TrailBase will keep the object store on the local file system under
`<data-dir>/uploads`.
Alternatively, one can configure an S3 bucket via the configuration file;
it’s not yet accessible through the admin dashboard.
If you need support for other storage backends, let us know.
Custom JSON Schemas
Akin to `std.FileUpload` above, you can register your own nested JSON schemas
to be used with column `CHECK`s.
For now, the dashboard only allows viewing all registered JSON schemas; however,
you can register schemas using the configuration:
```
schemas: [
  {
    name: "simple_schema"
    schema:
      '{'
      '  "type": "object",'
      '  "properties": {'
      '    "name": { "type": "string" },'
      '    "obj": { "type": "object" }'
      '  },'
      '  "required": ["name"]'
      '}'
  }
]
```
Once registered, schemas can be added as column constraints:
```sql
CREATE TABLE test (
    simple TEXT CHECK(jsonschema('simple_schema')),
    -- ...
) STRICT;
```
When new client-side bindings are generated for a table or view with such nested schemas, the schemas will be included, ensuring type-safety all the way to the client-side APIs.
Footnotes
1. By default, SQLite tables are not strictly typed. Column types merely express type affinities. Unless tables are explicitly created as `STRICT`, columns can store any data type.
2. Views are trickier to strictly type, since they’re the result of an arbitrary `SELECT` statement. TrailBase parses the `CREATE VIEW` statement and will allow record APIs only on top of a conservative subset where it can infer the column types. Over time, TrailBase will be able to support larger subsets. Let us know if you have provably strictly typed queries that you think should be supported but aren’t.
3. There are also a few other endpoints, e.g. for downloading files, which are described further down in the docs.