34 files changed, 1951 insertions, 0 deletions
diff --git a/README.md b/README.md new file mode 100644 index 0000000..33b3998 --- /dev/null +++ b/README.md @@ -0,0 +1,44 @@ +# JsonDerulo API Spec. v2.0 + + +## What is this? +- JsonDerulo Compliant Server Requirements + - JsonDerulo (v2) shouldn't be viewed as something only implemented by one backend (even if it is sofar only implemented by one which serves as reference called SeckelAPI) + - Intended to serve as a reference for anyone trying to build a compliant one in any programing language of their choice. +- JsonDerulo Compliant Client Requirements + - What basic features every client must support + - So intercompatible programms and stuff are possible, core features like : + - Listing toolkit compatibility + - If gui like a module chooser idk + - format standards idk + - handling of json responses with unexpected fields (dont crash but like warn in console) +- Generally serve as an API Reference of what features can do what and how to do em. + +## What this isnt? +- Guide on how to configure a specific server + - For that wait on the release of the SeckelAPI Server v2 which is kinda used as a reference server and should include configuration docs. + +## Navigation + +### Basics +- [Overview](overview.md) the fundamentals like base url, content types, rate limiting, date formats +- [Errors](errors.md) what error responses look like and what status codes mean +- [Client Requirements](client.md) what a compliant client needs to support + +### Authentication +- [Auth](auth/README.md) login, logout, session status, clearing sessions + +### Database Queries +- [Query](query/README.md) the unified /query endpoint for select, insert, update, delete, count and batching + +### Permissions +- [Permissions](permissions/README.md) table level, column level, ownership scoping + +### Schema +- [Schema](schema/README.md) core tables, system columns, session persistence + +### Administration +- [Admin](admin/README.md) config reload, user preferences, config priority + +### Toolkits +- [Toolkits](toolkits/README.md) custom endpoints, toolkit groups, the whole plugin system
\ No newline at end of file diff --git a/admin/README.md b/admin/README.md new file mode 100644 index 0000000..2b29885 --- /dev/null +++ b/admin/README.md @@ -0,0 +1,11 @@ +# Administration + +[back to index](../README.md) + +Server management stuff. Config reloading, user preferences and how config priority works. Partially adminier territory. + +## Pages + +- [Reload](reload.md) `POST /reload` hot reload config and optionally restart the server +- [Preferences](preferences.md) `POST /preferences` get/set/reset per user settings +- [Config Preference](config_preference.md) how the server should decide between config files and database settings diff --git a/admin/config_preference.md b/admin/config_preference.md new file mode 100644 index 0000000..842b7d5 --- /dev/null +++ b/admin/config_preference.md @@ -0,0 +1,28 @@ +# Config Preference + +[back to admin](README.md) /// [home](../README.md) + +If the server can read none critical settings from two places in addition to the `jde_settings` table in the database something has to win. The `config_preference` setting controls that. + +## Config Preference States + +### tables (default) + +Database always wins. If a value exists in `jde_settings` it overrides whatever the config files say (unless its a criticial setting) + +This is the default because you usually want admins to be able to change stuff at runtime via the database without having to edit files and restart. + +### local + +Config file wins for keys that were **explicitly written** in the config file. If you actually typed `session_timeout_minutes = 120` in your security config then that value sticks regardless of whats in the database. + +But for settings you didnt write in the file (or ones that got set as default because empty) the database can still override them. + +This is useful when you want your local file to be the source of truth but still allow the database to fill in settings you havent configured locally. + +## Notes + +- the database is always read regardless of this setting. config_preference only controls who wins when they are fighting and having a hissifit as to who wins. +- this applies to security settings, permission definitions, group configs etc +- changing this setting itself requires editing the config file (its not a DB overridable setting, that would be idiotic) +- after changing settings in the database call some servers might choose not to implement live hot reloading for all settings, use `POST /reload` to apply them diff --git a/admin/preferences.md b/admin/preferences.md new file mode 100644 index 0000000..760d4f7 --- /dev/null +++ b/admin/preferences.md @@ -0,0 +1,90 @@ +# POST /preferences + +[back to admin](README.md) /// [home](../README.md) + +Get, set or reset per user preference settings. Preferences should be stored as JSON on the user record and scoped by client type. This way different apps can have their own settings without stepping on each other and becoming horny. + +## Request Body + +```json +{ + "action": "get", + "client": "emoegui", + "user_id": 123, + "type": "normal", + "data": {} +} +``` + +| Field | Type | Required | Notes | +|-----------|---------|----------|---------------------------------------------------------------------------------------------| +| `action` | string | yes | `get`, `set` or `reset` | +| `client` | string | yes | client identifier (1 to 128 chars, alphanumeric plus underscores) | +| `user_id` | integer | no | target user (defaults to yourself). 
accessing another users prefs requires `read-write-all` | +| `type` | string | no | scope of the operation (see below) | +| `data` | object | set only | the settings to write. must be a JSON DERULO object | + +## Scoping with type + +Preferences are stored in two layers: `global` (shared across all clients) and per client (like `emoegui`, `mobile`, etc). The `type` field controls which scope you operate on. + +### For get + +| Type | Default | What you get | +|----------|---------|-----------------------------------------------------------------------| +| `normal` | yes | global settings merged with client specific (client overrides global) | +| `client` | | only the client specific settings | +| `global` | | only the global settings | + +### For set + +| Type | Default | What happens | +|----------|---------|---------------------------------------------------| +| `client` | yes | deep merges `data` into the client specific scope | +| `global` | | deep merges `data` into the global scope | + +Deep merge means nested objects should be recursively merged not replaced. Scalar values are overwritten. so you can update one key deep in a nested object without blowing away the rest. + +### For reset + +| Type | Default | What happens | +|----------|---------|------------------------------------------------------| +| `client` | yes | wipes the client specific scope | +| `global` | | wipes the global scope | +| `full` | | nukes ALL preferences (sets the whole thing to NULL) | + +## Access Control + +Controlled by your groups `user_settings_access` setting: + +| Access Level | What you can do | +|------------------|-----------------------------------------| +| `read-own-only` | can only GET your own preferences | +| `read-write-own` | can GET/SET/RESET your own preferences | +| `read-write-all` | can GET/SET/RESET any users preferences | + +Trying to access another users preferences when you only have `read-write-own` should give you a 403. dont be nosy. Long nosed looking ah mf. + +## Response + +```json +{ + "success": true, + "preferences": { + "theme": "dark", + "language": "en" + }, + "request_id": "abc123" +} +``` + +For set and reset the response should still include the current preferences after the operation so the client doesnt have to make a second request. + +## Error Responses + +| Code | When | +|------|-----------------------------------------------------------------------------------------------------------| +| 400 | invalid action, invalid client name, wrong type for the action, missing data for set, data isnt an object | +| 401 | no token or invalid session | +| 403 | cant access another users prefs, write not allowed with read only access | +| 500 | database error | diff --git a/admin/reload.md b/admin/reload.md new file mode 100644 index 0000000..5cfc5b9 --- /dev/null +++ b/admin/reload.md @@ -0,0 +1,82 @@ +# POST /reload + +[back to admin](README.md) /// [home](../README.md) + +Hotreload server configuration. Rereads all config files, merges with DB settings, rebuilds the permission system. All without downtime hopefully maybe. + +Requires power >= `min_reload_power` (configurable). 
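For illustration, here is a rough sketch of what triggering a plain config reload could look like from a client (the request body fields are described below). The base URL, the token value and the use of the Python `requests` library are assumptions for the example, not part of the spec:

```python
import requests

BASE_URL = "http://localhost:5777"   # assumption: any host/port works
TOKEN = "your-session-token-here"    # obtained from /auth/login

# plain config reload, no restart (sending no body at all works too per the spec)
resp = requests.post(
    f"{BASE_URL}/reload",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"restart": False, "forced": False},
)

body = resp.json()
if resp.status_code == 403:
    # power level below min_reload_power
    print("not enough power to reload:", body.get("error"))
elif body.get("success"):
    print(body.get("message"))
```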
+ +## Request Body (optional) + +```json +{ + "restart": false, + "forced": false +} +``` + +| Field | Type | Default | Notes | +|-----------|---------|---------|------------------------------------------| +| `restart` | boolean | false | schedule a full server restart (reexec) | +| `forced` | boolean | false | skip idle wait and restart immediately | + +If you send no body at all the server just does a config reload without restarting. + +## Why restart? + +Some things might be unable to be hot reloaded because theyre baked in the oven at startup such as: +- rate limiter settings +- server port +- database connection settings +- scheduled queries + +For these feel free to do a full restart. A compliant server should reexec itself (same binary, fresh process). + +## Restart modes + +### Graceful restart (`restart: true, forced: false`) + +Schedules a restart that waits for a configurable idle period before actually restarting. If theres user activity the timer should reset so you dont pull the rug from under active users lol. + +Response: + +```json +{ + "success": true, + "message": "Configuration reloaded: server will restart after 30s of idle time...", + "request_id": "abc123", + "restarting": true, + "restart_wait_seconds": 30, + "note": "Server restart is scheduled to apply heavy config changes (rate limits)." +} +``` + +### Forced restart (`restart: true, forced: true`) + +Restarts immediately, server should flush cached sessions to the database if it supports persistant sessions first (with a reasonable timeout so a dead DB doesnt block the restart) then reexec. + +Because the server dies before it can send a json derulo response back the client wont really get a response. Just fire and forget, then wait for jsonderulo to come back lol. + +Should require higher power than a normal reload (configurable via `min_manual_restart_power`). + +### No restart (`restart: false`) + +Just reloads config. Fast and safe. + +```json +{ + "success": true, + "message": "Configuration reloaded successfully (TOML + DB merged, RBAC rebuilt)", + "request_id": "abc123", + "restarting": false, + "note": "Rate limiter and scheduled query changes require a restart to take effect." +} +``` + +## Error Responses + +| Code | When | +|------|---------------------------------------------------------| +| 401 | no token or invalid session | +| 403 | power level too low (for reload or for restart) | +| 500 | config reload failed (bad config syntax, DB error, etc) | diff --git a/auth/README.md b/auth/README.md new file mode 100644 index 0000000..de81c28 --- /dev/null +++ b/auth/README.md @@ -0,0 +1,26 @@ +# Authentication + +[back to index](../README.md) + +How sessions work. Login to get a token, send it on every request, logout when done. + +Not that hard see? + +## Pages + +- [Login](login.md) `POST /auth/login` authenticate and get a session token +- [Logout](logout.md) `POST /auth/logout` kill yourself +- [Status](status.md) `GET /auth/status` check if your session is still alive and when it kills itself +- [Clear Sessions](clear_sessions.md) `POST /auth/clear-sessions` adminier: nuke all sessions for a specific user + +## How it works + +1. call `/auth/login` with credentials using ur sign in methodick. +2. get a magic session token back +3. send that token as `Authorization: Bearer <token>` on every damn request after that +4. when youre done call `/auth/logout` to kill yourself +5. 
sessions expire after a configurable timeout if you dont use them + +Sessions should optionally be able to persist across server restarts (server should only store a hash of the token in the database, never the raw token itself obviously). On restart it loads them hash brownies back. + +Each user should have a max number of concurrent sessions (configurable). When you go over the limit the oldest session gets deported to hell automatically to make room for the new one. diff --git a/auth/clear_sessions.md b/auth/clear_sessions.md new file mode 100644 index 0000000..4c597ee --- /dev/null +++ b/auth/clear_sessions.md @@ -0,0 +1,46 @@ +# POST /auth/clear-sessions + +[back to auth](README.md) /// [home](../README.md) + +Adminier endpoint to nuke all active sessions for a specific user. Useful when you need to force someone to relogin or if an account might be compromised. + +Requires a power level equal to or greater than `min_clear_sessions_power` (configurable). + +## Request Body + +```json +{ + "user_id": 42 +} +``` + +| Field | Type | Required | Notes | +|-----------|---------|----------|----------------------------------------------| +| `user_id` | integer | yes | the user whose sessions you want to kill duh | + +## Success Response (200) + +```json +{ + "success": true, + "sessions_cleared": 3, + "user_id": 42 +} +``` + +`sessions_cleared` tells you how many sessions were murdered. If the user had no active sessions its 0 and thats still a success. + +## Error Responses + +| Code | When | +|------|-----------------------------| +| 401 | no token or invalid session | +| 403 | your power level is too low | + +The 403 error message optionally may tell you what power level is required. + +## Notes + +- this only affects the targeted users sessions obv. your own stays alive +- works on any user including yourself if you want to nuke your own other sessions +- cleared sessions are removed from both the in memory cache and the database (if persistent sessions are on and supported obv.) diff --git a/auth/login.md b/auth/login.md new file mode 100644 index 0000000..ea9ea46 --- /dev/null +++ b/auth/login.md @@ -0,0 +1,69 @@ +# POST /auth/login + +[back to auth](README.md) /// [home](../README.md) + +Authenticate and get a session token. No auth required (obviously). + +## Request Body + +```json +{ + "method": "password", + "username": "admin", + "password": "secret" +} +``` + +## Fields + +| Field | Type | Required | Notes | +|----------------|--------|---------------|-----------------------------------------------| +| `method` | string | yes | `password`, `pin` or `token` | +| `username` | string | password, pin | required for password and pin methods | +| `password` | string | password | only for password method | +| `pin` | string | pin | only for pin method | +| `login_string` | string | token | only for token method (NFC cards, badges etc) | + +## Auth Methods + +### password +Standard username + password. No EyePee Restrictions. + +### pin +Short numeric PIN for quick auth (like kiosk scenarios). The clients IP **must** be whitelisted in the servers security config or its a 403. This is a hard requirement, there should never be a bypass for this. ever. seriously. (unless you are dumb enough to set whitelisted IP's to 0.0.0.0/0 for whatever reason but then thats your own fault dumbass) + +### token +Login string based auth for things like NFC card scans or badge readers. Same deal as PIN, clients IP **must** be whitelisted or its 403. 
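As a hedged example, this is roughly what a password login could look like from a client. The base URL and the Python `requests` library are assumptions; the `token` and `user` fields come from the success response shown below:

```python
import requests

BASE_URL = "http://localhost:5777"  # assumption

resp = requests.post(
    f"{BASE_URL}/auth/login",
    json={"method": "password", "username": "admin", "password": "secret"},
)
body = resp.json()

if body.get("success"):
    token = body["token"]
    # send this header on every request from now on
    auth_header = {"Authorization": f"Bearer {token}"}
    print("logged in as", body["user"]["username"], "with power", body["user"]["power"])
else:
    # 400 = missing fields, 401 = bad credentials, 403 = IP not whitelisted (pin/token methods)
    print("login failed:", body.get("error"))
```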
+ +## Success Response + +```json +{ + "success": true, + "token": "your-session-token-here", + "user": { + "id": 1, + "username": "admin", + "name": "Full Name", + "role": "administrators", + "power": 100 + } +} +``` + +Store the `token` and send it on all future requests as `Authorization: Bearer <token>`. + +The `user` object gives you basic info about whos logged in. `power` is the users power level (1 lowest, 100 highest) which determines what jsonderulo lets them do. + +## Error Responses + +| Code | When | +|------|------------------------------------------------------------------------------| +| 400 | missing required fields for the chosen method | +| 401 | wrong password, wrong pin, wrong login_string, user not found, user inactive | +| 403 | IP not whitelisted (pin and token methods only) | +| 500 | database error | + +## Concurrent Sessions + +Each user should have a max number of concurrent sessions (configurable per power level + a global default). When a user logs in and is already at the limit the **oldest** session should get automatically evicted from json derulos premises! diff --git a/auth/logout.md b/auth/logout.md new file mode 100644 index 0000000..4d0d469 --- /dev/null +++ b/auth/logout.md @@ -0,0 +1,27 @@ +# POST /auth/logout + +[back to auth](README.md) /// [home](../README.md) + +Kill your yourself. Requires a valid session token obviously. + +## Request + +No sexy json derulo body needed. Just send the token in the header: +``` +Authorization: Bearer <token> +``` + +## Success Response + +```json +{ + "success": true, + "message": "Logged out successfully" +} +``` + +## Error Response + +| Code | When | +|------|-----------------------------------------------| +| 401 | no token provided or token is invalid/expired | diff --git a/auth/status.md b/auth/status.md new file mode 100644 index 0000000..354ea9b --- /dev/null +++ b/auth/status.md @@ -0,0 +1,61 @@ +# GET /auth/status + +[back to auth](README.md) /// [home](../README.md) + +Check if your session is still alive and get details about it. + +## Request + +No body. Token in header: +``` +Authorization: Bearer <token> +``` + +## Valid Session Response (200) + +```json +{ + "success": true, + "valid": true, + "user": { + "id": 1, + "username": "admin", + "name": "Full Name", + "role": "administrators", + "power": 100 + }, + "session": { + "created_at": "2026-02-14T10:30:00Z", + "last_accessed": "2026-02-14T11:45:00Z", + "timeout_minutes": 120, + "remaining_seconds": 5400, + "expires_at": "2026-02-14T13:45:00Z" + } +} +``` + +The `session` object tells you when the session was created, when it was last used, how long the timeout is, how many seconds are left and when itll expire if nothing happens. + +If session refresh on activity is enabled (which it should be by default) then `last_accessed` gets updated on every request so the session keeps extending as long as youre actively bothering jsonderulo. + +## Invalid/Expired Session Response (401) + +```json +{ + "success": true, + "valid": false, + "message": "Session expired or invalid [request_id: 12345678]" +} +``` + +Note that `success` is still true here because the status check itself worked fine, the session is just dead lol. Check the `valid` field to know if the session is actually alive. 
+ +## No Token Response (401) + +```json +{ + "success": false, + "valid": false, + "error": "No authorization token provided [request_id: 12345678]" +} +``` diff --git a/client.md b/client.md new file mode 100644 index 0000000..9b23dcf --- /dev/null +++ b/client.md @@ -0,0 +1,31 @@ +# Client Requirements + +[back to index](README.md) + +## What a compliant client needs to support + +This isnt about building a specific UI. Just more or less about the minimum stuff a client needs to somewhat handle to work with a JsonDerulo compliant server properly. + +### Must support + +- **Auth process**: login via `/auth/login`, store the token, send it as `Authorization: Bearer <token>` on all subsequent requests +- **Session handling**: check `/auth/status` to see if session is still alive. handle 401 responses gracefully (redirect to login or show a message, dont just crash) +- **JSON DEROLUE everywhere (even your dreams!)**: all request bodies are JSON, JSON IS EVERYWHERE, all responses are JSON. set content type accordingly. JSON JSON JAYS SON JAYSON DERULO !!!! +- **Error handling**: read `success` field first. if its false read the `error` field and show it to the user somehow + - For UI's its adviced to show them as an overlay/popup and not just a little toast message, this way the user has an actual chance at noting the request I'd which will help the admin figure out wtf actually went wrong. + - For None UI's show error in console +- **Unexpected fields**: if the server response has fields the client doesnt know about just ignore them. dont crash. maybe log a warning to console bt DO NOT BREAK! json derulo is counting on you! + +### Should support + +- **Toolkit listing**: the client should be able to list what toolkits are available and preferably have options to only work with certain ones (and hide ones known not to work and stuff) some kind of module chooser after login if it has a GUI is advised. +- **Rate limit handling**: when you get a 429 response read the `retry_after` field and wait that long before retrying dont just hammer jsonderulo server! + - For UI's preferably block user interaction (and obviously all server requests) until after retry_after time has expired otherwise your jail time will increase! +- **Permissions awareness**: call `GET /permissions` after login to know what the user/client can and cant do. (kinda also where toolkits you have access to are listed btw.) + - For UIs either grey out or prefferably hide stuff that user dont have access to rather than letting them try and fail. + +### Nice to have + +- **Preferences**: use `/preferences` to store and load user settings per client type. the server supports global and per client scopes so you can share some settings across apps +- **Batch queries**: if youre doing lots of the same action (like inserting 50 rows) use batch mode instead of 50 individual fucking requests +- **Graceful reconnect**: if you get a 503 (database is having a stroke ugh...) dont panic. show a message and retry in a bit, the server should be capable of keeping your request temporarily while it tries an do cpr on the database automatically and auto resend your request. 
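To tie the must/should items above together, here is a rough sketch of a request helper a compliant client could build. Everything about it (function name, base URL, use of the Python `requests` library) is illustrative only, not mandated by the spec:

```python
import time
import requests

BASE_URL = "http://localhost:5777"  # assumption


def jde_request(path, token, payload=None, method="POST"):
    """Send one request and apply the response rules a compliant client must follow."""
    headers = {"Authorization": f"Bearer {token}"}
    resp = requests.request(method, f"{BASE_URL}{path}", json=payload, headers=headers)

    if resp.status_code == 401:
        # session is dead: go back to login instead of crashing
        raise PermissionError("session expired, please log in again")

    if resp.status_code == 429:
        # rate limited: wait retry_after seconds, then retry once
        wait = resp.json().get("retry_after", 1)
        time.sleep(wait)
        return jde_request(path, token, payload, method)

    body = resp.json()
    if not body.get("success"):
        # surface the error (it contains the request_id) so an admin can trace it later
        print("server said no:", body.get("error") or body.get("message"))

    # any fields the client doesn't recognize are simply ignored; never crash on them
    return body
```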
diff --git a/errors.md b/errors.md new file mode 100644 index 0000000..ecddc51 --- /dev/null +++ b/errors.md @@ -0,0 +1,39 @@ +# Errors + +[back to index](README.md) + +## Error Response Format + +All error responses follow the same shape: + +```json +{ + "message":"Session expired or invalid [request_id: 12346480868868238719]","success":false +} +``` + +The `request_id` is always included at the end of the error message so you can look it up in the server logs aswell as being included in the message part so it is displayed to users/clients. +The length of the request_id is capped at 32 Numbers and shouldnt be shorter than like 8) + +Some endpoints might include extra fields (like `endpoint` for toolkit errors or `results` for batch query errors) but `success` and `error` are always there. + +## Status Codes +The following status codes (and what they mean) should be returned by JsonDerulo API compliant server + +| Code | When | +|------|----------------------------------------------------------------------------------------------------------------| +| 200 | everything worked and Json Derulo is happy | +| 400 | bad request, missing fields, invalid table or column names, too many WHERE conditions, etc. | +| 401 | no token provided, token invalid or expired | +| 403 | you dont have permission (power level too low, IP not whitelisted, table access denied) | +| 404 | toolkit not found or disabled, endpoint path doesnt exist | +| 429 | rate limited, wait and try again | +| 500 | server error, database query failed, config reload failed, json derulo died | +| 502 | toolkit endpoint got an error from upstream (HTTP proxy mode) | +| 503 | database is down, depressed or having a stroke | + +## Notes + +- difference between 401 and 403: you get 401 when theres no valid session at all and you are trying to betray the jsonderulo server. 403 is when you have a session but your power (bottom) level or json's permissions dont allow the action. +- 503 only happens when the server has detected database connectivity loss. Server shouldnt crash, but tells you the database is currently having a hissifit and will keep trying to reconnect and pass your request in the background. +- batch query errors should be able to return partial results with a `results` array showing which sub queries succeeded and which failed. diff --git a/images/jayson-de-json.png b/images/jayson-de-json.png Binary files differnew file mode 100644 index 0000000..e7fd7f5 --- /dev/null +++ b/images/jayson-de-json.png diff --git a/overview.md b/overview.md new file mode 100644 index 0000000..6b01e3e --- /dev/null +++ b/overview.md @@ -0,0 +1,91 @@ +# Overview + +[back to index](README.md) + +## Base URL + +The server listens on `http://<host>:<port>`. Port is configurable, no specific port is required by the spec. but it is advised to use 5777 for internal purposes and 80/443 for External Purposes, If you want HTTPS you need a reverse proxy in front. + +## Content Type + +Everything is JSON (Derulo bahahahahah). Requests and always responses use `Content-Type: application/json`. + +The only exception is when a toolkit endpoint uses `response_format: "passthrough"` for proxying file downloads or something none Jayson from an upstream service. In that case the response mimetype comes from whatever upstream decides to send back. 
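A small illustrative sketch of how a client might tell a normal JSON response apart from a passthrough one by checking the Content-Type header (the toolkit path, base URL and `requests` library are assumptions):

```python
import requests

resp = requests.get(
    "http://localhost:5777/tk/app/sometoolkit/somefile",   # hypothetical toolkit endpoint
    headers={"Authorization": "Bearer your-session-token-here"},
)

content_type = resp.headers.get("Content-Type", "")
if content_type.startswith("application/json"):
    data = resp.json()    # normal JsonDerulo response
else:
    raw = resp.content    # passthrough: whatever upstream sent (file download etc)
```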
+ +## Toolkit and Config file Format : +- All config and toolkit files should use toml + - If you hate toml for whatever reason you can use whatever you want in your implementation with the following two conditions being bundled with the server : + - Built in support for converting the fully speced out reference server's config (SeckelAPI) to your implementations config format + - Built in support for converting to fully compatible reference server's (SeckelAPI) toml files from your implementation config format + - I dont want to make your life pain with this but just like give users the freedom of using a different backend server with ease if one ever comes. (might even happen from my side as seckel api is tbh a bit of a mess lol) + +## Authentication + +Every endpoint requires a session token except these three: + +- `GET /` version info +- `GET /health` service and DB status +- `POST /auth/login` where you get a damn session token in the first place + +Therefore to talk with JsonDerulo get a session token from `/auth/login` and send it on every request after that as `Authorization: Bearer <token>` it's the only way JsonDerulo wants to talk to you! + +## All Ways to talk to JsonDerulo (Endpoints) + +| Method | Path | Auth | What it does | +|----------|-----------------------------|------|------------------------------------------------------| +| GET | `/` | no | version info and stuff | +| GET | `/health` | no | service health and DB connectivity | +| POST | `/auth/login` | no | get a magic session token | +| POST | `/auth/logout` | yes | kill your session :( | +| GET | `/auth/status` | yes | check if your session is still alive | +| POST | `/auth/clear-sessions` | yes | adminier: nuke all sessions for a specific user | +| POST | `/query` | yes | database queries (select insert update delete count) | +| GET | `/permissions` | yes | see what tables and columns you can access | +| POST | `/preferences` | yes | get/set/reset user preferences | +| POST | `/reload` | yes | adminier: hot/hard reload server config | +| POST/GET | `/tk/app/{toolkit}/{*path}` | yes | toolkit application endpoint | +| POST/GET | `/tk/lib/{toolkit}/{*path}` | yes | toolkit library endpoint | + +## Rate Limiting + +Backend should support two separate rate limiters: one for auth routes and one for API routes. +Both are per IP with per second and per minute burst limits and should be capable of detecting external IP addresses (if behind proxi) + +When you get rate limited the JsonDerulo responds with HTTP 429 and a JSON body telling you how long you should simply admire Jaysons Body instead of telling him your Schizophrenic JsonDerulo request, body should look like this: + +```json +{ + "success": false, + "error": "Too many requests. Naughty! Go sit in a corner for 3 Seconds!", + "retry_after": 3 +} +``` + +Rate limiting can be disabled entirely in config but that shouldnt be done in production obviously. + +## Date and Time Formats + +Query responses serialize the upstream database types into JSON like this: + +| MySQL Type | JSON Format | +|--------------------------|-----------------------| +| `DATE` | `YYYY-MM-DD` | +| `DATETIME` | `YYYY-MM-DD HH:MM:SS` | +| `TIMESTAMP` | RFC 3339 string | +| `TIME` | `HH:MM:SS` | +| `BOOLEAN` / `TINYINT(1)` | `true` / `false` | + +(I thought I implemented changing them but idk TODO in v2.1) + +## Request IDs + +Every request gets a unique request_id. It shows up in error messages and audit logs so you can trace stuff if something goes wrong. 
The length of the request_id is capped at 32 Numbers and shouldnt be shorter than like 8) + +Example : +```json +{"message":"Session expired or invalid [request_id: 10036480868868238719]","success":false} +``` + +## Request Body Limits + +Theres should be configurable max request body size (in KB/MB/GB). Requests bigger than that get rejected. There should however be options to extend this for individual toolkit endpoints if nececsary. diff --git a/permissions/README.md b/permissions/README.md new file mode 100644 index 0000000..881daa7 --- /dev/null +++ b/permissions/README.md @@ -0,0 +1,89 @@ +# Permissions + +[back to index](../README.md) + +How jsonderulo decides what you can and cant do (and how to punish you!) + +## Pages + +- [Table Permissions](table_permissions.md) the permission codes that control read/write access per table +- [Column Permissions](column_permissions.md) fine grained column level access control +- [Ownership Scoping](ownership.md) how pinned_to restricts which rows you can see and modify + +## GET /permissions + +Call `GET /permissions` with your token to see what you have access to. This should be the one stop shop for a client to figure out what parts of jsonderulo you are allowed to molest. + +```json +{ + "success": true, + "user": { + "id": 1, + "username": "admin", + "name": "Admin User", + "role": "administrators", + "power": 100 + }, + "permissions": { + "jde_settings": "rw", + "jde_groups": "rw", + "jde_users": "rw" + }, + "column_rules": { + "jde_users.password": "block", + "jde_users.pin_code": "block" + }, + "toolkits": { + "beepzone": { + "type": "application", + "group": "managers", + "permissions": { + "assets": "rw", + "transactions": "rw", + "audit_log": "r" + }, + "column_rules": { + "transactions.amount": "r", + "assets.serial_number": "block" + } + }, + "opensigma": { + "type": "library", + "group": "admins", + "permissions": { + "sigma_config": "rw" + } + } + }, + "user_settings_access": "read-write-own" +} +``` + +### Whats in here + +**`permissions`** for core tables and compatability with the few unoficial internal jde v1 tools I made (usually adminier stuff or user table partial read for joins all depicted from `jde_groups` aka. core_groups). Tables that belong to toolkits shouldn't show up here lol. + +**`column_rules`** same as above but for column level rules like this `"table.column": "permission_code eg. block"` entries for core table columns with explicit column level rules (see [column permissions](column_permissions.md)). Only present when there are any obviously. + +**`toolkits`** is an part of json derulos body grouped by toolkit name. Each toolkit entry should include: +- `type`: `"application"` or `"library"` so clients know what kind of toolkit it is and whether to show it as a selectable application or not +- `group`: the users resolved toolkit group name (from `jde_associations` or `toolkit_overrides`). missing if no association exists but thats something that shouldnt really occure as then that entire toolkit shouldnt even be shown to the user. +- `permissions`: table permission map for this toolkit only +- `column_rules`: column level rules for this toolkits tables (only present when there are any). same `"table.column": "code"` format + +This way a client can see at a glance which toolkits they have access to, what group they are in for each one, what parts of json derulos body they can molest and what column level restrictions exist and by that effectively be able to decide what feature to show or not. 
+ +**`user_settings_access`** tells you what you can do with the [preferences](../admin/preferences.md) endpoint. + +## How permissions should be resolved + +Permissions come from two layers that get merged: + +1. **Core permissions** from `jde_groups` (your core group has a power level and permission rules) +2. **Toolkit permissions** from toolkit group tables (linked via `jde_associations`) + +A compliant server should merge these at startup and on config reload. The final resolved permissions for your power level determine what jsonderulo lets you do. + +Wildcard rules like `"*:r"` should apply to all tables that dont have an explicit rule. + +Read only tables (configured in toolkit definitions) should automatically downgrade any writable code to its read only equivalent (`rw` becomes `r`, `rwg` becomes `rg`, etc). diff --git a/permissions/column_permissions.md b/permissions/column_permissions.md new file mode 100644 index 0000000..120e9f7 --- /dev/null +++ b/permissions/column_permissions.md @@ -0,0 +1,44 @@ +# Column Permissions + +[back to permissions](README.md) /// [home](../README.md) + +Beyond table level access the server must be able to also control individual columns access permissions. This is to be done through `advanced_rules` in the permission config (or just the combined permissions column in the db). A compliant server should support all of these. + +## Format + +Column rules use `"table.column:code"` format: + +```json +["vfy_items.price:block", "jde_users.password:block", "jde_users.email:r"] +``` + +Wildcard: `"table.*:code"` applies to all columns on that table, not sure why you'd use that but whatever + +## Column Permission Codes + +| Code | Name | Can read | Can write | Notes | +|---------------|---------------------|----------|-----------|------------------------------------------------------| +| `block` / `b` | block | no | no | column is invisible | +| `bo` | block own | no | no | blocked if the row is owned by you | +| `bg` | block group | no | no | blocked if the row is owned by someone in your group | +| `boi` | block own inverse | yes | yes | blocked UNLESS you own the row | +| `bgi` | block group inverse | yes | yes | blocked UNLESS someone in your group owns the row | +| `r` | read only | yes | no | can see it but cant change it | +| `rw` | read write | yes | yes | full access to this column | +| `rwa` | read write all | yes | yes | full access including system managed cols | + +## How blocking should work + +### On select +Blocked columns (`b`, `bo`, `bg`) should be stripped from the results. They just dont appear in the returned data. If json derolu stripped for you a `warning` field should be included in the response telling you the hot stripping actions by jaysonderulo. + +### On insert and update +Columns with `block`, `bo`, `bg` or `r` (read only) should be cause json derulo to strip for you (removes illigal data you sent). You can try to set them but theyll be ignored because of jsons hot body. Again a `warning` should tell you if jsonderulo stripped for you. + +## System columns + +Some columns should be write protected by default even without explicit column rules: +- `created_at`, `created_by`, `last_modified_at`, `last_modified_by`, `pinned_to` +- plus any columns listed in `write_protected_columns` in the toolkit config + +These should only be writable if your table permission is `rwa`. 
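As a hedged example of how a client might act on these rules, here is a sketch that reads the `column_rules` map from `GET /permissions` and decides whether to hide a form field or make it read only (function and variable names are made up for illustration):

```python
def field_state(column_rules, table, column):
    """Return 'hidden', 'readonly' or 'editable' for a given table.column."""
    code = column_rules.get(f"{table}.{column}") or column_rules.get(f"{table}.*")
    if code in ("block", "b", "bo", "bg"):
        return "hidden"      # blocked columns get stripped from responses anyway
    if code == "r":
        return "readonly"    # visible, but writes get stripped by the server
    # note: boi/bgi depend on who owns the row, so they can't be resolved statically here
    return "editable"        # rw / rwa or no explicit rule


# example using the column_rules shape from GET /permissions
rules = {"jde_users.password": "block", "jde_users.email": "r"}
print(field_state(rules, "jde_users", "email"))     # readonly
print(field_state(rules, "jde_users", "password"))  # hidden
```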
diff --git a/permissions/ownership.md b/permissions/ownership.md new file mode 100644 index 0000000..a91d196 --- /dev/null +++ b/permissions/ownership.md @@ -0,0 +1,41 @@ +# Ownership Scoping + +[back to permissions](README.md) /// [home](../README.md) + +When you have a get noscoped permission code (`rwo`, `rwg`, `ro`, `rg`) jaysonderulo must automatically restrict which rows you can see and modify. This is done through the `pinned_to` system column. + +## How pinned_to works + +`pinned_to` is a by default auto managed system column that should be added to tables (unless the table opts out). When you insert a row `pinned_to` gets set to your user ID. + +The server should use this column to filter rows based on your permission level: + +### Own scoping (rwo, ro) + +Should add `WHERE table.pinned_to = <your_user_id>` to every query. You can only see and modify rows where `pinned_to` equals your own user ID. + +### Group scoping (rwg, rg) + +Should add a subquery filter that checks if `pinned_to` belongs to any user in the same core group as you. So you can see rows created by anyone in your core group afaik but not by people in other groups. + +## On insert + +When you insert a row: +- `pinned_to` should be automatically set to your user ID +- if you have `rwa` permission you can explicitly set `pinned_to` to someone else (like assigning a row to another user) +- with any other permission code you cant override `pinned_to`, its always you so json derulo can strip for you <3 + +## On update and delete + +The ownership filter should be applied to the WHERE clause so: +- with `rwo` you can only update/delete rows you own +- with `rwg` you can only update/delete rows owned by someone in your group +- with `rw`/`rwa` theres little/no restriction (unless overwritten by advanced type rules) + +## On select and count + +Same deal. The filter should be injected so you only see rows youre allowed to see. Count only counts rows you have access to. + +## Tables without pinned_to + +If a table has opted out of the `pinned_to` system column (via `system_column_overrides`) then ownership scoping cant work on it. So dont give ownership scoped permissions (`rwo`, `rwg`, `ro`, `rg`) to tables that dont have the `pinned_to` column, How the server handles such exceptions is not explecityly rules for JDE v2 so either jsonderulo hangs itself because he cant serve your request or it might just give the permission without `o` or the `g` addition. Will be spanked to be standardized in future jde revisions.
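For illustration only, a rough sketch of the kind of WHERE fragment a server might append for ownership scoping. The exact SQL is entirely up to the implementation, and column names other than `pinned_to` (like `group_id` on `jde_users`) are assumptions here:

```python
def ownership_filter(permission_code, table, user_id):
    """Build an extra WHERE fragment (plus params) for ownership scoped permission codes."""
    if permission_code in ("rwo", "ro"):
        # own scoping: only rows pinned to you
        return f"{table}.pinned_to = %s", [user_id]
    if permission_code in ("rwg", "rg"):
        # group scoping: rows pinned to anyone sharing your core group
        return (
            f"{table}.pinned_to IN ("
            "SELECT u.id FROM jde_users u "
            "WHERE u.group_id = (SELECT group_id FROM jde_users WHERE id = %s))",
            [user_id],
        )
    return "", []  # rw / rwa: no ownership restriction


print(ownership_filter("rwo", "items", 42))
```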
\ No newline at end of file diff --git a/permissions/table_permissions.md b/permissions/table_permissions.md new file mode 100644 index 0000000..884744e --- /dev/null +++ b/permissions/table_permissions.md @@ -0,0 +1,62 @@ +# Table Permissions + +[back to permissions](README.md) /// [home](../README.md) + +Every table has a permission code that controls what actions are allowed. These are set in the groups config (both core groups and toolkit groups). A compliant jsonderulo server must support all of these. + +## Permission Codes + +| Code | Name | Read | Write | Row Restriction | +|------|------|------|-------|----------------| +| `rwa` | read write all | yes | yes | none (can also write system columns) | +| `rw` | read write | yes | yes | none | +| `rwg` | read write group | yes | yes | only rows owned by users in your group | +| `rwo` | read write own | yes | yes | only rows you own | +| `r` | read | yes | no | none | +| `rg` | read group | yes | no | only rows owned by users in your group | +| `ro` | read own | yes | no | only rows you own | + +## What the codes mean + +### Full access codes + +**rwa** is the admin code. full read and write plus the ability to write system columns directly (like setting `pinned_to` to another user on insert). Only give this to trusted admin groups obviously. + +**rw** is standard read write. You can read and write everything but system columns should be auto managed by the server, you cant override them. + +### Scoped write codes + +**rwg** lets you read and write but only rows where `pinned_to` belongs to a user in the same core group as you. Good for team based access where a team can see and edit their own teams stuff. + +**rwo** same but personal. you can only touch rows where `pinned_to` equals your own user id. + +### Read only codes + +**r** read everything, write nothing. + +**rg** read only rows belonging to your groups users. + +**ro** read only your own rows. + +## Format in config + +Permission rules are stored as JSON arrays of `"table:code"` strings: + +```json +["*:rw", "jde_settings:r", "vfy_logs:r"] +``` + +`*` is the wildcard. It applies to every table that doesnt have its own explicit rule. So in the example above every table gets `rw` except `jde_settings` and `vfy_logs` which are read only. + +## Read only tables + +Tables marked as `read_only` in the toolkit config should automatically have their writable codes downgraded: + +| Original | Downgraded to | +|----------|--------------| +| `rwa` | `r` | +| `rw` | `r` | +| `rwg` | `rg` | +| `rwo` | `ro` | + +So even if your group gives you `rw` on a read only table you effectively only get `r`. diff --git a/query/README.md b/query/README.md new file mode 100644 index 0000000..54e99af --- /dev/null +++ b/query/README.md @@ -0,0 +1,51 @@ +# Query + +[back to index](../README.md) + +The `/query` endpoint is where all database operations happen. One endpoint, one JSON DERULO format, multiple actions. Pretty much the heart of jsonderulo <3 and whats the minimum requirement for JSON DERULO v1 compliance (in addition to authentication). + +## Pages + +### Actions +- [Select](select.md) read rows from tables +- [Insert](insert.md) add new rows +- [Update](update.md) modify existing rows +- [Delete](delete.md) remove rows +- [Count](count.md) count matching rows + +### Features +- [Filters](filters.md) where clauses, structured filters, joins, ordering +- [Batch](batch.md) run multiple queries of the same type in one request + +## How it works + +`POST /query` with a JSONDERULO body. 
The `action` field tells the server what you want to do. The `table` field tells it which table. + +```json +{ + "action": "select", + "table": "items" +} +``` + +Thats the simplest possible query. Selects all columns from the items table with the default limit applied. + +Every query should run inside a transaction. The server should set `@current_user_id` and `@request_id` as SQL session variables before running your query so triggers and stuff can reference them if needed in your schema. + +## Auth + +Bearer token required. The server should ALWAYS check your permissions for the target table (and any joined tables) before doing anything. See [permissions](../permissions/README.md) for how that works. + +## Common Response Fields + +Every response has `success` (boolean). Beyond that: + +| Action | Extra fields | +|--------|----------------------------------------------| +| select | `data` (array of rows), optionally `warning` | +| insert | `data` (last insert id), `rows_affected` | +| update | `rows_affected`, if applicable `warning` | +| delete | `rows_affected`, if applicable `warning` | +| count | `data` (the count as a number) | + +The `warning` field should appear when json derulo striped for you, example : your limit being capped to a maximum or blocked columns being stripped away by json derulo stripping. its not an error just a hey bbgurl please comply to what you're allowed to do and to json derulos orders! diff --git a/query/batch.md b/query/batch.md new file mode 100644 index 0000000..471adf5 --- /dev/null +++ b/query/batch.md @@ -0,0 +1,63 @@ +# Batch Queries + +[back to query](README.md) /// [home](../README.md) + +Run multiple queries of the same type in one request. All should share the same `action` and `table`. + +## Request + +```json +{ + "action": "insert", + "table": "items", + "queries": [ + { "data": { "name": "Widget A", "price": 10 } }, + { "data": { "name": "Widget B", "price": 20 } }, + { "data": { "name": "Widget C", "price": 30 } } + ], + "rollback_on_error": true +} +``` + +## Fields + +| Field | Type | Required | Notes | +|---------------------|---------|----------|-------------------------------------------------------------| +| `action` | string | yes | shared across all sub queries | +| `table` | string | yes | shared across all sub queries | +| `queries` | array | yes | array of individual query bodies (without action and table) | +| `rollback_on_error` | boolean | no | default false. if true, any failure rolls back everything | + +Each item in `queries` can have the same fields as a normal single query (columns, data, where, filter, limit, offset, order_by) just without repeating action and table. + +## Permission + +Batch operations should require the `allow_batch_operations` permission for your power level (on `jde_groups`). Without it you get a 403. + +Someday better granularity of this permission might be introduced but for now this should suffice. 
+ +## How it should work + +- everything runs in a single transaction +- for INSERT batches the server should ideally be smart about it and combine them into one multi row INSERT statement instead of doing N individual inserts (way faster) +- for SELECT and COUNT each sub query runs individually within the transaction +- if `rollback_on_error` is true and any sub query fails the entire batch should be rolled back and nothing committed +- if `rollback_on_error` is false (default) failures are reported but successful queries still commit + +## Response + +Success looks the same as the individual action responses but with a `results` array for partial success/failure scenarios. + +When `rollback_on_error` is true and something fails: + +```json +{ + "success": false, + "error": "Batch operation failed, all changes rolled back [request_id: abc123]", + "results": [ + { "success": true, "rows_affected": 1 }, + { "success": false, "error": "duplicate key" }, + { "success": true, "rows_affected": 1 } + ] +} +``` diff --git a/query/count.md b/query/count.md new file mode 100644 index 0000000..bb4e14d --- /dev/null +++ b/query/count.md @@ -0,0 +1,48 @@ +# Count + +[back to query](README.md) /// [home](../README.md) + +Count rows matching a filter. Like select but just gives you the number in data field. + +## Request + +```json +{ + "action": "count", + "table": "items", + "filter": { + "column": "status", + "op": "=", + "value": "active" + } +} +``` + +## Fields + +| Field | Type | Required | Notes | +|----------|--------|----------|-----------------------------| +| `action` | string | yes | must be `"count"` | +| `table` | string | yes | target table | +| `where` | object | no | simple key value filter | +| `filter` | object | no | structured filter | +| `joins` | array | no | join other tables if needed | + +No limit or offset for count, it always counts everything that matches. + +## Response + +```json +{ + "success": true, + "data": 42 +} +``` + +`data` is just the count as a number. + +## Behaviors + +- should support joins so you can count rows with conditions on related tables +- ownership scoping should apply: if you have `rwo` or `rwg` the count should only include rows you have access to +- you need at least read permission on the table diff --git a/query/delete.md b/query/delete.md new file mode 100644 index 0000000..db55292 --- /dev/null +++ b/query/delete.md @@ -0,0 +1,46 @@ +# Delete + +[back to query](README.md) /// [home](../README.md) + +Remove rows from a table. + +## Request + +```json +{ + "action": "delete", + "table": "items", + "filter": { + "column": "id", + "op": "=", + "value": 42 + } +} +``` + +## Fields + +| Field | Type | Required | Notes | +|----------|--------|----------|-------------------------| +| `action` | string | yes | must be `"delete"` | +| `table` | string | yes | target table | +| `where` | object | yes* | simple key value filter | +| `filter` | object | yes* | structured filter | + +*at least one of `where` or `filter` is required. no unfiltered deletes lol. + +## Response + +```json +{ + "success": true, + "rows_affected": 1 +} +``` + +## Behaviors + +- WHERE clause is **mandatory**. same as update, no blanket deletes allowed. 
jsonderulo doesnt want you nuking entire tables on accident +- ownership scoping should be applied based on your permission level +- limit should be applied and capped to your max +- you need write permission on the table (`rw`, `rwa`, `rwo`, `rwg`) diff --git a/query/filters.md b/query/filters.md new file mode 100644 index 0000000..13a929b --- /dev/null +++ b/query/filters.md @@ -0,0 +1,151 @@ +# Filters + +[back to query](README.md) /// [home](../README.md) + +How to filter, join and sort your query results. + +## Simple Where + +The `where` field is the lazy cunt way. Just key value pairs, all treated as `= value` with AND between them. + +```json +{ + "action": "select", + "table": "items", + "where": { + "status": "active", + "category_id": 3 + } +} +``` + +This becomes `WHERE status = 'active' AND category_id = 3`. + +## Structured Filter + +The `filter` field gives you full crazy schizo control. Supports operators, nesting, AND/OR/NOT. + +### Single condition + +```json +{ "column": "price", "op": ">", "value": 10 } +``` + +### AND (all must match) + +```json +{ + "and": [ + { "column": "status", "op": "=", "value": "active" }, + { "column": "price", "op": ">=", "value": 5 } + ] +} +``` + +### OR (any can match) + +```json +{ + "or": [ + { "column": "status", "op": "=", "value": "draft" }, + { "column": "status", "op": "=", "value": "active" } + ] +} +``` + +### NOT + +```json +{ + "not": { "column": "status", "op": "=", "value": "deleted" } +} +``` + +### Nesting + +You can nest these however you want: + +```json +{ + "and": [ + { "column": "price", "op": ">", "value": 0 }, + { + "or": [ + { "column": "status", "op": "=", "value": "active" }, + { "column": "is_featured", "op": "=", "value": true } + ] + } + ] +} +``` + +### Supported Operators + +| Operator | What it does | +|---------------|-------------------------------------------------| +| `=` | equals | +| `!=` | not equals | +| `>` | greater than | +| `>=` | greater than or equal | +| `<` | less than | +| `<=` | less than or equal | +| `like` | SQL LIKE pattern matching | +| `not_like` | inverse of LIKE | +| `in` | value is in a list (pass an array as value) | +| `not_in` | value is not in a list | +| `is_null` | column is NULL (no value needed) | +| `is_not_null` | column is not NULL (no value needed) | +| `between` | between two values (pass `[min, max]` as value) | + +### Maximum Conditions + +Theres a per power level limit on how many conditions you can have in one query (`max_where_conditions` on the group). If you exceed it the server should give you a 400. this is to prevent people from going insane with query complexity. + +## Joins + +Add `joins` to your query to join other tables. Works with select and count. + +```json +{ + "action": "select", + "table": "items", + "columns": ["items.name", "categories.name"], + "joins": [ + { + "table": "categories", + "on": "items.category_id = categories.id", + "type": "LEFT" + } + ] +} +``` + +| Field | Type | Required | Default | Notes | +|---------|--------|----------|-----------|------------------------------------| +| `table` | string | yes | | table to join | +| `on` | string | yes | | join condition | +| `type` | string | no | `"INNER"` | `INNER`, `LEFT`, `RIGHT` | + +type may also be full but keep in mind that this is not support on MySQL/MariaDB backed servers so I advice against using it. + +You need read permission on every table you join. If you dont have access to a joined table the whole query should get rejected by jsonderulo and he will spank you. 
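Putting the filter pieces together, a hedged example of building and sending a select body with a nested filter plus a join. The table and column names, base URL and use of the Python `requests` library are made up for illustration:

```python
import requests

payload = {
    "action": "select",
    "table": "items",
    "columns": ["items.name", "categories.name"],
    "filter": {
        "and": [
            {"column": "items.price", "op": ">", "value": 0},
            {
                "or": [
                    {"column": "items.status", "op": "=", "value": "active"},
                    {"column": "items.status", "op": "in", "value": ["draft", "review"]},
                ]
            },
        ]
    },
    "joins": [
        {"table": "categories", "on": "items.category_id = categories.id", "type": "LEFT"}
    ],
    "limit": 25,
}

resp = requests.post(
    "http://localhost:5777/query",                        # base URL is an assumption
    headers={"Authorization": "Bearer your-session-token-here"},
    json=payload,
)
print(resp.json())
```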
+ +## Order By + +Sort your results: + +```json +{ + "order_by": [ + { "column": "created_at", "direction": "DESC" }, + { "column": "name", "direction": "ASC" } + ] +} +``` + +| Field | Type | Required | Default | +|-------------|--------|----------|---------| +| `column` | string | yes | | +| `direction` | string | no | `"ASC"` | + +You can sort by multiple columns. They apply in order. diff --git a/query/insert.md b/query/insert.md new file mode 100644 index 0000000..899f804 --- /dev/null +++ b/query/insert.md @@ -0,0 +1,50 @@ +# Insert + +[back to query](README.md) /// [home](../README.md) + +Add a new row to a table. + +## Request + +```json +{ + "action": "insert", + "table": "items", + "data": { + "name": "New Widget", + "category_id": 3, + "price": 24.99, + "status": "draft" + } +} +``` + +## Fields + +| Field | Type | Required | Notes | +|----------|--------|----------|------------------------------| +| `action` | string | yes | must be `"insert"` | +| `table` | string | yes | target table | +| `data` | object | yes | column value pairs to insert | + +## Response + +```json +{ + "success": true, + "data": 42, + "rows_affected": 1 +} +``` + +`data` is the auto increment ID of the newly inserted row (`last_insert_id`). + +## Behaviors + +- columns you dont have write access to should cause jsonderulo to start stripping for you and remove the naughty stuff from `data`. if theres nothing left after stripping you get an error (cant insert nothing lol) +- system columns should be auto populated: + - `created_by` set to your user ID + - `pinned_to` set to your user ID (unless you have `rwa` permission and explicitly set it to someone else) + - `created_at` handled by SQL defaults +- if the table has auto generation rules configured (like auto generated codes or UUIDs) those should get applied automatically +- you need write permission on the table. read only permission codes like `r`, `rg` and `ro` will give you a nice 403 diff --git a/query/select.md b/query/select.md new file mode 100644 index 0000000..0b84dc8 --- /dev/null +++ b/query/select.md @@ -0,0 +1,67 @@ +# Select + +[back to query](README.md) /// [home](../README.md) + +Read rows from a table. 
+ +## Request + +```json +{ + "action": "select", + "table": "items", + "columns": ["id", "name", "price"], + "filter": { + "and": [ + { "column": "status", "op": "=", "value": "active" }, + { "column": "price", "op": ">", "value": 10 } + ] + }, + "joins": [ + { + "table": "categories", + "on": "items.category_id = categories.id", + "type": "LEFT" + } + ], + "order_by": [ + { "column": "name", "direction": "ASC" } + ], + "limit": 50, + "offset": 0 +} +``` + +## Fields + +| Field | Type | Required | Default | Notes | +|------------|------------------|----------|----------------|--------------------------------------------------------------| +| `action` | string | yes | | must be `"select"` | +| `table` | string | yes | | target table | +| `columns` | array of strings | no | all columns | which columns to return | +| `where` | object | no | | simple key value filter (see [filters](filters.md)) | +| `filter` | object | no | | structured filter with operators (see [filters](filters.md)) | +| `joins` | array | no | | join other tables (see [filters](filters.md)) | +| `order_by` | array | no | | sort results (see [filters](filters.md)) | +| `limit` | integer | no | server default | max rows to return | +| `offset` | integer | no | 0 | skip this many rows | + +## Response + +```json +{ + "success": true, + "data": [ + { "id": 1, "name": "Widget", "price": 19.99 }, + { "id": 2, "name": "Gadget", "price": 29.99 } + ] +} +``` + +## Behaviors + +- if you dont specify `columns` you get all columns you have read access to +- if you request columns youre not allowed to see they should get silently stripped and a `warning` should be included so you know json derulo censored you +- `limit` should be auto applied if you dont set one (from config defaults). if you set one higher than your allowed max it gets capped and a `warning` tells you +- ownership scoping should be automatically applied based on your permission level (see [ownership](../permissions/ownership.md)) +- you need read permission on every joined table in regards to what you want to join obv. otherwise your whole query gets spanked by jsonderulo! diff --git a/query/update.md b/query/update.md new file mode 100644 index 0000000..2c5b753 --- /dev/null +++ b/query/update.md @@ -0,0 +1,54 @@ +# Update + +[back to query](README.md) /// [home](../README.md) + +Modify existing rows in a table. + +## Request + +```json +{ + "action": "update", + "table": "items", + "data": { + "price": 29.99, + "status": "active" + }, + "filter": { + "column": "id", + "op": "=", + "value": 42 + } +} +``` + +## Fields + +| Field | Type | Required | Notes | +|----------|--------|----------|------------------------------| +| `action` | string | yes | must be `"update"` | +| `table` | string | yes | target table | +| `data` | object | yes | column value pairs to update | +| `where` | object | yes* | simple key value filter | +| `filter` | object | yes* | structured filter | + +*at least one of `where` or `filter` is required. you cant do an unfiltered update. + +## Response + +```json +{ + "success": true, + "rows_affected": 1 +} +``` + +## Behaviors + +- a WHERE clause is **mandatory**. the server refuses to run an update without one. this is a safety thingy yk +- columns you dont have write access to are silently stripped. 
`warning` is included if json derulo stripped for you
+- `last_modified_by` auto set to your user ID
+- `last_modified_at` is handled by sqls ON UPDATE CURRENT_TIMESTAMP
+- ownership scoping applies: if you have `rwo` you can only update rows where `pinned_to` equals your user ID unless you have `rwa`. same deal with `rwg` for group scoping
+- limit is applied if set and capped to your max
+- auto generation rules apply on update too if configured for the update action diff --git a/schema/README.md b/schema/README.md new file mode 100644 index 0000000..dc82f01 --- /dev/null +++ b/schema/README.md @@ -0,0 +1,11 @@ +# Schema
+
+[back to index](../README.md)
+
+The core database tables that every jsonderulo compliant server needs and how the server should auto manage certain columns.
+
+## Pages
+
+- [Core Tables](core_tables.md) the required tables (users, groups, settings, associations, sessions)
+- [System Columns](system_columns.md) auto managed columns that jde adds to your tables
+- [Sessions](sessions.md) how persistent sessions work under the hood diff --git a/schema/core_tables.md b/schema/core_tables.md new file mode 100644 index 0000000..789d2a1 --- /dev/null +++ b/schema/core_tables.md @@ -0,0 +1,158 @@ +# Core Tables
+
+[back to schema](README.md) /// [home](../README.md)
+
+These tables are required for the API to function. They handle user management, permissions, configuration and session persistence. Implement them upstream however you want idc, they just gotta be somewhat sql compatible obviously lol.
+
+## jde_settings
+
+Database driven configuration. Values here can override config file settings (whether they actually win depends on [config preference](../admin/config_preference.md)).
+
+| Column        | Type                | Notes                |
+|---------------|---------------------|----------------------|
+| `id`          | INT AUTO_INCREMENT  | primary key          |
+| `setting`     | VARCHAR(255) UNIQUE | setting name         |
+| `value`       | TEXT                | setting value        |
+| `description` | TEXT NULL           | optional description |
+
+### Overridable settings
+
+These are the settings a compliant server should allow to be overridden from the database. The `setting` column is the key and `value` is parsed to the appropriate type.
+
+#### Session settings
+
+| Setting | Type | What it controls |
+|---------|------|-----------------|
+| `session_timeout_minutes` | number | default session timeout |
+| `refresh_session_on_activity` | bool | whether sessions extend on use |
+| `max_concurrent_sessions` | number | default max sessions per user |
+| `session_update_interval_seconds` | number | how often cached sessions flush to DB |
+| `min_clear_sessions_power` | number | min power level to clear another users sessions |
+
+#### Query limits
+
+| Setting | Type | What it controls |
+|---------|------|-----------------|
+| `default_max_limit` | number | global default max rows per query |
+| `default_max_where_conditions` | number | global default max WHERE conditions |
+
+#### User preferences
+
+| Setting | Type | What it controls |
+|---------|------|-----------------|
+| `default_user_settings_access` | string | default access level (`read-own-only`, `read-write-own`, `read-write-all`) |
+
+#### Security and IP whitelisting
+
+| Setting | Type | What it controls |
+|---------|------|-----------------|
+| `whitelisted_pin_ips` | JSON array | IPs/ranges allowed for PIN auth |
+| `whitelisted_string_ips` | JSON array | IPs/ranges allowed for token/badge auth |
+
+#### Rate limiting (may require restart to take effect)
+
+| Setting | Type | What it controls |
+|---------|------|-----------------|
+| `enable_rate_limiting` | bool | master switch for rate limiting |
+| `auth_rate_limit_per_second` | number | auth burst limit per IP per second |
+| `auth_rate_limit_per_minute` | number | auth limit per IP per minute |
+| `api_rate_limit_per_second` | number | API burst limit per user per second |
+| `api_rate_limit_per_minute` | number | API limit per user per minute |
+
+For SeckelAPI specifically:
+Rate limit settings get merged into config on reload but the actual rate limiter middleware is only rebuilt on a full restart. So changing these requires either a manual restart via `/reload` or waiting for an automatic restart if thats enabled.
+
+#### Application level settings
+
+You can also store arbitrary application level stuff like `company_name`, `date_format`, `datetime_format` etc. These dont get merged into the server config but can be read by toolkit endpoints via `inject_db_context` or queried by clients directly. Basically use the table as a key value store for whatever you need for shared tk settings if u want.
+
+### Settings that should NOT be overridable from the database
+
+Some things are config file only for good reasons:
+
+- **`config_preference`** controls how DB settings are merged in the first place. overriding it from the DB would be IDIOTIC
+- **server bind address** (`host`, `port`) no need to explain id say
+- **database connection** (`host`, `port`, `database`, `username`, `password`) cant change DB creds from inside the DB are you mad or smth?
+- **`min_reload_power`** and **`min_manual_restart_power`** security critical, dont want these changeable from DB duh
+- **`persistent_sessions`**, **`cache_recent_sessions`** fundamental fkn session architecture choices
+- **`hash_pins`**, **`hash_tokens`** would literally break existing credentials
+- **logging config** file paths, log levels etc are infrastructure level
- **toolkit definitions** managed via their own config files not jde_settings
+- **toolkit definitions** managed via their own config files not jde_settings
+- **system column config** structural stuff thats set at startup
+
+## jde_groups
+
+Group/role definitions with power levels and permissions.
+
+| Column                 | Type                | Notes                                                  |
+|------------------------|---------------------|--------------------------------------------------------|
+| `id`                   | INT AUTO_INCREMENT  | primary key                                            |
+| `name`                 | VARCHAR(100) UNIQUE | group name                                             |
+| `power`                | INT                 | 1 (lowest) to 100 (highest)                            |
+| `permissions`          | JSON NULL           | table and column permission rules                      |
+| `max_limit`            | INT NULL            | max rows per query (NULL = use default)                |
+| `max_where`            | INT NULL            | max WHERE conditions (NULL = use default)              |
+| `timeout_session`      | INT NULL            | session timeout in minutes                             |
+| `max_sessions`         | INT NULL            | max concurrent sessions                                |
+| `rollback_on_error`    | BOOL                | default TRUE                                           |
+| `allow_batch`          | BOOL                | default FALSE                                          |
+| `user_settings_access` | ENUM                | `read-write-own`, `read-write-all` or `read-own-only`  |
+
+The `permissions` column holds a JSON array of permission rules like `["*:rw", "logs:r"]`. See [table permissions](../permissions/table_permissions.md).
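+
+For illustration, a hypothetical "staff" style group might carry something like this in its `permissions` column (the table names and codes here are made up examples, not required defaults):
+
+```json
+["items:rwo", "logs:r"]
+```
+
+That would give ownership scoped read/write on `items` and read only access to `logs`. The exact rule syntax and the available permission codes are documented in [table permissions](../permissions/table_permissions.md).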
+
+## jde_users
+
+User accounts for authentication.
+
+| Column              | Type                | Notes                                                       |
+|---------------------|---------------------|-------------------------------------------------------------|
+| `id`                | INT AUTO_INCREMENT  | primary key                                                 |
+| `name`              | VARCHAR(200)        | display name                                                |
+| `username`          | VARCHAR(100) UNIQUE | login name                                                  |
+| `password`          | VARCHAR(255)        | bcrypt hashed                                               |
+| `pin_code`          | VARCHAR(8) NULL     | short PIN for kiosk auth                                    |
+| `login_string`      | VARCHAR(255) NULL   | badge/NFC card identifier                                   |
+| `core_group_id`     | INT FK              | references jde_groups.id                                    |
+| `email`             | VARCHAR(255) NULL   |                                                             |
+| `phone`             | VARCHAR(50) NULL    |                                                             |
+| `notes`             | TEXT NULL           |                                                             |
+| `active`            | BOOLEAN             | default TRUE                                                |
+| `last_login_date`   | DATETIME NULL       |                                                             |
+| `preferences`       | JSON NULL           | user settings (see [preferences](../admin/preferences.md))  |
+| `toolkit_overrides` | JSON NULL           | per user toolkit group overrides                            |
+
+### toolkit_overrides
+
+Allows overriding which toolkit group a user belongs to without changing the junction table:
+
+```json
+[
+  { "toolkit": "beepzone", "group": "manager" }
+]
+```
+
+NULL means inherit from the core groups associations in `jde_associations`.
+
+## jde_associations
+
+Links core groups to toolkit specific groups. Each core group can have one membership per toolkit.
+
+| Column               | Type               | Notes                                |
+|----------------------|--------------------|--------------------------------------|
+| `id`                 | INT AUTO_INCREMENT | primary key                          |
+| `core_group_id`      | INT FK             | references jde_groups.id             |
+| `toolkit_name`       | VARCHAR(100)       | toolkit identifier (like `beepzone`) |
+| `toolkit_group_name` | VARCHAR(100)       | group name within the toolkit        |
+
+Unique on `(core_group_id, toolkit_name)` so a group can only be in one toolkit group per toolkit.
+
+## jde_sessions
+
+Internal table for persistent session storage. **Not accessible via the /query endpoint.** See [sessions](sessions.md) for details.
+
+| Column          | Type           | Notes                          |
+|-----------------|----------------|--------------------------------|
+| `token_hash`    | VARCHAR(64) PK | SHA 256 hash of session token  |
+| `user_id`       | INT FK         | references jde_users.id        |
+| `created_at`    | TIMESTAMP      | when the session was created   |
+| `last_accessed` | TIMESTAMP      | when the session was last used | diff --git a/schema/sessions.md b/schema/sessions.md new file mode 100644 index 0000000..2eb89d3 --- /dev/null +++ b/schema/sessions.md @@ -0,0 +1,57 @@ +# Sessions
+
+[back to schema](README.md) /// [home](../README.md)
+
+How session persistence and management works under the hood. This is the internal guts of json derulo. For the client facing auth flow see [auth](../auth/README.md).
+
+## Architecture
+Of note: most of whats below is optional and relates to persistent sessions plus the insane hashing idea i had at 3am. dont keep your session tokens in plaintext or unencrypted anywhere other than maybe RAM (hashmaps) and you should be fine.
+
+## Token handling
+
+When you log in the server generates a UUID v4 token. The raw token is returned to you and **never stored anywhere on the server**. The server only keeps a SHA 256 hash of it.
+
+When you send a request with your token to json derulo he hashes it and looks up the hash in the cache. This means even if someone gets access to the database or the servers memory they only see hashes, not usable tokens.
+
+## Persistence
+
+When `persistent_sessions` is enabled the server stores session hashes in the `jde_sessions` table.
This has two parts:
+
+### Periodic flushing your hashes down the toilet (onto mariadb)
+
+A background task should run periodically (preferably with support for custom intervals). It:
+1. writes new sessions to the database
+2. updates `last_accessed` for sessions that had activity since the last flush
+3. deletes sessions that were logged out or expired
+4. cleans up any really old rows as a safety net
+
+Changes should preferably be batched so the server isnt hitting the database on every single request a billion extra times.
+
+### Startup load
+
+On server start (including after a restart) the server should load all valid sessions from `jde_sessions` back into memory by JOINing with `jde_users` and `jde_groups` to reconstruct full session objects. Expired sessions should be discarded.
+
+## Pre restart flushing of toilet doo doo
+
+When a forced restart is requested via `/reload` the server should flush all pending session changes to the database before re execing. Should have a timeout so a dead database doesnt block the restart though lmao.
+
+## Session timeouts
+
+Each session should have a timeout based on the users power level (configurable per power level). If session refresh on activity is enabled (it should be by default) the timeout resets on every request.
+
+Expired sessions should be cleaned up by the periodic background task.
+
+## Concurrent session limits
+
+Each user has a maximum number of concurrent sessions (`max_concurrent_sessions`, configurable per power level with a global default). When a user logs in and is at the limit the **oldest session** (earliest `created_at`) is automatically evicted.
+
+## The jde_sessions table
+
+This table is internal only. It should never never never EVER be accessible via the `/query` endpoint and the server should be hardcoded to skip it for system column management (no `pinned_to`, `created_by` etc needed on this table).
+
+| Column          | Type        | Notes                                 |
+|-----------------|-------------|---------------------------------------|
+| `token_hash`    | VARCHAR(64) | primary key, SHA 256 hash             |
+| `user_id`       | INT         | FK to jde_users.id, ON DELETE CASCADE |
+| `created_at`    | TIMESTAMP   | when the session was created          |
+| `last_accessed` | TIMESTAMP   | when the session was last used        | diff --git a/schema/system_columns.md b/schema/system_columns.md new file mode 100644 index 0000000..14f2a7d --- /dev/null +++ b/schema/system_columns.md @@ -0,0 +1,56 @@ +# System Columns
+
+[back to schema](README.md) /// [home](../README.md)
+
+The server should be able to automatically add and manage certain columns on your tables. These are called system columns.
+
+## The columns
+
+| Column             | Type           | When populated | Notes                                                                          |
+|--------------------|----------------|----------------|--------------------------------------------------------------------------------|
+| `created_at`       | TIMESTAMP      | on insert      | defaults to CURRENT_TIMESTAMP                                                  |
+| `created_by`       | INT NULL       | on insert      | FK to jde_users.id, set to the authenticated users ID                          |
+| `last_modified_at` | TIMESTAMP NULL | on update      | uses ON UPDATE CURRENT_TIMESTAMP                                               |
+| `last_modified_by` | INT NULL       | on update      | FK to jde_users.id, set to the authenticated users ID                          |
+| `pinned_to`        | INT NULL       | on insert      | FK to jde_users.id, used for [ownership scoping](../permissions/ownership.md)  |
+
+The FK columns (`created_by`, `last_modified_by`, `pinned_to`) reference jde_users with ON DELETE SET NULL and ON UPDATE CASCADE.
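+
+To make that concrete, heres a rough sketch of how a row from the `items` example table might come back once the system columns have been auto added and populated (all values are illustrative):
+
+```json
+{
+  "id": 42,
+  "name": "New Widget",
+  "created_at": "2024-01-15 10:30:00",
+  "created_by": 7,
+  "last_modified_at": "2024-01-16 09:12:00",
+  "last_modified_by": 3,
+  "pinned_to": 7
+}
+```
+
+`created_by` and `pinned_to` point at the user who inserted the row, `last_modified_by` at whoever touched it last, and the timestamps are filled in by the SQL defaults described above.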
+ +## Auto management + +On startup the server should check every enabled tables columns and automatically add any missing system columns. This means you dont have to define them in your CREATE TABLE statements schema wise, Json Derulo will take care of them for you if you want <3 +TO let Jays Son Handle it a compliant server should support : +- master switch to enable/disable auto management +- separate switch for core tables (jde_users etc) + +Fully internal tables like reference servers `jde_sessions` should always be skipped regardless of these settings. + +## Per table overrides + +You can disable specific system columns for specific tables. In toolkit config: + +```toml +[system_column_overrides.my_changelog_table] +all = false # skip ALL system columns on this table +``` + +Or selectively: + +```toml +[system_column_overrides.my_table] +pinned_to = false # no ownership column +created_by = false # dont track who created rows +``` + +Same thing works for core tables via `core_system_column_overrides` in the columns config section. + +## Runtime repair + +If a query fails because of a missing system column (MySQL error 1054 "unknown column") the server should try to auto fix it by adding the missing column at runtime and retrying the query. This should have a cooldown so it doesnt spam ALTER TABLE on every request if something is actually wrong. + +## Write protection + +System columns are write protected by default. Only users with `rwa` table permission can write to them directly. For everyone else the server manages them automatically: +- `created_by` and `pinned_to` are set on insert +- `last_modified_by` is set on update +- the timestamp columns are handled by MySQL diff --git a/toolkits/README.md b/toolkits/README.md new file mode 100644 index 0000000..814c1f8 --- /dev/null +++ b/toolkits/README.md @@ -0,0 +1,11 @@ +# Toolkits + +[back to index](../README.md) + +The plugin system. Toolkits extend the server with custom tables, endpoints, permissions and scheduled tasks. + +## Pages + +- [Overview](overview.md) what toolkits are and the two types +- [Endpoints](endpoints.md) custom API endpoints (internal, HTTP proxy, executable) +- [Permissions](permissions.md) toolkit groups, associations and how permissions are resolved diff --git a/toolkits/endpoints.md b/toolkits/endpoints.md new file mode 100644 index 0000000..760895e --- /dev/null +++ b/toolkits/endpoints.md @@ -0,0 +1,137 @@ +# Toolkit Endpoints + +[back to toolkits](README.md) /// [home](../README.md) + +Custom API routes defined by toolkits. These live under `/tk/app/{toolkit}/` or `/tk/lib/{toolkit}/` depending on the toolkit type. + +## Route format + +``` +POST /tk/app/beepzone/kiosk/scan +GET /tk/lib/opensigma/status +``` + +The path after the toolkit name is matched against endpoint definitions in the toolkit config. + +## Endpoint types + +### Internal + +Runs logic within the server process. Can use `pseudo_querys` to execute database queries based on the request payload. No external service needed. + +### HTTP + +Proxies the request to an external HTTP service. The server wraps the payload in a context object (user info, DB context etc), sends it to the upstream URL and returns the response. + +Can return JSON (wrapped in the standard envelope) or `passthrough` (raw bytes with the upstream content type, useful for file downloads). + +### Executable + +Runs a shell command on the server. The request context is passed as a JSON string argument. Output is parsed as JSON or wrapped in `{ "output": "..." 
}` if its not valid JSON. + +## Request flow + +1. verify the toolkit exists and is enabled (404 if not) +2. match the endpoint path in config (404 if not found) +3. check endpoint level permissions for your power level (403 if denied) +4. validate payload against `client_filter` rules (400 if invalid) +5. fetch `inject_db_context` values from the database +6. resolve your toolkit group +7. build the request context +8. apply `overwrite_wrapping` transformations +9. execute based on endpoint type + +## Request context + +The context object sent to backends looks like this: + +```json +{ + "user_id": 1, + "username": "admin", + "power": 100, + "core_group_id": 1, + "core_group_name": "administrators", + "toolkit_group": "managers", + "request_id": "abc123", + "payload": { ... }, + "db_context": { "key": "value" } +} +``` + +`toolkit_group` is the users group within this specific toolkit (resolved via `jde_associations` or `toolkit_overrides`). + +`db_context` contains values fetched via `inject_db_context` config. + +## Client filter + +Validates the incoming request payload before anything else happens. + +```toml +[endpoints.0.client_filter] +allowed_fields = ["barcode", "quantity"] +required_fields = ["barcode"] +max_body_bytes = 4096 +``` + +| Field | Notes | +|-------------------------|------------------------------------------------------------------| +| `allowed_fields` | only these fields are accepted, everything else gets stripped | +| `required_fields` | these must be present or you get 400 | +| `max_body_bytes` | max payload size | +| `allowed_content_types` | restrict accepted content types | +| `field_rules` | per field validation (type, max_length, pattern, allowed_values) | + +Per field rules example: + +```toml +[endpoints.0.client_filter.field_rules.barcode] +type = "string" +max_length = 50 +pattern = "^[A-Z0-9-]+$" +``` + +## Database context injection + +`inject_db_context` lets you pull values from the database and include them in the request context. + +Simple: pull a value by column name +```toml +inject_db_context = ["users.email"] +``` + +Filtered: pull specific rows based on payload values +```toml +inject_db_context = ["items.name?id=$payload.item_id"] +``` + +The `$payload.field` syntax references fields from the clients request body. + +## Overwrite wrapping + +Modify the context envelope before sending it to the backend. + +- prefix with `-` to remove a field: `"-power"` removes the power level from context +- without prefix: promotes a db_context value to a top level field + +## Success response + +```json +{ + "success": true, + "request_id": "abc123", + "endpoint": "/tk/app/beepzone/kiosk/scan", + "data": { ... } +} +``` + +## Error responses + +| Code | When | +|------|-----------------------------------------------------------| +| 400 | payload validation failed | +| 401 | no token or invalid session | +| 403 | endpoint permission denied | +| 404 | toolkit not found/disabled, endpoint path doesnt exist | +| 500 | internal error, DB context fetch failed, executable error | +| 502 | HTTP upstream returned an error | diff --git a/toolkits/overview.md b/toolkits/overview.md new file mode 100644 index 0000000..93b955c --- /dev/null +++ b/toolkits/overview.md @@ -0,0 +1,42 @@ +# Toolkit Overview + +[back to toolkits](README.md) /// [home](../README.md) + +A toolkit is a bundle of tables, permissions, endpoints and optionally scheduled tasks that extends the server for a specific use case. 
+
+## Types
+
+There are two toolkit types as of JsonDerulo API Spec v2:
+
+### Application
+
+Route prefix: `/tk/app/{toolkit}/{*path}`
+
+Application toolkits are full featured modules with their own UI or client (not intended to be handled by the JDE Server alone). Think of them as apps that use the server as a backend. A POS system, an inventory manager, a monitoring dashboard etc.
+
+### Library
+
+Route prefix: `/tk/lib/{toolkit}/{*path}`
+
+Library toolkits provide shared services or utilities. They exist to be used by application toolkits or to serve as separately assignable permissions. Think of them as shared backend modules.
+
+The only functional difference is the route prefix. Everything else (permissions, endpoints, config) should work basically the same way.
+
+## What a toolkit can define
+
+- **tables**: database tables it owns, with the option for the jde server to manage system columns on them and enforce permissions
+- **read_only_tables**: tables that reject all writes through the API (overridable by power level)
+- **write_protected_columns**: extra columns besides system columns that cant be written by normal users
+- **endpoints**: custom API routes backed by internal logic, HTTP services or executables
+- **groups_table**: a database table holding toolkit specific group definitions with their own permissions
+- **scheduled_queries**: SQL like queries that run periodically on a timer
+- **auto_generation**: rules for auto generating field values (UUIDs, sequential codes, random strings etc)
+- **system_column_overrides**: per table control over which system columns to skip
+
+## Config file structure
+
+Toolkits are one of the few things that must be configured in TOML files. The main `toolkits.toml` sets global defaults and can define inline toolkits. Most toolkits live in individual files under the `toolkitd/` subdirectory (like `toolkitd/beepzone.toml`).
+
+## Enabling and disabling
+
+Each toolkit has an `enabled` field (default true). Disabled toolkits are completely ignored: their tables arent registered, their endpoints dont exist and their permissions arent loaded. diff --git a/toolkits/permissions.md b/toolkits/permissions.md new file mode 100644 index 0000000..e32e9d3 --- /dev/null +++ b/toolkits/permissions.md @@ -0,0 +1,68 @@ +# Toolkit Permissions
+
+[back to toolkits](README.md) /// [home](../README.md)
+
+How toolkit specific permissions work and how they get merged with core permissions.
+
+## Toolkit groups
+
+Each toolkit can have its own groups table (configured via `groups_table`). This table needs at minimum:
+
+| Column                 | Type    | Notes                                                                      |
+|------------------------|---------|-----------------------------------------------------------------------------|
+| `name`                 | VARCHAR | group name, referenced by `jde_associations.toolkit_group_name`            |
+| `permissions`          | JSON    | array of permission rules (same format as core: `["table:rw", "logs:r"]`)  |
+| `endpoint_permissions` | JSON    | array of allowed endpoint paths like `["kiosk/*", "report"]`               |
+
+## Associations
+
+Core groups are linked to toolkit groups via the `jde_associations` junction table. One entry per core group per toolkit:
+
+```
+core group "staff" (power 50) -> beepzone toolkit -> beepzone group "operators"
+core group "admin" (power 100) -> beepzone toolkit -> beepzone group "managers"
+```
+
+When a user authenticates the server knows their core group which gives their power level and core permissions.
Then for each toolkit it looks up the association to find their toolkit group and merges those permissions in.
+
+## Permission resolution
+
+At startup and on config reload the server should:
+
+1. read core groups from `jde_groups`
+2. read toolkit groups from each toolkits `groups_table`
+3. join them through `jde_associations`
+4. merge the permission rules per power level
+5. build the real jsonderulo RBAC structure
+
+Toolkit permissions are additive to core permissions. If core gives you `r` on a table and the toolkit gives you `rw` on the same table you end up with `rw`.
+
+## User level overrides
+
+Users can have `toolkit_overrides` set on their `jde_users` row:
+
+```json
+[{ "toolkit": "beepzone", "group": "manager" }]
+```
+
+This overrides the association from their core group. So even if their core group maps to "operators" in beepzone they personally get "manager" permissions.
+
+## Fallback permissions
+
+When the database is partially fucked or the groups table doesnt exist yet the server can fall back to static permissions defined in the toolkit config and users table. They should look somewhat like this:
+
+```toml
+[db_fallback_permissions.100]
+basic_rules = ["assets:rw", "logs:r"]
+advanced_rules = ["assets.secret_field:block"]
+```
+
+The key is the power level as a string. These only kick in when DB sourced permissions arent really available (or if the server implementation allows setting a preference as to which one wins if both are set).
+
+## Endpoint permissions
+
+Separate from table permissions. Endpoint permissions control which custom endpoint paths a user can access. These are resolved per power level from the toolkit groups `endpoint_permissions` column.
+
+Glob patterns are supported: `"kiosk/*"` matches all paths under kiosk.
+
+Endpoint fallback permissions can also be defined in the toolkit config for when the database isnt available.
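+
+As a rough sketch, a resolved toolkit group like the "operators" example above might end up holding something along these lines (shown here as one JSON object for readability, in the database `permissions` and `endpoint_permissions` are separate columns, and the values are made up):
+
+```json
+{
+  "name": "operators",
+  "permissions": ["items:rw", "logs:r"],
+  "endpoint_permissions": ["kiosk/*", "report"]
+}
+```
+
+A user resolved into that group could call any endpoint under `kiosk/` plus `report`, while every other endpoint in the toolkit would come back with a 403.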
