Run multiple queries of the same type in one request. All should share the same action and table.
{
  "action": "insert",
  "table": "items",
  "queries": [
    { "data": { "name": "Widget A", "price": 10 } },
    { "data": { "name": "Widget B", "price": 20 } },
    { "data": { "name": "Widget C", "price": 30 } }
  ],
  "rollback_on_error": true
}
| Field | Type | Required | Notes |
|---|---|---|---|
| action | string | yes | Shared across all sub queries. |
| table | string | yes | Shared across all sub queries. |
| queries | array | yes | Array of individual query bodies (without action and table). |
| rollback_on_error | boolean | no | Default false. If true, any failure rolls back everything. |
Each item in queries can have the same fields as a normal single query (columns, data, where, filter, limit, offset, order_by), just without repeating action and table.
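For example, a batch of SELECT sub queries might look like the sketch below. The where and order_by shapes shown here are only illustrative; each sub query accepts whatever the normal single select accepts.

{
  "action": "select",
  "table": "items",
  "queries": [
    { "columns": ["name", "price"], "where": { "name": "Widget A" } },
    { "where": { "price": 20 }, "limit": 5, "order_by": "name" }
  ]
}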
Batch operations should require the allow_batch_operations permission for the caller's power level (on jde_groups); without it the request is rejected with a 403. Finer-grained control over this permission may be introduced later, but a single flag should suffice for now.
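A rejected request would presumably use the same error envelope as other failures. A sketch of what that could look like (the exact wording is an assumption, not the actual server message):

{
  "success": false,
  "error": "allow_batch_operations permission required [request_id: abc123]"
}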
- Everything runs in a single transaction.
- For INSERT batches the server should ideally combine the sub queries into one multi row INSERT statement instead of executing N individual inserts (much faster).
- For SELECT and COUNT, each sub query runs individually within the transaction.
- If rollback_on_error is true and any sub query fails, the entire batch is rolled back and nothing is committed.
- If rollback_on_error is false (the default), failures are reported but successful queries still commit.
A successful response looks the same as the corresponding individual action response, with an added results array to report per-sub-query outcomes in partial success/failure scenarios.
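For instance, a fully successful insert batch of three rows might come back like this (a sketch based on the results entries shown below, not a verbatim server response):

{
  "success": true,
  "results": [
    { "success": true, "rows_affected": 1 },
    { "success": true, "rows_affected": 1 },
    { "success": true, "rows_affected": 1 }
  ]
}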
When rollback_on_error is true and something fails:
{
  "success": false,
  "error": "Batch operation failed, all changes rolled back [request_id: abc123]",
  "results": [
    { "success": true, "rows_affected": 1 },
    { "success": false, "error": "duplicate key" },
    { "success": true, "rows_affected": 1 }
  ]
}
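When rollback_on_error is false (the default), a partial failure might instead look something like the sketch below: the failing sub query is reported but the others still commit. The top-level fields here are an assumption about the response shape, not confirmed behavior.

{
  "success": false,
  "error": "1 of 3 queries failed [request_id: abc123]",
  "results": [
    { "success": true, "rows_affected": 1 },
    { "success": false, "error": "duplicate key" },
    { "success": true, "rows_affected": 1 }
  ]
}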