Data Stores in 5 mins
- Data Stores allow BPMN Architects to define custom data models that can be shared across process instances
- Example: ClientA wants to deploy a website that finds a recipe based on selected ingredients, or lets the user add an ingredient
- We don't want to add a dedicated ingredients table to the backend
- A Service Task could be used (calling out to Postgres or a custom recipe backend)
- Or a Data Store could be added with reads and writes modeled in the diagram
- There can be many types of Data Stores; each handles the structure and persistence of the stored data
- Data Stores are implemented in the backend and must be registered with SpiffWorkflow
- When a Data Store is added to a process model, the type of Data Store and its identifier are defined
- To/From arrows are then drawn, which cause data to be put into or taken from task data
- They work a lot like Data Objects; the main difference is that the data persists outside of the process instance (see the sketch below)
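To make the mechanics concrete, here is a minimal Python sketch of the idea. The class and method names are invented for illustration and are not the actual SpiffWorkflow API; a real Data Store would persist to a database or file rather than an in-memory dict.

```python
# Hypothetical sketch only -- not the real SpiffWorkflow data store API.
# A data store type implements hooks that move data between external storage
# and a task's data dictionary when the To/From arrows are executed.
class InMemoryJSONDataStore:
    _storage: dict = {}  # identifier -> stored JSON-serializable blob

    def __init__(self, identifier: str):
        self.identifier = identifier

    def read_into(self, task_data: dict) -> None:
        # A "from data store" arrow copies the stored blob into task data.
        task_data[self.identifier] = self._storage.get(self.identifier, {})

    def write_from(self, task_data: dict) -> None:
        # A "to data store" arrow persists the blob found in task data.
        self._storage[self.identifier] = task_data[self.identifier]


# One process instance writes a recipe list ...
store = InMemoryJSONDataStore("recipes")
store.write_from({"recipes": [{"name": "pancakes", "ingredients": ["flour", "egg"]}]})

# ... and a later, separate process instance reads it back.
task_data: dict = {}
InMemoryJSONDataStore("recipes").read_into(task_data)
print(task_data["recipes"][0]["name"])  # pancakes
```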
JSON Data Store
- Allows storing any valid JSON blob
- The entire blob is read/written when accessed
- Very flexible but not optimal for large datasets
- Stored in the backend's database, one row per data store instance (see the sketch below)
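As a rough illustration of the storage shape (the real backend schema is not shown here and may differ), the whole blob can be thought of as one row whose single data column is rewritten in full on every update:

```python
# Assumed storage shape for illustration only -- not the real backend schema.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE json_data_store (identifier TEXT PRIMARY KEY, data TEXT)")

# A write serializes and stores the entire blob in one row.
blob = {"recipes": [{"name": "pancakes", "ingredients": ["flour", "egg", "milk"]}]}
conn.execute(
    "INSERT OR REPLACE INTO json_data_store VALUES (?, ?)",
    ("recipes", json.dumps(blob)),
)

# A read pulls the entire blob back out, however little of it is needed.
row = conn.execute(
    "SELECT data FROM json_data_store WHERE identifier = ?", ("recipes",)
).fetchone()
print(json.loads(row[0])["recipes"][0]["name"])  # pancakes
```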
JSON File Data Store
- Allows storing any valid JSON blob
- The entire blob is read/written when accessed
- Very flexible but not optimal for large datasets
- Stored in a file that is committed to git along with process models (see the sketch below)
- Good for small static global data that should be included in PRs/promoted to other environments
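A sketch of the same idea backed by a file on disk; the directory layout and file name here are made up, not the backend's actual naming convention.

```python
# Illustrative only -- the point is that the blob is a plain JSON file that
# lives alongside the process models and travels through git like any model file.
import json
import tempfile
from pathlib import Path

# Stand-in for a process model directory checked into git.
model_dir = Path(tempfile.mkdtemp()) / "process-models" / "recipes"
model_dir.mkdir(parents=True)

data_store_file = model_dir / "cuisines.json"  # hypothetical file name
data_store_file.write_text(json.dumps({"cuisines": ["thai", "italian"]}, indent=2))
print(json.loads(data_store_file.read_text())["cuisines"])  # ['thai', 'italian']
```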
KKV Data Store
- Allows storing any valid JSON blob
- Each blob is segmented by two levels of keys
- An example use case: managing car lots for a large chain of dealerships
- The top-level key could be the car lot id
- The second-level keys could be "inventory", "employees", "weekly-deals", etc.
- Data is lazy-loaded via a function pointer placed in task data (sketched after this list)
- A subset of data is read/written when accessed
- Stored in the backend's database, one row per data store instance/top-level key/secondary key
- Good for larger datasets
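A hedged sketch of the access pattern described above; the names and the shape of the callable are illustrative, not the backend's real interface.

```python
# Illustrative only -- not the real backend interface. Each (top-level key,
# secondary key) pair maps to its own stored blob, and task data receives a
# callable that fetches one slice on demand instead of the whole dataset.
from typing import Any

_kkv_rows: dict[tuple[str, str], Any] = {
    ("lot-42", "inventory"): [{"vin": "1HGCM82633A004352", "model": "Accord"}],
    ("lot-42", "employees"): ["Ada", "Grace"],
    ("lot-7", "weekly-deals"): [{"model": "Civic", "discount": 0.05}],
}

def car_lots(top_key: str, secondary_key: str) -> Any:
    # The "function pointer" placed in task data; each call reads one slice.
    return _kkv_rows.get((top_key, secondary_key))

task_data = {"car_lots": car_lots}

# Inside a script task, only the requested slice is loaded:
inventory = task_data["car_lots"]("lot-42", "inventory")
print(inventory[0]["model"])  # Accord
```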
Typeahead Data Store
- Requires a structured JSON blob that details the records to be used by the Type Ahead Widget (see the example below)
- Allows records to be indexed with different search terms (Dan, dan@, Mr. Funk, His Funkness -> Dan Funk)
- Is write-only from within BPMN diagrams
- Requires the entire dataset to be loaded in one shot
- Stored in the backend's database, one row per data store instance/search term
- Good only if you are using the Type Ahead Widget
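The exact record format the Typeahead Data Store expects is not reproduced here; the following is an assumed shape purely to illustrate "many search terms per record, one row per search term".

```python
# Assumed record shape for illustration -- check the backend docs for the
# exact field names the Typeahead Data Store expects.
records = [
    {
        "search_terms": ["Dan", "dan@", "Mr. Funk", "His Funkness"],
        "result": {"name": "Dan Funk", "email": "dan@example.com"},
    },
    {
        "search_terms": ["Ada", "Lovelace"],
        "result": {"name": "Ada Lovelace", "email": "ada@example.com"},
    },
]

# Conceptually the backend stores one row per search term, so several terms
# resolve to the same record.
index = {
    term.lower(): record["result"]
    for record in records
    for term in record["search_terms"]
}
print(index["his funkness"]["name"])  # Dan Funk
```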
Defining Data Stores from the UI
- From the UI, there is still some work to be done
- Requires manually editing the XML to tweak what the modeler generates
- Work in progress
Scope
- By default, Data Stores are global and can be used from any process model
- As an alternative, an "upsearch" mechanism is currently being added to allow using only Data Stores defined within the "scope" of a given process model (sketched after this list)
- A branch is being worked on to allow JSON Data Stores to be added to a process group
- Only process models in or below that process group can see that Data Store
- Spiff Arena's permission model is then used to decide who can create Data Stores at a given location
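A rough sketch of the upsearch idea under these assumptions: the paths and data below are made up, and the real lookup lives in the backend, not in user code.

```python
# Illustrative only: resolve a data store identifier by walking up from the
# process model's location through its parent process groups.
from pathlib import PurePosixPath
from typing import Optional

# identifier -> process-group paths where a data store with that identifier
# has been defined (made-up data).
defined_at = {"recipes": {"clienta", "clienta/web"}}

def upsearch(identifier: str, process_model_path: str) -> Optional[str]:
    """Return the closest enclosing process group that defines the data store."""
    location = PurePosixPath(process_model_path)
    for candidate in (location, *location.parents):
        if str(candidate) in defined_at.get(identifier, set()):
            return str(candidate)
    return None

print(upsearch("recipes", "clienta/web/find-recipe"))  # clienta/web
print(upsearch("recipes", "clientb/checkout"))         # None (out of scope)
```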