Multi-step Object creation workflow with Actions - palantir-foundry

What are best practices for a multi-step workflow with conditional creation of Objects using Actions?
Consider the following scenario:
Events are reported to an organization via a Foundry workflow within a Workshop. Each Event (Object) is associated with a Location (Object). Events can occur at new Locations (Create) or existing Locations (reference).
This is a real-world scenario and requires at least the following steps:
1. Create the Event
2. Designate the Location: if new, Create it; otherwise, reference the existing one
Ideally steps 1 and 2 would occur on separate Pages in a Workshop with logical junctures. Are there best practices and/or new Foundry documentation describing this Action-oriented workflow (à la the new Workshop Stepper widget)?
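One pattern worth considering is a single Function-backed Action whose form accepts either an existing Location or the fields for a new one, with the branching handled inside the Function. The sketch below shows only that conditional create-or-reference logic; the types and the createLocation/createEvent helpers are hypothetical stand-ins, not the generated Ontology API, and in a real Foundry Function they would be Ontology edits.

```typescript
// Hypothetical stand-in types; in a real Foundry Function these would be
// the generated Ontology object types and the edits API.
interface LocationRef { locationId: string; }
interface NewLocationInput { name: string; address: string; }
interface EventInput {
  title: string;
  // The Workshop form should supply exactly one of these:
  existingLocation?: LocationRef;
  newLocation?: NewLocationInput;
}
interface CreatedEvent { eventId: string; locationId: string; }

// Stand-ins for Ontology "create" edits.
function createLocation(input: NewLocationInput): LocationRef {
  return { locationId: `loc-${input.name}` };
}
function createEvent(title: string, location: LocationRef): CreatedEvent {
  return { eventId: `evt-${title}`, locationId: location.locationId };
}

// Single Action entry point: create the Event and either reference the
// chosen Location or create a new one first.
export function submitEvent(input: EventInput): CreatedEvent {
  if (input.existingLocation && input.newLocation) {
    throw new Error("Provide either an existing Location or a new one, not both.");
  }
  const location =
    input.existingLocation ??
    (input.newLocation ? createLocation(input.newLocation) : undefined);
  if (!location) {
    throw new Error("A Location (new or existing) is required.");
  }
  return createEvent(input.title, location);
}
```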

Related

Single events channel with multiple consumer groups (with competing members)

I have a scenario where I want to publish actions to a single channel and have multiple groups of consumers compete for the actions they are specialized to process.
E.g. 2 types of actions with an attribute indicating their type:
Complex action
Simple action
Then, I need 2 groups of consumers (that I can scale in/out based on the load) to process 2 types of actions.
The solution should be as simple/cheap as possible.
Options I considered so far: Azure Service Bus, RabbitMQ, Kafka.
The Azure Service Bus approach I came up with seems a bit complex:
a single topic with subscriptions forwarding filtered actions to 2 separate queues, and consumers processing actions from those queues.
Is there a simpler/cheaper out-of-the-box approach in Azure?
Suggestions?
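For what it's worth, the filtered-subscription approach does not necessarily need the two extra queues: each consumer group can read directly from its own topic subscription, and multiple receiver instances attached to the same subscription already compete for its messages. Below is a minimal sketch with the @azure/service-bus SDK, assuming a topic named actions and two subscriptions (complex and simple) whose SQL filters on the actionType property were created in the portal or via infrastructure-as-code; the names and payload shape are made up.

```typescript
import { ServiceBusClient } from "@azure/service-bus";

// Assumed names: connection string env var, topic, and the two
// pre-created filtered subscriptions.
const connectionString = process.env.SERVICE_BUS_CONNECTION_STRING!;
const topicName = "actions";

const sbClient = new ServiceBusClient(connectionString);

// Producer: tag each action with its type so the subscription filters can route it.
export async function publishAction(type: "complex" | "simple", payload: unknown) {
  const sender = sbClient.createSender(topicName);
  await sender.sendMessages({
    body: payload,
    applicationProperties: { actionType: type },
  });
  await sender.close();
}

// Consumer: every instance of a consumer group attaches to the same
// subscription; Service Bus distributes messages between them, which
// gives competing-consumer behavior within each group.
export function startConsumer(subscriptionName: "complex" | "simple") {
  const receiver = sbClient.createReceiver(topicName, subscriptionName);
  receiver.subscribe({
    processMessage: async (message) => {
      console.log(`[${subscriptionName}]`, message.body);
      // ...do the type-specific work here...
    },
    processError: async (args) => {
      console.error(`[${subscriptionName}]`, args.error);
    },
  });
}
```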

Detect data anomalies in data pipe and trigger scheduled datapipeline

In Foundry, we have a data pipeline where we want to insert a code node (repo or workbook) that detects anomalies and then sends an email or some other alert about the problem.
Having trouble finding this in the documentation; can someone point me to it?
Ideally we would love to have the code trigger the Scheduler to do a pipeline run that creates a REPORT (maybe even in Quiver, to do some timeline analysis). Is this possible? Are there examples in the documentation?
Check out the Data Health section of the platform documentation. There are a number of patterns possible, including defining data expectations in your code.
Whether defined as code expectations or as dataset health checks, failures can be set up to create Issues within the platform. Issues can have default assignees (individuals or groups) and will also send notifications, both in-platform and over email (depending on per-user configuration).
Health check failures will also automatically populate the Data Health tab in the Project Catalog view, which can serve as a dashboard for the overall health of the project. You can also surface these in the Data Lineage view, colored by Data Health, to understand issues across the breadth of the pipeline.
For a comprehensive approach to pipeline health, review the Pipelines and best practices section in the Code Repositories documentation.

How to implement Event-carried State Transfer?

I watched Martin Fowler's seminar on Event-Driven Architecture. I see the benefits of Event-carried State Transfer but still haven't found a way to implement it as he described. How can I copy data from one database to another continuously, and can this copying cause errors?
Copying directly from one database to another is usually a bad idea, as it creates coupling. A better approach is for one service to publish events about its changes, events that other services can then subscribe to.
The publishing of events can be implemented in many different ways. For example:
The publisher can publish an ATOM feed that subscribers can poll and traverse for changes. For example, EventStoreDB publishes ATOM feeds to support this.
The publisher can publish its events to Kafka, which subscribers can then consume events from (see the sketch below).
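To illustrate the Kafka option, here is a minimal sketch using the kafkajs client. The broker address, topic name, consumer group, and event shape are assumptions; the point is that the event carries the full state of the changed entity, so subscribers can update their own copy without querying the source database.

```typescript
import { Kafka } from "kafkajs";

// Assumed broker address and topic name.
const kafka = new Kafka({ clientId: "customer-service", brokers: ["localhost:9092"] });
const topic = "customer-changed";

// Event-carried state transfer: the event carries the full state the
// consumers need, not just "customer 42 changed".
interface CustomerChanged {
  customerId: string;
  name: string;
  email: string;
  changedAt: string;
}

export async function publishCustomerChanged(event: CustomerChanged) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic,
    // Keying by customerId keeps all events for one customer in order.
    messages: [{ key: event.customerId, value: JSON.stringify(event) }],
  });
  await producer.disconnect();
}

export async function runSubscriber() {
  const consumer = kafka.consumer({ groupId: "reporting-service" });
  await consumer.connect();
  await consumer.subscribe({ topics: [topic], fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event: CustomerChanged = JSON.parse(message.value!.toString());
      // Upsert into the subscriber's own database / read model here.
      console.log("updating local copy of customer", event.customerId);
    },
  });
}
```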

Live/Reactive Aggregation in FeathersJS

I am trying to implement a real-time app with FeathersJS and Feathers-Vuex.
A simple example - A todo app where users can add goals, add tasks to goals, and an effort (1-5) to each task. I want to find out the total effort needed for the goal. Anytime the effort of a task changes (CRUD), the effort of the goal gets updated.
So something like -
Goal: G1 (11/15)
Tasks: T1 (4/5), T2 (2/3), T3 (5/5)
How do I calculate this and keep it in sync in FeathersJS + FeathersVuex?
What I've tried so far -
FastJoin to populate this data to each goal record but it wasn't reactive - Not sure how to "listen" to changes to "Tasks"
Hooks and storing stats in the database - It worked for a while but started getting out of hand (as more services became involved in the calculations) and I ended up with too much coupling.
Loading everything in Vuex and calculating it on the front-end - Worked well for prototyping but doesn't work for actual cases where there would be too many records to be able to pre-load everything.
A custom Service - "GoalStats" - which calculates the relevant stats by aggregating data from multiple services [I read that this is the recommended approach] (see the sketch below)
What I can't seem to figure out is how to keep things reactive when data is computed/aggregated across services. (E.g. adding a new task in the above example changes the goal's total effort.)
Still relatively new to FeathersJS and FeathersVuex - so not really sure what I am missing here.
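For the custom-service option, here is a minimal sketch of what a goal-stats service could look like (Feathers v4-style setup; service paths, field names, and the in-memory tasks data are assumptions). It exposes get(goalId), which aggregates the goal's tasks on demand, plus a patch that simply recalculates; the patch becomes useful for the trigger described in the update below.

```typescript
import feathers from "@feathersjs/feathers";

const app = feathers();

interface Task { id: number; goalId: number; effort: number; maxEffort: number; }

// Tiny in-memory stand-in for the real, database-backed tasks service.
const taskStore: Task[] = [
  { id: 1, goalId: 1, effort: 4, maxEffort: 5 },
  { id: 2, goalId: 1, effort: 2, maxEffort: 3 },
  { id: 3, goalId: 1, effort: 5, maxEffort: 5 },
];
app.use("tasks", {
  async find(params: any): Promise<Task[]> {
    return taskStore.filter((t) => t.goalId === params?.query?.goalId);
  },
});

// Custom aggregation service: stats are computed on demand, never stored.
app.use("goal-stats", {
  async get(goalId: number) {
    const tasks: Task[] = await app.service("tasks").find({ query: { goalId } });
    const effort = tasks.reduce((sum, t) => sum + t.effort, 0);
    const maxEffort = tasks.reduce((sum, t) => sum + t.maxEffort, 0);
    return { id: goalId, effort, maxEffort }; // e.g. G1 -> 11/15
  },
  // An (even empty) PATCH recalculates and makes Feathers emit "patched",
  // which is what keeps Feathers-Vuex in sync.
  async patch(goalId: number, _data: unknown) {
    return app.service("goal-stats").get(goalId);
  },
});

// Usage: await app.service("goal-stats").get(1) -> { id: 1, effort: 11, maxEffort: 15 }
```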
Update
Here is what I have settled on for now - use hooks on all the dependent services to trigger an empty PATCH request on the custom service (if needed). In Feathers-Vuex, I have added the service and it gets updated.
So, in the context of the example above, I am using the before and after hooks of the Tasks service to check whether the Effort value is being changed or a task is being added/removed. If so, I dispatch a PATCH request, which calls GET behind the scenes in my custom service and recalculates the stats, which then flow through the existing events mechanism.
I'm not sure if there is a better way to go about this and/or if there are best practices around managing these cross-service "triggers".
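A sketch of the cross-service trigger described in the update (service and field names are assumptions): an after hook on the tasks service fires an empty PATCH on goal-stats whenever a task is created or removed or its effort changes, so the stats are recalculated and a "patched" event reaches Feathers-Vuex.

```typescript
import { HookContext } from "@feathersjs/feathers";

// After hook for the tasks service; register it for create, patch and remove, e.g.:
// app.service("tasks").hooks({
//   after: { create: [touchGoalStats], patch: [touchGoalStats], remove: [touchGoalStats] },
// });
export async function touchGoalStats(context: HookContext) {
  const task = context.result;

  // For PATCH, only react when the effort field was actually part of the change.
  const affectsStats =
    context.method !== "patch" || context.data?.effort !== undefined;

  if (task?.goalId !== undefined && affectsStats) {
    // Empty PATCH on the aggregation service: it recalculates and emits
    // "patched", which Feathers-Vuex picks up on the client.
    await context.app.service("goal-stats").patch(task.goalId, {});
  }
  return context;
}
```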

Ways to learn how to implement a software workflow

How many ways are there to learn how to implement the workflow of a piece of software? What are they?
If you mean the user workflow, how the user is guided through the software...
I usually use some sort of state machine to limit what functionality can be triggered by the user and what information will be presented to the user in a particular state of the workflow. This way I can concentrate on designing each segment of the flow in its own "sandbox" and decision making becomes a lot easier.
If you do not mean user workflow, you can ignore this reply.
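As a small illustration of that idea, here is a sketch of a minimal state machine (the states and actions are invented): each state declares which user actions are allowed, so the UI only offers what the current step of the flow permits.

```typescript
// States of a hypothetical checkout-style user workflow.
type State = "enterDetails" | "review" | "confirmed";

// For each state: which user actions are allowed, and where they lead.
const transitions: Record<State, Record<string, State>> = {
  enterDetails: { next: "review" },
  review: { back: "enterDetails", confirm: "confirmed" },
  confirmed: {},
};

class Workflow {
  constructor(public state: State = "enterDetails") {}

  // The UI asks this to decide which buttons/inputs to show.
  allowedActions(): string[] {
    return Object.keys(transitions[this.state]);
  }

  trigger(action: string): void {
    const next = transitions[this.state][action];
    if (!next) {
      throw new Error(`Action "${action}" is not allowed in state "${this.state}"`);
    }
    this.state = next;
  }
}

// Example: only "next" is offered on the first page.
const wf = new Workflow();
console.log(wf.allowedActions()); // ["next"]
wf.trigger("next");
console.log(wf.allowedActions()); // ["back", "confirm"]
```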
Usually you have steps in a workflow. A step consists of a precondition (business logic hidden from the UI), some user interaction (the user entering data and doing some "user stuff"), and postconditions. Usually the user interaction part has one or more user-chosen "exits", and every exit has its own postcondition (usually every user exit has its own business logic, depending on the meaning of that exit from the step). Exits navigate the workflow to the next step. Sometimes you can have fully automatic steps (e.g. using some external data source, calling a web service, performing an important calculation, and so on).
If your workflow is simple, you may implement it as a set of classes representing the steps, with the configuration of the step order put in XML (a sketch along these lines is at the end of this reply). When your workflow grows bigger and bigger, it may be reasonable to look for a workflow engine (a discussion of workflow engines is, I think, beyond the scope of this question).
One important thing: steps can be orthogonal, but that is harder to design. If your steps rely on one another, the person configuring the workflow and step order must be fully aware of such dependencies (e.g. a user-address step will probably depend on a user-object-creation step, and removing the user-object-creation step from a workflow will result in trying to access a nonexistent object).
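Here is a rough sketch of the step-classes idea (all names are invented, and the step-order table is hard-coded where a real implementation might load it from XML): each step has a precondition and returns an exit, the configuration maps exits to the next step, and the enterAddress step shows the kind of dependency on the createUser step described in the last paragraph.

```typescript
// Shared state that steps read from and write to.
interface WorkflowContext {
  data: Record<string, unknown>;
}

interface Step {
  name: string;
  // Precondition: business logic that must hold before the step runs.
  canRun(ctx: WorkflowContext): boolean;
  // Runs the step (user interaction or automatic work) and returns an exit name.
  run(ctx: WorkflowContext): string;
}

const createUser: Step = {
  name: "createUser",
  canRun: () => true,
  run: (ctx) => {
    ctx.data.user = { id: 1 };
    return "created";
  },
};

const enterAddress: Step = {
  name: "enterAddress",
  // Depends on the user object created by the previous step.
  canRun: (ctx) => ctx.data.user !== undefined,
  run: (ctx) => {
    ctx.data.address = "Some street 1";
    return "done";
  },
};

// Step-order configuration: (step, exit) -> next step. Could come from XML.
const steps: Record<string, Step> = { createUser, enterAddress };
const routes: Record<string, Record<string, string | null>> = {
  createUser: { created: "enterAddress" },
  enterAddress: { done: null }, // null = end of workflow
};

export function runWorkflow(startStep: string): WorkflowContext {
  const ctx: WorkflowContext = { data: {} };
  let current: string | null = startStep;
  while (current) {
    const step = steps[current];
    if (!step.canRun(ctx)) {
      throw new Error(`Precondition failed for step "${step.name}"`);
    }
    const exit = step.run(ctx);
    current = routes[current][exit] ?? null;
  }
  return ctx;
}

runWorkflow("createUser");
```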