I wrote some JUnit tests for my process. In some cases I used
runtimeService
    .createProcessInstanceByKey("ID")
    .startBeforeActivity("taskID")
    .setVariables(map)
    .execute();
to start a process from a given task (not from the beginning).
This works well so far. In one case, though, the starting task sits on one of the two flows after a parallel gateway. The process then only executes until it reaches the joining gateway at the end of that parallel flow, because the second token is missing.
Is there a way to 'mock' that missing token on the second incoming sequence flow?
I hope you understood me ;-)
You can execute
runtimeService
    .createProcessInstanceModification(processInstanceId)
    // adds a token right before the joining gateway
    .startBeforeActivity(idOfGateway)
    .execute();
If there are n missing tokens, make sure to call #startBeforeActivity n times (once per missing token).
I wish to use Twilio in the context of an adventure game. As the gamer (Geocacher) progresses on an actual treasure (cache) hunt, clues are given by text when certain code words or numbers are entered as part of the thread. I am very comfortable creating the flow in Studio and have a working flow that is debugged and ready to go.
Not so fast, grasshopper! Strange things started to happen when beta testing the flow. Basically, texts that show as sent arrive to the user out of sequence in the thread. The SMS logs show everything is working correctly (message sent), but what I call Zombie messages arrive to the user after a previous message has arrived. The Zombies are legitimate messages from the Flow, but out of the correct sequence, and that makes the thread unusable for my purposes.
I learned too late in my "programming" that Twilio states, "If you send multiple SMS messages to the same user in a short time, Twilio cannot guarantee that those messages will arrive in the order that you sent them." Ugh!
So, I started with the Help Techs at Twilio, and one solution is to create a subflow that is inserted after a Send Message Widget. This subflow basically fetches the message via the SMS SID to check the SMS status. If the status is "delivered", we can safely say the message has been received by the recipient and then permit the next message in the flow.
That sounds great, but I am not a programmer and will never be able to integrate the suggested code, much less debug it when things don't work. There might be many other approaches that you guys can suggest. The task is: 1) send a message, 2) run a subflow that checks for message delivery, 3) send the next message in the sequence.
I need to move on to implementation, and this type of subflow is out of my wheelhouse. I am willing to pay for programming help.
I can post the JSON code that was created as a straw man, but I have no idea how to use it or whether it is the optimal solution, if that is of help. It would seem that a lot of folks experience this issue and would like a solution. A nice, tight JSON subflow, with directions on how to insert it, would seem to be a necessary part of the Widget toolkit provided by Twilio in Studio.
Please Help Me! =)
As you stated, the delivery order of the messages cannot be guaranteed. Checking the status of each sent message is the most reliable approach, using a subflow, a Twilio Function, or a combination. Just keep in mind that Twilio Functions have a 10s execution time limit. I don't expect delivering the SMS will take longer than 10s in most cases. If you're worried about edge cases, you'd have to loop the check for the status multiple times. I'll share a proof of concept for this below.
An easier way, though it still doesn't guarantee delivery order, would be to add some delay between each message. There's no built-in delay widget, but here's code showing how to create a Twilio Function that adds delays of up to 10s.
A more hacky way to implement delays, without having to use this Twilio Function, is to use the Send & Wait For Reply Widget and set the "Stop Gathering After" property to the amount of delay you'd like to add. If the user responds, connect to the next widget; if they don't, also connect to the next widget.
As mentioned earlier, here's the Subflow + Function proof of concept I hacked together:
First, create a Twilio Functions Service, and in that service create two functions:
/delay:
// Helper function for quickly adding await-able "pauses" to JavaScript
const sleep = (delay) => new Promise((resolve) => setTimeout(resolve, delay));

exports.handler = async (context, event, callback) => {
  // A custom delay value could be passed to the Function, either via
  // request parameters or by the Run Function Widget.
  // Default to a 5 second delay
  const delay = event.delay || 5000;
  // Pause Function for the specified number of ms
  await sleep(delay);
  // Once the delay has passed, return a success message, TwiML, or
  // any other content to whatever invoked this Function.
  return callback(null, `Timer up: ${delay}ms`);
};
/get-message:
exports.handler = function (context, event, callback) {
  const messageSid = event.message_sid,
    client = context.getTwilioClient();

  if (!messageSid) return callback('message_sid parameter is required.');

  client.messages(messageSid)
    .fetch()
    .then((message) => callback(null, message))
    .catch((error) => {
      console.error(error);
      return callback(error);
    });
};
Then, create a Studio Flow named something like "Send and Wait until Delivered".
In this flow, you send the message, grabbing the message body passed in from the parent flow, {{trigger.parent.parameters.message_body}}.
Then, you run the /get-message Function, and check the message status.
If the status is delivered, set a status variable to delivered; this variable will be passed back to the parent flow. If the status is any of accepted, queued, sending, or sent, the message is still en route, so wait a second using the /delay Function and then loop back to the /get-message Function.
If it has any other status, it is assumed something went wrong and status is set to error.
Now you can create your parent flow where you call the subflow, specifying the message_body parameter. Then you can check the status variable returned by the subflow, whether it is 'delivered' or 'error'.
You can find the export for the subflow and the parent flow in this GitHub Gist. You can import it and it could be useful as a reference.
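If it helps to see the loop spelled out, here is a rough sketch of the same logic in plain Node.js against the Twilio REST API. This is only an illustration of what the subflow does, not something Studio runs; the function name and the one-second/ten-attempt numbers are my own choices:

// Illustration only: the same send-then-poll logic as the subflow, in Node.js.
// Assumes TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN are set in the environment.
const client = require('twilio')(
  process.env.TWILIO_ACCOUNT_SID,
  process.env.TWILIO_AUTH_TOKEN
);

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendAndWaitUntilDelivered(to, from, body, maxChecks = 10) {
  const message = await client.messages.create({ to, from, body });

  for (let i = 0; i < maxChecks; i++) {
    const current = await client.messages(message.sid).fetch();
    if (current.status === 'delivered') return 'delivered';
    if (!['accepted', 'queued', 'sending', 'sent'].includes(current.status)) {
      return 'error'; // failed, undelivered, etc.
    }
    await sleep(1000); // same idea as the /delay Function in the subflow
  }
  return 'error'; // never reached "delivered" within maxChecks attempts
}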
Personally, I'd add the /delay Function and use it after every message, adding a couple of seconds of delay. I'd assume that delay adds enough buffer for no zombie messages to appear.
Note: The code, proof of concept, and advice are offered as is, without liability to me or Twilio. None of it is tested against a production workload, so make sure you test this thoroughly for your use case!
We have a Google Cloud Build pipeline that does the following:
Create a temp function
Run some tests
Deploy the production function
However, the second step often fails due to the first container not being ready yet:
Step #3: Unexpected token '<' at 2:1
Step #3: <html><head>
Step #3: site unreachable
It looks like it is returning some placeholder HTML from nginx.
How can we fix that?
Currently, we just put an ugly sleep between the steps.
You probably want to have a look at the Cloud Functions API; there you can find the operations endpoint, which will tell you whether the operation is finished or not (assuming v1, otherwise look below): https://cloud.google.com/functions/docs/reference/rest/v1/operations/get
The operation ID is the same one returned by the create call. You can also list operations with the list endpoint (in the same doc).
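To illustrate the idea, here is a rough, untested polling sketch against the v1 REST endpoint that you could run as a build step between creating the temp function and running the tests. The operation name format, timeout, and polling interval are assumptions:

// Sketch: poll the Cloud Functions v1 operations endpoint until the
// create/update operation reports done.
const { GoogleAuth } = require('google-auth-library');

async function waitForFunctionOperation(operationName) {
  // operationName is what the create call returned, e.g. "operations/abc123" (placeholder).
  const auth = new GoogleAuth({ scopes: 'https://www.googleapis.com/auth/cloud-platform' });
  const client = await auth.getClient();
  const url = `https://cloudfunctions.googleapis.com/v1/${operationName}`;

  for (let attempt = 0; attempt < 60; attempt++) {
    const res = await client.request({ url });
    if (res.data.done) {
      if (res.data.error) {
        throw new Error(`Deployment failed: ${JSON.stringify(res.data.error)}`);
      }
      return res.data; // the function should now be ready to receive traffic
    }
    await new Promise((resolve) => setTimeout(resolve, 5000)); // poll every 5s
  }
  throw new Error(`Operation ${operationName} did not finish in time`);
}

waitForFunctionOperation(process.argv[2]).catch((err) => {
  console.error(err);
  process.exit(1);
});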
Suppose an Ethereum smart contract has an external function "foo" whose logic contains the state-reverting exception require(1 == 0, 'error: you broke the simulation!');.
If ethereum-client A broadcasts transaction "txA" which is a function call on foo, how can ethereum-client B access the state-reverting message corresponding to "txA"?
Edit: by "how can", I mean how can a developer practically enable ethereum-client B to access this data, i.e. can you please point me in the direction of the correct (lower-level, not web UI) API/RPC call from a particular tool?
Clearly this is possible, since block explorers provide such messages for failed transactions. I read through some of the source of Etherscan, but their JavaScript is minified and not easily readable.
Thanks in advance!
See this: https://ethereum.stackexchange.com/questions/39817/are-failed-transactions-included-in-the-blockchain
Failed transactions often are included in the chain.
What you sometimes see, if you're using e.g. MetaMask, is a popup saying "this transaction will fail" that happens before the transaction is sent to the chain. This is MetaMask trying to be helpful and prevent you wasting gas. But you can force send the transaction anyway, and you'll get a failed/reverted transaction posted on-chain (like this one for this Solidity source).
So to answer the original question, if TxA was posted on-chain, then client B will process it and get the revert message itself. If TxA was not posted on-chain, then there is no record of it.
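If you need the revert reason itself from client B, the usual trick is to replay the mined transaction with eth_call at the block it was included in (or to use a tracing RPC such as debug_traceTransaction on nodes that expose it). Here is a rough ethers v5 sketch; exact behaviour varies between nodes and providers, and you need the state for that block to be available:

// Sketch: recover the revert reason of a mined-but-reverted transaction by
// re-executing it with eth_call at its own block. Not guaranteed on every
// provider; some return the raw revert data, others throw an error instead.
const { ethers } = require('ethers');

async function getRevertReason(txHash, provider) {
  const tx = await provider.getTransaction(txHash);
  const callData = {
    to: tx.to,
    from: tx.from,
    data: tx.data,
    value: tx.value,
    gasLimit: tx.gasLimit,
  };
  try {
    const result = await provider.call(callData, tx.blockNumber);
    return decodeRevertString(result);
  } catch (err) {
    // Many providers throw a CALL_EXCEPTION; the reason is often attached.
    return err.reason || err.message;
  }
}

// Decode the standard Error(string) ABI encoding (selector 0x08c379a0).
function decodeRevertString(data) {
  if (!data || !data.startsWith('0x08c379a0')) return null;
  return ethers.utils.defaultAbiCoder.decode(['string'], '0x' + data.slice(10))[0];
}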
I see that thread::create creates a thread and thread::send sends a script to it. But thread::join has no script argument. thread::join is presented in the manual as if it were an alternative to thread::send, but I can't see how to send scripts to a thread if it's joinable.
I see that it blocks, which can be useful for some apps, but I don't see the value of thread::join yet. Please give an example of how thread::join can run scripts in a separate thread, or better, explain its value in a way the manual does not make clear to me.
I do not know where you got the idea that thread::join runs a script; it doesn't. What it actually does is send a (C API level) message to the other thread asking it to terminate gracefully, and then it waits for the thread to actually terminate. The thread::wait command knows how to handle such messages correctly, but most of that is just "run the event loop and watch in case a terminate message comes in" (which is why that command is supposed to always be used as the last command of a thread's body script if the thread is supposed to be responsive to events).
The actual joinability is about handling the reverse message signalling that a thread has really terminated.
I have written a script that updates my DB table after reading data from DB tables and Solr. I am using the async.waterfall module. The problem is that the script does not exit after the successful completion of all operations. I have also used a DB connection pool, thinking that might be causing the script to wait indefinitely.
I want to put this script in crontab, and if it does not exit properly it will create a hell of a lot of instances unnecessarily.
I just went through this issue.
The problem with just using process.exit() is that the program I am working on was creating handles, but never destroying them.
It was processing a directory and putting data into OrientDB.
So one of the things I have come to learn is that database connections need to be closed before you get rid of the reference, and that process.exit() does not solve all cases.
When my project processed 2,000 files, it would get down to about 500 left and the extra handles would have filled up the available working memory, which means it could not continue and therefore never reached the process.exit at the end.
On the other hand, if you close the items that are requesting the app to stay open, you can solve the problem at its source.
The two "undocumented" functions that I was able to use were
process._getActiveHandles();
process._getActiveRequests();
I am not sure what other functions will help with debugging these types of issues, but these ones were amazing.
They return an array, and you can determine a lot about what is going on in your process by using these methods.
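For example, a minimal sketch of how you might use them (the helper name is my own):

// Minimal sketch: dump whatever is still keeping the event loop alive.
// The constructor names are usually a strong hint (Socket, Timer, TLSWrap, ...).
function dumpOpenHandles() {
  const handles = process._getActiveHandles();
  const requests = process._getActiveRequests();
  console.log(`open handles: ${handles.length}, pending requests: ${requests.length}`);
  handles.forEach((handle) => console.log(' -', handle.constructor.name));
}

// Call this right before the point where you expect the script to exit,
// then explicitly close whatever shows up (DB pools, servers, intervals).
dumpOpenHandles();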
You have to tell it when you're done, by calling
process.exit();
More specifically, you'll want to call this in the callback from async.waterfall() (the second argument to that function). At that point, all your asynchronous code has executed, and your script should be ready to exit.
EDIT: As pointed out by @Aaron below, this likely has to do with something like a database connection being active and not allowing the Node process to end.
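Here is a rough sketch of what that final callback can look like for a script like the one described. The waterfall steps are placeholders for your own DB/Solr work, and the mysql pool is assumed purely for illustration:

const async = require('async');
const mysql = require('mysql'); // assumed driver; swap in whatever you actually use

const pool = mysql.createPool({ host: 'localhost', user: 'me', database: 'mydb' });

// Placeholder steps standing in for your real read-DB / read-Solr / update-DB work.
const readFromDb = (next) => pool.query('SELECT 1', next);
const readFromSolr = (rows, fields, next) => next(null, rows);
const updateDb = (docs, next) => next(null);

async.waterfall([readFromDb, readFromSolr, updateDb], (err) => {
  if (err) console.error(err);
  // Close whatever keeps the event loop alive (here: the pool),
  // then exit explicitly so cron doesn't pile up hanging instances.
  pool.end(() => process.exit(err ? 1 : 0));
});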
You can use the node module why-is-node-running:
Run npm install -D why-is-node-running
Add import * as log from 'why-is-node-running'; in your code
When you expect your program to exit, add a log statement:
afterAll(async () => {
  await app.close();
  log();
});
This will print a list of open handles with a stacktrace to find out where they originated:
There are 5 handle(s) keeping the process running
# Timeout
/home/maf/dev/node_modules/why-is-node-running/example.js:6 - setInterval(function () {}, 1000)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
# TCPSERVERWRAP
/home/maf/dev/node_modules/why-is-node-running/example.js:7 - server.listen(0)
/home/maf/dev/node_modules/why-is-node-running/example.js:10 - createServer()
We can quit the execution by using:
connection.destroy();
If you use Visual Studio Code, you can attach to an already running Node script directly from it.
First, run the Debug: Attach to Node Process command.
When you invoke the command, VS Code will prompt you for the Node.js process to attach to.
Your terminal should display this message:
Debugger listening on ws://127.0.0.1:9229/<...>
For help, see: https://nodejs.org/en/docs/inspector
Debugger attached.
Then, inside your debug console, you can use the code from The Lazy Coder’s answer:
process._getActiveHandles();
process._getActiveRequests();