When a user calls my number, I want Twilio to <Say> something to them while <Dial>ing another number. The issue is, I can only seem to get it to do one or the other (i.e. say then dial, which delays the dial, or dial then say, which doesn't say anything until the call is over). What I want is either of the following (the first would be preferable, although answers for both would be best, in case I need the opposite in the future or someone finds this via Google):
Initiate the call to the new number AND start saying "Lorem ipsum...". If the say finishes first, stay silent until the call is picked up; if the other number picks up first, let the say finish and then transfer/combine the calls.
Initiate the call to the new number AND start saying "Lorem ipsum...". If the say finishes first, stay silent until the call is picked up; if the other number picks up first, cut the say off and instantly transfer/combine the calls.
Thanks!
Twilio evangelist here.
There is no way to do this using just TwiML, as Twilio processes TwiML sequentially, so it's going to finish the <Say> before moving on to the <Dial>.
You could combine TwiML with the REST API to do this, however. In the same HTTP request where you're generating the TwiML with the <Say> in it, you would also make a call out to the REST API to have Twilio start an outbound phone call.
Twilio would <Say> what you want to Caller A while dialing Caller B. When Caller B answers, put them into a conference. Once Caller A finishes listening to the <Say> put them into the same conference.
This way, regardless of who gets there first, Caller A or Caller B, either will wait for the other. You can use the StatusCallback parameter to detect if Caller B never answers and, in that scenario, redirect Caller A out of the conference.
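As a rough sketch of that approach in Node.js with the Twilio helper library and Express (the phone numbers, URLs, and conference name below are placeholders, not values from this thread):

const twilio = require('twilio');
const client = twilio(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Webhook that answers Caller A's inbound call.
async function handleIncomingCall(req, res) {
  // Kick off the outbound call to Caller B via the REST API. When B answers,
  // Twilio fetches /join-conference for that call leg.
  await client.calls.create({
    to: '+15551230001',                                  // Caller B (placeholder)
    from: '+15551230002',                                // your Twilio number (placeholder)
    url: 'https://example.com/join-conference',
    statusCallback: 'https://example.com/b-leg-status',  // detect no-answer and redirect A
  });

  // Return TwiML for Caller A: play the message, then drop them into the
  // same conference Caller B will join.
  const response = new twilio.twiml.VoiceResponse();
  response.say('Lorem ipsum dolor sit amet...');
  response.dial().conference('bridge-room');
  res.type('text/xml').send(response.toString());
}

// TwiML returned for Caller B's leg when they answer.
function joinConference(req, res) {
  const response = new twilio.twiml.VoiceResponse();
  response.dial().conference('bridge-room');
  res.type('text/xml').send(response.toString());
}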
Hope that helps.
I wish to use Twilio in the context of an adventure game. As the gamer (Geocacher) progresses on an actual treasure (cache) hunt, clues are given by text when certain code words or numbers are entered as part of the thread. I am very comfortable creating the flow in Studio and have a working flow that is debugged and ready to go.
Not so fast, grasshopper! Strange things started to happen when beta testing the flow. Basically, texts that show as being sent arrive to the user out of sequence in the thread. The SMS logs show everything is working correctly (message sent), but what I call Zombie messages arrive to the user after a previous message has arrived. The Zombies are legitimate messages from the Flow but out of the correct sequence, and that makes the thread unusable for my purposes.
I learned too late in my "programming" that Twilio states, "If you send multiple SMS messages to the same user in a short time, Twilio cannot guarantee that those messages will arrive in the order that you sent them." Ugh!
So, I started with the Help Techs at Twilio, and one solution is to create a subflow that is inserted after a Send Message Widget. This subflow basically fetches the message via the SMS SID to check its status. If the status is "delivered", we can safely say the message has been received by the recipient and then permit the next message in the flow.
That sounds great, but I am not a programmer and will never be able to integrate the suggested code, much less debug it when things don't work. There might be many other approaches that you guys can suggest. The task is 1.) send a message, 2.) run a subflow that checks for message delivery, 3.) send the next message in the sequence.
I need to move on to implementation and this type of sub flow is out of my wheelhouse. I am willing to pay for programming help.
I can post the JSON code that was created as a straw man, but I have no idea how to use it or whether it is the optimum solution, if that is of help. It would seem that a lot of folks experience this issue and would like a solution. A nice, tight JSON subflow with directions on how to insert it would seem to be a necessary part of the Widget toolkit provided by Twilio in Studio.
Please Help Me! =)
As you stated, the delivery order of the messages cannot be guaranteed. Checking the status of the sent message is the most reliable approach, using a subflow, a Twilio Function, or a combination. Just keep in mind that Twilio Functions have a 10s execution time limit. I don't expect delivering the SMS will take longer than 10s in most cases. If you're worried about edge cases, you'd have to loop the check for the status multiple times. I'll share a proof of concept for this later.
An easier way, but it still doesn't guarantee delivery order, would be to add some delay between each message. There's no built-in delay widget, but here's code on how to create a Twilio Function to add delays, up to 10s.
A more hacky way to implement delays without having to use this Twilio Function is to use the Send & Wait For Reply Widget and configure the "Stop Gathering After" property to the amount of delay you'd like to add. If the user responds, connect to the next widget; if they don't, also connect to the next widget.
As mentioned earlier, here's the Subflow + Function proof of concept I hacked together:
First, create a Twilio Functions Service; in the service, create two Functions:
/delay:
// Helper function for quickly adding await-able "pauses" to JavaScript
const sleep = (delay) => new Promise((resolve) => setTimeout(resolve, delay));

exports.handler = async (context, event, callback) => {
  // A custom delay value could be passed to the Function, either via
  // request parameters or by the Run Function Widget
  // Default to a 5 second delay
  const delay = event.delay || 5000;

  // Pause Function for the specified number of ms
  await sleep(delay);

  // Once the delay has passed, return a success message, TwiML, or
  // any other content to whatever invoked this Function.
  return callback(null, `Timer up: ${delay}ms`);
};
/get-message:
exports.handler = function (context, event, callback) {
  const messageSid = event.message_sid,
    client = context.getTwilioClient();

  if (!event.message_sid) throw "message_sid parameter is required.";

  client.messages(messageSid)
    .fetch()
    .then((message) => callback(null, message))
    .catch((error) => {
      console.error(error);
      return callback(error);
    });
};
Then, create a Studio Flow named something like "Send and Wait until Delivered".
In this flow, you send the message, grabbing the message body passed in from the parent flow, {{trigger.parent.parameters.message_body}}.
Then, you run the /get-message Function, and check the message status.
If the status is delivered, set the status variable to delivered. This variable will be passed back to the parent flow. If the status is any of accepted, queued, sending, or sent, then the message is still en route, so wait a second using the /delay Function, then loop back to the /get-message Function.
For any other status, it is assumed something went wrong and status is set to error.
Now you can create your parent flow where you call the subflow, specifying the message_body parameter. Then you can check the status variable from the subflow, whether it is 'delivered' or 'error'.
You can find the export for the subflow and the parent flow in this GitHub Gist. You can import it and it could be useful as a reference.
Personally, I'd add the /delay Function and use it after every message, adding a couple of seconds of delay. I'd assume the delay adds enough buffer for no zombie messages to appear.
Note: The code, proof of concept, and advice is offered as is without liability to me or Twilio. It is not tested against a production workload, so make sure you test this thoroughly for your use case!
If you go to this transaction page on etherscan, scroll down to the Input Data section and click the Decode Input Data button- it gives you nothing, which I can only assume means that etherscan was unable to decode the input data given the ABI for that contract.
My question is, why? What is special about that contract/ABI (or really any contract like this one) that would prevent the transaction from being decoded?
The called function selector is 0xfaa916d3; the rest of the data are arguments. The contract ABI doesn't define any function whose signature translates to the 0xfaa916d3 selector, which means that the fallback function was called.
In this case, the fallback function acts as a proxy: it creates an internal transaction and delegates the call to the target contract (which can theoretically do the same, or create multiple other internal transactions, etc.).
However, Etherscan currently only compares the signature to the ABI of the root transaction recipient and ignores the internal transactions recipients' ABIs in the "Decode input data" feature.
Why? My guess is that it's easier to scan just one level, and it's not a high priority to implement and account for all the edge cases, such as multiple internal calls with the same signature. But you'd need to ask them for the real reason. :)
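To illustrate the selector matching, here's a minimal sketch in Node.js with ethers v5; the transfer(address,uint256) signature is just an illustrative example, not the function behind 0xfaa916d3:

const { ethers } = require('ethers');

// The selector is the first 4 bytes of keccak256 over the canonical signature.
const selector = ethers.utils.id('transfer(address,uint256)').slice(0, 10);
console.log(selector); // 0xa9059cbb

// A decoder like Etherscan's essentially checks the selector against the
// verified ABI of the transaction recipient.
const abi = ['function transfer(address to, uint256 amount)'];
const iface = new ethers.utils.Interface(abi);
const match = Object.values(iface.functions).find(
  (fn) => iface.getSighash(fn) === selector
);

// If no function in the ABI hashes to the selector (as with 0xfaa916d3 here),
// the call goes to the contract's fallback function and there is nothing to decode.
console.log(match ? match.name : 'no match -> fallback');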
Suppose an ethereum smart contract has external function "foo" whose logic has state-reverting exception require(1 == 0, 'error: you broke the simulation!');.
If ethereum-client A broadcasts transaction "txA" which is a function call on foo, how can ethereum-client B access the state-reverting message corresponding to "txA"?
edit: by "how can", I mean how can a developer practically enable ethereum-client B to access this data. i.e. Can you please point me in the direction of the correct (lower-level.. not webui) api/rpc call from a particular tool?
Clearly this is possible since block explorers provide such messages for failed transactions. I read through some of the source of etherscan, but their javascript is minimized and not easily readable.
Thanks in advance!
See this: https://ethereum.stackexchange.com/questions/39817/are-failed-transactions-included-in-the-blockchain
Failed transactions often are included in the chain.
What you sometimes see, if you're using e.g. MetaMask, is a popup saying "this transaction will fail" that happens before the transaction is sent to the chain. This is MetaMask trying to be helpful and prevent you wasting gas. But you can force send the transaction anyway, and you'll get a failed/reverted transaction posted on-chain (like this one for this Solidity source).
So to answer the original question, if TxA was posted on-chain, then client B will process it and get the revert message itself. If TxA was not posted on-chain, then there is no record of it.
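As a practical pointer, one common way to recover the revert reason for a transaction that is already on-chain is to replay it with eth_call at the block it was mined in. Here's a minimal sketch using ethers v5; the RPC URL handling and the exact error fields are assumptions and vary by node/provider:

const { ethers } = require('ethers');

async function getRevertReason(txHash, rpcUrl) {
  const provider = new ethers.providers.JsonRpcProvider(rpcUrl);
  const tx = await provider.getTransaction(txHash);

  try {
    // Re-execute the mined transaction as an eth_call at its own block.
    await provider.call(
      { to: tx.to, from: tx.from, data: tx.data, value: tx.value, gasLimit: tx.gasLimit },
      tx.blockNumber
    );
    return null; // did not revert on replay
  } catch (err) {
    // Most nodes return the ABI-encoded Error(string) data; ethers decodes
    // it into err.reason on many providers, otherwise inspect err.data.
    return err.reason || err.data || err.message;
  }
}

Note that replaying an old transaction at an old block generally requires an archive node, and the replayed result is only an approximation of the original execution context.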
I want to restrict calls to a Feathers service method for externals calls with associateCurrentUser.
I also want to allow the server to call this service method without restricting it.
The use case is that, through this service, clients use a lock table; all clients can see all locks, and occasionally the server should clear out abandoned rows in this table. Row abandonment can happen on network failures etc. When the server removes data, the normal Feathers remove events should still be emitted to the clients.
I would imagine that this should be a mix of associateCurrentUser and disallow hooks but I can't even begin to experiment with this as I don't see how it would be put together.
How would one implement this, please?
Update:
I found this answer User's permissions in feathers.js API from Daff which implies that if the hook's context.params.provider is null then the call is internal, otherwise external. Can anyone confirm if this is really so in all cases, please?
It seems to be so from my own tests but I don't know if there are any special cases out there that might come and bite me down the line.
If the call is external params.provider will be set to the transport that has been used (currently either rest, socketio or primus, documented here, here and here).
If called internally on the server there is not really any magic. It will be whatever you pass as params. If you pass nothing, it will be undefined; if you pass (or merge with) hook.params in a hook, it will be the same as what the original method was called with.
// `params` is an empty object so `params.provider` will be `undefined`
app.service('messages').find({})
// `params.provider` will be `server`
app.service('messages').find({ provider: 'server' })
// `params.provider` will be whatever the original hook was called with
function(hook) {
  hook.app.service('otherservice').find(hook.params);
}
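Building on that, here's a minimal sketch of restricting a method for external calls only while leaving internal server calls untouched; it assumes the feathers-hooks-common and feathers-authentication-hooks packages, which are not mentioned in the original answer:

const { iff, isProvider } = require('feathers-hooks-common');
const { associateCurrentUser } = require('feathers-authentication-hooks');
const { authenticate } = require('@feathersjs/authentication').hooks;

// Hooks for the lock-table service.
module.exports = {
  before: {
    // Only external (rest/socketio/primus) calls get authenticated and scoped
    // to the current user; internal calls (params.provider undefined) skip both,
    // so the server can clear abandoned locks and remove events still go out.
    create: [
      iff(
        isProvider('external'),
        authenticate('jwt'),
        associateCurrentUser({ idField: 'id', as: 'userId' })
      ),
    ],
  },
};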
Thanks for your reply to my question: Is this a bug of Box API v2 when getting events
This is a new problem related to this. The problem is that I cannot reliably use the next_stream_position I got from previous calls to track events.
Consider this scenario, with the following two GET HTTP requests:
1. GET https://api.box.com/2.0/events?stream_position=1336039062458
This one returns a JSON response containing one file entry for myfile.pdf and the next stream position = 1336039062934.
2. GET https://api.box.com/2.0/events?stream_position=1336039062934
This call uses the stream position I got from the first call. However, it returns JSON containing exactly the same file entry for myfile.pdf as the first call.
I think if the first call gives a stream position, it should be used as a marker for that exact time (say, Time A). If I use that stream position in subsequent queries, no events from before Time A should be returned.
Is this a bug? Or did I use the API in the wrong way?
Many thanks.
Box’s /events endpoint is focused on delivering to you a highly reliable list of all the events relevant to your Box account. Events are registered against a time-sequenced list we call the stream_position. When you hit the /events API and pass in a stream_position we respond to you with the events that happened slightly before that stream position, up to the current stream_position, or the chunk_size, whichever is lesser. Due to timing lag and our preference to make sure you don’t miss some event, you may receive duplicate events when you call the /events API. You may also receive events that look like they are ‘before’ events that you’ve already received. Our philosophy is that it is better for you to know what has happened, than to be in the dark and miss something important.
Box events currently give you a window roughly 5 seconds into the past, so that you don't miss some event.
We have considered just delaying the events we send you by about 5 seconds and de-duplicating the events on our side, but at this point we've turned the dial more towards real-time. Let us know if you'd prefer a fully de-duped stream that was slower.
For now (in beta), if you write your client to check for duplicate events and discard them, that will be best. We are about to add an event_id to the payload so you can de-duplicate on that. Until then, you'll have to look at a bunch of fields, depending on the event type... It's probably more challenging than it is worth.
In order to help you be able to figure out if an event is a duplicate, we have now added to each event an event_id that will be unique. It is our intention that the event_id will allow you to de-duplicate the responses you receive from subsequent GET /events calls.
You can see this reflected in the updated documentation here, including example payloads.
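As a rough sketch of the de-duplication described above, a client can keep a set of event_ids it has already processed and always advance to the returned next_stream_position. The snippet below is a Node.js illustration; the axios usage and bearer-token auth are assumptions, not something specified in this thread:

const axios = require('axios');

const seenEventIds = new Set();

async function pollEvents(accessToken, streamPosition) {
  const response = await axios.get('https://api.box.com/2.0/events', {
    headers: { Authorization: `Bearer ${accessToken}` },
    params: { stream_position: streamPosition },
  });

  for (const event of response.data.entries) {
    // Duplicates (and events "from the past") can legitimately reappear,
    // so skip any event_id that has already been handled.
    if (seenEventIds.has(event.event_id)) continue;
    seenEventIds.add(event.event_id);
    handleEvent(event);
  }

  // Always move forward using the server-supplied next_stream_position.
  return response.data.next_stream_position;
}

function handleEvent(event) {
  console.log(event.event_type, event.event_id);
}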