Can't fetch the ledger time from DAML ledger through the JSON API - daml

I see the following error when trying to fetch the ledger time via the JSON API:
UNIMPLEMENTED: Method not found:
com.digitalasset.ledger.api.v1.testing.TimeService/GetTime

The availability of the TimeService depends on the ledger implementation. If you are using the sandbox, the TimeService is only available if you start it in static time (i.e. time advances only via the TimeService), but not when you're running it in wall-clock time.
As of version 0.13.41 of the SDK, the sandbox starts in static time by default and you have to explicitly start it with the -w flag to run it in wall-clock time, regardless of whether you start it with daml sandbox or with daml start.
Please note that the TimeService is meant to be used exclusively for testing and demos; in any other case the returned value is not particularly useful.
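For example, using the flags described above:
daml sandbox
starts the sandbox in static time, so the TimeService (and therefore the ledger time) is available, whereas
daml sandbox -w
runs it in wall-clock time, in which case the GetTime call above fails with the UNIMPLEMENTED error.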

Related

DialogFlow Testing Cloud Function concurrency

I have a Google Assistant action with fulfillment through Firebase Cloud Functions. I understand that Cloud Functions may share instances between invocations, and that you can use the global scope to do heavy lifting and preparation. My function instantiates a global class that has serialised some JSON and handles returning data and other tasks in my function. I have variables in this class that are set when the function is called, and I have been careful to make sure that the variables are all set using the conv.data session data object that is unique to the current conversation. The hope is that although the class instance may exist between different invocations, and possibly by different users, it will still be contextualised to the local scope, and I won't see any variables being overwritten by other sessions.
Which brings me to the question: how can I test this? I have tried to test on my mobile device using the Google Assistant app at the same time as testing in the browser console. I witnessed the two sessions getting merged together, and it was an unholy mess, but I am not sure if that was the global scope, or just that I was testing two sessions with the same user account.
Can anyone enlighten me on whether it is possible to run two sessions of the same action using the same user account? It looked like the conv.data object had a mix of the two different sessions I was running, which suggests it was using the same conversation token for both sessions.
Another question would be: do you think using a global class to store state across invocations is going to be an issue with different users? The docs do state that only one invocation of the function can ever happen at a time, so there shouldn't be any race-condition-type scenarios.
Dialogflow should keep the data in conv.data isolated to a single session, even sessions from the same user. When you're using Dialogflow, this data is stored in a Context, which is session specific.
You can verify this by turning StackDriver logging on, which will let you examine the exact request and response that Dialogflow is using with your fulfillment, and this will include the session ID for tracking. (And if you think it is mixing the two, posting the request and response details would help figure out what is going on.)
Very roughly, it sounds like you're getting something mixed into your global, or possibly something set in one session that isn't cleared or overwritten by a different one. Again - seeing the exact requests and responses should help you (and/or us) figure that out.
My attitude is that a global such as this should be treated as read-only. If you want some environment object that contains the relevant information for just this session, I'd keep that separate, purely as a matter of design philosophy.
Certainly I wouldn't use this global state to store information between sessions. While a function instance will only handle one invocation at a time, I'm not sure how that guarantee holds up once you bring in Promises, which you'll need as soon as you start any async operations. It also runs the risk that subsequent invocations might land on different instances.
My approach, in short (which I make pretty firm in multivocal), is:
Store all state in a Context (which conv.data does).
Access this via the request, conv, or some other request-specific object that you create.
Global information / configuration should be read-only.

VSTS Release Phase Condition

I have a release pipeline with 3 phases. The first phase has some load testing I use to warm up a website. When I run out of VUMs (virtual user minutes), the load testing of course fails.
I configured an agentless (second) phase to warm up the site by hand, set to run only when a previous phase failed.
Then after the warm-up (either by hand or by load test) I want to swap an Azure slot and call some APIs in the last (third) phase. I can't find a condition for this phase. It needs to run only when the manual phase is approved (and not when rejected) or when the load test did work (i.e. it got enough VUMs).
BTW, I tried creating a manual condition using a variable, but I couldn't find a way (except maybe by hand) to set the variable to true when approving the agentless phase.
(Sorry, I couldn't think of a better short title.)
You can't do it through a release phase condition; you can put the necessary logic in the same phase instead.

Get status of 'newly-launched' EMR cluster programmatically

I'm following the official docs guide to write a Scala script for launching an EMR cluster using the AWS Java SDK. I'm able to identify 3 major steps needed here:
Instantiating an EMR Client
I do this using AmazonElasticMapReduceClientBuilder.defaultClient()
Creating a JobFlowRequest
I create a RunJobFlowRequest object and supply it with JobFlowInstancesConfig (both objects are supplied with appropriate parameters depending on the requirement)
Running JobFlowRequest
This is done by calling emrClient.runJobFlow(runJobFlowRequest) which returns a RunJobFlowResult object
But the RunJobFlowResult object doesn't provide any clue as to whether the cluster was launched successfully or not (with all the given configurations).
Now I'm aware that the listClusters() method of the emrClient can be used to get the cluster ID of the newly-launched cluster, through which we can query the state of the cluster using a describeCluster() call. However, since I'm using a Scala script to perform all this stuff, I need the process to be automated (looking up the cluster ID in the result of listClusters() would otherwise have to be done manually).
Is there any way this could be achieved?
You have all the pieces there but haven't quite stitched them together.
The cluster's id can be retrieved from RunJobFlowResult.getJobFlowId(). (It is a string starting with "j-".) Then you can pass this jobFlowId to DescribeCluster.
I don't blame you for your confusion, since it's called "jobFlowId" in some methods (mainly older API methods) and "clusterId" in others. They are really the same thing, though.
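For illustration, here's a rough Scala sketch of that flow; it assumes the runJobFlowRequest from step 2 is already built, and the polling interval and the set of "still starting" states are just examples:

import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder
import com.amazonaws.services.elasticmapreduce.model.DescribeClusterRequest

val emrClient = AmazonElasticMapReduceClientBuilder.defaultClient()

// Step 3: launch the cluster and keep the id that comes back
val result = emrClient.runJobFlow(runJobFlowRequest)
val clusterId = result.getJobFlowId // a string like "j-..."

// Ask DescribeCluster for the current state of that cluster
def clusterState(id: String): String =
  emrClient
    .describeCluster(new DescribeClusterRequest().withClusterId(id))
    .getCluster
    .getStatus
    .getState // e.g. STARTING, BOOTSTRAPPING, RUNNING, WAITING, TERMINATED_WITH_ERRORS

// Poll until the cluster has finished starting up
var state = clusterState(clusterId)
while (state == "STARTING" || state == "BOOTSTRAPPING") {
  Thread.sleep(30000) // wait 30 seconds between polls
  state = clusterState(clusterId)
}
println(s"Cluster $clusterId is now $state")

If the launch failed, the state usually ends up as TERMINATED_WITH_ERRORS, and the same ClusterStatus object carries a state-change reason you can log via getStateChangeReason().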

Connecting Ethereum nodes that are on different machines

I am experimenting with Ethereum. I have successfully set up a private testnet via the instructions on the site. However, I am having trouble adding peers from different machines. On any node I create, the admin.nodeInfo.NodeUrl parameter is undefined. I have gotten the enode address by calling admin.nodeInfo, and when I try the admin.addPeer("enode://address") command (with the public IP), it returns true, but the peers are not listed when calling admin.peers.
I read on another thread (here) that the private testnet is only local, but I am seeing plenty of documentation that suggests otherwise (here and here). I have tried the second tutorial, adding the command-line flags for a custom networkid and genesis block.
Any advice would be much appreciated. Please let me know if I can provide more details.
It is difficult to find in the available documentation, but a key function is admin.addPeer().
https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console
There are a few ways you could do it, I suppose, but I have one node running on my local PC and one node running on a remote server. This saves me Ether while testing contracts and keeps me from polluting the Ethereum blockchain with junk. The key when running admin.addPeer() is to find the "enode" for each of the nodes, so that on one of the nodes you end up running something like admin.addPeer("enode://<nodeId>@<ipaddress>:<port>"). If you run admin.peers and see something other than an empty list, you were probably successful. The main thing to check is that the enode ID and IP address in admin.peers match what you were expecting.
The geth configuration settings are a little tricky as well. You will have to adapt them for your particular use, but here are some of the parameters I use:
geth --port XYZ --networkid XYZ --maxpeers X
Replace XYZ and X with the numbers you want to use, and make sure you run the same parameters when starting both nodes. There could be more parameters involved, but that should get you pretty far.
Disclaimer: I'm new to Geth myself, as well as to using computers for anything more than Facebook, so take my answer with a grain of salt. Also, I haven't given you my full command line for starting up Geth because I'm not 100% sure which of the parameters are related to a private testnet and which are not. I've only given you the ones that I'm sure are related to running a private testnet.
Also, you may find that you can't execute any transactions while running a private testnet. That's because you need one of the nodes to start mining. So run miner.start(X) when you are ready to start deploying contracts.
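Putting those pieces together, a rough sketch of the console session (every value below is a placeholder; substitute your own enode ID, public IP and port):
On the remote node:
admin.nodeInfo.enode   // copy the "enode://<nodeId>@<ip>:<port>" string this prints
On the local node, paste that string with the remote machine's public IP substituted in:
admin.addPeer("enode://<nodeId>@<remote-public-ip>:<port>")
admin.peers            // should now list the remote node
miner.start(1)         // start mining on one node so transactions get processed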
I apologize for this not being fully complete, but I'm just passing on my experience after spending 1-2 weeks trying to figure it out myself, because the documentation isn't fully clear on how to do this. I think running a private net is something that should be actively discouraged in the spirit of Ethereum, but in my case, I run one primarily so as not to pollute the blockchain.
PS: As I was just getting ready to hit submit, I found this, which also sheds more light:
connecting to the network

Workflow tool for job execution

I have a number of ETL jobs that I need to execute in a certain order, with certain logic. What is the best workflow/BPM/orchestration tool for that? I have the following general requirements:
Monitoring: To understand the status of a job
Exception handling: if a job fails an alert is sent or some sort of action is taken.
Alert: an email alert is sent based on certain conditions
Approvals: occasionally a coworker of mine needs to approve a job before it executes.
My jobs are written in python and Java, but they can run as executables.
I am considering tools such as ProcessMaker, MuleSoft, etc.
Thanks.
Take a look at BonitaSoft. It offers BPMN exception handling and an open-source structure based on REST APIs, and it is written in Java.