How to write code to reverse a DVP settlement (automatically, after a certain amount of time) in DAML

I want to write code to reverse a DVP settlement; the reversal should happen automatically after a certain amount of time. Is there a similar example in the DAML documentation, or can anyone help me with the code?

Do you mean reversing a DVP settlement after it has settled? Under what conditions would this reversal be permitted? What if the underlying assets were no longer there? :)
Note that DAML cannot initiate actions - it can only react to choices on contracts (technically, made via the Ledger API). There is no scheduler in DAML, so you can't say "at 10:00 AM do this" or "after 10 minutes do that". You can only make choices available for a Party to initiate an action, such as the reversal of a DVP settlement, within a prescribed period of time. For example, you could say "within 10 minutes of settlement the buying Party may reverse the settlement", or "if the selling Party does not confirm the settlement within 10 minutes, the buyer may reverse it". Of course, you can add your own authorization workflow to put whatever prerequisites on this action your use case requires.
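To make the time-window idea concrete, here is a minimal sketch in Python (not DAML syntax) of the "buyer may reverse within 10 minutes" rule; in real DAML this would be a choice on the settled DvP contract asserting against the ledger's effective time, and all names below are hypothetical.

```python
from datetime import datetime, timedelta, timezone

REVERSAL_WINDOW = timedelta(minutes=10)  # the prescribed period

def exercise_reverse(settled_at: datetime, now: datetime,
                     actor: str, buyer: str) -> str:
    """Model of a 'Reverse' choice: only the buyer, only inside the window."""
    if actor != buyer:
        raise PermissionError("only the buying party may reverse")
    if now > settled_at + REVERSAL_WINDOW:
        raise ValueError("the reversal window has expired")
    return "settlement reversed"

settled = datetime(2024, 1, 1, 10, 0, tzinfo=timezone.utc)
print(exercise_reverse(settled, settled + timedelta(minutes=5), "Buyer", "Buyer"))
```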


State definition in Reinforcement learning

When defining the state for a specific problem in reinforcement learning, how do you decide what to include and what to leave out, and how do you draw the distinction between an observation and a state?
For example, suppose the agent operates in a human-resources planning context, where it needs to hire workers based on job demand while considering the cost of hiring them (assuming the budget is limited). Is a state of the form (# workers, cost) a good definition of state?
In short, I don't know what information needs to be in the state and what should be left out because it is really an observation.
Thank you
I am assuming you are formulating this as an RL problem because demand is an unknown quantity, and perhaps (optionally) because the cost of hiring may depend on a worker's contribution to the job, which is unknown initially. If, however, both of these quantities are known or can be approximated beforehand, you can just run a planning algorithm (or some sort of optimization) to solve the problem.
Having said this, the state in this problem could be something as simple as (#workers). Note that I'm not including the cost, because the cost must be experienced by the agent and is therefore unknown to it until it reaches a specific state. Depending on the problem, you might need to add another factor such as "time" or "jobs remaining".
Most theoretical results in RL hinge on the key assumption that the environment is Markovian. There are several works where you can get by without this assumption, but if you can formulate your environment in a way that exhibits this property, you will have many more tools to work with. The key idea is that the agent can decide which action to take (in your case, an action could be "hire one more person"; another could be "fire a person") based on the current state, say (#workers = 5, time = 6). Note that we are not distinguishing between workers yet, so the action is firing "a" person rather than firing "a specific person x". If the workers have differing capabilities, you may need to add several other factors, each representing which workers are currently hired and which are still in the pool waiting to be hired, e.g. a boolean array of fixed length. (I hope you get the idea of how to form a state representation; this can vary based on the specifics of the problem, which are missing in your question.)
Now, once we have the state definition S and the action definition A (hire/fire), we have the "known" quantities of an MDP setup in an RL framework. We also need an environment that can supply us with the cost when we query it (the reward/cost function) and tell us the outcome of taking a certain action in a certain state (the transition function). Note that we don't necessarily need to know these reward/transition functions beforehand, but we should have a means of getting their values when we query a specific (state, action) pair.
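To make this setup concrete, here is a minimal Python sketch of the hiring problem as an environment, under the simplifying assumptions above (state = (#workers, time); actions = hire/fire/no-op); the demand distribution and the wage are made-up stand-ins.

```python
import random

class HiringEnv:
    """Toy MDP-style environment: the agent only observes (#workers, time)."""
    ACTIONS = ("hire", "fire", "noop")

    def __init__(self, horizon=20, wage=1.0):
        self.horizon, self.wage = horizon, wage
        self.reset()

    def reset(self):
        self.workers, self.t = 0, 0
        return (self.workers, self.t)  # the state the agent observes

    def step(self, action):
        if action == "hire":
            self.workers += 1
        elif action == "fire" and self.workers > 0:
            self.workers -= 1
        demand = random.randint(0, 5)               # unknown to the agent a priori
        served = min(self.workers, demand)
        reward = served - self.wage * self.workers  # revenue minus payroll cost
        self.t += 1
        done = self.t >= self.horizon
        return (self.workers, self.t), reward, done

env = HiringEnv()
state = env.reset()
state, reward, done = env.step("hire")
```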
Coming to your final part, the difference between observation and state: there are much better resources to dig into this deeply, but in a crude sense, an observation is an agent's (any agent: AI, human, etc.) sensory data. For example, in your case the agent has the ability to count the number of workers currently employed (but not the ability to distinguish between workers).
A state, more formally a true MDP state, must be something that is Markovian and captures the environment at its fundamental level. So, maybe in order to determine the true cost to the company, the agent needs to be able to differentiate between workers, the working hours of each worker, the jobs they are working on, interactions between workers, and so on. Note that many of these factors may not be relevant to your task, for example a worker's gender. Typically one would like to form a good hypothesis beforehand about which factors are relevant.
Now, even though we can agree that a worker's assignment (to a specific job) may be a relevant feature when making a decision to hire or fire them, your observation does not contain this information. So you have two options: either you ignore the fact that this information is important and work with what you have available, or you try to infer these features. If your observation is incomplete for the decision-making in your formulation, the problem is typically classified as a partially observable environment (and POMDP frameworks are used for it).
I hope I clarified a few points; however, there is a huge body of theory behind all of this, and the question you asked about "coming up with a state definition" is a matter of research (much like feature engineering and feature selection in machine learning).

How can I have users cover the cost of Chainlink's Oracle service in Solidity?

What's the standard way to cover the costs for a Chainlink oracle?
Let's say that I have an NFT mint function that requires the use of external data, or of an external VRF. It costs, say, a fee of 1 LINK. What's the best practice for making the user cover that cost (ideally the best one from both a UX standpoint and a decentralization one)?
I see 4 scenarios, all with serious drawbacks:
1) Embedding into the mint function a transfer of the LINK amount, something like require(linkContract.transferFrom(msg.sender, this, amount));. Two issues with this, though: a. you need the user to pre-approve the transfer (linkContract.approve(myContract, amount);), adding one step to the funnel; b. users need to actually have enough LINK in their wallet, which is even harder to explain to non-advanced users and makes the funnel even longer.
2) A slight improvement on the previous point: use the ERC677 transferAndCall function. The user calls it on the LINK contract itself, which in turn triggers the mint function by calling onTokenTransfer. Issue: same as the previous point, the user needs to have LINK in their wallet - not exactly the best UX.
3) Embedding into the contract logic such that the contract itself figures out on the fly the ETH equivalent of the LINK fee (using Chainlink's price feeds). Then it "just" does require(msg.value == feeEquivalentInEth); - this way the user only interacts in ETH, and the whole LINK thing is handled automagically behind the scenes (see the sketch after this list). Issue: you need to constantly swap ETH for LINK, which can lead to price slippage and/or gas costs going ballistic.
4) Slightly different from the previous point, but the same idea: use a LINK meta-transaction, where you act as a relay on behalf of the user, cover the costs, and send back an invoice - explained here. Issue: the relay needs to cover the costs first and then demand payment back, which is the opposite of a trustless scenario. One could mitigate this by asking for payment first, but gas costs cannot be forecast easily, so that's not really a solution either.
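To illustrate the conversion in option 3, here is a rough sketch in Python of the fee math (the real logic would live in Solidity and read Chainlink's LINK/ETH price feed on-chain; the numbers and the slippage buffer are made up):

```python
def fee_in_eth(link_fee: float, link_eth_price: float,
               slippage_buffer: float = 0.01) -> float:
    """Convert a LINK-denominated fee to ETH, padded for price movement.

    link_fee        -- oracle fee in LINK (e.g. 1.0)
    link_eth_price  -- ETH per LINK, e.g. as read from a LINK/ETH price feed
    slippage_buffer -- extra margin so the swap still covers the fee if the
                       price moves between quoting and swapping
    """
    return link_fee * link_eth_price * (1 + slippage_buffer)

# Example: 1 LINK fee, and the feed reports 0.004 ETH per LINK
required_msg_value = fee_in_eth(1.0, 0.004)
print(f"user must send >= {required_msg_value:.6f} ETH")
```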
What's the industry standard? It seems highly unlikely to me that nobody has figured this out already, and I'd hate to re-invent the wheel.

Dealing with exceptions in an event-driven world

I'm trying to understand how exceptions are handled in an event-driven world using microservices (with Apache Kafka). For example, consider the following order scenario, in which these actions need to happen before the order can be completed:
1) Authorise the payment with the payment service provider
2) Reserve the item from stock
3.1) Capture the payment with the payment service provider
3.2) Order the item
4) Send an email notification accepting the order with a receipt
At any stage in this scenario, there could be a failure such as:
The item is no longer in stock
The payment information was incorrect
The account the payer is using doesn't have the funds available
External calls, such as those to the payment service provider, fail (e.g. due to downtime)
How do you track that each stage has been called and/or completed?
How do you deal with issues that arise? How would you notify the frontend of the failure?
Some of the things you describe are not errors or exceptions, but alternative flows that you should consider in your distributed architecture.
For example, that an item is out of stock is a perfectly valid alternative flow in your business process. One that possibly requires human intervention. You could move the message to a separate queue and provide some UI where a human operator can deal with the problem, solve it and cause the flow of events to continue.
A similar thing could be said of the payment problems you describe. If an order cannot successfully be settled, a human operator will need to investigate the case and solve it. So your design must contemplate that alternative flow as part of the process, making it possible for a human to intervene when messages end up in a queue that requires a person to review them.
Those cases should be differentiated from errors or exceptions thrown by the program. Those, depending on the circumstances, might in fact require moving the message to a dead letter queue (DLQ) for an engineer to take a look at.
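For illustration, here is a sketch of that routing using the kafka-python client; the topic names and the business-vs-technical error split are assumptions, not a prescription:

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def handle_failure(message, error, is_business_flow):
    if is_business_flow:
        # e.g. out of stock: a valid alternative flow, a person resolves it
        producer.send("orders.needs-review", message)
    else:
        # a genuine programming/infrastructure error: park it for engineers
        producer.send("orders.dlq", {"message": message, "error": str(error)})
```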
This is a very broad topic and entire books could be written about it.
I believe you could probably benefit from gaining more understanding of concepts like:
Compensating Transactions Pattern
Try/Cancel/Confirm Pattern
Long Running Transactions
Sagas
The idea behind compensating transactions is that every yin has its yang: if you have one transaction that can place an order, then you can undo it with a transaction that cancels that order. This latter transaction is a compensating transaction. So, if you carry out a number of successful transactions and then one of them fails, you can trace back your steps and compensate every successful transaction you did, reverting their side effects.
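Here is a minimal sketch of that trace-back in Python; the step names are hypothetical, and a real saga would persist its progress so it can still compensate after a crash:

```python
def run_with_compensation(steps):
    """steps: list of (do, undo) pairs of zero-argument callables."""
    completed = []
    try:
        for do, undo in steps:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):  # compensate in reverse order
            undo()
        raise

# Hypothetical usage: place an order, then reserve stock; if reserving
# stock fails, the order placement is compensated by cancelling the order.
run_with_compensation([
    (lambda: print("place order"),   lambda: print("cancel order")),
    (lambda: print("reserve stock"), lambda: print("release stock")),
])
```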
I particularly liked a chapter in the book REST: From Research to Practice. Its chapter 23 (Towards Distributed Atomic Transactions over RESTful Services) goes deep into explaining the Try/Cancel/Confirm pattern.
In general terms, it implies that when you run a group of transactions, their side effects are not effective until a transaction coordinator gets confirmation that they were all successful. For example, if you make a reservation on Expedia and your flight has two legs with different airlines, then one transaction would reserve a flight with American Airlines and another would reserve a flight with United Airlines. If your second reservation fails, you want to compensate the first one. But not only that: you want to prevent the first reservation from becoming effective until you have been able to confirm both. So the initial transaction makes the reservation but keeps its side effects pending confirmation, and the second reservation does the same. Once the transaction coordinator knows everything is reserved, it can send a confirmation message to all parties so that they confirm their reservations. If reservations are not confirmed within a sensible time window, they are automatically reversed by the affected system.
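A sketch of that coordination in Python, with the two flight legs as participants; in a real system each participant would be a remote service, and pending reservations would expire on their own if never confirmed:

```python
class Reservation:
    """One leg of the trip; tentative until confirmed."""
    def __init__(self, name, available=True):
        self.name, self.available, self.state = name, available, "none"

    def try_reserve(self):
        if not self.available:
            raise RuntimeError(f"{self.name}: no seats")
        self.state = "pending"    # side effects held back

    def confirm(self):
        self.state = "confirmed"  # side effects become permanent

    def cancel(self):
        self.state = "cancelled"  # release the tentative reservation

def coordinate(legs):
    reserved = []
    try:
        for leg in legs:
            leg.try_reserve()
            reserved.append(leg)
    except RuntimeError:
        for leg in reserved:      # undo every tentative reservation
            leg.cancel()
        raise
    for leg in legs:              # all succeeded: confirm everything
        leg.confirm()

legs = [Reservation("American Airlines"), Reservation("United Airlines")]
coordinate(legs)
print([leg.state for leg in legs])  # ['confirmed', 'confirmed']
```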
The book Enterprise Integration Patterns has some basic ideas on how to implement this kind of event coordination (e.g. see the Process Manager pattern and compare it with the Routing Slip pattern; these are similar ideas to orchestration vs. choreography in the microservices world).
As you can see, being able to compensate transactions can be complicated, depending on how complex your distributed workflow is. The process manager may need to keep track of the state of every step and know when the whole thing needs to be undone. This is pretty much the idea of Sagas in the microservices world.
The book Microservices Patterns has an entire chapter called Managing Transactions with Sagas that explains in detail how to implement this type of solution.
A few other aspects I also typically consider are the following:
Idempotency
I believe that a key to a successful implementation of your service transactions in a distributed system is making them idempotent. Once you can guarantee a given service is idempotent, you can safely retry it without worrying about causing additional side effects. However, just retrying a failed transaction won't solve your problems.
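As a toy sketch of what idempotency can look like on the consumer side: deduplicate by message ID so a redelivered message is applied at most once (in production, the seen-ID set would live in durable storage and be updated in the same transaction as the side effect):

```python
processed = set()  # durable storage in a real system

def handle(message_id, apply_side_effect):
    if message_id in processed:
        return "duplicate ignored"   # retry arrives, nothing happens twice
    apply_side_effect()
    processed.add(message_id)
    return "applied"

handle("msg-1", lambda: print("charge card"))  # applied
handle("msg-1", lambda: print("charge card"))  # duplicate ignored
```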
Transient vs Persistent Errors
When it comes to retrying a service transaction, you shouldn't just retry because it failed. You must first know why it failed, and depending on the error it may or may not make sense to retry. Some types of errors are transient: for example, if a transaction fails due to a query timeout, it's probably fine to retry, and it will most likely succeed the second time. But if you get a database constraint violation error (e.g. because a DBA added a check constraint to a field), there is no point in retrying: no matter how many times you try, it will fail.
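A sketch of that distinction in Python; which exception types count as transient is application-specific, and TimeoutError/ConnectionError here are just stand-ins:

```python
import time

TRANSIENT = (TimeoutError, ConnectionError)  # application-specific choice

def call_with_retries(fn, attempts=3, backoff=0.5):
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except TRANSIENT:
            if attempt == attempts:
                raise                            # give up after the last try
            time.sleep(backoff * 2 ** (attempt - 1))  # exponential backoff
        # any other exception (e.g. a constraint violation) is persistent
        # and propagates immediately without a retry
```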
Embrace Error as an Alternative Flow
As mentioned at the beginning of my answer, not everything is an error. Some things are just alternative flows.
In those cases of interservice communication (computer-to-computer interactions), when a given step of your workflow fails, you don't necessarily need to undo everything you did in previous steps. You can just embrace error as part of your workflow. Catalog the possible causes of error and make them an alternative flow of events that simply requires human intervention. It is just another step in the full orchestration that requires a person to intervene to make a decision, resolve an inconsistency in the data, or just approve which way to go.
For example, maybe when you're processing an order, the payment service fails because the payer doesn't have enough funds. There is no point in undoing everything else: all you need is to put the order into a state from which a problem solver can address it in the system and, once fixed, continue with the rest of the workflow.
Transaction and Data Model State are Key
I have found that this type of transactional workflow requires a good design of the different states your model has to go through. As in the Try/Cancel/Confirm pattern, this implies initially applying the side effects without necessarily making the data model available to users.
For example, when you place an order, maybe you add it to the database in a "Pending" status that will not appear in the UI of the warehouse systems. Once payments have been confirmed the order will then appear in the UI such that a user can finally process its shipments.
The difficulty here is figuring out how to design transaction granularity in such a way that, even if one step of your workflow fails, the system remains in a valid state from which you can resume once the cause of the failure is corrected.
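A sketch of such an explicit state design in Python; the statuses and allowed transitions are purely illustrative:

```python
VALID_TRANSITIONS = {
    "pending":        {"paid", "payment_failed"},
    "paid":           {"shipped"},
    "payment_failed": {"pending"},  # retried after a human fixes the issue
}

def transition(order, new_status):
    """Apply a status change only if the state machine allows it."""
    allowed = VALID_TRANSITIONS.get(order["status"], set())
    if new_status not in allowed:
        raise ValueError(f"illegal transition {order['status']} -> {new_status}")
    order["status"] = new_status

order = {"id": 42, "status": "pending"}
transition(order, "paid")     # ok
transition(order, "shipped")  # ok; any other jump would raise
```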
Designing for Distributed Transactional Workflows
So, as you can see, designing a distributed system that works this way is a bit more complicated than individually invoking distributed transactional services. Now every service invocation may fail for a number of reasons and leave your distributed workflow in an inconsistent state. Retrying a transaction may not always solve the problem, and your data needs to be modeled like a state machine, such that side effects are applied but not confirmed until the entire orchestration is successful.
That's why the whole thing may need to be designed differently than you typically would in a monolithic client-server application. Your users may now be part of the designed solution when it comes to resolving conflicts, and you should contemplate that transactional orchestrations could take hours or even days to complete, depending on how their conflicts are resolved.
As I was originally saying, the topic is way too broad and it would require a more specific question to discuss, perhaps, just one or two of these aspects in detail.
At any rate, I hope this somehow helped you with your investigation.

Can we use MySQL with Ethereum?

We are thinking of building a dapp for finding salons near the user and allowing bookings. In such an application, end users have to be shown the salons within a certain distance of their current location. Where would we query that data from? I don't think that kind of querying is possible in Solidity. Do we need to bring in an RDBMS in such a scenario to store salon data, so that we can query it easily while booking information is sent to the blockchain?
Is a hybrid application the only way out? People say IPFS should be used for storing images, videos, and other data. Is that the solution, and if so, how would we query it from there? Moreover, would it be fast enough?
TL;DR: you might, but you shouldn't.
The real question here is: what do you need Ethereum for in this project?
In Ethereum, each write operation is costly, whereas reading data isn't. Write operations are transactions; reads are calls.
It means that "uploading" your salon list will cost you money (i.e. gas), but also each data update (opening hours, booking ..).
As for the MySQL-specific part of your question: Ethereum is simply not designed for that kind of operation.
Something with an oracle might do the trick, but it's truly not designed for this. Think of Ethereum as a way to intermediate transactions between peers that stores every transaction publicly and permanently.
According to the Wikipedia page, blockchains are basically a "continuously growing list of records". Ethereum adds the possibility of having workers run some code in exchange for gas.
This code's (or smart contract's) only purpose is "to facilitate, verify, or enforce the negotiation or performance of a contract" (where "contract" means a legally binding contract).
From what you described, IMO a simple web application with "standard" SQL is more than enough.
You just have to store the salons' GPS coordinates and find the closest match(es) to the user's GPS coordinates (see the sketch after the list below).
You likely want to separate your application into two parts:
- The part which shows the available booking times
- The part which makes a new booking
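To make the closest-match step concrete, here is a small Python sketch with made-up salon data; a real application would push this into the database (e.g. a spatial index or MySQL's ST_Distance_Sphere) rather than sorting in application code:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

salons = [("Cut&Go", 51.5074, -0.1278), ("StyleHub", 51.5155, -0.0922)]
user = (51.5100, -0.1000)

nearby = sorted(salons, key=lambda s: haversine_km(user[0], user[1], s[1], s[2]))
print(nearby[0][0])  # closest salon
```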

How does one deal with multiple time zones in applications that store dates and times?

I realize this is a bit subjective, but I'm hoping to pick everyone's brains here on how they deal with multiple time zones. There are a variety of similar questions here and an equally wide variety of accepted answers.
How have you dealt with this in apps you've built, and what issues did you have to overcome?
You always store date/time in one time zone (nine times out of ten it's UTC) and convert it on display to the time zone of your user. This is strictly an application-level issue, not the DB's.
At work we manage several clocks at once, not only time zones but also some more esoteric clocks used for spacecraft navigation.
The only things that really matter are: that you are consistent with ONE AND ONLY ONE CLOCK, whichever you pick, and that you have the APPROPRIATE CLOCK CONVERSIONS for when you need a VIEW OF TIME in a CLOCK DIFFERENT FROM YOUR ONE AND ONLY ONE CLOCK.
So:
One and Only One Clock: Pick the simplest one which will solve your problem, most likely this will be UTC (what some people would -incorrectly- call Greenwich, but the point remains: the zero line).
Appropriate clock conversions: This depends on your application, but you need to ask and answer the following two questions: How much resolution do I need? Do I need to account for leap seconds? Once you have answered, you may be able to pick standard libraries or more esoteric ones. Again, you must ask these questions.
View of time: when someone picks a view of time (say, Pacific Time) simply call the appropriate clock conversion on demand.
Really, that's it.
As for libraries, I use Python for scripting, the NAIF SPICE library for mission design, and in-house code for spacecraft navigation. The difference between them is simply the resolution and the reliability of having accounted for everything you need to account for (Earth rotation, relativity, time dilation, leap seconds, etc.). Of course, you will pick the library that fits your needs.
Good luck.
Edit:
I forgot to mention: don't try to implement your own time-management library - use an off-the-shelf one. If you try, you may succeed, but your real project will die and you will only have an average date-time library to show for it. Maybe I'm exaggerating, but making a solid, general-purpose date-time library is far from trivial, i.e., it is a project in itself.
On one of the projects I'm working on, I'm using SQL Server 2005 and GETUTCDATE() to store dates in UTC.
You need to maintain each user's time zone information as well as the time zone data itself.
Always store the time in UTC. When you want to display that information to a particular user, look up that user's stored time zone and convert the stored UTC time into it for display. The result will be in the time zone of the user who is viewing the information.
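A sketch of that display-time conversion using Python's standard zoneinfo module (the zone name would come from the user's stored profile); note that a proper time-zone conversion, rather than manual offset arithmetic, also handles DST correctly:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

stored_utc = datetime(2023, 7, 1, 14, 30, tzinfo=timezone.utc)  # as persisted
user_zone = ZoneInfo("America/New_York")                        # from profile

print(stored_utc.astimezone(user_zone))  # 2023-07-01 10:30:00-04:00 (EDT)
```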
I think the new best practice with SQL Server 2008 is to always use the datetimeoffset data type. Normalizing dates to UTC is also good practice, but not always feasible or desirable.
See this blog post for more thoughts:
Link
+1 for #kubal5003.
Display of dates and times is always complicated by culture and time zone, so it's always best to use the layer closest to the user (e.g. the browser or local application) to do this. It also moves some of the load from the database to the user's machine.
There is an exception for server-generated reports, though. For those I store the time zone name/ID (occasionally just the offset/bias) to find the start of day. This may be system-wide or on a per-client/brand basis.
For web applications I usually detect a user's default time zone via geolocation (this is rarely wrong since geo data is quite accurate now).