Data duplication between internal database and Solidity

There is a flow I want to achieve in my dapp, and I would appreciate some opinion.
Flow:
User sees a list of products and picks one to buy. The user has their MetaMask unlocked and has enough balance.
Setup:
Rails on the backend, React on the frontend, ganache-cli, truffle, metamask (web3js).
Database structure:
In the app's internal PostgresDB, there's a products table. In the blockchain, there's a dynamic array products like below:
Internal Postgres:
products
    name
    price
    owner_id
owners
    name
    id
    address
Blockchain (contract storage):
Product[] products;
struct Product {
    string name;
}
mapping(uint => address) public productIdToOwner;
mapping(uint => uint) public productIdToPrice;
The following function, onBuy, runs when the user clicks the "Buy this product" button:
onBuy = (product) => {
  const { id, external_id, name, price, meta } = product
  this.ContractInstance.methods.buy(external_id).send({
    from: this.state.currentUserAddress,
    gas: GAS_LIMIT,
    value: web3.utils.toWei(price.toString(), "ether"),
  }).then((receipt) => {
    // What to do before getting a receipt?
    console.log(receipt)
  }).catch((err) => {
    console.log(err.message)
  })
}
Questions:
On the mainnet, how long does it take for me to get the receipt for the transaction? Is it sane to make the user wait on the same page after clicking the onBuy button with a loading wheel until the receipt arrives? If not, what's the conventional way to deal with this?
Is my DB structure a reasonable way to connect to the blockchain? I am worried about data integrity (i.e. having to sync address field between my internal DB and the blockchain) but I find it useful to store the blockchain data inside the internal DB, and read mostly from the internal DB instead of the blockchain.

On the mainnet, how long does it take for me to get the receipt for the transaction? Is it sane to make the user wait on the same page after clicking the onBuy button with a loading wheel until the receipt arrives? If not, what's the conventional way to deal with this?
When you send a transaction, you will get the transaction hash pretty quickly. However, the receipt is only returned once the transaction is mined. How long that takes varies greatly depending on how much you're willing to pay for gas: it can be several seconds for very high gas prices, or several hours if you're paying < 10 Gwei. You can get an idea by using Ropsten/Rinkeby (the test networks will probably be faster than mainnet). Chances are, based on the system you're describing, it won't be reasonable to have your user wait.
I don't know if there is a "conventional" way to deal with this. You can provide the user with your own confirmation number (or use the transaction hash as the confirmation number), send an email when the transaction is mined, push a notification if on a mobile app, etc. If you present the user with some sort of confirmation number based on the transaction hash, you'll have to decide how you want to resolve cases when the transaction fails.
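If you do want to keep the user on the page, one workable pattern (a sketch, not the only way) is to use the PromiEvent that web3.js returns from send(): update the UI as soon as the transaction hash arrives, and treat the receipt as a later, asynchronous confirmation. The showPending/showConfirmed/showFailed helpers below are hypothetical UI functions, not part of web3.
onBuy = (product) => {
  const { external_id, price } = product
  this.ContractInstance.methods.buy(external_id).send({
    from: this.state.currentUserAddress,
    gas: GAS_LIMIT,
    value: web3.utils.toWei(price.toString(), "ether"),
  })
  .on("transactionHash", (hash) => {
    // Arrives almost immediately: usable as the user's confirmation number.
    showPending(hash)
  })
  .on("receipt", (receipt) => {
    // Fires once the transaction is mined (seconds to hours on mainnet).
    showConfirmed(receipt.transactionHash)
  })
  .on("error", (err) => {
    // Covers rejection, revert, and out-of-gas; decide how to surface each.
    showFailed(err.message)
  })
}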
Is my DB structure a reasonable way to connect to the blockchain? I am worried about data integrity (i.e. having to sync address field between my internal DB and the blockchain) but I find it useful to store the blockchain data inside the internal DB, and read mostly from the internal DB instead of the blockchain.
This is purely an opinion, so take this with a grain of salt. I try to avoid having duplicate data, but if you're going to have multiple persistence layers you'll probably need to maintain some sort of referential integrity between them. So, it's certainly ok to store ids and addresses.
The part of your question that confuses me is: why do you prefer to read mostly from the DB? Have you tried measuring the latency of using a fully synced node on your server and retrieving data from your contract through constant functions? I would test that first before duplicating my data. In your case, I would look to use the blockchain to store purchases while using the DB just for inventory management. But that's based on very little knowledge of your business case.
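As a concrete starting point for that measurement: reads through constant functions cost no gas and are easy to time. A minimal sketch against the mappings above, assuming the same web3 contract instance and run inside an async function:
// Time a read served by the node via the auto-generated mapping getters.
console.time("chain-read")
const owner = await this.ContractInstance.methods.productIdToOwner(0).call()
const price = await this.ContractInstance.methods.productIdToPrice(0).call()
console.timeEnd("chain-read")
console.log(owner, price)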

Related

Stuck deciding how a smart contract project should work

So I'm doing a project which basically carries out the medical insurance claim process with the help of the smart contract. How it works is:
User signs up to the website run by the insurer.
They file a claim by entering the concerned hospital, insured amount and a pdf file containing the bills which are to be verified.
The hospital uses the website to approve/deny the claim based on the info provided.
The insurer uses the website to approve/deny the claim based on the info provided and pays the user.
Any time someone files, approves/denies, pays the claim, an event is emitted.
This is my first ethereum/solidity project and I'm unable to figure out how to pull this together.
This is the structure of a Claim:
struct Record {
    uint id; // unique id for the record
    address patientAddr;
    address hospitalAddr;
    string billId; // points to the pdf stored somewhere
    uint amount;
    mapping (address => RecordStatus) status; // status of the record
    bool isValid; // variable to check if record has already been created or not
}
Some of my questions were:
How do I link a Record to a specific user, given that a single user can use multiple MetaMask wallets?
Is it possible to fetch all the events linked to a specific record id, so I can display to the user all the approvals/denials that have happened to the Record?
For the hospital, is there a better way to get associated Records other than to get all Records from the smart contract and then filter it on the front end?
Thanks a lot for your help.
To link a Record to a specific user you will need to add a mapping (I am assuming that you have a user ID) that will look like this:
mapping(uint => Record[]) recordsByUserID;
Then you will be able to get an array of Records from the user id:
Record[] storage userRecords = recordsByUserID[userId];
Event logging is actually fairly easy because we have the indexed keyword; let me show you an example:
event Approved(uint indexed userId, uint indexed recordId);
With an event like this you are able to query all the events using the user id and the record id.
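On the client side, indexed parameters become filterable topics. A hedged sketch with web3.js, assuming a contract instance that declares the Approved event above:
// Fetch all Approved events for one record via the indexed arguments.
const events = await contract.getPastEvents("Approved", {
  filter: { recordId: 42 }, // indexed args are filtered by the node, not in JS
  fromBlock: 0,
  toBlock: "latest",
})
events.forEach((e) =>
  console.log(e.returnValues.userId, e.returnValues.recordId)
)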
For the third question, I suggest you use The Graph (https://thegraph.com/en/). It basically creates your own GraphQL backend by indexing all the events for you in a very easy way. Then you can run your GraphQL queries and make something efficient.

I have a chat application but so far I'm not saving the messages to the database. Which of the 2 ways is better?

Right now I have a table called servers which contains all the servers. Every server has chat rooms whose names and chat history are saved in a JSON column called rooms. The column contains an array of room objects like this:
[
  {
    name: 'General',
    history: []
  },
  {
    name: 'RandomRoomName',
    history: []
  }
]
Right now, whenever a user sends a message, I just push it into the history of the respective room on the server side, however, I don't actually save it in the database so whenever I restart the server, I lose all the history. Now my question is what is the better way of handling this?
Whenever a user sends a message, I get the rooms of the server, push the message into the correct room's history and UPDATE the database with the updated objects.
or
I rework the database by removing the JSON rooms column, creating a table called rooms and another one called messages, then creating the respective relationships/associations between the tables
Honestly, both implementations feel a bit weird and not really optimal.
You really need multiple tables. Jamming stuff into a JSON column is not sustainable and will have lots of race-condition issues. As the number of messages grows, the expense of re-writing grows as well, making race conditions more likely. This leads to a death spiral where the whole system melts down under load.
Option 2 is the only realistic way to go. That's the relational form you're looking for. The cost of insertion will grow relatively slowly over time and is usually easily handled up to the billions of records before index sizes get too big to fit in memory.
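To make option 2 concrete, here is a minimal sketch of that relational form. The question doesn't name a stack, so Sequelize is used here as an assumption, and all table and column names are illustrative.
// servers -> rooms -> messages, one row per message.
const { Sequelize, DataTypes } = require("sequelize")
const sequelize = new Sequelize(process.env.DATABASE_URL)

const Server = sequelize.define("Server", { name: DataTypes.STRING })
const Room = sequelize.define("Room", { name: DataTypes.STRING })
const Message = sequelize.define("Message", {
  body: DataTypes.TEXT,
  authorId: DataTypes.INTEGER,
})

Server.hasMany(Room)   // adds a ServerId foreign key to rooms
Room.belongsTo(Server)
Room.hasMany(Message)  // adds a RoomId foreign key to messages
Message.belongsTo(Room)

// Appending a message is now a single cheap INSERT
// instead of rewriting an ever-growing JSON blob:
// await Message.create({ body: "hi", authorId: 7, RoomId: room.id })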

How to implement "SQL Transactions" in "Clean Architecture"?

I am working on an Express-based (Nodejs) API that uses MySQL for data persistence. I have tried to follow the CLEAN ARCHITECTURE proposed by Sir R.C. Martin.
Long story short:
There are some crop vendors and some users. A user can request an order of some crops with a defined quantity from a vendor. This puts the order in PENDING state. Then the vendor will confirm the orders he/she gets from the user.
Domain/Entity -> CROP, Use-case -> add, remove, edit, find, updateQty
Domain/Entity -> ORDER, Use-case -> request, confirm, cancel
I have to implement a confirm order functionality
I have an already recorded order with ordered item list in my DB (order in the pending state)
Now, on the confirm-order action, I need to subtract each item's quantity from the respective crop present in the DB record, with a check that no value turns negative (i.e. no ordered qty is more than the present qty)
If it is done for all the items under a "transaction cover" then I have to commit the transaction
Else revert back to the previous state (i.e rollback)
I know how to run MySQL-specific transactions using Sequelize, but with a lot of coupling and poor source-code architecture. (If I do it that way, the DB won't be like a plug-in anymore.)
I am not able to understand how to do this while maintaining the architecture and at what layer to implement this transaction thing, use-case/data-access, or what?
Thanks in advance
I recommend keeping the transaction in the adapters layer by using the "unit of work" pattern. This way the database remains a plug-in to the business logic.
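A hedged sketch of that idea: the use case depends only on a unitOfWork port, and a Sequelize adapter implements the port with a real transaction. Every name here (unitOfWork, orderRepo, cropRepo, ctx) is illustrative, not from the question.
// Use-case layer: knows nothing about Sequelize, only the unitOfWork port.
async function confirmOrder(orderId, { unitOfWork, orderRepo, cropRepo }) {
  await unitOfWork(async (ctx) => {
    const order = await orderRepo.findPending(orderId, ctx)
    for (const item of order.items) {
      // decrementQty returns false if the update would go negative.
      const ok = await cropRepo.decrementQty(item.cropId, item.qty, ctx)
      if (!ok) throw new Error(`insufficient qty for crop ${item.cropId}`)
    }
    await orderRepo.markConfirmed(orderId, ctx)
  }) // the adapter commits on success and rolls back on throw
}

// Adapter layer: implements the port with a Sequelize managed transaction.
const makeSequelizeUnitOfWork = (sequelize) => (work) =>
  sequelize.transaction((t) => work({ transaction: t }))
The ctx object is just the { transaction: t } options bag that each repository forwards to its Sequelize calls, so the transaction never leaks into the use-case layer.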

Storing userID and other data and using it to query database

I am developing an app with PhoneGap and have been storing the user id and user level in local storage, for example:
window.localStorage["userid"] = "20";
This populates once the user has logged in to the app. It is then used in AJAX requests to pull in their information and things related to their account (some of it quite private). The app is also used in a web browser, as I use the exact same code for the web. Is there a way this can be manipulated? For example, could a user change the value of it in order to get back info that isn't theirs?
If, for example, another app in their browser stores the same key "userid", it will overwrite mine, and then they will get someone else's data back in my app.
How can this be prevented?
Before going further into attack vectors: storing this kind of sensitive data on the client side is not a good idea. Use a token instead, because any data stored on the client side can be spoofed by an attacker.
Your concerns are justified. A likely attack vector here is Insecure Direct Object Reference. Let me show one example.
You are storing userID on the client side, which means you can no longer trust that data.
window.localStorage["userid"] = "20";
Attackers can change that value to anything they want. They will probably try values lower than 20, because in most applications an id like 20 comes from an auto-increment column, which means there are likely valid users with userid 19, 18, or less.
Let me assume that your application has a module for getting products by userid. The backend query would then be similar to the following one.
SELECT * FROM products WHERE owner_id = 20
When attackers change that value to something else, they can get data that belongs to someone else. They may also get the chance to remove or update data that belongs to someone else.
The possible malicious attack vectors really depend on your application and its features. As I said before, you need to figure this out and avoid exposing sensitive data like userID.
Using a token instead of userID will defeat these attempts. The only thing you need to do is create one more column, name it "token", and use it instead of userid. (Don't forget to generate long and unpredictable token values.)
SELECT * FROM products WHERE token = 'iZB87RVLeWhNYNv7RV213LeWxuwiX7RVLeW12'
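For the long-and-unpredictable part, a minimal sketch in Node using the built-in crypto module (the token column is the one assumed above):
// Generate an unguessable value for the "token" column.
const crypto = require("crypto")

function generateToken() {
  // 32 random bytes -> 64 hex chars; infeasible to guess or enumerate.
  return crypto.randomBytes(32).toString("hex")
}

console.log(generateToken())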

Store static transaction details as JSON in MySQL

My application generates payments to contractors based on a number of different objects on our platform. For example, we have jobs and line items per job. Each of those have their own details.
After we pay out a contractor, sometimes job details are changed around, for whatever reason, so sometimes we'll have a generated (email) receipt that says one thing but the job details queried from the database say something else.
I've been thinking about saving the current job details as JSON and storing it in a text field for each payment, so that we have a "snapshot" of the object at the time it was paid.
a) Does something like that make sense to do? Or would it be better to lock old jobs so they aren't editable anymore?
b) If it does make sense, would it be worth it/useful to store it in something like MongoDB? Or would it be less complicated to just keep it all in MySQL?
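For scale, the snapshot approach could be as small as this sketch (Node with the mysql2 package; the payments table and snapshot TEXT column are assumptions):
// Store an immutable JSON copy of the job alongside the payment record.
const mysql = require("mysql2/promise")

async function recordPayment(pool, payment, jobDetails) {
  await pool.execute(
    "INSERT INTO payments (job_id, amount, snapshot) VALUES (?, ?, ?)",
    // JSON.stringify freezes the job as it looked at pay time;
    // later edits to the jobs table never touch this copy.
    [payment.jobId, payment.amount, JSON.stringify(jobDetails)]
  )
}
Since MySQL can hold the snapshot in a TEXT (or native JSON) column, this alone doesn't create an obvious need for a second datastore like MongoDB.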