Stuck deciding how a smart contract project should work - ethereum

So I'm doing a project which basically carries out the medical insurance claim process with the help of the smart contract. How it works is:
User signs up to the website run by the insurer.
They file a claim by entering the concerned hospital, the insured amount, and a PDF file containing the bills to be verified.
The hospital uses the website to approve/deny the claim based on the info provided.
The insurer uses the website to approve/deny the claim based on the info provided and pays the user.
Any time someone files, approves/denies, pays the claim, an event is emitted.
This is my first ethereum/solidity project and I'm unable to figure out how to pull this together.
This is the structure of a Claim:
struct Record {
    uint id;            // unique id for the record
    address patientAddr;
    address hospitalAddr;
    string billId;      // points to the pdf stored somewhere
    uint amount;
    mapping (address => RecordStatus) status; // status of the record
    bool isValid;       // check if the record has already been created or not
}
Some of my questions were:
How do I link a Record to a specific user, given that a single user can use multiple MetaMask wallets?
Is it possible to fetch all the events linked to a specific record id, so I can show the user every approval/denial that has happened to the Record?
For the hospital, is there a better way to get its associated Records than fetching all Records from the smart contract and filtering them on the front end?
Thanks a lot for your help.

To link a Record to a specific user you will need to add a mapping (I am assuming that you have a user ID) that will look like this:
mapping(uint => Record[]) recordsByUserID;
Then you can fetch a user's Records by id:
Record[] storage userRecords = recordsByUserID[user_id];
(Note the storage keyword: because Record contains a mapping, it can only be referenced in storage, not copied to memory.)
Event logging is actually fairly easy thanks to the indexed keyword; let me show you an example:
event Approved(uint indexed userId, uint indexed recordId);
With an event like this you are able to query all the events using the user id and the record id.
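For example, with web3.js you can fetch past events filtered on any indexed parameter (claimsContract below is a placeholder for your web3.eth.Contract instance):
// inside an async function:
// Fetch every Approved event for record 42, using the indexed recordId.
const approvals = await claimsContract.getPastEvents("Approved", {
  filter: { recordId: 42 }, // filtering works because recordId is indexed
  fromBlock: 0,
  toBlock: "latest",
});
approvals.forEach((e) => console.log(e.returnValues.userId, e.transactionHash));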
For the third question, I suggest you use The Graph (https://thegraph.com/en/). It builds your own GraphQL backend by indexing all the events for you in a very easy way. Then you can run GraphQL queries against it and make something efficient.
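As a sketch of what that looks like from the front end (the subgraph URL and the approvals entity are hypothetical; they depend on the subgraph schema you define):
// inside an async function:
const query = `{
  approvals(where: { recordId: 42 }) {
    userId
    recordId
  }
}`;
const res = await fetch("https://api.thegraph.com/subgraphs/name/<you>/<subgraph>", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query }),
});
const { data } = await res.json();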

Related

Securing MySQL id numbers so they are not sequential

I am working on a little package using PHP and MySQL to handle entries for events. After completing an entry form the user will see all his details on a page called something like website.com/entrycomplete.php?entry_id=15 where the entry_id is a sequential number. Obviously it will be laughably easy for a nosey person to change the entry_id number and look at other people's entries.
Is there a simple way of camouflaging the entry_id? Obviously I'm not looking to secure the Bank of England so something simple and easy will do the job. I thought of using MD5 but that produces quite a long string so perhaps there is something better.
Security through obscurity is no security at all.
Even if the ids are random, that doesn't prevent a user from requesting a few thousand random ids until they find one that matches an entry in your database.
Instead, you need to secure the access privileges of users, and disallow them from viewing data they shouldn't be allowed to view.
Then it won't matter whether the ids are sequential.
If the users do have some form of authentication/login, use that to determine if they are allowed to see a particular entry id.
If not, instead of using a url parameter for the id, store it in and read it from a cookie. And be aware that this is still not secure. An additional step you could take (short of requiring user authentication) is to cryptographically sign the cookie.
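A minimal sketch of that signing step, shown in Node.js (PHP's hash_hmac gives you the same primitive):
const crypto = require("crypto");
const SECRET = process.env.COOKIE_SECRET; // server-side only, never sent to the client

// Sign the entry id before writing it to the cookie...
function sign(entryId) {
  const mac = crypto.createHmac("sha256", SECRET).update(String(entryId)).digest("hex");
  return `${entryId}.${mac}`;
}

// ...and verify the signature before trusting the value on later requests.
function verify(cookieValue) {
  const [entryId, mac = ""] = cookieValue.split(".");
  const expected = crypto.createHmac("sha256", SECRET).update(entryId).digest("hex");
  if (mac.length !== expected.length) return null;
  return crypto.timingSafeEqual(Buffer.from(mac), Buffer.from(expected)) ? entryId : null;
}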
A better way to implement this is to show only the records that belong to that user. Say id is the unique identifier for each user. Now store both entry_id and id in your table (say the table name is entries).
Now when the user requests a record, add another condition to the MySQL query, like this:
SELECT * FROM entries WHERE entry_id = 5 AND id = 30;
So if entry_id 5 does not belong to this user, the query will return no rows at all.
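As a sketch of that query with a prepared statement (shown with Node's mysql2 package; PHP's PDO works the same way — connection details and the database name are placeholders), where the user id comes from the session rather than from the URL:
const mysql = require("mysql2/promise");
const pool = mysql.createPool({ host: "localhost", user: "app", database: "events_db" });

// entryId comes from the request; sessionUserId comes from the authenticated session.
async function getEntry(entryId, sessionUserId) {
  const [rows] = await pool.execute(
    "SELECT * FROM entries WHERE entry_id = ? AND id = ?",
    [entryId, sessionUserId]
  );
  return rows; // empty when the entry does not belong to the logged-in user
}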
As for preventing the user from tampering with their own id, you can use JWT tokens. You issue a token on login and attach it to every call. You can then verify the token on the back end and extract the user's actual id from it.
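A minimal sketch with the jsonwebtoken npm package (the secret source and payload field are examples):
const jwt = require("jsonwebtoken");
const SECRET = process.env.JWT_SECRET;

// On login: issue a token that carries the user's real id.
const token = jwt.sign({ userId: 30 }, SECRET, { expiresIn: "1h" });

// On every request: verify the signature and take the id from the token,
// never from anything the client can edit directly.
const { userId } = jwt.verify(token, SECRET); // throws if tampered with or expired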

Data sync from MySQL to NoSQL key-value store

I have a legacy system with MySQL at the backend and Python as the primary programming language.
Recently we ran into a scenario where we need to display a dashboard with information from the MySQL database. The data in the table changes every second.
This can be thought of as similar to a bidding application where people bid constantly. Every time a user bids, a record goes into the database; when a user updates his bid, it overwrites the previous value.
I also have a few clients who monitor this dashboard, which keeps the statistics up to date.
I need to order this data in real time, as people bid in real time.
I'd rather not run queries against MySQL, because at any second I may have a few thousand clients querying the database, which will put load on it.
Please advise.
If you need to collect and order data in realtime you should be looking at the atomic ordered map and ordered list operations in Aerospike.
I have examples of using KV-ordered maps at rbotzer/aerospike-cdt-examples.
You could use a similar approach with the user's ID as the map key and the bid as a list with the structure [1343, { foo: bar, ts: 1234, this: that }]. The bid amount in cents (an integer) is the first element of the list; all the other information lives in a map in the second element position.
This lets you update a user's bid with a single map operation, get the user's bid back with a single operation, order by rank (on the bid amount) to get the top bids, get all the bids in a particular range, etc. You would have one record per item, with all of its bids in this KV-sorted map.
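A rough sketch with the Aerospike Node.js client (the namespace, set, bin name, and exact map-operation signatures are assumptions — check the aerospike npm docs for your client version):
const Aerospike = require("aerospike");
const maps = Aerospike.maps;

(async () => {
  const client = await Aerospike.connect({ hosts: "127.0.0.1:3000" });
  const key = new Aerospike.Key("test", "auctions", "item-1"); // one record per item

  // Upsert one user's bid: map key = user id, map value = [amountCents, metadata].
  // KEY_ORDERED keeps the map sorted so single-operation updates stay cheap.
  await client.operate(key, [
    maps.put("bids", "user-42", [134300, { ts: 1234 }], {
      order: maps.order.KEY_ORDERED,
    }),
  ]);

  // Read the top 10 bids by rank (rank compares the list values, amount first).
  const result = await client.operate(key, [
    maps.getByRankRange("bids", -10, 10, maps.returnType.KEY_VALUE),
  ]);
  console.log(result.bins.bids);

  client.close();
})();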

Data duplication between internal database and Solidity

There is a flow I want to achieve in my dapp, and I would appreciate some opinion.
Flow:
User sees a list of products and picks one to buy. The user has their MetaMask unlocked and has enough balance.
Setup:
Rails on the backend, React on the frontend, ganache-cli, truffle, metamask (web3js).
Database structure:
In the app's internal Postgres DB, there's a products table. On the blockchain, there's a dynamic array products, like below:
Internal Postgres:
products
    name
    price
    owner_id
owners
    name
    id
    address
Blockchain (contract storage):
Product[] products;

struct Product {
    string name;
}

mapping(uint => address) public productIdToOwner;
mapping(uint => uint) public productIdToPrice;
The following function onBuy runs when the user clicks "Buy this product" button:
onBuy = (product) => {
  const { id, external_id, name, price, meta } = product
  this.ContractInstance.methods.buy(external_id).send({
    from: this.state.currentUserAddress,
    gas: GAS_LIMIT,
    value: web3.utils.toWei(price.toString(), "ether"),
  }).then((receipt) => {
    // What to do before getting a receipt?
    console.log(receipt)
  }).catch((err) => {
    console.log(err.message)
  })
}
Questions:
On the mainnet, how long does it take for me to get the receipt for the transaction? Is it sane to make the user wait on the same page after clicking the onBuy button with a loading wheel until the receipt arrives? If not, what's the conventional way to deal with this?
Is my DB structure a reasonable way to connect to the blockchain? I am worried about data integrity (i.e. having to sync address field between my internal DB and the blockchain) but I find it useful to store the blockchain data inside the internal DB, and read mostly from the internal DB instead of the blockchain.
On the mainnet, how long does it take for me to get the receipt for the transaction? Is it sane to make the user wait on the same page after clicking the onBuy button with a loading wheel until the receipt arrives? If not, what's the conventional way to deal with this?
When you send a transaction, you will get the transaction hash pretty quickly. However, the receipt is only returned once the transaction is mined. The length of time it takes to have your transaction mined varies greatly depending on how much you're willing to pay for gas (it can be a few seconds at very high gas prices, or several hours if you're paying < 10 Gwei). You can get an idea by using Ropsten/Rinkeby (the test networks will probably be faster than mainnet). Chances are, based on the system you're describing, it won't be reasonable to have your user wait.
I don't know if there is a "conventional" way to deal with this. You can provide the user with your own confirmation number (or use the transaction hash as the confirmation number), send an email when the transaction is mined, push a notification if on a mobile app, etc. If you present the user with some sort of confirmation number based on the transaction hash, you'll have to decide how you want to resolve cases when the transaction fails.
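For example, web3.js fires separate events on the same send() call, so the UI can show the hash immediately and react to the receipt whenever it arrives (the setState fields here are made-up examples):
this.ContractInstance.methods.buy(external_id).send({
  from: this.state.currentUserAddress,
  gas: GAS_LIMIT,
  value: web3.utils.toWei(price.toString(), "ether"),
})
  .on("transactionHash", (hash) => {
    // Arrives within seconds: show it to the user as a confirmation number.
    this.setState({ pendingTx: hash });
  })
  .on("receipt", (receipt) => {
    // Arrives once mined: now it is safe to mark the purchase complete.
    this.setState({ pendingTx: null });
  })
  .on("error", (err) => console.log(err.message));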
Is my DB structure a reasonable way to connect to the blockchain? I am worried about data integrity (i.e. having to sync address field between my internal DB and the blockchain) but I find it useful to store the blockchain data inside the internal DB, and read mostly from the internal DB instead of the blockchain.
This is purely an opinion, so take this with a grain of salt. I try to avoid having duplicate data, but if you're going to have multiple persistence layers you'll probably need to maintain some sort of referential integrity between them. So, it's certainly ok to store ids and addresses.
The part of your question that confuses me is: why do you prefer to read mostly from the DB? Have you tried measuring the latency of a fully synced node on your server, retrieving data from your contract through constant functions? I would test that first before duplicating my data. In your case, I would look at using the blockchain to store purchases while using the DB just for inventory management. But that's based on very little knowledge of your business case.
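For comparison, reading a public mapping through its auto-generated getter is a gas-free call that a synced node answers directly, e.g.:
// inside an async function:
// A call(), unlike send(), costs no gas and creates no transaction.
const owner = await this.ContractInstance.methods.productIdToOwner(5).call();
const price = await this.ContractInstance.methods.productIdToPrice(5).call();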

How to post the data into multiple tables using Talend RestFul Services

I have 3 tables called PATIENT, PHONE and PATIENT_PHONE.
The PATIENT table contains the columns: id, firstname, lastname, email and dob.
The PHONE table contains the columns: id, type and number.
The PATIENT_PHONE table contains the columns: patient_id, phone_id.
The PATIENT and PHONE tables are linked through the PATIENT_PHONE table. So I have to join these 3 tables to post the firstname, lastname, email and number fields to the database.
I tried like this:
[Screenshot: schema for first_xmlmap]
[Screenshot: schema mapping for Patient and Patient_phone]
I'm assuming you want to write the same data to multiple database tables within the same database instance for each request against the web service.
How about using the tHashOutput and tHashInput components?
If you can't see the tHash* components in your component Palette, go to:
File > Edit project properties > Designer > Palette settings...
Highlight the filtered components, click the arrow to move them out of the filter, and click OK.
The tHash components allow you to push some data to memory in order to read it back later. Be aware that this data is written to volatile memory (RAM) and will be lost once the JVM exits.
Ensure that "append" in the tHashOutput component is unchecked and that the tHashInput components are set not to clear their cache after reading.
You can see some simple error handling written into my example which guarantees that a client will always get some sort of response from the service, even when something goes wrong when processing the request.
Also note that writing to the database tables is an all-or-nothing transaction - that is, the service will only write data to all the specified tables when there are no errors whilst processing the request.
Hopefully this gives you enough of an idea about how to extend such functionality to your own implementation.

Storing userID and other data and using it to query database

I am developing an app with PhoneGap and have been storing the user id and user level in local storage, for example:
window.localStorage["userid"] = "20";
This populates once the user has logged in to the app. It is then used in AJAX requests to pull in their information and things related to their account (some of it quite private). The app is also used in a web browser, since I am using the exact same code for the web. Is there a way this can be manipulated? For example, could a user change the value in order to get back info that isn't theirs?
If, for example, another app in their browser stores the same key "userid", it will overwrite mine and they will get someone else's data back in my app.
How can this be prevented?
Before going further into attack vectors: storing this kind of sensitive data on the client side is not a good idea. Use a token instead, because any piece of data stored on the client side can be spoofed by attackers.
Your concerns are valid. A possible attack vector is an Insecure Direct Object Reference. Let me show one example.
You are storing userID on the client side, which means you can no longer trust that data.
window.localStorage["userid"] = "20";
Attackers can change that value to anything they want. They will probably change it to something less than 20, because in most applications a value like 20 comes from an auto-increment column, which means there are likely valid users with userid 19, 18, or lower.
Let me assume that your application has a module for getting products by userid. The backend query would then look similar to the following:
SELECT * FROM products WHERE owner_id = 20;
When attackers change that value to something else, they will manage to get data that belongs to someone else. They could also get the chance to remove or update someone else's data.
The possible malicious attack vectors really depend on your application and its features. As I said before, you need to figure this out and avoid exposing sensitive data like userID.
Using a token instead of userID will defeat these attempts. The only thing you need to do is create one more column named "token" and use it instead of userid. (Don't forget to generate long and unpredictable token values.)
SELECT * FROM products WHERE token = 'iZB87RVLeWhNYNv7RV213LeWxuwiX7RVLeW12';
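For illustration, generating such a token is a one-liner in Node.js (the 32-byte length is just a sensible default):
const crypto = require("crypto");
// 32 random bytes -> 64 hex characters: long and unpredictable, unlike an auto-increment id.
const token = crypto.randomBytes(32).toString("hex");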