How to post data into multiple tables using Talend RESTful Services - MySQL

I have 3 tables called PATIENT, PHONE and PATIENT_PHONE.
The PATIENT table contains the columns: id, firstname, lastname, email and dob.
The PHONE table contains the columns: id, type and number.
The PATIENT_PHONE table contains the columns: patient_id, phone_id.
The PATIENT and PHONE tables are mapped by the PATIENT_PHONE table. So I have to join these 3 tables to post firstname, lastname, email and number fields to the database.
I tried like this:
[screenshot: schema for first_xmlmap]
[screenshot: schema mapping for Patient and Patient_phone]

I'm assuming you want to write the same data to multiple database tables within the same database instance for each request against the web service.
How about using the tHashOutput and tHashInput components?
If you can't see the tHash* components in your component Palette, go to:
File > Edit project properties > Designer > Palette settings...
Highlight the filtered components, click the arrow to move them out of the filter and click OK.
The tHash components allow you to push some data to memory in order to read it back later. Be aware that this data is written to volatile memory (RAM) and will be lost once the JVM exits.
Ensure that "append" in the tHashOutput component is unchecked and that the tHashInput components are set not to clear their cache after reading.
You can see some simple error handling written into my example which guarantees that a client will always get some sort of response from the service, even when something goes wrong when processing the request.
Also note that writing to the database tables is an all-or-nothing transaction - that is, the service will only write data to all the specified tables when there are no errors whilst processing the request.
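Outside of Talend, that all-or-nothing behaviour is simply a single database transaction wrapped around the three inserts. A minimal sketch using the table names from the question, with sqlite3 standing in for MySQL (the "mobile" phone type and the helper name are invented for illustration):

```python
import sqlite3

# Stand-in schema from the question (sqlite3 used here instead of MySQL).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE patient (id INTEGER PRIMARY KEY, firstname TEXT, lastname TEXT, email TEXT);
    CREATE TABLE phone (id INTEGER PRIMARY KEY, type TEXT, number TEXT);
    CREATE TABLE patient_phone (patient_id INTEGER, phone_id INTEGER);
""")

def save_patient(firstname, lastname, email, number):
    """Write to all three tables, or to none of them if anything fails."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            cur = conn.execute(
                "INSERT INTO patient (firstname, lastname, email) VALUES (?, ?, ?)",
                (firstname, lastname, email))
            patient_id = cur.lastrowid
            cur = conn.execute(
                "INSERT INTO phone (type, number) VALUES (?, ?)",
                ("mobile", number))
            phone_id = cur.lastrowid
            conn.execute(
                "INSERT INTO patient_phone (patient_id, phone_id) VALUES (?, ?)",
                (patient_id, phone_id))
        return True
    except sqlite3.Error:
        return False  # caller can turn this into an error response

save_patient("Jane", "Doe", "jane@example.com", "555-0100")
```

If any of the three inserts throws, the `with conn:` block rolls everything back, so the junction table can never reference a row that was not written.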
Hopefully this gives you enough of an idea about how to extend such functionality to your own implementation.

Related

Batch Processing - Odata

I want to make requests to allow grouping of multiple operations into a single HTTP request payload
I have an API key that allows me to make GET requests and return tables in a database as JSON blocks. Certain attributes are 'expandable', and OData (Open Data Protocol) allows you to 'expand' multiple attributes within the "CompanyA" table (i.e. Marketing, Sales, HR):
http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc&$expand=Marketing,Sales,HR
I would like to select multiple tables (the request above only contains one table, CompanyA) and understand this is possible via "Batch Requests":
https://www.odata.org/documentation/odata-version-3-0/batch-processing/
The documentation above, alongside Microsoft's, is hard to translate into what I described.
I wanted it to be as simple as the following, but I know it is not, and I can't figure out how to get there:
http://api.blahblah.com/odata/CompanyA,CompanyB,CompanyC?apikey=b8blachblahblachc
The end goal is to have one JSON file that contains detail about each table in the DB, rather than having to write each individual query and save each result to a file, as below:
http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc
http://api.blahblah.com/odata/CompanyB?apikey=b8blachblahblachc
http://api.blahblah.com/odata/CompanyC?apikey=b8blachblahblachc
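For reference, an OData v3 batch request is a `multipart/mixed` POST to the service's `$batch` endpoint, with each GET wrapped in its own MIME part. A sketch of how such a payload could be assembled (the boundary string and resource paths are illustrative; the real provider may require the API key as a header or on each inner request):

```python
# Build a multipart/mixed body for an OData v3 $batch request.
# Each retrieve operation is its own MIME part of type application/http.
BOUNDARY = "batch_36522ad7-fc75-4b56-8c71-56071383e77b"  # arbitrary unique string

def batch_body(service_root, resource_paths):
    parts = []
    for path in resource_paths:
        parts.append(
            f"--{BOUNDARY}\r\n"
            "Content-Type: application/http\r\n"
            "Content-Transfer-Encoding: binary\r\n"
            "\r\n"
            f"GET {service_root}/{path} HTTP/1.1\r\n"
            "Accept: application/json\r\n"
            "\r\n"
        )
    parts.append(f"--{BOUNDARY}--\r\n")  # closing delimiter
    return "".join(parts)

body = batch_body(
    "http://api.blahblah.com/odata",
    ["CompanyA?$expand=Marketing,Sales,HR", "CompanyB", "CompanyC"],
)
# POST this body to http://api.blahblah.com/odata/$batch with the header:
#   Content-Type: multipart/mixed; boundary=batch_36522ad7-fc75-4b56-8c71-56071383e77b
```

The response comes back as one multipart document containing one HTTP response per inner request, so all three tables arrive in a single round trip.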

Storing userID and other data and using it to query database

I am developing an app with PhoneGap and have been storing the user id and user level in local storage, for example:
window.localStorage["userid"] = "20";
This populates once the user has logged in to the app. It is then used in AJAX requests to pull in their information and things related to their account (some of it quite private). The app is also being used in a web browser, as I am using the exact same code for the web. Is there a way this can be manipulated? For example, could a user change the value in order to get back info that isn't theirs?
If, for example, another app in their browser stores the same key "userid", it will be overwritten and they will get someone else's data back in my app.
How can this be prevented?
Before going further into attack vectors: storing this kind of sensitive data on the client side is not a good idea. Use a token instead, because every piece of data stored on the client side can be spoofed by an attacker.
Your concerns are valid. A possible attack vector here is an Insecure Direct Object Reference. Let me show one example.
You are storing the userID client-side, which means you can no longer trust that data.
window.localStorage["userid"] = "20";
An attacker can change that value to anything they want. They will probably change it to something lower than 20, because in most applications an id like 20 comes from an auto-increment column, which means there are likely valid users with ids 19, 18, and so on.
Assume your application has a module for fetching products by userID. The backend query would then look something like this:
SELECT * FROM products WHERE owner_id = 20
If an attacker changes that value to something else, they will get back data that belongs to someone else. They may also get the chance to update or delete data that belongs to someone else.
The possible malicious attack vectors really depend on your application and its features. As I said before, you need to figure this out and avoid exposing sensitive data like the userID.
Using a token instead of the userID will defeat this kind of attack. All you need to do is add one more column, named "token", and use it instead of the userid. (Don't forget to generate long, unpredictable token values.)
SELECT * FROM products WHERE token = 'iZB87RVLeWhNYNv7RV213LeWxuwiX7RVLeW12'
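A minimal sketch of generating such a token in Python (the standard-library `secrets` module produces cryptographically strong random values; the function name is illustrative):

```python
import secrets

def new_session_token() -> str:
    # 32 random bytes -> a ~43-character URL-safe string; unpredictable by design.
    return secrets.token_urlsafe(32)

token = new_session_token()
# Store this in the "token" column for the user, then look rows up by token,
# always with a parameterized query, never by string concatenation:
#   SELECT * FROM products WHERE token = ?
```

Because the token is random rather than sequential, an attacker cannot guess a neighbouring user's value the way they can decrement an auto-increment id.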

Couchbase - Splitting a JSON object into many key-value entries - performance improvement?

Say my Couchbase DB has millions of user objects, each user object contains some primitive fields (score, balance etc.)
And say I read & write most of those fields on every server request.
I see 2 options of storing the User object in Couchbase:
A single JSON object mapped to a user key (e.g. user_555)
Mapping each field into a separate entry (e.g. score_555 and balance_555)
Option 1 - Single CB lookup, JSON parsing
Option 2 - Twice the lookups, less parsing if any
How can I tell which one is better in terms of performance?
What if I had 3 fields? what if 4? does it make a difference?
Thanks
Eyal
Think about your data structure and access patterns first before worrying if json parsing or extra lookups will add overhead to your system.
From my perspective and experience I would try to model documents based upon logical object groupings, I would store 'user' attributes together. If you were to store each field separately you'd have to do a series of lookups if you ever wanted to provide a client or service with a full overview of the player profile.
I've used Couchbase as the main data store for a social mobile game; we store 90% of user data in a user document, which contains all the relevant fields such as score, level, progress, etc. For the majority of operations, such as a new score or upgrades, we want to be dealing with the whole User object in the application layer, so it makes sense to inflate the user object from the cb document, alter/read what we need, and then persist it again if there have been changes.
The only time we have id references to other documents is in the form of player purchases, where we have an array of ids that each reference a separate purchase. We do this because we wanted richer information on each purchase (date of transaction, transaction id, product type, etc.) that isn't relevant to the user document: when a purchase is made, we verify it's legitimate, then add to the User inventory and create the separate purchase document.
So our structure is:
UserDoc:
- Fields specific to a User (score, level, progress, friends, inventory)
- Arrays of IDs pointing to specific purchases
The only time I'd consider splitting out some specific fields as you outlined above would be if your user document got seriously large but I think it'd be best to divide documents up per groupings of data as opposed to specific fields.
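The read-modify-write pattern described above could be sketched like this (a plain dict stands in for the Couchbase bucket; with the real SDK these would be get/upsert calls against a `user_555` key):

```python
import json

# Stand-in for a Couchbase bucket: key -> JSON document string.
bucket = {"user_555": json.dumps({"score": 100, "level": 3, "progress": 0.4})}

def add_score(user_key: str, points: int) -> None:
    # Option 1 from the question: one lookup, one JSON parse, one write.
    user = json.loads(bucket[user_key])
    user["score"] += points
    bucket[user_key] = json.dumps(user)

add_score("user_555", 50)
```

With the split-key approach (option 2), the same operation would touch only `score_555`, but any view of the full profile would need one lookup per field.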
Hope that helped!

concrete5 database workflow

I'm in the planning stages for a web-based project management/collaboration app similar to Copper Project or PHP Collab, using concrete5 as my framework.
There are a couple of features I want to integrate, but I'm not entirely sure how to accomplish this, looking at how DB tables are generated with blocks.
The functionality I have in mind is as follows:
1) When a new client is created by an account manager or project manager, they have to assign a three-character prefix for the client. Example: if (by some wild stroke of luck) I add Diesel as a client, I would want to assign them the prefix DSL.
2) When an account manager or project manager creates a new project, the project ID should be directly related to the client, and not to the total number of projects for all clients. In other words, the project ID for Diesel's first project with me should be DSL001, and not DSL016, because there were fifteen other projects for other clients before this one (c.f. both Copper and PHP Collab, which follow the global project ID logic, as opposed to the per-client project ID logic). This project ID would be visible on the front-end project page that's been created by the AM/PM, and would also be used as a reference ID for things like cost estimates, invoices and so on.
So this is where I run into a problem from a workflow planning point of view. My understanding of MySQL is such that if I want to follow my own project ID logic, a new table would have to be created for each and every client, to contain all of the data concerning their projects, so that the DB could correctly output the unique ID number.
However, my understanding of C5 is that if, for example, in the course of creating this app, I decide to create the project form as a block to be inserted in a front-end template, the db.xml file would create a generic project data table in the DB for all clients, not one per client.
Any suggestions how I can accomplish what I'm looking to do in the context of C5's framework?
If something's unclear, I can show some mock-ups of how a project page would look.
Thanks!
This is a general database schema issue, and has nothing to do with Concrete5 specifically. Your idea about needing a separate table for each client just so MySQL could generate unique ID numbers is way off.
There is a general principle with database schemas that says the "ID" number of a record should only be used to uniquely identify records internally (within your application and database code) -- you should almost never use the primary id numbers for actual "business logic". In your case, you have a project id that has both letters and numbers in it, so even if you wanted to use the MySQL-generated ID for this, you couldn't (because those id's are integer numbers only, not letters).
Also, creating separate tables for the same kind of data is the exact opposite of how databases work. Instead what you want to do is have one table for clients and another for projects. The client table would have an "id" field (the auto-increment number), and a client prefix field (the "DSL" in your example). Then the projects table has its own "id" field (again, the auto-increment number), and a "client id" which ties that project to a record in the client table. Then you'd have another field in the projects table for "project number". This project number field is what you'd display to users (you'd combine it with the client's 3-letter prefix -- so really you are storing two separate values in the database, but your users would see just one combined value because that is how you would output it to the page).
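That two-table design could be sketched as follows (sqlite3 is used here as a stand-in for MySQL, and the field names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE clients (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT,
        prefix TEXT          -- the three-letter code, e.g. 'DSL'
    );
    CREATE TABLE projects (
        id INTEGER PRIMARY KEY AUTOINCREMENT,   -- internal use only
        client_id INTEGER REFERENCES clients(id),
        project_number INTEGER                  -- per-client, managed by app code
    );
""")
conn.execute("INSERT INTO clients (name, prefix) VALUES ('Diesel', 'DSL')")
conn.execute("INSERT INTO projects (client_id, project_number) VALUES (1, 1)")

# The user-facing id is composed at output time, not stored:
row = conn.execute("""
    SELECT c.prefix, p.project_number
    FROM projects p JOIN clients c ON c.id = p.client_id
""").fetchone()
display_id = f"{row[0]}{row[1]:03d}"
```

Note that "DSL001" exists only in the presentation layer; the database stores the prefix and the per-client number as two separate values.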
This "project number" field should not be an auto-increment number, because as you've discovered, MySQL only has one numbering sequence per table. So instead you will have some code in your application somewhere that generates this number for you when you have a new project. That code would be something like this:
function save_new_project($client_id, $project_data) {
    $db = Loader::db();

    // Determine the highest existing project number for this client
    $sql = "SELECT MAX(project_number) FROM projects WHERE client_id = ?";
    $vals = array($client_id);
    $max_project_number = $db->GetOne($sql, $vals);
    if (empty($max_project_number)) {
        $max_project_number = 0; // first project for this client
    }

    // Insert new project with the next-highest number
    $new_project_number = $max_project_number + 1;
    $sql = "INSERT INTO projects (client_id, project_number, some_field, another_field) VALUES (?, ?, ?, ?)";
    $vals = array($client_id, $new_project_number, $project_data['some_field'], $project_data['another_field']);
    $db->Execute($sql, $vals);
}
By the way, Concrete5 is probably not a good framework to use for this kind of project. You might want to look into a more general framework that is suitable for web applications such as CodeIgniter, Symfony, CakePHP, Kohana, etc.

SQL - adding fields to query to sort by

I'm working with a third-party software package that is on its own database. We are using it as the user-management backbone of our application. We have an API to retrieve data and access info.
Because the information changes daily, we can only use the user_id as a pseudo foreign key in our application, not storing info like their username or name. The user information can change (like a person's name... don't ask).
What I need to do is sort and filter (paging results) one of my queries by the person's name, not the user_id we have. I'm able to get an array of the user info before hand. Would my best bet be creating a temporary table that adds an additional field, and then sorts by that?
Using MySQL for the database.
You could adapt the stored procedure on the page below to suit your needs. The stored procedure is a multi-purpose, very dynamic one, but you could alter it to filter the person table:
http://weblogs.asp.net/pwilson/archive/2003/10/10/31456.aspx
You could combine the data into an array of objects, then sort the array.
Yes, but you should consider specifically where you will make the temporary table. If you do it in your web application then your web server is stuck allocating memory for your entire table, which may be horrible for performance. On the other hand, it may be easier to just load all your objects and sort them as suggested by eschneider.
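The approach eschneider suggests could be sketched like this: fetch the user info from the API once, merge it with the query rows, sort by name, then page in application code (field names and page size here are illustrative):

```python
# Rows from our own DB reference users only by user_id (the pseudo FK).
rows = [
    {"user_id": 2, "task": "Write report"},
    {"user_id": 1, "task": "Review code"},
    {"user_id": 2, "task": "Ship build"},
]
# User info fetched beforehand from the third-party API.
users = {1: {"name": "Alice"}, 2: {"name": "Bob"}}

def sorted_page(rows, users, page, page_size=25):
    # Attach each person's current name, sort by it, then slice out one page.
    merged = [dict(r, name=users[r["user_id"]]["name"]) for r in rows]
    merged.sort(key=lambda r: r["name"])
    start = page * page_size
    return merged[start:start + page_size]

first_page = sorted_page(rows, users, page=0, page_size=2)
```

Since the names live only in the API response and are never stored, the sort always reflects the current values, at the cost of holding the merged rows in memory.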
If you have the user_id as a parameter, you can create a user defined function which retrieves the username for you within the stored procedure.
Database is on different servers. For all purposes, we access it via an API and the data is then turned into an array.
For now, I've implemented the solution using LINQ to filter and sort the array of objects.
Thanks for the tips and helping me go in the right direction.