Using Apex Data Loader to load records into an object with a master-detail relationship - CSV

I need to load data into two objects. I am able to load data into one object using the Data Loader. The second object has a master-detail relationship with the first object, so I need to have the unique record IDs of the first object's records in the CSV file. How can I add those record IDs to my CSV file?

You could download the "master" records after the initial upload and perform some mapping similar to (Name -> Id). In Excel this can be achieved with VLOOKUP. Once you have generated the new list of "detail" objects, there should be no problem uploading them. The mapping "ID -> uploaded record" is also available in the success log file created by the Apex Data Loader.
But the better way is to say loudly "screw the Salesforce ID, I don't need no stinking ID" :)
Consider whether your "master" has some unique field. It can even be the "ID" from the existing system you are importing into Salesforce from. Create this field in Salesforce (if you didn't do it already) and mark it as "External ID". Afterwards you will be able to use this external ID instead of the normal Salesforce ID as the link between source and target. In pseudocode:
With a normal Salesforce ID you must:
INSERT INTO detail_object (Name, SomeValue, master_ID) VALUES ("foo", "bar", [some valid Salesforce ID])
With external IDs you can have it easy and tell Salesforce to do all the heavy lifting:
INSERT INTO detail_object (Name, SomeValue, master_ID) VALUES ("foo", "bar", (SELECT Id FROM master_object WHERE some_field_marked_as_external_id = "123"))
Check out the Data Loader user guide for a quick start and play with external IDs if you can (in a free Developer Edition org, perhaps?). It's easier to use than to describe.
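For illustration, a minimal sketch of what the two CSV files could look like with the external ID approach; Legacy_Id__c, SomeValue__c and the Master__r relationship name are made-up examples, not fields from the question:
master.csv (upserted on the external ID field):
Legacy_Id__c,Name
123,Acme Ltd.
detail.csv (in the Data Loader's upsert mapping, point the lookup at Master__r:Legacy_Id__c):
Name,SomeValue__c,Master__r:Legacy_Id__c
foo,bar,123
With that mapping the Data Loader resolves the master record by its external ID, so you never handle a Salesforce ID yourself.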

If you are using the Apex Data Loader, then you will have to do three things:
1: Insert the master record(s); this will give them IDs.
2: Export those master records again, including their IDs, and integrate them into your detail data. A VLOOKUP is most useful for that sort of thing (see the formula sketch after this list).
Or, if there is only one master record, it's even easier: just copy the ID out of the URL and add it to every detail record in your spreadsheet.
3: Then insert the detail records with the master IDs.
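For step 2, a minimal sketch of the lookup, assuming the exported masters sit on a sheet named Masters with Name in column A and Id in column B, and each detail row references its master by name in column A:
=VLOOKUP(A2, Masters!A:B, 2, FALSE)
Fill the formula down the master-ID column, then paste the results as values before saving the detail sheet back to CSV.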

Related

Backendless + Zapier: create a record with a foreign-key relationship

I'm new to the world of Low Code app development, and so far I'm pulling my hair out.
I'm using a third-party web app to submit JSON-formatted data to Zapier via a webhook, and then submitting that to Backendless with a Codeless API that creates a record. I'm running into two issues that I can't figure out how to solve.
First, Backendless record creation with a foreign-key relationship: I'm creating a record in Table A, but that record needs a relationship to Table B. I have it set up as such in Backendless, but in Zapier I don't see an option to populate the table_b_id in the Table A record I'm creating. What am I missing here?
Second, after creating the Table A record, I want to create multiple records in Table C that are children of the Table A record. How on earth do I do this? With Python + SQL I could do it in two minutes, but for the life of me I can't figure out how to do it the low-code way using either Zapier or Backendless.
Any help would be appreciated! I'm totally stumped.
Backendless actions for Zapier let you save/update/delete an object in a single table. These are distinct API operations. Creating a relationship is a separate API call that doesn't have a corresponding action in Zapier's Backendless integration. However, you can create a relation between the object you're saving and a related "parent" (or "child") table using an API event handler in business logic. It can be done with Java, JS or Codeless. The event handler you'd be creating is afterSave.
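A minimal sketch of such a handler in JS, assuming hypothetical table names (TableA), a relation column named tableB, and that the saved Table A object carries a plain table_b_id field holding the parent's objectId; the afterSave event name follows the description above:
// Backendless CodeRunner (JS) event handler - a sketch, not tested code
Backendless.ServerCode.Persistence.afterSave('TableA', function(req, res) {
  var saved = res.result; // the Table A object that was just saved
  // attach the Table B parent through the relation column, using its objectId
  return Backendless.Data.of('TableA').setRelation(saved, 'tableB', [saved.table_b_id]);
});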
You can save multiple objects with a single call using Codeless. The simplest way to do this is by using the Bulk Create block: https://prnt.sc/x6cwp4. The objects connector should be a list of the objects to save in the table.

Querying a database record from flowfile content to retrieve data using Apache NiFi

My scenario is as follows.
From one process I retrieve data from a table.
id,user_name
1,sachith
2,nalaka
I need to retrieve account details from the account_details table for these IDs.
I have tried various database-related processors, but none of them can read the IDs from the flowfile content.
How can I retrieve records for only these IDs?
Use a flow like the one below:
ExecuteSQL (query the account_details table)
-> ConvertAvroToJSON
-> EvaluateJsonPath (extract the id into an attribute)
-> AttributesToJSON (here you take only the id and ignore the rest)
Take a look at the LookupRecord using a DatabaseRecordLookupService controller service. That should allow you to use the id field to look up additional fields from a database and add them to the outgoing records. This is a common "enrichment" pattern, where the lookups can be done against databases, CSV files, etc.
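A rough configuration sketch for that pattern, matching the question's column names; treat the property layout as an assumption from memory rather than exact documentation:
LookupRecord
  Record Reader / Record Writer: readers and writers for your flowfile format (e.g. a CSVReader and a JSON writer)
  Lookup Service: DatabaseRecordLookupService
  Result RecordPath: /account_details   (where the looked-up fields are inserted into each record)
  key: /id   (user-defined property; the record field used as the lookup key)
DatabaseRecordLookupService
  Database Connection Pooling Service: your DBCPConnectionPool
  Table Name: account_details
  Lookup Key Column: id
  Lookup Value Columns: the columns to pull back (leave empty for all)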
You can use the QueryRecord processor to query data from flowfiles. You will need to set a record reader and a record writer inside this processor so it can parse your file properly and write the result. To create a query, add a property whose name is the name of the query and whose value is the query itself; the processor then gets an output relationship with that property's name, to which the matching records are routed.
The query syntax is Apache Calcite SQL.
You can find further explanation in the QueryRecord documentation.
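A minimal sketch of such a query property, using the sample data from the question; inside QueryRecord the incoming flowfile is addressed as the table FLOWFILE:
-- property name: ids (this becomes the output relationship)
-- property value:
SELECT id FROM FLOWFILE
Note that QueryRecord only queries the flowfile itself; for pulling the matching rows out of account_details, the LookupRecord approach above is the better fit.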

How to post data into multiple tables using Talend RESTful services

I have 3 tables called PATIENT, PHONE and PATIENT_PHONE.
The PATIENT table contains the columns: id, firstname, lastname, email and dob.
The PHONE table contains the columns: id, type and number.
The PATIENT_PHONE table contains the columns: patient_id, phone_id.
The PATIENT and PHONE tables are mapped by the PATIENT_PHONE table. So I have to join these 3 tables to post firstname, lastname, email and number fields to the database.
I tried like this:
[Screenshot: schema for first_xmlmap]
[Screenshot: schema mapping for Patient and Patient_phone]
I'm assuming you want to write the same data to multiple database tables within the same database instance for each request against the web service.
How about using the tHashOutput and tHashInput components?
If you can't see the tHash* components in your component Palette, go to:
File > Edit project properties > Designer > Palette settings...
Highlight the filtered components, click the arrow to move them out of the filter and click OK.
The tHash components allow you to push some data to memory in order to read it back later. Be aware that this data is written to volatile memory (RAM) and will be lost once the JVM exits.
Ensure that "append" in the tHashOutput component is unchecked and that the tHashInput components are set not to clear their cache after reading.
You can see some simple error handling written into my example which guarantees that a client will always get some sort of response from the service, even when something goes wrong when processing the request.
Also note that writing to the database tables is an all-or-nothing transaction - that is, the service will only write data to all the specified tables when there are no errors whilst processing the request.
Hopefully this gives you enough of an idea about how to extend such functionality to your own implementation.
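As a rough sketch of how the subjobs could be wired (component names are illustrative; tDBOutput stands in for whatever database components you use):
tRESTRequest --> tXMLMap --> tHashOutput
tHashInput_1 --> tDBOutput (PATIENT)
tHashInput_2 --> tDBOutput (PHONE)
tHashInput_3 --> tMap (attach the generated patient/phone IDs) --> tDBOutput (PATIENT_PHONE)
--> tRESTResponse
Each line after the first is a separate subjob chained with OnSubjobOk triggers, so a failure anywhere stops the remaining writes and can be routed to the error response instead.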

Storing unconfirmed and confirmed data to a database

I am creating a web application using StrongLoop with a MySQL database connector.
I want it to be possible for a user to modify data in the application, but for that data not to be 'saved' until the user expressly chooses to save it.
On the other hand, this is a web application and I don't want to keep the data in the user's session or local storage - I want this data to be persisted immediately, so it can be recovered easily if the user loses their session.
To implement it I am thinking of doing the following, but I'm not sure if this is a good idea, or if there is a better way to be doing this.
This is one way I can implement it without doing too much customization on an existing relation:
add a new generated index as the primary key for the table
add a new generated index that represents the item in the row
this would be generated for new items, or set to the existing item's key for edits
add a boolean attribute 'saved'
Data will be written with 'saved=false'. To 'save' the data, the row is marked saved and the old row is deleted. The old row can be looked up by its key, the second attribute in the row.
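A minimal sketch of that flow in SQL, with hypothetical table and column names (records, item_key, payload):
-- hypothetical schema for the pattern described above
CREATE TABLE records (
  id INT AUTO_INCREMENT PRIMARY KEY,   -- generated primary key
  item_key INT NOT NULL,               -- identifies the logical item across draft and saved rows
  payload TEXT,                        -- the actual user data
  saved BOOLEAN NOT NULL DEFAULT FALSE
);
-- every edit is written as a draft
INSERT INTO records (item_key, payload) VALUES (42, '...');
-- "save" removes the previously saved row, then promotes the draft
DELETE FROM records WHERE item_key = 42 AND saved = TRUE;
UPDATE records SET saved = TRUE WHERE item_key = 42 AND saved = FALSE;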
The way I was thinking of implementing it is to create a base entity called Saveable. Every database entity that extends Saveable would then also have these 'Saveable' properties.
Saveable has:
A generated ID number
A generated non-ID number - the key for the real object
A 'saved' attribute
I would then put a method in Saveable.js to perform the save operation and expose it via the API, plus a method to intercept new writes and store them as unsaved.
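A rough sketch of what the save operation could look like as a LoopBack remote method on that base model; the model name, itemKey property and commit endpoint are all made up for illustration:
// common/models/saveable.js - a sketch, assuming LoopBack 2.x-style remote methods
module.exports = function(Saveable) {
  // promote the draft for an item to "saved", replacing the previously saved row
  Saveable.commit = function(itemKey, cb) {
    Saveable.findOne({ where: { itemKey: itemKey, saved: false } }, function(err, draft) {
      if (err || !draft) return cb(err || new Error('no draft found'));
      // delete the old saved row first, then flip the draft's flag
      Saveable.destroyAll({ itemKey: itemKey, saved: true }, function(err) {
        if (err) return cb(err);
        draft.updateAttribute('saved', true, cb);
      });
    });
  };
  Saveable.remoteMethod('commit', {
    accepts: { arg: 'itemKey', type: 'number', required: true },
    returns: { arg: 'result', type: 'object' },
    http: { verb: 'post', path: '/commit' }
  });
};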
My question is - is this a reasonable way to achieve what I want?

Neo4j custom LOAD CSV

I asked a question a few days ago about how to import an existing database into Neo4j; thanks to the person who explained how to do that. I decided to create a CSV file from my database (around 1 million entries) and to load it from the Neo4j webadmin to test it. The problem is that each row of this database contains redundant data: for example, my database contains actions from different users, but each user can perform multiple actions. The structure of my graph would be a node for each user, linked to each action he performs. That's why I have to create only one node for each user, even if his name appears in several rows of my CSV file (because he made several actions). What is the method to do that? I guess it's possible in Cypher, right?
Thanks a lot
Regards
Sam
In case you have references that might or might not already exist, you should use the MERGE statement. MERGE either finds a matching pattern or creates it in your database.
Please refer to the respective section in the reference manual: http://docs.neo4j.org/chunked/stable/cypherdoc-importing-csv-files-with-cypher.html. In that example the country is shared by multiple users, so the country is MERGEd, whereas the users and their relationships to countries are unconditionally CREATEd.
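Applied to your data, a minimal Cypher sketch, assuming hypothetical CSV columns named user and action:
USING PERIODIC COMMIT
LOAD CSV WITH HEADERS FROM "file:///actions.csv" AS row
// one node per distinct user, however many rows mention them
MERGE (u:User {name: row.user})
// every row is a separate action
CREATE (a:Action {type: row.action})
CREATE (u)-[:PERFORMED]->(a)
MERGE on {name: row.user} matches the existing User node on every later occurrence of a name, so each user is created exactly once; with a million rows you will also want an index or unique constraint on :User(name) so the MERGE lookups stay fast.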