Batch Processing - OData - JSON

I want to group multiple operations into a single HTTP request payload.
I have an API key that allows me to make GET requests and return tables in a database as JSON blocks. Certain attributes are 'expandable', and OData (Open Data Protocol) allows you to 'expand' multiple attributes within the "CompanyA" table (i.e. Marketing, Sales, HR):
http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc&$expand=Marketing,Sales,HR
I would like to select multiple tables (the request above only covers one table, CompanyA), and I understand this is possible via "Batch Requests":
https://www.odata.org/documentation/odata-version-3-0/batch-processing/
The documentation above, alongside Microsoft's, is hard to translate into what I'm after.
I wanted it to be as simple as the following, but I know it is not, and I can't figure out how to get there:
http://api.blahblah.com/odata/CompanyA,CompanyB,CompanyC?apikey=b8blachblahblachc
The end goal is to have one JSON file containing detail about each table in the DB, rather than having to write each individual query and save each result to a file, as below:
http://api.blahblah.com/odata/CompanyA?apikey=b8blachblahblachc
http://api.blahblah.com/odata/CompanyB?apikey=b8blachblahblachc
http://api.blahblah.com/odata/CompanyC?apikey=b8blachblahblachc
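For what it's worth, the OData v3 batch format linked above wraps one complete HTTP request per part inside a multipart/mixed POST to the service's $batch endpoint. Below is a minimal sketch in Python; it assumes the service exposes the standard /odata/$batch endpoint and accepts the same apikey query parameter there, which you would need to verify with your provider:

import requests

BOUNDARY = "batch_boundary_1"
parts = []
for table in ("CompanyA", "CompanyB", "CompanyC"):
    # Each part is a complete HTTP request, per the OData v3 batch format.
    parts.append(
        "--" + BOUNDARY + "\r\n"
        "Content-Type: application/http\r\n"
        "Content-Transfer-Encoding: binary\r\n"
        "\r\n"
        "GET /odata/" + table + " HTTP/1.1\r\n"
        "Host: api.blahblah.com\r\n"
        "Accept: application/json\r\n"
        "\r\n"
    )
body = "".join(parts) + "--" + BOUNDARY + "--\r\n"

resp = requests.post(
    "http://api.blahblah.com/odata/$batch",
    params={"apikey": "b8blachblahblachc"},  # assumption: key is accepted on $batch
    headers={"Content-Type": "multipart/mixed; boundary=" + BOUNDARY},
    data=body,
)
print(resp.text)

Note that the batch response is itself one multipart document containing a JSON response per part, so a small amount of client-side splitting or merging is still needed to end up with a single combined JSON file.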

Related

How to post the data into multiple tables using Talend RESTful Services

I have 3 tables called PATIENT, PHONE and PATIENT_PHONE.
The PATIENT table contains the columns: id, firstname, lastname, email and dob.
The PHONE table contains the columns: id, type and number.
The PATIENT_PHONE table contains the columns: patient_id, phone_id.
The PATIENT and PHONE tables are mapped via the PATIENT_PHONE table, so I have to join these 3 tables to post the firstname, lastname, email and number fields to the database.
I tried like this:
Schema for first_xmlmap [screenshot]
Schema mapping for Patient and Patient_phone [screenshot]
I'm assuming you want to write the same data to multiple database tables within the same database instance for each request against the web service.
How about using the tHashOutput and tHashInput components?
If you can't see the tHash* components in your component Palette, go to:
File > Edit project properties > Designer > Palette settings...
Highlight the filtered components, click the arrow to move them out of the filter, and click OK.
The tHash components allow you to push some data to memory in order to read it back later. Be aware that this data is written to volatile memory (RAM) and will be lost once the JVM exits.
Ensure that "append" in the tHashOutput component is unchecked and that the tHashInput components are set not to clear their cache after reading.
You can see some simple error handling written into my example, which guarantees that a client will always get some sort of response from the service, even when something goes wrong while processing the request.
Also note that writing to the database tables is an all-or-nothing transaction: the service only writes data to the specified tables when there are no errors while processing the request.
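Outside of Talend, that all-or-nothing write across the three tables boils down to a single database transaction. A minimal sketch of the equivalent logic in Python with sqlite3 (column names are taken from the question; the default phone type value is a hypothetical example):

import sqlite3

def add_patient_with_phone(db_path, firstname, lastname, email, number, phone_type="home"):
    # Insert into PATIENT, PHONE and the PATIENT_PHONE join table atomically.
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # one transaction: commits on success, rolls back on any error
            cur = conn.execute(
                "INSERT INTO PATIENT (firstname, lastname, email) VALUES (?, ?, ?)",
                (firstname, lastname, email))
            patient_id = cur.lastrowid
            cur = conn.execute(
                "INSERT INTO PHONE (type, number) VALUES (?, ?)",
                (phone_type, number))
            phone_id = cur.lastrowid
            conn.execute(
                "INSERT INTO PATIENT_PHONE (patient_id, phone_id) VALUES (?, ?)",
                (patient_id, phone_id))
    finally:
        conn.close()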
Hopefully this gives you enough of an idea about how to extend such functionality to your own implementation.

Different companies listed in Exact Online REST API system divisions and XML API Administrations

When I download the list of companies using the end point Administrations, either through the user front end or directly using an HTTP GET, I receive an XML document with contents such as:
<?xml version="1.0" encoding="UTF-8"?>
<eExact xsi:...>
  <Administrations>
    <Administration>
      ...
    </Administration>
  </Administrations>
</eExact>
I can also receive the list of companies using the REST API system/divisions.
In general the number and names of the companies listed in both are equal, although some fields are present in the XML API that are not present in the REST API, and vice versa.
However, sometimes the contents differ. For instance, today I had a scenario where only 2 companies were listed in the XML topic, but over 900 in system/divisions.
This occurs both when using the APIs directly and when going through Invantive SQL.
Why is the outcome different?
You can also use one of the four views:
AllAdministrations (similar to Administrations)
AllAdministrationCustomers (-)
AllAdministrationClassifications (similar to AdministrationClassifications)
AllAdministrationAssignedTypes (similar to AdministrationAssignedTypes)
These query the administrations across all subscriptions an accountant has access to.
All topics are read using a specific company (named 'division' in the URL) to retrieve the data from.
The system/divisions REST API returns ALL companies accessible to the current user, so the outcome does not depend on the division used in the URL request.
However, the XML topic Administrations returns ONLY the accessible companies that belong to the SAME customer account as the division used in the URL request.
A customer account is a group of one or more companies that is billed independently. For entrepreneur licenses, this is generally the same list of companies.
However, for an accountant the outcome depends on which company is used, since they may have hundreds of different customers, each with their own licenses, plus many companies under their own customer code.
In general, it is wiser to use system/divisions.
However, when you need additional fields or, for instance, the classifications of a company, you will need to use the XML API. The easiest way to determine the minimum number of companies to retrieve the XML API Administrations data for is as follows (a code sketch follows the list):
First retrieve all system/divisions.
For every different value of customercode, find one division, for instance the minimum value.
For each of these divisions, access the end point Administrations.
Combine the output of each of those.
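A minimal sketch of those four steps in Python; the host, endpoint paths and field names (Code, CustomerCode) are assumptions to be checked against the Exact Online documentation, and authentication is reduced to a placeholder bearer token:

import requests

BASE = "https://start.exactonline.nl"   # assumption: country-specific host
HEADERS = {"Authorization": "Bearer <token>", "Accept": "application/json"}

# Step 1: retrieve all divisions via the REST API.
divisions = requests.get(BASE + "/api/v1/current/system/divisions",
                         headers=HEADERS).json()["d"]["results"]

# Step 2: per distinct customer code, keep one division (the minimum).
chosen = {}
for div in divisions:
    customer, code = div["CustomerCode"], div["Code"]
    if customer not in chosen or code < chosen[customer]:
        chosen[customer] = code

# Steps 3 and 4: read the XML API Administrations topic per chosen division
# and combine the outputs (collected in a list here).
administrations = []
for code in chosen.values():
    resp = requests.get(BASE + "/docs/XMLDownload.aspx",
                        params={"Topic": "Administrations", "_Division_": code},
                        headers=HEADERS)
    administrations.append(resp.text)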

Couchbase - Splitting a JSON object into many key-value entries - performance improvement?

Say my Couchbase DB has millions of user objects, and each user object contains some primitive fields (score, balance, etc.).
And say I read and write most of those fields on every server request.
I see 2 options of storing the User object in Couchbase:
A single JSON object mapped to a user key (e.g. user_555)
Mapping each field into a separate entry (e.g. score_555 and balance_555)
Option 1 - Single CB lookup, JSON parsing
Option 2 - Twice the lookups, less parsing if any
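For concreteness, here is what the two layouts look like through the key-value API; a minimal sketch assuming the Couchbase Python SDK 4.x, with placeholder cluster credentials and bucket name:

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster("couchbase://localhost",
                  ClusterOptions(PasswordAuthenticator("user", "password")))
coll = cluster.bucket("game").default_collection()

# Option 1: one document per user -- a single lookup plus JSON parsing.
user = coll.get("user_555").content_as[dict]
score, balance = user["score"], user["balance"]

# Option 2: one entry per field -- one lookup per field, minimal parsing.
score = coll.get("score_555").content_as[int]
balance = coll.get("balance_555").content_as[float]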
How can I tell which one is better in terms of performance?
What if I had 3 fields? What if 4? Does it make a difference?
Thanks
Eyal
Think about your data structure and access patterns first, before worrying about whether JSON parsing or extra lookups will add overhead to your system.
From my perspective and experience, I would try to model documents based upon logical object groupings: I would store 'user' attributes together. If you were to store each field separately, you'd have to do a series of lookups whenever you wanted to provide a client or service with a full overview of the player profile.
I've used Couchbase as the main data store for a social mobile game; we store 90% of user data in a user document, which contains all the relevant fields such as score, level and progress. For the majority of operations, such as a new score or an upgrade, we want to deal with the whole User object in the application layer, so it makes sense to inflate the user object from the CB document, alter or read what we need, and then persist it again if there have been changes.
The only time we have ID references to other documents is for player purchases, where we have an array of IDs, each referencing a separate purchase document. We do this because we wanted richer information on each purchase (date of transaction, transaction ID, product type, etc.) that isn't relevant to the user document: when a purchase is made we verify it's legitimate, then add it to the User inventory and create the separate purchase document.
So our structure is:
UserDoc:
- Fields specific to a User (score, level, progress, friends, inventory)
- Arrays of IDs pointing to specific purchases
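As a sketch, with all values, and any field names beyond those mentioned above, being purely illustrative:

user_doc = {                 # key: "user_555"
    "score": 1200,
    "level": 7,
    "progress": 0.42,
    "friends": ["user_17", "user_23"],
    "inventory": ["sword", "shield"],
    "purchases": ["purchase_9001", "purchase_9002"],  # IDs of separate docs
}

purchase_doc = {             # key: "purchase_9001"
    "transaction_date": "2014-05-01T10:00:00Z",
    "transaction_id": "TX-1234",
    "product_type": "coin_pack",
}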
The only time I'd consider splitting out specific fields as you outlined above would be if your user document got seriously large, but I think it's best to divide documents up by logical groupings of data as opposed to specific fields.
Hope that helped!

Where does aggregated data fit in in a REST JSON API and a mobile client?

I have an iOS app talking to a REST JSON API. I have mapped the model resources 1-1 with the controllers/endpoints in the API, e.g. User (/users), Friendship (/friendships), Rating (/ratings), RatedObject (/rated_objects), etc. On the device I'm using RestKit/CoreData to store and sync all objects.
Now I'm starting to need different kinds of aggregated data, e.g. different rating averages and rating counts on the RatedObject depending on the friendship type. I have solved it for now by adding the data to the RatedObject type:
RatedObject
name
size
ratingAverageFromCloseFriends
ratingCountFromCloseFriends
ratingAverageFromFamily
ratingCountFromFamily
ratingAverageFromAllFriends
ratingCountFromAllFriends
But as more kinds of aggregated data appear on different kinds of objects, it is getting out of hand. I also sometimes need to get the average from only one specific friend, and that can't be added to the model.
I store all data locally on the iOS device, and the aggregated data should be easy to update from the server.
How should I solve this in a way that is easy and natural for both the client and the server?
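For illustration only (this pattern is not from the thread, and every endpoint and field name below is hypothetical): one way to stop aggregates from piling up on RatedObject is a dedicated, parameterized aggregate resource, so that "one specific friend" becomes just another parameter:

import requests

# Hypothetical aggregate endpoint: the scope parameter selects the friendship
# type, or a single friend via friend_id.
resp = requests.get(
    "https://api.example.com/rated_objects/42/rating_summary",
    params={"scope": "close_friends"},   # or {"friend_id": 7}
)
summary = resp.json()
# Hypothetical response shape:
# {"rated_object_id": 42, "scope": "close_friends",
#  "rating_average": 3.8, "rating_count": 17}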

Tridion 2009 embedded metadata storage format in the broker

I'm fairly new to Tridion and I have to implement functionality that will allow a content editor to create a component and assign multiple date ranges (available dates) to it. These will need to be queried from the broker to provide search functionality.
Originally, this required only a single start and end date, so it was implemented as individual metadata fields.
I am proposing to use an embedded schema within the schema's 'available dates' metadata field to allow multiple start and end dates to be assigned.
However, as the field now allows multiple values, the data is stored in the broker as comma-separated values in the KEY_STRING_VALUE column, rather than as date values in the KEY_DATE_VALUE column as it was when only a single pair of start and end values was allowed.
e.g.:
KEY_NAME | KEY_STRING_VALUE
end_date | 2012-04-30T13:41:00, 2012-06-30T13:41:00
start_date | 2012-04-21T13:41:00, 2012-06-01T13:41:00
This is now causing issues with my broker querying, as I can no longer use simple query logic to retrieve the items I require for the search based on the dates.
Before I start writing C# logic to parse these comma-separated dates and search based on those, I was wondering if anyone has had similar requirements/experiences in the past and has implemented this in a different way, to reduce the amount of parsing code required and to use broker querying to complete the search.
I'm developing this on Tridion 2009 but using the 5.3 Broker (for legacy reasons), so the query currently looks like this (for the single start/end dates):
query.SetCustomMetaQuery("(KEY_NAME='end_date' AND KEY_DATE_VALUE>'" + startDateStr + "') AND (ITEM_ID IN (SELECT ITEM_ID FROM CUSTOM_META WHERE KEY_NAME='start_date' AND KEY_DATE_VALUE<'" + endDateStr + "'))");
Any help is greatly appreciated.
Just wanted to come back and give some details on how I finally approached this, should anyone else face the same scenario.
I proposed a set number of fields to the client (as suggested by Miguel), but the client wasn't happy with that level of restriction.
Therefore, I ended up implementing the embeddable schema containing the start and end dates, which gave the most flexibility. However, limitations in the Broker API meant that I had to access the Broker DB directly. Not ideal, but the client agreed to the approach to get the functionality required. Obviously this would need to be revisited should any upgrades be made in the future.
All the processing of dates and available periods was done in C#, and the performance of the solution is actually pretty good.
One thing I did discover that caused some issues: if you have multiple values for the field using the embedded schema (i.e., in this case, multiple start and end dates), the metadata is stored in the KEY_STRING_VALUE column of the CUSTOM_META table. However, if you only have a single value in the field (i.e. one start and end date), it is stored as a date in the KEY_DATE_VALUE column, just as if you'd used single fields rather than an embeddable schema. It seems a sensible approach for Tridion to take, but it makes the queries and the parsing code slightly more complicated!
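The date processing in the actual solution was done in C#; purely as an illustration of the parsing involved, here is the equivalent pairing logic sketched in Python, using the sample values from the question:

from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S"

def parse_ranges(start_csv, end_csv):
    # Pair up the comma-separated KEY_STRING_VALUE lists into (start, end) ranges.
    starts = [datetime.strptime(v.strip(), FMT) for v in start_csv.split(",")]
    ends = [datetime.strptime(v.strip(), FMT) for v in end_csv.split(",")]
    return list(zip(starts, ends))

def overlaps(ranges, search_start, search_end):
    # True if any available-date range overlaps the search window.
    return any(s < search_end and e > search_start for s, e in ranges)

ranges = parse_ranges("2012-04-21T13:41:00, 2012-06-01T13:41:00",
                      "2012-04-30T13:41:00, 2012-06-30T13:41:00")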
This is a complex scenario, as you will have to go through all the DCPs and parse those strings to determine whether they match the search criteria.
There is a way you could convert that comma-separated metadata into single values in the broker, but the names of the fields need to be different: Range1, Range2, ..., RangeN.
You can do that with a deployer extension, where you change the XML structure of the package and convert each of those strings into different values (1, 2, ..., n).
This extension can take some time if you are not familiar with deployer extensions, and it doesn't solve your scenario 100%.
The problem with this is that you still have to apply several conditions to retrieve those values, and there is always a limit you have to set (versus the user, who can add as many values as they want).
Sample:
query.SetCustomMetaQuery("(KEY_NAME='end_date1' AND KEY_DATE_VALUE>'" + startDateStr + "')");
query.SetCustomMetaQuery("(KEY_NAME='end_date2' AND KEY_DATE_VALUE>'" + startDateStr + "')");
query.SetCustomMetaQuery("(KEY_NAME='end_date3' AND KEY_DATE_VALUE>'" + startDateStr + "')");
query.SetCustomMetaQuery("(KEY_NAME='end_date4' AND KEY_DATE_VALUE>'" + startDateStr + "')");
Probably the fastest and easiest way to achieve this is, instead of using a multi-value field, to use different fields. I understand that this is not the most generic scenario and there are business-requirement implications, but it can simplify the development.
My previous comments are in the context of using only the Broker API, but you can take advantage of a search engine if one is part of your architecture.
You can index the Broker database and massage the data.
Using the search engine API you can extract the IDs of the Components/Component Templates and use the Broker API to retrieve the proper information.