How can I get the data in nodes like "01_01_2017_01_00_AM", which are dynamic and which each have multiple children with auto-generated push IDs? I need to process the results matching the current date in the above format and then display them.
By the way I am working with Firebase and Cloud Functions.
(Database preview screenshot)
You can only query one level at a time in Firebase.
You could do a query like this, if you know the key under mail_data:
admin.database().ref('mail_data/e...IG2')
  .orderByKey()
  .equalTo('01_01_2017_01_00_AM')
  .once('value')
  .then(snapshot => { ... });
That would get you the snapshot of the matching node.
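To then handle the auto-generated push IDs under the matching date node, something like the following sketch could sit in your Cloud Function. It assumes your keys follow the MM_DD_YYYY_HH_MM_AM/PM pattern shown above; the currentDateKey helper is illustrative, not part of the Firebase API.

// builds a key like '01_01_2017_01_00_AM' for the current hour
// (assumes MM_DD_YYYY ordering - swap the parts if your keys are DD_MM)
const pad = n => String(n).padStart(2, '0');
function currentDateKey() {
  const d = new Date();
  const hours = d.getHours() % 12 || 12;
  const ampm = d.getHours() < 12 ? 'AM' : 'PM';
  return `${pad(d.getMonth() + 1)}_${pad(d.getDate())}_${d.getFullYear()}` +
    `_${pad(hours)}_${pad(d.getMinutes())}_${ampm}`;
}

admin.database().ref('mail_data/e...IG2') // parent key elided as in the question
  .orderByKey()
  .equalTo(currentDateKey())
  .once('value')
  .then(snapshot => {
    snapshot.forEach(dateNode => {
      // each child under the date key is an auto-generated push ID
      dateNode.forEach(pushChild => {
        console.log(pushChild.key, pushChild.val());
      });
    });
  });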
Related
What is the best Java-based approach to return dynamic data as JSON to a front end? For example: an API call returns 10 columns and their values, but that can change to 20 columns in the next call. I am looking for the best REST API approach. Thanks.
I thought of keeping the queries in a database table, picking them up from there, executing them into a collection, and returning that as JSON. Whenever a query changes, I would update the stored query with the modified columns.
I am trying to accomplish something that is pretty easy to do in SQL, but seemingly very challenging to do in SSIS without using SQL. Basically, I need to consolidate and concatenate a field of a many-to-one relationship.
Given entities: [Contract Item] (many) to (one) [Account]
There is a field [ari_productsummary] that contains the product listed on the Contract Item entity. We want to write that value to the Account as [ari_activecontractitems]. However, an Account may have more than one Contract Item record associated to it, in which case, we want to concatenate those values. We also only want the distinct values to be concatenated (distinct rows already solved within my data flow).
This can be accomplished by writing to a temporary table and then using a query or view to obtain the summarized results, as follows. I created a SQL table called TESTTABLE that contains the [ari_productsummary] from the Contract Item entity along with the referring [accountid] to map it back to Account. I then wrote the following query as a view:
SELECT distinct accountid,
(SELECT TT2.ari_productsummary + '; '
FROM TESTTABLE TT2
WHERE TT2.accountid = TT.accountid
FOR XML PATH ('')
) AS 'ari_activecontractitems'
FROM TESTTABLE TT
Executing that query gives me the results I want, which I can then use for importing into the Account entity.
But how do I do this in an SSIS data flow without writing to a SQL table as a temporary placeholder for the data? I want to do the entire process inside one data flow container, without a temporary SQL table or view; the whole summarization needs to happen on the fly.
Does anyone have a solution that doesn't require a temporary SQL table/view/query, but is contained entirely within the data flow?
I am using VS 2017 and the KingswaySoft Dynamic CRM 365 ETL toolset to develop my solution/package.
Spitballing here, as I don't know Dynamics, nor do I have the custom components.
Data Flow 1 - Contract aggregation
The purpose of this data flow is to replicate your logic in the elegant query you provided and shove that into a Cache Connection Manager (see Notes for 2008+ at the end)
KingswaySoft Dynamics Source -> Script Component -> Cache Transform
If you want to keep the sort in there, do it before the Script Component. The approach I'll take with the Script Component is fully blocking - that is, all the rows must arrive before it can send any on. Transformations like the Merge Join are only partially blocking, because the requirement of sorted data means that once you no longer have a match for the current item, you can send it on down the pipeline.
The Script Component is going to be an asynchronous transformation. You'll have two output columns: your key, accountid, and your new derived column, ari_activecontractitems. That column might need to be big - you'll know your data best, but if it's a blob type in Dynamics (> 4k unicode or > 8k ASCII characters) then you'll have to define the data type as DT_NTEXT/DT_TEXT.
As inputs, you'll select accountid and ari_productsummary from your source.
The code should be pretty easy. We're going to accumulate the inbound data into a Dictionary.
// member variable: maps each accountid to its list of distinct summaries
Dictionary<string, List<string>> accumulator;
In the PreExecute method, we'll initialize our variable:
// initialize in PreExecute
accumulator = new Dictionary<string, List<string>>();
In Input0_ProcessInputRow (the per-row method the script component generates):
// Row.accountid and Row.ari_productsummary are the input columns
// selected from the source
if (!accumulator.ContainsKey(Row.accountid))
{
    // first time we've seen this account - give it an empty list
    accumulator.Add(Row.accountid, new List<string>());
}

// add the summary only if we don't already have it (distinct values)
if (!accumulator[Row.accountid].Contains(Row.ari_productsummary))
{
    accumulator[Row.accountid].Add(Row.ari_productsummary);
}
Once you get the signal that no more data is available (the EndOfRowset check in the generated ProcessInput override), that's when you start sending output rows. The auto-generated code will have placeholders for all this.
// This is how we shove data out the pipe once the last row has arrived
foreach (var kvp in accumulator)
{
    Output0Buffer.AddRow();
    Output0Buffer.accountid = kvp.Key;
    Output0Buffer.ari_activecontractitems = string.Join("; ", kvp.Value);
}
We have an upcoming release that comes with a component that does exactly what you are trying to achieve, without the need to write custom code. The feature is currently in preview; please reach out to us for private access. You can find our contact information on our website.
UPDATE - June 5, 2020: we have made the components available for public access at https://www.kingswaysoft.com/products/ssis-productivity-pack/ as part of our 2020 Release Wave 1. We have two components that serve this kind of purpose. The Composition component takes input values and transforms them into a composite value in an SSIS column. The Decomposition component does the opposite: it takes an input value and splits it into multiple rows, using either delimiter-based text splitting or XML/JSON array splitting.
I am trying to obtain a data set via JSON and URL, using SODA's API. The issue is that the data set is greater than 50K records, and I need to sort it using multiple keys. Sorting by multiple keys is not permitted by SODA's API. The question is: how could I get around that?
Example (This table is small, but for illustrative purposes I have included it):
https://data.medicare.gov/resource/apyc-v239.json?$Limit=1000&$Order=measure_id
but once I attempt to add state to the order the API errors out.
The data set above is only 3,800 records; however, there are other datasets with over 250,000 records that require the same approach - sorting, then paging through the results.
Any assistance would be greatly appreciated.
Try the new API endpoint for that dataset: http://dev.socrata.com/foundry/#/data.medicare.gov/apyc-v239
It'll allow you to sort by multiple columns and there's no maximum for $limit on the new endpoints, so you can do stuff like this:
https://data.medicare.gov/resource/q7p2-jxeh.json?$order=state,measure_name&$limit=100000000
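If you do have to stay on an endpoint that caps the page size, a workaround is to keep the multi-column $order and page through with $offset. A minimal sketch, assuming Node.js 18+ (global fetch) and the dataset ID from the answer above:

const base = 'https://data.medicare.gov/resource/q7p2-jxeh.json';

// pages through the full dataset 1000 rows at a time,
// sorted by state, then measure_name
async function fetchAllRows() {
  const pageSize = 1000;
  const rows = [];
  for (let offset = 0; ; offset += pageSize) {
    const url = `${base}?$order=state,measure_name&$limit=${pageSize}&$offset=${offset}`;
    const page = await (await fetch(url)).json();
    rows.push(...page);
    if (page.length < pageSize) break; // a short page means we've hit the end
  }
  return rows;
}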
I am currently designing a REST API and am a little stuck on performance matters for two of the use cases in the system:
List all campaigns (api/campaigns) - needs to return the campaign data required for listing and paging campaigns. It may return up to 1,000 records, and it would take ages to retrieve and return detailed data. The needed data can be returned in a single DB call.
Retrieve campaign item (api/campaigns/id) - needs to return all data about the campaign and may take up to a second to run. Multiple DB calls are needed to get all campaign data for a single campaign.
My question is: is it valid to return different JSON responses for those two calls (if well documented), even though they concern the same resource? I am thinking that the list response would be a subset of the retrieve response. The reason for this is to save DB calls, bandwidth, and parsing.
Thanks in advance!
I think it's both fine and expected for /campaigns and /campaigns/{id} to return different information. I would suggest using query parameters to limit the amount of information you need to return. For instance, only return a URI to each player unless you see a ?expand=players query parameter, in which case you return detailed player information.
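For illustration, the two response shapes might look like this (all field names and values here are hypothetical):

// GET /api/campaigns - list view, a summary per campaign
[
  { "id": 17, "name": "Spring Sale", "status": "active" },
  { "id": 18, "name": "Summer Push", "status": "draft" }
]

// GET /api/campaigns/17 - detail view, the full resource
{
  "id": 17,
  "name": "Spring Sale",
  "status": "active",
  "budget": 25000,
  "players": "/api/campaigns/17/players"
}

Here players stays a URI in the detail view unless ?expand=players is passed, per the suggestion above.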
I'm working on a C# program that retrieves data from a ServiceNow database and converts that data into C# .NET objects. I'm using the JSON Web Service to return my data in JSON format.
What I want to achieve is as follows: if there is a relational mapping for a value (for example, I have a table called Company, where CEO is not a TEXT field but a sys_id reference to an Employee table), I want to be able to output that data not as a sys_id (or just the name property via the 'displayvalue' parameter) but as an object displayed in JSON.
This means that the value of a property should be an object in JSON instead of just a single value.
A few examples:
// I don't want the JSON like this
{"Company":{"CEO":"b181e841c9212c008aeb36850331fab2"}}
// Nor with just the display name resolved from the sys_id
{"Company":{"CEO":"James Henderson" }}
// I want the data as follows, so I can have all the data I need inside a single JSON record.
{"Company":{"CEO":{"name":"James Henderson", "age":34, "sex":"male", "office":"SBN Left Floor 23"}}}
From reading the documentation, I couldn't find anything in the JSON Web Service that allows me to display the information like this, nor could I find any alternative. It should have something to do with joining the tables and displaying it all in the right format.
I have been using SNC for almost three years and have not found a way to automatically join tables in a web service. Your best option would be to use a scripted web service that takes, say, a query parameter and a table parameter. Then you can JSON-serialize your result as you see fit.
Or, another option would be to generate a new processor that traverses the GlideRecord object. The ?JSON parameter you pass into the URL is merely a flag that routes your request to a particular processor. Unfortunately, the out-of-box one is, I believe, a Java class rather than a JS script, so you would need to write a script much like the one mentioned above that traverses the object path, serializing the object graph as far down as you want to go.
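To make the scripted suggestion concrete, here is a minimal sketch written as a Scripted REST API resource script - one way to realize the idea. The table and field names (core_company, u_ceo, u_age, u_sex, u_office) are hypothetical; substitute your own.

(function process(request, response) {
    var companies = [];
    var company = new GlideRecord('core_company'); // hypothetical Company table
    company.query();
    while (company.next()) {
        // follow the sys_id reference to the Employee record
        var ceo = company.u_ceo.getRefRecord();
        companies.push({
            CEO: {
                name: ceo.getValue('name'),
                age: ceo.getValue('u_age'),
                sex: ceo.getValue('u_sex'),
                office: ceo.getValue('u_office')
            }
        });
    }
    response.setBody({ Company: companies });
})(request, response);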