Import JSON into Firebase and create push IDs

Is there a way/tool to import a JSON file containing a list of objects and have Firebase push IDs created for each one on the way in? What I'd like is for each of the things in
{"Top Things" :
[{
"Thingnum": 1,
"place": "place 1"
},
{
"Thingnum": 2,
"place": "place 2"
}]
}
to have its own push ID created.
I've tried firebase-import but it doesn't create push IDs.
Or will I have to write a script?
Cheers

I ended up writing Node scripts. Some kind of DB admin tool would be nice.
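For reference, a minimal sketch of such a Node script, assuming the firebase-admin SDK and hypothetical file names (things.json for the data above, serviceAccountKey.json for credentials):

// Sketch only: reads the JSON file shown in the question and push()es
// each item so Firebase generates a push ID for it on the way in.
const admin = require('firebase-admin');
const data = require('./things.json'); // hypothetical file name

admin.initializeApp({
  credential: admin.credential.cert(require('./serviceAccountKey.json')),
  databaseURL: 'https://your-project.firebaseio.com'
});

const ref = admin.database().ref('Top Things');
Promise.all(data['Top Things'].map(item => ref.push(item)))
  .then(() => process.exit(0));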

Related

How to send a lot of POST requests in JSON format through JMeter?

So I have this huge file of JSON requests that I need to send to an API through POST; there are about 4000 different requests. I tried the CSV method, referencing the JSON_FILE in code, but it didn't work due to a timeout error. I think 4000 files is just too much for this method.
I could create 4000 Thread Groups, each one with its individual JSON request, but that would be a huge amount of manual labor.
Is there any way to automate this process?
The json looks basically like this
{
  "u_id": "00",
  "u_operation": "Address",
  "u_service": "Fiber",
  "u_characteristic": 2,
  "u_name": "Address #1"
},
{
  "u_id": "01",
  "u_operation": "Address",
  "u_service": "TV",
  "u_characteristic": 2,
  "u_name": "Address #2"
}
All the way up to 4000
What is the anticipated usage of the API endpoint? If it's supposed to process 4000 requests at once and it doesn't, this sounds like a bug or a bottleneck, and you need to report it.
If you have a large file with 4000 objects like this:
{
  "u_id": "00",
  "u_operation": "Address",
  "u_service": "Fiber",
  "u_characteristic": 2,
  "u_name": "Address #1"
}
and want to send them one by one with an arbitrary number of users/iterations, you can play the following trick:
Add a setUp Thread Group to your Test Plan
Add a JSR223 Sampler to the setUp Thread Group
Put the following code into the "Script" area:
// Parse the large JSON file into a list of objects
def entries = new groovy.json.JsonSlurper().parse(new File('/path/to/your/large/file.json'))

// Store each entry in its own JMeter property: entry_1, entry_2, ..., entry_4000
entries.eachWithIndex { entry, index ->
    props.put('entry_' + (index + 1), new groovy.json.JsonBuilder(entry).toPrettyString())
}
It will create 4000 JMeter properties (entry_1, entry_2, and so on), each holding one entry from your large file.
Then in the main Thread Group you can use a combination of the __P() and __counter() functions so that each user takes the next "entry" on each iteration:
${__P(entry_${__counter(FALSE,)},)}
Because the first argument of __counter() is FALSE, the counter is shared by all users, so the first request sent (by any user) resolves to entry_1, the next to entry_2, and so on.

Modifying JSON Data Structure in Data Factory

I have a JSON file that I need to move to Cosmos DB. I currently have a PowerShell script that modifies this file into a proper format to be used in a Data Flow or Copy activity in Azure Data Factory. However, I was wondering if there is a way to do all these modifications in Azure Data Factory without using the PowerShell script.
The PowerShell script can manipulate a 50 MB file in a matter of seconds. I would like similar speed if we build something directly in Azure Data Factory.
Without the modification, I get an error because of the "#" sign. Furthermore, if I want to use companyId as my partition key, it is not allowed because it is inside of an array.
The current JSON file looks similar to the below:
{
  "Extract": {
    "consumptionInfo": {
      "Name": "Test Stuff",
      "createdOnTimestamp": "20200101161521Z",
      "Version": "1.0",
      "extractType": "Incremental",
      "extractDate": "20200101113514Z"
    },
    "company": [{
      "company": {
        "#action": "create",
        "companyId": "xxxxxxx-yyyy-zzzz-aaaa-bbbbbbbbbbbb",
        "Status": "1",
        "StatusName": "1 - Test - Calendar"
      }
    }]
  }
}
I would like it to be converted to the below:
{
  "action": "create",
  "companyId": "xxxxxxx-yyyy-zzzz-aaaa-bbbbbbbbbbbb",
  "Status": "1",
  "StatusName": "1 - Test - Calendar"
}
Create a new data flow that reads in your JSON file. Add a Select transformation to choose the properties you wish to send to Cosmos DB. If some of those properties are embedded inside an array, then first use Flatten. You can also use the Select transformation to rename "#action" to "action".
Data Factory and Data Flows don't work well with nested JSON files. In my experience the workaround may be a little complex, but it works well:
Source 1 + Flatten activity 1 to flatten the data in key 'Extract'.
Source 2 (same as Source 1) + Flatten activity 2 to flatten the data in key 'company'.
Add a Union activity 1 in the Source 1 flow to join the data after Flatten activity 2.
Create a Derived Column to filter the columns/keys you want after Union activity 1.
Then create the Azure Cosmos DB as sink.
The Data Flow overview chains these transformations together in that order.
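For illustration only (a sketch of the intermediate shape based on the sample above, not verbatim Data Flow output): after Flatten activity 2, each element of the company array arrives as one flat row, so the Derived Column/Select step only has to project four columns and rename "#action":

{
  "#action": "create",
  "companyId": "xxxxxxx-yyyy-zzzz-aaaa-bbbbbbbbbbbb",
  "Status": "1",
  "StatusName": "1 - Test - Calendar"
}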

branch.io REST API for bulk link creation doesn't preserve order of requests

We use POST /v1/url/bulk/:branch_key for batch deep link generation for some of our items.
The response returns an array of URLs alone. The links work fine, but they are not returned in the order of the items sent in the request.
Is there any way to identify which Branch link belongs to which item?
At least if the response included the item's ID or some other custom data, we could identify the links correctly.
Any hope? Thanks.
At the most basic level, this information is available to you via the Links tab on the Branch dashboard's Liveview & Export page. You can see the last 100 links created on this tab. To see more, you can use the "Export Links" button that appears in the upper right-hand corner of the page.
If you need more than can be retrieved via "Export Links," you can have the app whitelisted for the Data Export API (see: https://dev.branch.io/methods-endpoints/data-export-api/guide/). This provides access to a daily collection of .csv files that include links created and their metadata. To whitelist the app for the Data Export API, send a request to integrations@branch.io. Be sure to include the app's key and to send the request from an email address on the Team tab (https://dashboard.branch.io/settings/team).
You can also query links. For a single link, append "?debug=true" and enter the resulting URL into the address bar of your browser.
You can also script the lookup of link data using the HTTP API: https://github.com/BranchMetrics/branch-deep-linking-public-api#viewing-state-of-existing-deep-linking-urls
The Branch API also allows you to specify a custom alias (the URL slug), so if you simply want an easy way to tie specific bulk-created URLs to the data inside without querying a second time, you could use this as a workaround.
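As a sketch (the alias value below is hypothetical and the other link parameters are trimmed), each object in the bulk request would then carry its own recognizable slug:

{
  "alias": "item-0042",
  "channel": "facebook",
  "data": { "$og_title": "Title1" }
}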
The bulk link creation API returns the links in the same order as the request.
You can test this by creating 3 links and using a particular parameter to differentiate them.
E.g.:
curl -XPOST https://api2.branch.io/v1/url/bulk/key_live_xxxxxxxxxxx -H "Content-Type: application/json" \
-d '[
  {
    "channel": "facebook",
    "feature": "onboarding",
    "campaign": "new product",
    "stage": "new user",
    "tags": ["one", "two", "three"],
    "data": {
      "$canonical_identifier": "content/123",
      "$og_title": "Title1",
      "$og_description": "Description from Deep Link",
      "$og_image_url": "http://www.lorempixel.com/400/400/",
      "$desktop_url": "http://www.example.com",
      "custom_boolean": true,
      "custom_integer": 1243,
      "custom_string": "everything",
      "custom_array": [1,2,3,4,5,6],
      "custom_object": { "random": "dictionary" }
    }
  },
  {
    "channel": "facebook",
    "feature": "onboarding",
    "campaign": "new product",
    "stage": "new user",
    "tags": ["one", "two", "three"],
    "data": {
      "$canonical_identifier": "content/123",
      "$og_title": "Title2",
      "$og_description": "Description from Deep Link",
      "$og_image_url": "http://www.lorempixel.com/400/400/",
      "$desktop_url": "http://www.example.com"
    }
  },
  {
    "channel": "facebook",
    "feature": "onboarding",
    "campaign": "new product",
    "stage": "new user",
    "tags": ["one", "two", "three"],
    "data": {
      "$canonical_identifier": "content/123",
      "$og_title": "Title3",
      "$og_description": "Description from Deep Link",
      "$og_image_url": "http://www.lorempixel.com/400/400/",
      "$desktop_url": "http://www.example.com"
    }
  }
]'
As you can see, we have used $og_title as a unique parameter, and the links created for your app will come back in the same order.
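To make the pairing explicit, here is a minimal Node sketch that relies on the order being preserved as described above; fetch and a response shaped like [{ "url": "https://..." }, ...] are assumptions beyond what the curl example shows:

// Sketch only: POST the same array as in the curl example and pair each
// request object with the URL at the same index of the response.
const requests = [ /* the three link objects from the curl example */ ];

fetch('https://api2.branch.io/v1/url/bulk/key_live_xxxxxxxxxxx', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(requests)
})
  .then(res => res.json())
  .then(links => links.forEach((link, i) =>
    console.log(requests[i].data.$og_title, '->', link.url)));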
Yes, you can identify which link belongs to which item by using the data of the branch.io link; you can pass Branch config parameters as well as your own custom parameters.
Every Branch link includes a dictionary of key:value pairs that is specified by you at the time the link is created. Branch's SDKs make this data available within your app whenever the app is opened via a Branch link click.

Rails 4 JavaScript autocomplete from a JSON file

I would like to build an air cargo app. I want each cargo to be attached to one destination airport.
I found this JSON file. Sample:
"iata": "FOB",
"lon": "-123.79444",
"iso": "US",
"status": 1,
"name": "Fort Bragg Airport",
"continent": "NA",
"type": "airport",
"lat": "39.474445",
"size": "small"
Where should I put the JSON file in a Rails 4 app?
How can I autocomplete airports on both the "iata" and "name" fields?
Given the size (~1.7 MB) of the file, which method other than the "filter method" should I use, preferably in ReactJS?
First, I would create a rake task or something similar to load the JSON into a dedicated database table (for example, a model called Airport). There are plenty of examples of loading JSON into a database this way. It also lets you update the airport data when it changes, and searching becomes much easier since you can use ActiveRecord for it.
Second, I would probably place the JSON file under the config/ folder.
And finally, about autocomplete. Since you haven't said explicitly what you want from the autocomplete, you could for example use jQuery-Autocomplete, with which you could write something like this:
$('#autocomplete').autocomplete({
  lookup: function (query, done) {
    // Query your backend for airports matching the typed text
    $.ajax("www.your-api.com/search?query=" + query).done(function (data) {
      // jQuery-Autocomplete expects { suggestions: [{ value: ..., data: ... }, ...] }
      done({ suggestions: data });
    });
  },
  onSelect: function (suggestion) {
    alert('You selected: ' + suggestion.value + ', ' + suggestion.data);
  }
});
It is hard to give more exact instructions without more detail, but this way you can at least autocomplete on two different fields.
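If you do end up filtering on the client instead (for example inside a React component), a rough sketch of matching on both fields could look like this; matchAirports is a made-up helper, and the field names come from the JSON sample above:

// Hypothetical helper: match the query against both "iata" and "name",
// case-insensitively, and cap the suggestions so the ~1.7 MB list stays snappy.
function matchAirports(airports, query) {
  const q = query.toLowerCase();
  return airports
    .filter(a => a.iata.toLowerCase().startsWith(q) ||
                 a.name.toLowerCase().includes(q))
    .slice(0, 10);
}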

Get file ID of a given path

Is there a direct method to get a file ID by giving a path (e.g. /some/folder/deep/inside/file.txt)? I know this can be done by recursively checking a folder's contents, but a simple call would be much better.
Thanks
We currently don't have support for this, but the feedback will definitely be considered as we continue building out the v2 API.
An alternative would be to extract the target file/folder name from the path and search for it using the search API, like this: https://api.box.com/2.0/search?query=filename.txt
This gives back all the matching entries with their path_collections, which provide the whole hierarchy for every entry. Something like this:
"path_collection": {
"total_count": 2,
"entries": [
{
"type": "folder",
"id": "0",
"sequence_id": null,
"etag": null,
"name": "All Files"
},
{
"type": "folder",
"id": "2988397987",
"sequence_id": "0",
"etag": "0",
"name": "dummy"
}
]
}
The path for this entry can be reverse-engineered as /dummy/filename.txt.
Just compare this path against the path you're looking for. If it matches, that's the search result you want. This is just to reduce the number of REST calls you need to make to arrive at the result. Hope it makes sense.
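A small sketch of that comparison (entryPath is a made-up helper; searchResults stands for the decoded JSON of the search response):

// Rebuild an entry's full path from its path_collection, skipping the
// "All Files" root, then compare it to the path we are looking for.
function entryPath(entry) {
  const folders = entry.path_collection.entries.slice(1).map(f => f.name);
  return '/' + folders.concat(entry.name).join('/');
}

const match = searchResults.entries.find(e => entryPath(e) === '/dummy/filename.txt');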
Here is my approach for getting a folder ID from a path without recursively going through the whole tree; it can easily be adapted for files as well. It is based on PHP and cURL, but it's easy to use the same idea in any other application:
// WE SET THE SEARCH FOLDER:
$search_folder = "XXXX/YYYYY/ZZZZZ/MMMMM/AAAAA/BBBBB";

// WE NEED THE LAST BIT SO WE CAN DO A SEARCH FOR IT
$folder_structure = array_reverse(explode("/", $search_folder));

// We run a cURL request (I'm assuming authentication and all other cURL parameters are already set!)
// to search for the last bit; to search for a file rather than a folder, amend the search query accordingly
curl_setopt($curl, CURLOPT_URL, "https://api.box.com/2.0/search?query=" . urlencode($folder_structure[0]) . "&type=folder");

// Decode the response into an associative array
$json = json_decode(curl_exec($curl), true);

$i = 0;
$notthis = true;

// Loop through the results until we either find a matching element or reach the end of the array
while ($notthis && $i < count($json['entries'])) {
    $result_info = $json['entries'][$i];
    // The path of each search result is kept in a multidimensional array, so we rebuild that path,
    // ignoring the first element (which is always the ROOT, "All Files")
    if ($search_folder == implode("/", array_slice(array_column($result_info['path_collection']['entries'], 'name'), 1)) . "/" . $folder_structure[0]) {
        $notthis = false;
        $folder_id = $result_info['id'];
    } else {
        $i++;
    }
}

if ($notthis) { echo "Path not found...."; } else { echo "Folder id: $folder_id"; }