I have this JSON structure:
{
"Devices" : {
"1EJ8QmEQBJfBez7PMADbftCjVff1" : {
"Device1" : {
"Category" : "Others",
"Description" : "",
"DeviceName" : "",
"ImageUrl" : ""
},
"Device2" : {
"Category" : "Chairs",
"Description" : "",
"DeviceName" : "",
"ImageUrl" : ""
},
"Device3" : {
"Category" : "Others",
"Description" : "",
"DeviceName" : "",
"ImageUrl" : ""
}
},
"97PUAcUC5UYxLpBOLnC4yQjxiEf2" : {
"Device1" : {
"Category" : "Others",
"Description" : "",
"DeviceName" : "",
"ImageUrl" : ""
},
"Device2" : {
"Category" : "Books",
"Description" : "",
"DeviceName" : "",
"ImageUrl" : ""
},
"Device3" : {
"Category" : "Chairs",
"Description" : "",
"DeviceName" : "",
"ImageUrl" : ""
}
}
},
"UserProfile" : {
"1EJ8QmEQBJfBez7PMADbftCjVff1" : {
"city" : "",
"email" : "",
"name" : "",
"phone" : ""
},
"97PUAcUC5UYxLpBOLnC4yQjxiEf2" : {
"city" : "",
"email" : "",
"name" : "",
"phone" : ""
}
}
}
I want to access the Category node in each device. Is that possible?
I want to retrieve the data for all nodes that have the same category!
let ref = FIRDatabase.database().reference().child("Devices")
ref.queryOrderedByChild("Category")
I have tried accessing Devices and then going directly to Category, but it seems this is not the correct way.
Please don't tell me that I need to change the structure of the JSON!
Is there any way to access the user IDs?
In NoSQL databases you will often have to model your data for the use cases that your app needs. With your current data structure you can efficiently load a user's devices.
To also allow efficiently loading the devices for a category, you will have to store that mapping too. Here's one way to do that:
"Categories": {
"Others": {
"1EJ8QmEQBJfBez7PMADbftCjVff1_Device1": "1EJ8QmEQBJfBez7PMADbftCjVff1/Device1",
"1EJ8QmEQBJfBez7PMADbftCjVff1_Device3": "1EJ8QmEQBJfBez7PMADbftCjVff1/Device3",
"97PUAcUC5UYxLpBOLnC4yQjxiEf2_Device1": "97PUAcUC5UYxLpBOLnC4yQjxiEf2/Device1"
},
"Chairs": {
"1EJ8QmEQBJfBez7PMADbftCjVff1_Device2": "1EJ8QmEQBJfBez7PMADbftCjVff1/Device2",
"97PUAcUC5UYxLpBOLnC4yQjxiEf2_Device3": "97PUAcUC5UYxLpBOLnC4yQjxiEf2/Device3"
},
"Books": {
"97PUAcUC5UYxLpBOLnC4yQjxiEf2_Device2": "97PUAcUC5UYxLpBOLnC4yQjxiEf2/Device2"
}
}
So the above structure uses <uid>_<devicekey> as the key and the path of that item under /Devices as the value.
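To then load all the devices in a category, read the index first and fetch each device by its path. A minimal Swift sketch (using the same FIRDatabase API style as in your snippet; "Chairs" is just an example):

let root = FIRDatabase.database().reference()

root.child("Categories/Chairs").observeSingleEventOfType(.Value, withBlock: { snapshot in
    for child in snapshot.children.allObjects as! [FIRDataSnapshot] {
        // each value is a "<uid>/<devicekey>" path below /Devices
        guard let path = child.value as? String else { continue }
        root.child("Devices").child(path).observeSingleEventOfType(.Value, withBlock: { deviceSnapshot in
            print(deviceSnapshot.key, deviceSnapshot.value)
        })
    }
})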
I recommend reading this great article on NoSQL data modeling.
I'm trying to edit an endpoint on a REST API that gives me an array of objects. I want to edit the JSON, but I've been having trouble formatting the HTTP request.
The output of the endpoint is something like:
"result" : [
{
"MAC" : "00:08:00:4A:A1:B3",
"available" : true,
"bridge" : "br0",
"ipv4" : {
"dns1" : "",
"dns2" : "",
"gateway" : "",
"ip" : "",
"mask" : "",
"mode" : ""
},
"ipv6" : {
"delegatedPrefixLength" : 64,
"dns1" : "",
"dns2" : "",
"enabled" : false,
"fixedIp" : [],
"gateway" : "",
"ip" : [],
"linkLocalIp" : [],
"mode" : "DELEGATED",
"prefixDelegationEnabled" : false
},
"name" : "eth0",
"nitype" : "ETHER",
"type" : "LAN"
},
{
"available" : false,
"bridge" : "br0",
"ipv4" : {
"dns1" : "",
"dns2" : "",
"gateway" : "",
"ip" : "",
"mask" : "",
"mode" : ""
},
"ipv6" : {
"delegatedPrefixLength" : 64,
"dns1" : "",
"dns2" : "",
"enabled" : false,
"fixedIp" : [],
"gateway" : "",
"ip" : [],
"linkLocalIp" : [],
"mode" : "DELEGATED",
"prefixDelegationEnabled" : false
},
"name" : "eth1",
"nitype" : "ETHER",
"type" : "LAN"
}
]
I need to be able to append some fields to the first object in the array. I have tried:
curl -k -X PUT -H "Content-Type: application/json" -d '{[{"ipv4":{"mode":"DHCP"},"name": "eth0", "type":WAN}]"}' https://192.168.2.1/api/ni?token=$token1
but I keep getting an error saying that it's expecting an object/value/array.
Any suggestions?
It looks like your PUT data is not valid JSON: the array inside the braces has no top-level field name:
{[{"ipv4":{"mode":"DHCP"}, ...
The API is probably expecting something like this:
{"request": [{"ipv4":{"mode":"DHCP"}, ...
Check your documentation & examples for that API.
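Note that your payload has other problems too: WAN is unquoted and there is a stray "} at the end. Assuming the API really expects a request wrapper key (that name is just a guess; check the docs), a corrected call would look like:

curl -k -X PUT -H "Content-Type: application/json" \
    -d '{"request":[{"ipv4":{"mode":"DHCP"},"name":"eth0","type":"WAN"}]}' \
    "https://192.168.2.1/api/ni?token=$token1"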
I have a number of TSV files as Azure blobs that have the following as the first four tab-separated columns:
metadata_path, document_url, access_date, content_type
I want to index them as described here: https://learn.microsoft.com/en-us/azure/search/search-howto-index-csv-blobs
My request for creating an indexer has the following body:
{
"name" : "webdata",
"dataSourceName" : "webdata",
"targetIndexName" : "webdata",
"schedule" : { "interval" : "PT1H", "startTime" : "2017-01-09T11:00:00Z" },
"parameters" : { "configuration" : { "parsingMode" : "delimitedText", "delimitedTextHeaders" : "metadata_path,document_url,access_date,content_type" , "firstLineContainsHeaders" : true, "delimitedTextDelimiter" : "\t" } },
"fieldMappings" : [ { "sourceFieldName" : "document_url", "targetFieldName" : "id", "mappingFunction" : { "name" : "base64Encode", "parameters" : "useHttpServerUtilityUrlTokenEncode" : false } } }, { "sourceFieldName" : "document_url", "targetFieldName" : "url" }, { "sourceFieldName" : "content_type", "targetFieldName" : "content_type" } ]
}
I am receiving an error:
{
"error": {
"code": "",
"message": "Data source does not contain column 'document_url', which is required because it maps to the document key field 'id' in the index 'webdata'. Ensure that the 'document_url' column is present in the data source, or add a field mapping that maps one of the existing column names to 'id'."
}
}
What am I doing wrong?
The JSON you supply is invalid. The following is the general request format for creating an indexer; for details, refer to this document:
{
"name" : "Required for POST, optional for PUT. The name of the indexer",
"description" : "Optional. Anything you want, or null",
"dataSourceName" : "Required. The name of an existing data source",
"targetIndexName" : "Required. The name of an existing index",
"schedule" : { Optional. See Indexing Schedule below. },
"parameters" : { Optional. See Indexing Parameters below. },
"fieldMappings" : { Optional. See Field Mappings below. },
"disabled" : Optional boolean value indicating whether the indexer is disabled. False by default.
}
If we want to create an indexer with the REST API, we need three steps to do that. I also made a demo of it.
If the Azure Search SDK is acceptable, you could also refer to another SO thread.
1. Create a datasource.
POST https://[service name].search.windows.net/datasources?api-version=2015-02-28-Preview
Content-Type: application/json
api-key: [admin key]
{
"name" : "my-blob-datasource",
"type" : "azureblob",
"credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
"container" : { "name" : "my-container", "query" : "<optional, my-folder>" }
}
2. Create an index.
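As with the datasource, this definition is POSTed to the service (with the same Content-Type and api-key headers as in step 1):

POST https://[service name].search.windows.net/indexes?api-version=2015-02-28-Preview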
{
"name" : "my-target-index",
"fields": [
{ "name": "metadata_path","type": "Edm.String", "key": true, "searchable": true },
{ "name": "document_url", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false },
{ "name": "access_date", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false },
{ "name": "content_type", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
]
}
3. Create an indexer.
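The indexer is POSTed to the indexers endpoint (again with the same headers):

POST https://[service name].search.windows.net/indexers?api-version=2015-02-28-Preview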
Below is the request body that works:
{
"name" : "webdata",
"dataSourceName" : "webdata",
"targetIndexName" : "webdata",
"schedule" :
{
"interval" : "PT1H",
"startTime" : "2017-01-09T11:00:00Z"
},
"parameters" :
{
"configuration" :
{
"parsingMode" : "delimitedText",
"delimitedTextHeaders" : "document_url,content_type,link_text" ,
"firstLineContainsHeaders" : true,
"delimitedTextDelimiter" : "\t",
"indexedFileNameExtensions" : ".tsv"
}
},
"fieldMappings" :
[
{
"sourceFieldName" : "document_url",
"targetFieldName" : "id",
"mappingFunction" : {
"name" : "base64Encode",
"parameters" : {
"useHttpServerUtilityUrlTokenEncode" : false
}
}
},
{
"sourceFieldName" : "document_url",
"targetFieldName" : "document_url"
},
{
"sourceFieldName" : "content_type",
"targetFieldName" : "content_type"
},
{
"sourceFieldName" : "link_text",
"targetFieldName" : "link_text"
}
]
}
I am trying to load a TSV into Druid using this ingestion spec:
MOST UPDATED SPEC BELOW:
{
"type" : "index",
"spec" : {
"ioConfig" : {
"type" : "index",
"inputSpec" : {
"type": "local",
"baseDir": "quickstart",
"filter": "test_data.json"
}
},
"dataSchema" : {
"dataSource" : "local",
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "hour",
"queryGranularity" : "none",
"intervals" : ["2016-07-18/2016-07-22"]
},
"parser" : {
"type" : "string",
"parseSpec" : {
"format" : "json",
"dimensionsSpec" : {
"dimensions" : ["name", "email", "age"]
},
"timestampSpec" : {
"format" : "yyyy-MM-dd HH:mm:ss",
"column" : "date"
}
}
},
"metricsSpec" : [
{
"name" : "count",
"type" : "count"
},
{
"type" : "doubleSum",
"name" : "age",
"fieldName" : "age"
}
]
}
}
}
If my schema looks like this:
Schema: name email age
And the actual dataset looks like this:
name email age
Bob Jones 23
Billy Jones 45
Is this how the columns should be formatted in the above dataset for a TSV? That is, should name email age come first (the header row) and then the actual data? I am confused about how Druid will know how to map the columns to the actual dataset in TSV format.
TSV stands for tab-separated values, so it looks the same as CSV but uses tabs instead of commas, e.g.:
Name<TAB>Age<TAB>Address
Paul<TAB>23<TAB>1115 W Franklin
Bessy the Cow<TAB>5<TAB>Big Farm Way
Zeke<TAB>45<TAB>W Main St
You will use the first line as a header to define your column names, so you can use "name", "age" or "email" in the dimensions of your spec file.
As for GMT and UTC, they are basically the same:
There is no time difference between Greenwich Mean Time and Coordinated Universal Time.
The first one is a time zone, the other one is a time standard.
By the way, don't forget to include a column with some time value in your TSV file!
So, for example, if you have a TSV file that looks like:
"name" "position" "office" "age" "start_date" "salary"
"Airi Satou" "Accountant" "Tokyo" "33" "2016-07-16T19:20:30+01:00" "162700"
"Angelica Ramos" "Chief Executive Officer (CEO)" "London" "47" "2016-07-16T19:20:30+01:00" "1200000"
your spec file should look like this:
{
"type" : "index",
"spec" : {
"ioConfig" : {
"type" : "index",
"inputSpec" : {
"type": "local",
"baseDir": "path_to_folder",
"filter": "name_of_the_file(s)"
}
},
"dataSchema" : {
"dataSource" : "local",
"granularitySpec" : {
"type" : "uniform",
"segmentGranularity" : "hour",
"queryGranularity" : "none",
"intervals" : ["2016-07-01/2016-07-28"]
},
"parser" : {
"type" : "string",
"parseSpec" : {
"format" : "tsv",
"dimensionsSpec" : {
"dimensions" : [
"position",
"age",
"office"
]
},
"timestampSpec" : {
"format" : "auto",
"column" : "start_date"
}
}
},
"metricsSpec" : [
{
"name" : "count",
"type" : "count"
},
{
"name" : "sum_sallary",
"type" : "longSum",
"fieldName" : "salary"
}
]
}
}
}
I want to build an app of hotels and rooms.
Every hotel can have multiple rooms. I retrieve this data from an external server in XML, parse it, and have now divided it into two arrays, hotels and rooms, like this:
hotel.json
[
{
"id": "1",
"name": "Hotel1"
},
{
"id": "2",
"name": "Hotel2"
},
{
"id": "3",
"name": "Hotel3"
}
]
rooms.json
[
{
"id" : "r1",
"hotel_id" : "1",
"name" : "Singola",
"level" : "1"
},
{
"id" : "r1_1",
"hotel_id" : "1",
"name" : "Doppia",
"level" : "2"
},
{
"id" : "r1_3",
"hotel_id" : "1",
"name" : "Doppia Uso singol",
"level" : "1"
},
{
"id" : "r2",
"hotel_id" : "2",
"name" : "Singola",
"level" : "1"
},
{
"id" : "r2_1",
"hotel_id" : "2",
"name" : "Tripla",
"level" : "1"
}
]
In my Backbone app I have to write some controller and parsing logic to retrieve the rooms for each hotel.
I want to know if it is better for Backbone to construct a JSON like this:
[
{
"id": "1",
"name": "Hotel1",
"rooms": [
{
"id" : "r1",
"hotel_id" : "1",
"name" : "Singola",
"level" : "1"
},
{
"id" : "r1_1",
"hotel_id" : "1",
"name" : "Doppia",
"level" : "2"
}
]
},
{
"id": "2",
"name": "Hotel2",
"rooms": [
{
"id" : "r2",
"hotel_id" : "2",
"name" : "Singola",
"level" : "1"
},
{
"id" : "r2_1",
"hotel_id" : "1",
"name" : "Doppia",
"level" : "2"
}
]
},
{
"id": "3",
"name": "Hotel3"
}
]
Which is the better approach for Backbone in terms of efficiency and parsing?
I thought the first case was better, but after building the app I'm not sure.
I would recommend keeping the data structures flat, as Backbone doesn't really support nested collections without some extra effort. Keeping the data model flat will also make it easier for you to map to REST endpoints (e.g. '/hotels/1/rooms', '/rooms/1', etc.).
Just to demonstrate the complexities, here is an example of how one would have to associate a collection to a model:
HotelModel = Backbone.Model.extend({
initialize: function() {
// because initialize is called after parse
_.defaults(this, {
rooms: new RoomCollection
});
},
parse: function(response) {
if (_.has(response, "rooms")) {
this.rooms = new RoomCollection(response.rooms, {
parse: true
});
delete response.rooms;
}
return response;
},
toJSON: function() {
var json = _.clone(this.attributes);
json.rooms = this.rooms.toJSON();
return json;
}
});
With a flat data structure, you could do something like this:
HotelModel = Backbone.Model.extend({
idAttribute:'hotel_id',
urlRoot:'/hotels'
});
RoomModel = Backbone.Model.extend({
idAttribute:'room_id',
urlRoot:'/rooms'
});
HotelCollection = Backbone.Collection.extend({
url: '/hotels',
model:HotelModel
});
RoomCollection = Backbone.Collection.extend({
url: '/rooms',
model:RoomModel,
getByHotelId: function(hotelId){
// where() returns all rooms for the hotel; findWhere() would return only the first match
return this.where({hotel_id:hotelId});
}
});
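To tie the two flat collections together, here is a minimal usage sketch (assuming jQuery is available, as Backbone's fetch already relies on it; the endpoints are the hypothetical ones from above):

var hotels = new HotelCollection();
var rooms = new RoomCollection();

// fetch() returns a jqXHR, so both requests can be awaited together
$.when(hotels.fetch(), rooms.fetch()).done(function () {
    hotels.each(function (hotel) {
        var hotelRooms = rooms.getByHotelId(hotel.id);
        console.log(hotel.get('name') + ': ' + hotelRooms.length + ' rooms');
    });
});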
I've got a MongoDB database that I query; I serialize the result and send that string to my FTL template. Below is the serialized result:
[
{
"id" : "10",
"title" : "Test Title 1",
"partner" : {
"id" : "1",
"name" : "partner 1 ",
"location" : [{
"locationname" : "locationname 1a",
"city" : ""
},{
"locationname" : "locationname 1b",
"city" : ""
}]
}
},
{
"id" : "6",
"title" : "Test Title 2",
"partner" : {
"id" : "1",
"name" : "partner 2 ",
"location" : [{
"locationname" : "locationname 2b",
"city" : ""
}]
}
}
]
How would I use this in my FTL template?
Thanks for any help.
If you really have to serialize before giving the result to FreeMarker... The JSON syntax for maps and lists happens to be a subset of FTL, so assuming the serialized result is in res, res?eval will give you a list of maps.
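For example, a minimal sketch (assuming the serialized string is exposed to the template as res):

<#assign items = res?eval>
<#list items as item>
  ${item.title} - ${item.partner.name}
  <#list item.partner.location as loc>
    ${loc.locationname}
  </#list>
</#list>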