DataTables Uncaught SyntaxError: Unexpected token : - json

I am trying to use the DataTables component with data provided by a REST API. Chrome reports the error Uncaught SyntaxError: Unexpected token : on line 2 (see the JSON below) when I use server-side data, but it works if I use a text file. The setup is:
$('#table_id').dataTable({
    "bProcessing": true,
    "bServerSide": true,
    "sAjaxSource": "http://mylocalhost:8888/_ah/api/realestate/v1/properties/demo",
    //"sAjaxSource": "data.txt",
    "sAjaxDataProp": "items",
    "aoColumns": [{
        "mData": "id"
    }],
    "fnServerData": function (sUrl, aoData, fnCallback, oSettings) {
        oSettings.jqXHR = $.ajax({
            "url": sUrl,
            "data": aoData,
            "success": fnCallback,
            "dataType": "jsonp",
            "cache": false
        });
    }
});
The JSON returned by the server or in the data.txt file:
{
    "iTotalRecords": 10,
    "iTotalDisplayRecords": 10,
    "sEcho": "1",
    "items": [
        { "id": "0" }, { "id": "1" }, { "id": "2" }, { "id": "3" }, { "id": "4" },
        { "id": "5" }, { "id": "6" }, { "id": "7" }, { "id": "8" }, { "id": "9" }
    ]
}
Changing sAjaxSource to data.txt works, but not when the data comes from the server, even though the data is the same.
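For comparison, a configuration that requests plain JSON rather than JSONP is sketched below. This is only a sketch under the assumption that the endpoint serves application/json and allows the request (same origin or CORS); with dataType set to "jsonp", jQuery expects a script-wrapped response, so a plain JSON body gets evaluated as script and can produce exactly this kind of syntax error.
// Sketch only: same setup, but asking for plain JSON instead of JSONP.
// Assumes the REST endpoint returns application/json and is reachable
// from the page's origin (or sends CORS headers).
$('#table_id').dataTable({
    "bProcessing": true,
    "bServerSide": true,
    "sAjaxSource": "http://mylocalhost:8888/_ah/api/realestate/v1/properties/demo",
    "sAjaxDataProp": "items",
    "aoColumns": [{ "mData": "id" }],
    "fnServerData": function (sUrl, aoData, fnCallback, oSettings) {
        oSettings.jqXHR = $.ajax({
            "url": sUrl,
            "data": aoData,
            "success": fnCallback,
            "dataType": "json", // plain JSON; "jsonp" expects a callback-wrapped response
            "cache": false
        });
    }
});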

Array of objects nested in an Array of objects

What is the correct way to represent this structure in JSON?
It's an array of strings with identifiers (A is the identifier, Printer is the array item),
and then there is a nested list of strings, also with identifiers:
A Printer
    A0010 Not printing
    A0020 Out of ink
    A0030 No power
    A0040 Noise
    A0300 Feedback
    A0500 Other
B PC Issues
    B0010 No power
    B0020 BSOD
    B0030 Virus related
    B0300 Feedback
    B0500 Other
Thank you for your help
Does this work, making it easy for you to filter for things? You can use Object.keys to find the corresponding message:
const json = {
    data: [
        {
            identifier: 'A',
            itemType: 'Printer',
            error: [
                { 'A0010': 'Not printing' },
                { 'A0020': 'Out of ink' },
                { 'A0030': 'No power' },
                { 'A0040': 'Noise' },
                { 'A0300': 'Feedback' },
                { 'A0500': 'Other' }
            ]
        },
        {
            identifier: 'B',
            itemType: 'PC Issues',
            error: [
                { 'B0010': 'No power' },
                { 'B0020': 'BSOD' },
                { 'B0030': 'Virus related' },
                { 'B0300': 'Feedback' },
                { 'B0500': 'Other' }
            ]
        }
    ]
}
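For instance, a small sketch of the Object.keys lookup mentioned above (findError is a name introduced here purely for illustration, not part of the original post):
// Sketch: find the message for a given error code in the structure above.
// findError is a hypothetical helper name used only for this example.
function findError(code) {
    for (const item of json.data) {
        const match = item.error.find(e => Object.keys(e)[0] === code);
        if (match) {
            return match[code];
        }
    }
    return undefined;
}

findError('A0020'); // "Out of ink"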
I'm not totally sure what you mean by identifier, unless you mean something like this via JavaScript 👇
var a = {
    "Printer": [
        { "identifier": "A0010", "reason": "Not printing" },
        { "identifier": "A0020", "reason": "Out of ink" },
        { "identifier": "A0030", "reason": "No power" },
        { "identifier": "A0040", "reason": "Noise" },
        { "identifier": "A0300", "reason": "Feedback" },
        { "identifier": "A0500", "reason": "Other" }
    ]
}
var b = {
    "PC Issues": [
        { "identifier": "B0010", "reason": "No power" },
        { "identifier": "B0020", "reason": "BSOD" },
        { "identifier": "B0030", "reason": "Virus related" },
        { "identifier": "B0300", "reason": "Feedback" },
        { "identifier": "B0500", "reason": "Other" }
    ]
}
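With this flatter shape, looking up a reason is a plain array search, for example:
// Sketch: find the reason for a given identifier in the structure above.
var reason = a.Printer.find(function (e) {
    return e.identifier === "A0010";
}).reason; // "Not printing"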

Indexing CSV blobs does not work in Azure Search

I have a number of TSV files as Azure blobs that have the following as the first four tab-separated columns:
metadata_path, document_url, access_date, content_type
I want to index them as described here: https://learn.microsoft.com/en-us/azure/search/search-howto-index-csv-blobs
My request for creating an indexer has the following body:
{
"name" : "webdata",
"dataSourceName" : "webdata",
"targetIndexName" : "webdata",
"schedule" : { "interval" : "PT1H", "startTime" : "2017-01-09T11:00:00Z" },
"parameters" : { "configuration" : { "parsingMode" : "delimitedText", "delimitedTextHeaders" : "metadata_path,document_url,access_date,content_type" , "firstLineContainsHeaders" : true, "delimitedTextDelimiter" : "\t" } },
"fieldMappings" : [ { "sourceFieldName" : "document_url", "targetFieldName" : "id", "mappingFunction" : { "name" : "base64Encode", "parameters" : "useHttpServerUtilityUrlTokenEncode" : false } } }, { "sourceFieldName" : "document_url", "targetFieldName" : "url" }, { "sourceFieldName" : "content_type", "targetFieldName" : "content_type" } ]
}
I am receiving an error:
{
"error": {
"code": "",
"message": "Data source does not contain column 'document_url', which is required because it maps to the document key field 'id' in the index 'webdata'. Ensure that the 'document_url' column is present in the data source, or add a field mapping that maps one of the existing column names to 'id'."
}
}
What do I do wrong?
In your case, the JSON format you supply is invalid. The following is the general format of a request for creating an indexer; for details we can refer to this document:
{
"name" : "Required for POST, optional for PUT. The name of the indexer",
"description" : "Optional. Anything you want, or null",
"dataSourceName" : "Required. The name of an existing data source",
"targetIndexName" : "Required. The name of an existing index",
"schedule" : { Optional. See Indexing Schedule below. },
"parameters" : { Optional. See Indexing Parameters below. },
"fieldMappings" : { Optional. See Field Mappings below. },
"disabled" : Optional boolean value indicating whether the indexer is disabled. False by default.
}
If we want to create an indexer with the REST API, we need three steps; I also put together a demo for it. If the Azure Search SDK is acceptable, you could also refer to another SO thread.
1. Create a data source.
POST https://[service name].search.windows.net/datasources?api-version=2015-02-28-Preview
Content-Type: application/json
api-key: [admin key]
{
"name" : "my-blob-datasource",
"type" : "azureblob",
"credentials" : { "connectionString" : "DefaultEndpointsProtocol=https;AccountName=<account name>;AccountKey=<account key>;" },
"container" : { "name" : "my-container", "query" : "<optional, my-folder>" }
}
2. Create an index.
{
"name" : "my-target-index",
"fields": [
{ "name": "metadata_path","type": "Edm.String", "key": true, "searchable": true },
{ "name": "document_url", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false },
{ "name": "access_date", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false },
{ "name": "content_type", "type": "Edm.String", "searchable": true, "filterable": false, "sortable": false, "facetable": false }
]
}
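For completeness, the index body above is posted to the indexes endpoint in the same way as the data source (a sketch; the service name, admin key, and api-version are placeholders matching step 1):
POST https://[service name].search.windows.net/indexes?api-version=2015-02-28-Preview
Content-Type: application/json
api-key: [admin key]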
3. Create an indexer.
Below is the request body that works:
{
    "name": "webdata",
    "dataSourceName": "webdata",
    "targetIndexName": "webdata",
    "schedule": {
        "interval": "PT1H",
        "startTime": "2017-01-09T11:00:00Z"
    },
    "parameters": {
        "configuration": {
            "parsingMode": "delimitedText",
            "delimitedTextHeaders": "document_url,content_type,link_text",
            "firstLineContainsHeaders": true,
            "delimitedTextDelimiter": "\t",
            "indexedFileNameExtensions": ".tsv"
        }
    },
    "fieldMappings": [
        {
            "sourceFieldName": "document_url",
            "targetFieldName": "id",
            "mappingFunction": {
                "name": "base64Encode",
                "parameters": {
                    "useHttpServerUtilityUrlTokenEncode": false
                }
            }
        },
        { "sourceFieldName": "document_url", "targetFieldName": "document_url" },
        { "sourceFieldName": "content_type", "targetFieldName": "content_type" },
        { "sourceFieldName": "link_text", "targetFieldName": "link_text" }
    ]
}
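Again as a sketch, that indexer body is posted to the indexers endpoint with the same headers (placeholders as above):
POST https://[service name].search.windows.net/indexers?api-version=2015-02-28-Preview
Content-Type: application/json
api-key: [admin key]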

DataTables warning: table id=data-table - Ajax error

I am getting a table id=data-table Ajax error. I have even converted my text file to a JSON file with a parse command. Any ideas what is causing this? I have tried searching everywhere for this issue.
Please help.
My Text File
{
"Orders": [
{
"Address": "124, KAPADIA INDUSTRIAL",
"Dabba Type": "LUNCH AND DINNER",
"Email": "jamesbond#gmail.com",
"End date": "29/4/2017",
"Phone number": "9619582277",
"Session": "VEG",
"Start date": "15/3/2017",
"User id": "CaXXchcC3XhAifuk7P4YprzSGqB3",
"Username": "james bond"
},
...Still more data is there below
]
}
Script file
$(document).ready(function(){
    $('#data-table').DataTable({
        "ajax": {
            "dataType": 'json',
            "contentType": "application/json; charset=utf-8",
            "type": "POST",
            "url": "data/mumbaidabbawala.txt",
            "dataSrc": function (json) {
                return $.parseJSON(Orders);
            }
        },
        "columns": [
            { "Orders": "Address" },
            { "Orders": "Dabba Type" },
            { "Orders": "Email" },
            { "Orders": "End Date" },
            { "Orders": "Phone number" },
            { "Orders": "Session" },
            { "Orders": "Start date" },
            { "Orders": "User id" },
            { "Orders": "Username" }
        ]
    });
});
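For reference, a configuration closer to what DataTables expects would take the rows from the Orders array via dataSrc and name each column with the data option. The sketch below is not the original code; it assumes the .txt file is served as valid JSON and can be fetched with a plain GET, and the field names are taken from the sample above:
// Sketch only, under the assumptions stated above.
$(document).ready(function () {
    $('#data-table').DataTable({
        "ajax": {
            "url": "data/mumbaidabbawala.txt",
            "dataSrc": "Orders"          // rows live under the "Orders" key
        },
        "columns": [
            { "data": "Address" },       // "data", not "Orders", names the row property
            { "data": "Dabba Type" },
            { "data": "Email" },
            { "data": "End date" },
            { "data": "Phone number" },
            { "data": "Session" },
            { "data": "Start date" },
            { "data": "User id" },
            { "data": "Username" }
        ]
    });
});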

Custom analyzer appearing in type mapping but not working in Elasticsearch

I'm trying to add a custom analyzer to my index while also mapping that analyzer to a property on a type. Here is my JSON object for doing this:
{ "settings" : {
"analysis" : {
"analyzer" : {
"test_analyzer" : {
"type" : "custom",
"tokenizer": "standard",
"filter" : ["lowercase", "asciifolding"],
"char_filter": ["html_strip"]
}
}
}
},
"mappings" : {
"test" : {
"properties" : {
"checkanalyzer" : {
"type" : "string",
"analyzer" : "test_analyzer"
}
}
}
}
}
I know this analyzer works because I've tested it using /wp2/_analyze?analyzer=test_analyzer -d '<p>Testing analyzer.</p>' and also it shows up as the analyzer for the checkanalyzer property when I check /wp2/test/_mapping. However, if I add a document like {"checkanalyzer": "<p>The tags should not show up</p>"}, the HTML tags don't get stripped out when I retrieve the document using the _search endpoint. Am I misunderstanding how the mapping works or is there something wrong with my JSON object? I'm dynamically creating the wp2 index and also the test type when I make this call to Elasticsearch, not sure if that matters.
The html doesn't get removed from the source, it gets removed from the terms generated by that source. You can see this if you use a terms aggregation:
POST /test_index/_search
{
    "aggs": {
        "checkanalyzer_field_terms": {
            "terms": {
                "field": "checkanalyzer"
            }
        }
    }
}
{
    "took": 77,
    "timed_out": false,
    "_shards": { "total": 5, "successful": 5, "failed": 0 },
    "hits": {
        "total": 1,
        "max_score": 1,
        "hits": [
            {
                "_index": "test_index",
                "_type": "test",
                "_id": "1",
                "_score": 1,
                "_source": {
                    "checkanalyzer": "<p>The tags should not show up</p>"
                }
            }
        ]
    },
    "aggregations": {
        "checkanalyzer_field_terms": {
            "doc_count_error_upper_bound": 0,
            "sum_other_doc_count": 0,
            "buckets": [
                { "key": "not", "doc_count": 1 },
                { "key": "should", "doc_count": 1 },
                { "key": "show", "doc_count": 1 },
                { "key": "tags", "doc_count": 1 },
                { "key": "the", "doc_count": 1 },
                { "key": "up", "doc_count": 1 }
            ]
        }
    }
}
Here's some code I used to test it:
http://sense.qbox.io/gist/2971767aa0f5949510fa0669dad6729bbcdf8570
Now, if you want to completely strip out the HTML prior to indexing and store the content as is, you can use the mapper attachment plugin: when you define the mapping, you can set the content_type to "html".
The mapper attachment is useful for many things, especially if you are handling multiple document types, but most notably, I believe using it just for the purpose of stripping out the HTML tags is sufficient (which you cannot do with the html_strip char filter).
Just a forewarning though: NONE of the HTML tags will be stored. So if you do need those tags somehow, I would suggest defining another field to store the original content. Another note: you cannot specify multi-fields for mapper attachment documents, so you would need to store that outside of the mapper attachment document. See my working example below.
You'll end up with this mapping:
{
    "html5-es": {
        "aliases": {},
        "mappings": {
            "document": {
                "properties": {
                    "delete": {
                        "type": "boolean"
                    },
                    "file": {
                        "type": "attachment",
                        "fields": {
                            "content": {
                                "type": "string",
                                "store": true,
                                "term_vector": "with_positions_offsets",
                                "analyzer": "autocomplete"
                            },
                            "author": {
                                "type": "string",
                                "store": true,
                                "term_vector": "with_positions_offsets"
                            },
                            "title": {
                                "type": "string",
                                "store": true,
                                "term_vector": "with_positions_offsets",
                                "analyzer": "autocomplete"
                            },
                            "name": { "type": "string" },
                            "date": {
                                "type": "date",
                                "format": "strict_date_optional_time||epoch_millis"
                            },
                            "keywords": { "type": "string" },
                            "content_type": { "type": "string" },
                            "content_length": { "type": "integer" },
                            "language": { "type": "string" }
                        }
                    },
                    "hash_id": { "type": "string" },
                    "path": { "type": "string" },
                    "raw_content": {
                        "type": "string",
                        "store": true,
                        "term_vector": "with_positions_offsets",
                        "analyzer": "raw"
                    },
                    "title": { "type": "string" }
                }
            }
        },
        "settings": { //insert your own settings here },
        "warmers": {}
    }
}
In NEST, I assemble the content as follows:
Attachment attachment = new Attachment();
attachment.Content = Convert.ToBase64String(File.ReadAllBytes("path/to/document"));
attachment.ContentType = "html";
Document document = new Document();
document.File = attachment;
document.RawContent = InsertRawContentFromString(originalText);
I have tested this in Sense - results are as follows:
"file": {
"_content": "PGh0bWwgeG1sbnM6TWFkQ2FwPSJodHRwOi8vd3d3Lm1hZGNhcHNvZnR3YXJlLmNvbS9TY2hlbWFzL01hZENhcC54c2QiPg0KICA8aGVhZCAvPg0KICA8Ym9keT4NCiAgICA8aDE+VG9waWMxMDwvaDE+DQogICAgPHA+RGVsZXRlIHRoaXMgdGV4dCBhbmQgcmVwbGFjZSBpdCB3aXRoIHlvdXIgb3duIGNvbnRlbnQuIENoZWNrIHlvdXIgbWFpbGJveC48L3A+DQogICAgPHA+wqA8L3A+DQogICAgPHA+YXNkZjwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD4xMDwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD5MYXZlbmRlci48L3A+DQogICAgPHA+wqA8L3A+DQogICAgPHA+MTAvNiAxMjowMzwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD41IDA5PC9wPg0KICAgIDxwPsKgPC9wPg0KICAgIDxwPjExIDQ3PC9wPg0KICAgIDxwPsKgPC9wPg0KICAgIDxwPkhhbGxvd2VlbiBpcyBpbiBPY3RvYmVyLjwvcD4NCiAgICA8cD7CoDwvcD4NCiAgICA8cD5qb2c8L3A+DQogIDwvYm9keT4NCjwvaHRtbD4=",
"_content_length": 0,
"_content_type": "html",
"_date": "0001-01-01T00:00:00",
"_title": "Topic10"
},
"delete": false,
"raw_content": "<h1>Topic10</h1><p>Delete this text and replace it with your own content. Check your mailbox.</p><p> </p><p>asdf</p><p> </p><p>10</p><p> </p><p>Lavender.</p><p> </p><p>10/6 12:03</p><p> </p><p>5 09</p><p> </p><p>11 47</p><p> </p><p>Halloween is in October.</p><p> </p><p>jog</p>"
},
"highlight": {
"file.content": [
"\n <em>Topic10</em>\n\n Delete this text and replace it with your own content. Check your mailbox.\n\n  \n\n asdf\n\n  \n\n 10\n\n  \n\n Lavender.\n\n  \n\n 10/6 12:03\n\n  \n\n 5 09\n\n  \n\n 11 47\n\n  \n\n Halloween is in October.\n\n  \n\n jog\n\n "
]
}

Backbone how to construct json correctly

I want to build an app of hotels and rooms.
Every hotel can have multiple rooms. I retrieve this data from an external server in XML, parse it, and now I have divided it into two arrays, hotels and rooms, like this:
hotel.json
[
    { "id": "1", "name": "Hotel1" },
    { "id": "2", "name": "Hotel2" },
    { "id": "3", "name": "Hotel3" }
]
rooms.json
[
    { "id": "r1",   "hotel_id": "1", "name": "Singola",           "level": "1" },
    { "id": "r1_1", "hotel_id": "1", "name": "Doppia",            "level": "2" },
    { "id": "r1_3", "hotel_id": "1", "name": "Doppia Uso singol", "level": "1" },
    { "id": "r2",   "hotel_id": "2", "name": "Singola",           "level": "1" },
    { "id": "r2_1", "hotel_id": "2", "name": "Tripla",            "level": "1" }
]
In my Backbone app I have to write some controllers and some parsing to retrieve the rooms for each hotel.
I want to know whether it is better for Backbone to construct a JSON like this:
[
    {
        "id": "1",
        "name": "Hotel1",
        "rooms": [
            { "id": "r1",   "hotel_id": "1", "name": "Singola", "level": "1" },
            { "id": "r1_1", "hotel_id": "1", "name": "Doppia",  "level": "2" }
        ]
    },
    {
        "id": "2",
        "name": "Hotel2",
        "rooms": [
            { "id": "r2",   "hotel_id": "2", "name": "Singola", "level": "1" },
            { "id": "r2_1", "hotel_id": "1", "name": "Doppia",  "level": "2" }
        ]
    },
    {
        "id": "3",
        "name": "Hotel3"
    }
]
Which is the better approach for Backbone in terms of efficiency and parsing?
I thought the first case, but after building the app I'm not sure.
I would recommend keeping the data structures flat, as Backbone doesn't really support nested collections without some extra effort. Keeping the data model flat will also make it easier for you to map to REST endpoints (e.g. '/hotels/1/rooms', '/rooms/1', etc.).
Just to demonstrate the complexities, here is an example of how one would have to associate a collection to a model:
HotelModel = Backbone.Model.extend({
    initialize: function() {
        // because initialize is called after parse
        _.defaults(this, {
            rooms: new RoomCollection
        });
    },
    parse: function(response) {
        if (_.has(response, "rooms")) {
            this.rooms = new RoomCollection(response.rooms, {
                parse: true
            });
            delete response.rooms;
        }
        return response;
    },
    toJSON: function() {
        var json = _.clone(this.attributes);
        json.rooms = this.rooms.toJSON();
        return json;
    }
});
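To illustrate how that plays out, here is a short sketch using the nested hotel JSON from the question (passing parse: true makes Backbone run parse before initialize):
// Sketch: constructing a hotel whose "rooms" key becomes a nested collection.
var hotel = new HotelModel({
    id: "1",
    name: "Hotel1",
    rooms: [
        { id: "r1", hotel_id: "1", name: "Singola", level: "1" }
    ]
}, { parse: true });

hotel.get("name");       // "Hotel1"
hotel.rooms.length;      // 1
hotel.toJSON().rooms;    // rooms serialized back in by the custom toJSON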
With a flat data structure, you could do something like this:
HotelModel = Backbone.Model.extend({
    idAttribute: 'hotel_id',
    urlRoot: '/hotels'
});
RoomModel = Backbone.Model.extend({
    idAttribute: 'room_id',
    urlRoot: '/rooms'
});
HotelCollection = Backbone.Collection.extend({
    url: '/hotels',
    model: HotelModel
});
RoomCollection = Backbone.Collection.extend({
    url: '/rooms',
    model: RoomModel,
    getByHotelId: function(hotelId) {
        // findWhere returns only the first matching room; use where() for all of them
        return this.findWhere({ hotel_id: hotelId });
    }
});
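A short usage sketch for the flat version (again, note that findWhere in getByHotelId returns only the first matching room, while where returns all of them):
// Sketch: fetch all rooms, then pick out the ones belonging to hotel "1".
var rooms = new RoomCollection();
rooms.fetch({
    success: function (collection) {
        var firstRoom = collection.getByHotelId("1");              // first match only
        var allHotel1Rooms = collection.where({ hotel_id: "1" });  // every match
    }
});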