Slow response while parsing large JSON response - json

I have developed a web application based on a RESTful design, where the application takes a JSON response from a Java-based web service, displays it in the UI, and refreshes the data every 5 seconds.
The application uses Bootstrap for the UI, and Backbone and require.js to implement an MVC structure in which the JSON response is parsed into a Backbone collection.
When an admin is using this application, the JSON response is very large (from 800 to 1100 objects).
This is where things get messy. As per my analysis, the browser is taking up too many resources, so the rest of the application becomes very slow. For example, if I try to open a modal, the system freezes for some time and opens it slowly, giving a very poor user experience.
As per my analysis, the time is being spent parsing the data.
As a remedy, I am removing all comments from the code and trying to implement gzip compression for the JSON/HTML/CSS/JS files.
A sample of the JSON object is pasted below:
{
    "name": "TEST",
    "state": "Lunch",
    "time": "00:00:09",
    "manager": "TEST",
    "site": "C",
    "skill": "TEST",
    "center": "TEST",
    "teamLead": "TEST",
    "workGroup": "TEST",
    "lanId": "TEST",
    "dbID": "TEST",
    "loginId": "TEST",
    "avgAcwTime": "nn",
    "avgHandleTime": "nn",
    "avgTalkTime": "nn",
    "callsAnswered": "nn",
    "dispSkill": "-",
    "errCode": null,
    "errDesc": null,
    "avgAcwTimeth": "medium",
    "avgHandleTimeth": "high",
    "avgTalkTimeTh": "medium",
    "callsAnsweredTh": "medium",
    "stateTh": "high"
}
Pagination can't be done due to some requirements.
Can anyone suggest something to improve the performance?
Also, I am fetching the data using Backbone.Collection.fetch():
getAgentMetric() {
    this.metrices.fetch({
        url: (isLocal) ? 'http://localhost:8080/jsons/agent.json' : (prev_this.url + '/agentstat'),
        data: JSON.stringify(param),
        type: "POST",
        dataType: "JSON",
        contentType: "application/json"
    })
    .done(function() {
        // passing the datasource from the ajax call
        prev_this.agentLoacalSource.localdata = prev_this.metrices.toJSON();
    });
    timeout = setTimeout(_.bind(this.getAgentMetric, this), 5000);
},

Browsers can handle a heck of a lot more than a thousand objects without any strain, so I don't think the problem is simply that you're requesting a large amount of data from the backend. It's more likely that some of your parsing or rendering code is slow.
A few possibilities without seeing any more of your code:
It really depends on what you're doing here, but I'm going to assume that you aren't using a templating library (hogan.js, handlebars.js, etc.). You should definitely look into using one, as they speed things up quite a bit and make generating HTML a lot easier.
Are you running .append() for each individual model that you render? This will really slow things down. You should generate all of the HTML that needs to be generated, and then run .append() once, as in the sketch below.
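For example, a minimal sketch of the batched version, assuming a Backbone view and a precompiled itemTemplate function (both hypothetical names):

// Build all of the markup in memory first...
var html = this.collection.map(function(model) {
    return itemTemplate(model.toJSON());
}).join('');
// ...then touch the DOM exactly once instead of once per model.
this.$el.append(html);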
What kind of event listeners are you adding for each model (if any)? Listening to scroll events without a debounce will slow down your browser, especially if you add a bunch of them. A debounced handler is sketched below.
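As a sketch, using Underscore's _.debounce (Underscore is already in your stack, given the _.bind call in your code):

// Run the handler at most once every 200 ms, no matter how often the user scrolls.
var onScroll = _.debounce(function() {
    // expensive per-scroll work goes here
}, 200);
$(window).on('scroll', onScroll);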
Unrelated to your slowness issues, there are a few problems that I see with this code:
Your timeout should be set from an .always() handler on the ajax call, to prevent concurrent requests from going out if a request is slow for whatever reason:
this.metrices.fetch(...)
    .always(function() {
        timeout = setTimeout(...);
    }.bind(this));
Requests that simply fetch data should use GET instead of POST. See https://stackoverflow.com/a/3477374/5780021 for more info about this; a GET version of your fetch is sketched below.
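Applied to your fetch, a GET version might look like this (a sketch; it assumes param is a flat object that jQuery can serialize into the query string):

this.metrices.fetch({
    url: prev_this.url + '/agentstat',
    data: param, // serialized as ?key=value&... for GET requests
    type: "GET",
    dataType: "json"
});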
I would also recommend timing some of your code to see where the slowness is actually happening. This will allow you to determine how long things actually take between two points in the code (a small sketch follows the links below):
Firefox console.time
Chrome console.time
IE console.time
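A minimal sketch of that kind of timing; the label and the renderAll call are placeholders for whatever block you suspect:

console.time('parse+render');        // start a named timer
var models = this.metrices.toJSON(); // the work you want to measure
renderAll(models);                   // hypothetical render call
console.timeEnd('parse+render');     // logs the elapsed time to the console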

Related

What is a useful Azure IoT Hub JSON message structure for consumption in Time Series Insights

The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
    "event": {
        "origin": "raspberrypi-zero-wh",
        "payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
    }
}
Issue
The data seems fine, except that the payload looks strange here: it arrives as an escaped string. BUT, the payload is literally what I send from the device, using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model, so I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?
You should add the following two lines to your message in your Python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
I ended up using the tip by #elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key was:
1. Include the message source time in ISO format:
from datetime import datetime

timestampIso = datetime.now().isoformat()
message.custom_properties["iothub-creation-time-utc"] = timestampIso
2. Use the locationDescription as the Time Series ID Property; see https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid (maybe I could also have taken the iothub-connection-device-id, but I did not test that alone specifically).
I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of "locationDescription" as the TSID; so Attic becomes the time series instance name, and temperature and humidity will be your variables.

Logic App not parsing body from ADF anymore

I'm triggering Logic Apps (around 30) from Data Factory V2, passing a JSON body to the HTTP trigger of each. The body is different for almost all Logic Apps.
Since last week, the 'When HTTP Request is received' step has not been processing the body from the Data Factory correctly.
Please note that both the Logic Apps and the Data Factory haven't changed in months and were working without any problems up to last week.
The same thing happened a week earlier, but that time it resolved 'itself', suggesting an issue on the Logic App side. Currently all Logic Apps keep failing, and I've tried rerunning them many times. #AzureSupport redirected me to our CSP, but they are not really helping at the moment.
Body in the ADF pipeline (URL sanitized):
"typeProperties": {
"url": "https://prod-50.westeurope.logic.azure.com:443 /<....>",
"method": "POST",
"body": {
"customer": "#pipeline().parameters.customer",
"token": "#pipeline().parameters.token",
"tennant": "#pipeline().parameters.tennant",
"baseuri": "#pipeline().parameters.baseuri",
"connectorTrans": "#pipeline().parameters.connectorTrans",
"connectorNonTrans": "#pipeline().parameters.connectorNonTrans",
"datum": "#formatDateTime(adddays(utcnow(),-1),'s')"
}
}
The last successful run parsed the body from the Data Factory as follows (sanitized, of course):
"body": {
"customer": "<customerName>",
"token": "<token>",
"tennant": null,
"baseuri": "<baseUri>",
"connectorTrans": "<connectorName>",
"connectorNonTrans": "<connectorName2>",
"datum": "<date>"
}
The runs that are failing all show the same problem: the body is not being parsed correctly:
"body": "{\r\n \"customer\": \"<customerName>\",\r\n \"token\": \"<token>\",\r\n \"tennant\": null,\r\n \"baseuri\": \"<baseUri>\",\r\n \"connectorTrans\": \"<connectorName>\",\r\n \"connectorNonTrans\": \"<connectorName2>\",\r\n \"datum\": \"<date>\"\r\n}"
It is all on one single line, including \r\n and escape characters.
As a result, the Logic App cannot use the values in the fields passed by the Data Factory.
All help or pointers are much appreciated.
Running the Logic App from Postman with the exact same body as from the Data Factory works without any problems.
I faced the same issue: you need to add a Content-Type: application/json header to the Web activity in ADF that calls the Logic App, as sketched below.
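For a Web activity in ADF, that means adding a headers object to the typeProperties; a sketch based on the pipeline snippet above (the remaining body fields stay as they were):

"typeProperties": {
    "url": "https://prod-50.westeurope.logic.azure.com:443/<....>",
    "method": "POST",
    "headers": {
        "Content-Type": "application/json"
    },
    "body": {
        "customer": "#pipeline().parameters.customer"
    }
}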

Should the server respond with one JSON for all content data on the page in a SPA, or is it better to split it?

I'm building an API for a single-page application, handled by Angular on the frontend. One thing is not clear to me.
Suppose the web application has a journal detail page which displays the journal, some articles which belong to this journal, and some cool authors who may not be connected to this journal.
Should I build my API URLs around what each page needs? For example, the URL /api/journal/<journal_id> would send this JSON:
{
    "journal": {
        "id": 10,
        "name": "new_journal",
        "articles": [
            {
                "name": "cool_article",
                "id": 42
            },
            {
                "name": "another_cool_article",
                "id": 43
            }
        ]
    },
    "authors": [
        {
            "name": "some_name",
            "id": 42
        },
        {
            "name": "another_name",
            "id": 43
        }
    ]
}
Or should I build my API around concrete objects and their related objects, with URLs like this:
/api/journals/<journal_id>
/api/authors/
The frontend would then build the page with two GET requests to fetch the content.
Sorry if my question is too broad; I just want to find the best practice for building APIs for single-page applications.
Is there any difference between building API endpoints for external web apps and for my own frontend, and what should I do when a page needs to display several objects that don't belong together? Which of the options above is better?
There isn't really a universal right answer for this; it largely depends on the use case for the data you're fetching. I would err on the side of splitting this into multiple requests, as that grants you flexibility and efficiency in terms of partial updates to the page (a sketch of this appears below). That approach also makes exposing a public API much easier, since you can just expose what you already have.
If you're dealing with a potentially large (an intentionally relative term) number of concurrent requests though, you may build some composites of related data to mitigate that.
Of course, you could also do a combination of the two as well (first load makes 1 large request, subsequent updates are segmented).
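As a sketch of the split approach, using the endpoints proposed in the question (renderPage is a hypothetical function that assembles the page):

// Two independent GETs, combined client-side once both resolve.
Promise.all([
    fetch('/api/journals/' + journalId).then(function(r) { return r.json(); }),
    fetch('/api/authors/').then(function(r) { return r.json(); })
]).then(function(results) {
    var journal = results[0];
    var authors = results[1];
    renderPage(journal, authors); // render the journal, its articles, and the authors
});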

Context Broker: ONTIMEINTERVAL subscription immediately sends a request to the reference

The problem is that even if I set condValues to PT10S, when I send the request to the Context Broker it calls the reference URL back right away, not after 10 seconds, and then continues to send requests every 10 seconds.
My question: is there a way to avoid that first initial request?
Here is the body of the request that I send to the server where the Context Broker is installed:
{
    "entities": [{
        "type": "Cycle",
        "isPattern": "false",
        "id": "someid"
    }],
    "attributes": [
        ...
    ],
    "reference": "someurl",
    "duration": "P1M",
    "notifyConditions": [{
        "type": "ONTIMEINTERVAL",
        "condValues": [
            "PT10S"
        ]
    }]
}
At the present moment (Orion 1.1) the initial notification cannot be avoided. However, being able to configure that behaviour would be an interesting feature to develop in the future and, consequently, a GitHub issue was created some time ago about it.
In addition, note that ONTIMEINTERVAL subscriptions are no longer supported, so you should avoid using them:
ONTIMEINTERVAL subscriptions have several problems (introduce state in CB, thus making horizontal scaling configuration much harder, and makes it difficult to introduce pagination/filtering). Actually, they aren't really needed, as any use case based on ONTIMEINTERVAL notification can be converted to an equivalent use case in which the receptor runs queryContext at the same frequency (and taking advantage of the features of queryContext, such as pagination or filtering).
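A sketch of that equivalent approach, polling queryContext at the same PT10S frequency; orionUrl is an assumption for wherever your broker listens, and the entity matches the subscription above:

// The receptor asks for the data itself every 10 seconds instead of subscribing.
setInterval(function() {
    fetch(orionUrl + '/v1/queryContext', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Accept': 'application/json' },
        body: JSON.stringify({
            entities: [{ type: 'Cycle', isPattern: 'false', id: 'someid' }]
        })
    })
    .then(function(res) { return res.json(); })
    .then(function(data) {
        // handle data.contextResponses here
    });
}, 10000);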
EDIT: the possibility of avoiding the initial notification has finally been implemented in Orion. Details are in this section of the documentation. It is now in the master branch (so if you use the fiware/orion:latest docker image you will get it) and will be included in the next Orion version (2.2.0).

Does anyone know of a webservice for looking up definitions of words that would be able to return results in JSON?

I found http://words.bighugelabs.com/api.php but nothing like this for definitions/dictionary.
Ideally I'd grab a dictionary file and build my own API for this, but this is for a demo and we need something short-term that can be called from within a JavaScript function.
wiktionary.org provides an API, for example:
http://en.wikipedia.org/w/api.php?action=query&list=search&srsearch=Television&format=json
which gives back:
{
    "query": {
        "searchinfo": { "totalhits": 208862 },
        "search": [
            {
                "ns": 0,
                "title": "Television",
                "snippet": "<span class='searchmatch'>Television<\/span> (TV) is a widely used telecommunication medium for transmitting and receiving moving images , either monochromatic (\"black <b>...<\/b> ",
                "size": 28228,
                "wordcount": 3566,
                "timestamp": "2009-10-02T15:09:56Z"
            },
            ...
        ]
    },
    "query-continue": { "search": { "sroffset": 10 } }
}
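Since you need to call it from a JavaScript function, here is a minimal sketch against that endpoint (origin=* enables anonymous CORS on Wikimedia wikis; an assumption if you target a different wiki):

fetch('https://en.wikipedia.org/w/api.php?action=query&list=search' +
      '&srsearch=Television&format=json&origin=*')
    .then(function(res) { return res.json(); })
    .then(function(data) {
        data.query.search.forEach(function(hit) {
            console.log(hit.title, hit.snippet); // snippet is an HTML fragment
        });
    });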
I think this is what you are looking for:
bighugelabs API - JSON format
aonaware services - XML format
Not sure if it would fit your needs, but answers.com has webmaster tools that offer various services, including dictionary lookup. I don't know if any of them can be called from JavaScript.
At short notice, you could set up a reverse proxy on your server that lets you AJAX your favorite dictionary website and then 'scrape' the definitions from the document that is returned. It's obviously not a long-term solution, but for a one-time thing you probably won't get into trouble.
This is a web service with several dictionaries:
http://services.aonaware.com/DictService/DictService.asmx
P.S. I did not notice the JSON part of your question.