I have a Dash app connected to an AWS RDS database. A live-updating graph triggers a callback on a dcc.Interval every 5 minutes to query the database and do some expensive formatting. I store the transformed data (~500 data points) in a dcc.Store, from which another six graphs and a DataTable consume the data (no further processing required). My question is: to further improve the efficiency of the dashboard, should I use clientside callbacks to read from the dcc.Store instead of regular server-side callbacks? From what I've read, clientside callbacks run entirely in the client browser and don't need to communicate back to the Dash server. Thank you! (I'm secretly hoping it makes little difference, as I don't want to learn JavaScript.)
The answer depends a lot on your architecture: how complex the graphs are and how many server-side callbacks are currently in use.
You have already mentioned the main advantage of clientside callbacks: since the data is already on the client, they save time and network traffic by updating the components directly in the browser. The major inconvenience is that these are synchronous calculations that block the app's main thread (Dash does not support promises or asynchronous clientside callbacks for now), which can lead to a bad UX if the functions take too much time.
For very simple plots and tables, I can confirm there is a significant improvement in using clientside callbacks, especially because it avoids the Plotly Python API, which can be much slower than just defining a JSON object with traces and layout.
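To make that concrete, here is a minimal sketch of the setup described in the question (the component ids, the 5-minute interval, and the JS body are my assumptions, not the asker's actual code). The expensive query stays in one server-side callback that fills the dcc.Store; each downstream graph then reads the Store with a clientside callback, so no round-trip to the server is needed, and the figure is built as a plain traces-plus-layout object rather than through the Plotly API:

from dash import Dash, dcc, html, Input, Output  # Dash >= 2 import style

app = Dash(__name__)
app.layout = html.Div([
    dcc.Interval(id="poll", interval=5 * 60 * 1000),  # fires every 5 minutes
    dcc.Store(id="shared-data"),  # written by a server-side callback (omitted)
    dcc.Graph(id="graph-1"),
])

# The JS function runs in the browser; no request reaches the Dash server.
app.clientside_callback(
    """
    function(data) {
        if (!data) { return window.dash_clientside.no_update; }
        return {
            data: [{x: data.x, y: data.y, type: "scatter"}],
            layout: {margin: {t: 30}}
        };
    }
    """,
    Output("graph-1", "figure"),
    Input("shared-data", "data"),
)

Note that the dcc.Store stays exactly where it is; clientside callbacks complement it rather than replace it.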
Can anyone explain when to use Protocol Buffers instead of JSON for a microservices architecture, and vice versa, both for synchronous and asynchronous communication?
When to use JSON
You need or want data to be human readable
Data from the service is directly consumed by a web browser
Your server-side application is written in JavaScript
You aren’t prepared to tie the data model to a schema
You don’t have the bandwidth to add another tool to your arsenal
The operational burden of running a different kind of network service is too great
Pros of ProtoBuf
Relatively smaller size
Guarantees type-safety
Prevents schema-violations
Gives you simple accessors
Fast serialization/deserialization
Backward compatibility
While we are at it, have you looked at FlatBuffers?
Some of the aspects are covered here: google protocol buffers vs json vs XML
References:
https://codeclimate.com/blog/choose-protocol-buffers/
https://codeburst.io/json-vs-protocol-buffers-vs-flatbuffers-a4247f8bda6f
I'd use JSON when the consumer is, or could possibly be, written in a language with built-in native support for JSON (JavaScript is an example) or a web browser, or where human readability is wanted. Speaking of which, at least for asynchronous calls, many developers enjoy the convenience of examining the contents of the queue directly for debugging, and even during the normal course of development. Depending on the tech stack used, it may or may not be worth the trade-off to use protobuf just to reduce network load, since any performance increase won't buy you much in the async world. And it's not like we need to write a bunch of boilerplate code anymore, as we used to with JSON marshalling and unmarshalling in most languages.
I'd use protobuf for everything else... if there are any use cases left for it given the considerations above. There are advantages you might see, such as performance, network load, the backward compatibility offered by its versioning scheme, the lovely documentation that magically comes with proto files, and some validation! If for some reason you have a lot of REST or other synchronous calls between microservices, protobuf can be sent over the wire instead of JSON with few trade-offs, if any at all, while offering a heap of advantages.
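To make the size and schema points concrete, here is a hedged Python sketch. The user.proto schema and the generated user_pb2 module are hypothetical; in practice they would come from running protoc on a schema you define:

import json

# user.proto, compiled separately with: protoc --python_out=. user.proto
#   syntax = "proto3";
#   message User {
#     string name = 1;
#     int32 id = 2;
#   }
import user_pb2  # hypothetical module generated by protoc

# The same record, serialized both ways:
as_json = json.dumps({"name": "alice", "id": 42}).encode("utf-8")
as_proto = user_pb2.User(name="alice", id=42).SerializeToString()

print(len(as_json), len(as_proto))  # the protobuf payload is typically much smaller

# JSON is readable as-is; the protobuf bytes need the schema to decode,
# which is exactly the human-readability trade-off described above:
decoded = user_pb2.User.FromString(as_proto)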
I am currently working on a web service in Go that essentially takes a request and sends back JSON; rather typical. However, this particular JSON takes 10+ seconds to actually complete and return. Because I am also making a website that depends on the JSON, and the JSON contents are subject to change, I implemented a route that quickly generates and returns (potentially updated or new) names as placeholders that would get replaced later by the real values corresponding to those names. The whole idea is that the website would connect to the service, get back JSON almost immediately to populate a table, then wait until the actual data to fill in came back from the service.
This is where I encounter an issue, potentially because I am newish to Go and don't understand its vast libraries completely. The previous method I used to send JSON back through HTTP requests was ResponseWriter.Write(theJSON). However, Write() terminates the response, so the website would have to continually ping the service, which is already costly and will be disastrous in the future.
So, I am seeking some industry knowledge on my issue. Can HTTP connections be continuous like that, with data sent piecewise through the same HTTP request? Is that even sensible in terms of computation and security, or are there better ways to do what I am proposing? Finally, does Go even support such a feature, and how would I handle it asynchronously for performance optimization?
For the record, my website is using React.js.
I would use WebSockets over HTTPS to achieve this effect, rather than a long-persisting TCP connection, or even in addition to one. See the golang.org/x/net/websocket package from the Go developers, or the excellent http://www.gorillatoolkit.org/pkg/websocket from the Gorilla web toolkit, for usage details. You might use padding and smaller subunits to allow interruption and restart of a submission, or a kind of diff protocol to rewrite previously submitted JSON. I have found WebSockets to be pretty stable even with small connection breakdowns.
Go does have a keep-alive ability via net.TCPConn's SetKeepAlive, and the tcpkeepalive package gives finer-grained control over it:
// conn is a net.Conn (e.g. a *net.TCPConn); error handling elided for brevity.
kaConn, _ := tcpkeepalive.EnableKeepAlive(conn)
kaConn.SetKeepAliveIdle(30 * time.Second)    // start probing after 30s of idleness
kaConn.SetKeepAliveCount(4)                  // give up after 4 unanswered probes
kaConn.SetKeepAliveInterval(5 * time.Second) // probe every 5s
Code from felixge's tcpkeepalive package (github.com/felixge/tcpkeepalive).
You can use a REST API as the web service and send the data as JSON, so you can continuously send data over a communication channel.
I am writing code to collect data from the Steam API (documentation: https://developer.valvesoftware.com/wiki/Steam_Web_API).
In particular, I use the fromJSON() function from the jsonlite package in R.
However, I have the feeling that the code is slow, and that the bottleneck is the actual calling of the API. I am currently able to do around 7,500-10,000 calls an hour, i.e. around 2-3 calls per second. This feels slow. Is it possible to speed this up, and if so, how?
Two things I found already are that it may be necessary to close the connection after opening it (cf. http://www.firaja.cc/steam-web-api-right-way.html), and that the API allows not only JSON (what I use now) but also XML and CSV output; maybe it would be better to use one of the latter two? Any other possible solutions?
I noticed on one of Scott Hanselman's blog posts that he uses the following code in his views when using .NET 5 (MVC 6):
@await Html.PartialAsync("_LoginPartial")
vs.
@Html.Partial("_LoginPartial")
Is there any documentation yet on when which one should be used?
This is actually a pretty interesting question and scenario. To a certain extent, async is the new hotness (though it's really not all that new). Entity Framework 6 shipped with async methods, and every... single... piece... of... documentation... suddenly started using async for everything. I think we're seeing a little of the same here. MVC 6 supports async for things like rendering partials, so OMG, we all just have to use async now.
Async serves one very specific purpose. It allows the active thread to be returned to the pool to field other tasks while the current task is in a wait state. The key part of that is "wait state". Certain tasks are just flat out incompatible with async. CPU-bound work like complex financial analysis never allows the thread to enter a wait state so everything is effectively run as sync even if you set it up as async. On the other hand, things involving network latency (requesting a resource from a web API, querying a database, etc.) or that are I/O bound (reading/writing files, etc.) can at times have periods where the thread is waiting around for some other process to complete before it continues processing.
Looking specifically at rendering a partial, the only piece that's not entirely CPU-bound is reading the view file itself from the filesystem. While that's technically enough to make it eligible for async, how long is it really going to take to read what's essentially a text file that's probably less than 50KB? By the time the thread is handed back to the pool, it's probably time to request it back, so you're actually using resources more inefficiently at that point.
Long and short, don't fall into the trap of "it can be done async, so I must do it async". Each use should be evaluated in terms of whether there's actually value in it. Async has a lot of overhead, and if you're only talking about a few milliseconds of wait time, it's probably not worth all that extra overhead.
As per the ASP.NET MVC documentation on partial views:
https://docs.asp.net/en/latest/mvc/views/partial.html
The PartialAsync method is available for partial views containing asynchronous code (although code in views is generally discouraged).
Also note this remark on the same page:
If your views need to execute code, the recommended pattern is to use a view component instead of a partial view.
So you should use Partial and avoid PartialAsync, and if you find yourself with a PartialAsync, you should ask whether you're doing something wrong; maybe you should be using a ViewComponent instead, or moving the logic from the view to the controller.
Just to keep the thread up to date for those visiting in the age of ASP.NET Core:
Currently, according to the documentation:
Partial and RenderPartial are the synchronous equivalents of PartialAsync and RenderPartialAsync, respectively. The synchronous equivalents aren't recommended because there are scenarios in which they deadlock. The synchronous methods are targeted for removal in a future release.
Full content:
Partial views in ASP.NET Core
To get an idea about why this is an issue, you can take a look at this github entry:
https://github.com/aspnet/Mvc/issues/7083
But long story short, it looks like the synchronous version of Partial simply calls the async one with GetResult, and that's a recipe for deadlocks in some scenarios.
To sum up, there is no real reason not to use the Async version. There are reasons not to use the sync ones.
Even though there is little chance of actually running into deadlocks without huge load and fancy logic within the views, if you do hit one, it will be terribly hard to debug and fix.
var result = htmlHelper.RenderPartialAsync(partialViewName, htmlHelper.ViewData.Model, viewData: null);
result.GetAwaiter().GetResult();
With regard to "await Html.PartialAsync", this link may help you: http://aspnetwebstack.codeplex.com/workitem/601 (follow the comments too), as to what exactly the problem was before.
I am working on a public-facing website built on MVC 6, and "await Html.PartialAsync" is faster than "Html.Partial", especially when the view contains a lot of components.
Taking the "await" out of Html.PartialAsync obviously does not work: Html.PartialAsync then spits out the type name (i.e., "System.Threading.Tasks.Task`1[Microsoft.AspNet.Mvc.Rendering.HtmlString]") instead of the actual view.
Necromancing.
Option 1:
Your child partial view has some async/await code in its Razor markup. It might deadlock, since the MS code just calls GetAwaiter().GetResult(), and that's a recipe for deadlocks.
Use PartialAsync
Option 2:
Your partial view does not have any awaits, but your model-generation code does (for example, Html.Partial("MyView.cshtml", new MyModel { PropertyX = await XyzXyz() })); then it might also deadlock.
Use PartialAsync
Option 3:
Your partial does not use async/await whatsoever, all the way down.
It's OK to use Html.Partial (sync)
I need to transfer data (objects) between client and server, and Twisted seems like a good way to accomplish this. I've been doing a lot of searching but still haven't found any example that explains the basic principle, so any simple code would help.
Thanks!
EDIT
Both client and server are written in Python.
The data may be large, so I need fast, reliable transmission (I've taken a look at producers; are they a good fit?).
Flask is great, but I am using another framework, so the whole networking thing relies on Twisted.
It's hard to tell if your question is more about JSON, Python, or Twisted, but here's an overview; more can follow once the specifics are known. Perhaps you could add some more info to your question so we can offer more assistance :-)
Re JSON: JSON is just a string with a defined structure. If you are working in Python and have an object to send as JSON, then you need to convert the object to a JSON string using:
import json
json_string = json.dumps(objectName)  # objectName must be JSON-serializable
If your client is JavaScript, then instead of json.dumps you would use JSON.stringify(objectName).
If you intend to use JavaScript for clients, then frameworks like jQuery make it very easy.
Python's json.dumps has a lot of optional arguments, most of which you won't need. You can see the options at https://docs.python.org/2/library/json.html
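For example (a small illustration, not tied to your objects), indent, sort_keys, and default tend to be the useful ones:

import json

data = {"b": 2, "a": 1}
print(json.dumps(data, indent=2, sort_keys=True))  # pretty-printed, keys sorted

# default= tells dumps how to handle objects it can't serialize natively,
# e.g. turning a set into a list:
print(json.dumps({"tags": {"x", "y"}}, default=list))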
Python is Python; I assume you know how to create and populate objects. Will your client be Python, JavaScript, or something else? From a JavaScript client to a Python server you would most likely use Ajax to send requests and get responses.
Twisted allows you to easily create a server that listens on a given port; when data arrives, an event fires that supplies the received data, and you can then do whatever you need to with it. Just be careful about doing blocking things like database inserts, since the server may miss some data or otherwise misbehave if you interrupt its event loop. Twisted can be difficult to learn initially, but it is a very powerful and reliable system that is well proven. One alternative to consider, particularly if your clients are not Python, is Node.js. In my opinion, Node is a little easier to grasp initially, and there are thousands of add-on modules that let you do almost anything you'd want. I use both Twisted and Node for different things.
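As a minimal sketch of that event-driven pattern (the port number and the echo behaviour are arbitrary choices of mine, and real code would need message framing, since TCP delivers a byte stream rather than whole messages):

import json
from twisted.internet import protocol, reactor

class JSONEcho(protocol.Protocol):
    def dataReceived(self, data):
        # Called by the reactor whenever bytes arrive; keep it non-blocking.
        # A real protocol would buffer/frame the stream instead of assuming
        # one complete JSON object per call (see e.g. Twisted's LineReceiver).
        obj = json.loads(data.decode("utf-8"))
        self.transport.write(json.dumps({"received": obj}).encode("utf-8"))

class JSONEchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return JSONEcho()

reactor.listenTCP(8007, JSONEchoFactory())
reactor.run()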
Neither Node.js nor Twisted is something you can use to quickly spin up a server or client without some study and experimentation; using either of them properly and confidently, with all their features and goodness, requires a bit of research and work on your part.
There are excellent frameworks like Flask that can be used to build a server that reacts to a number of different Ajax calls from a client; a single server can respond to several different kinds of requests instead of needing a server for each Ajax type.
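For instance, a single Flask server answering two different Ajax endpoints might look like this (the routes and payloads are made up for illustration):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/users")                   # one kind of Ajax request...
def users():
    return jsonify([{"name": "alice"}, {"name": "bob"}])

@app.route("/echo", methods=["POST"])  # ...and another, served by the same process
def echo():
    return jsonify(request.get_json())

if __name__ == "__main__":
    app.run(port=5000)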
This is a small library that serializes an object with all its children to JSON and also parses it back to a fully working object:
https://github.com/Toubs/PyJSONSerialization/
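That library's internals aside, the idea it implements can be sketched with the standard json module: dump an object graph via each object's __dict__ and rebuild it by hand (the library automates the rebuilding step):

import json

class Child:
    def __init__(self, name):
        self.name = name

class Parent:
    def __init__(self, child):
        self.child = child

p = Parent(Child("x"))
s = json.dumps(p, default=lambda o: o.__dict__)  # '{"child": {"name": "x"}}'

data = json.loads(s)
rebuilt = Parent(Child(**data["child"]))         # manual re-hydration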