I am trying to make an app that reads data from a device over BTLE and displays this data in a streaming graph. I want to do this with Polymer, and it would be nice to shield the complexity of BTLE behind a component:
an HTML tag for BTLE that displays a BTLE icon, where a double click connects to the device. Once it is connected, I want the (notify) data to feed my graph. When I look at examples of Polymer data binding, it only ever binds to very slow data sources like an input field. So my question is: can this be done (2 KB/sec) with Polymer, or is it too slow and should I keep the data out of Polymer?
Performance of data binding has to do with how many bindings there are and the expense of whatever side effects you trigger, not with data size or transfer speeds.
Generally, a rate measured in kilobytes per second is well below the kind of throughput we worry about in Polymer.
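To make that concrete, here is a minimal sketch using Polymer 1.x syntax. The element name, property name, and handler are my assumptions, not an existing library: a hypothetical <btle-source> element exposes a notifying value property, and each BTLE notification simply writes to that property.

// Hypothetical <btle-source> element (all names are assumptions).
// Each BTLE notification writes to a notifying property; Polymer's data
// binding then propagates the new value to whatever is bound to it.
Polymer({
  is: 'btle-source',

  properties: {
    // Latest sample from the device; notify: true makes it usable
    // with binding syntax like value="{{sample}}" on the element.
    value: { type: Number, notify: true, value: 0 }
  },

  // Assumed to be registered as a Web Bluetooth
  // 'characteristicvaluechanged' listener once the device is connected.
  _onCharacteristicChanged: function (event) {
    // event.target.value is a DataView over the characteristic's bytes.
    this.value = event.target.value.getUint16(0, /* littleEndian */ true);
  }
});

In markup the binding would look something like <btle-source value="{{sample}}"></btle-source> feeding <my-graph point="[[sample]]"></my-graph>, where <my-graph> stands for your own graphing element. A couple of KB per second is at most on the order of a thousand small property updates per second; per the point above, the cost will be dominated by the side effect (redrawing the graph), not by the binding itself.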
I am creating a webpage in ReactJS for a post feed (with text, images, and videos), just like Reddit, with infinite scrolling. I have created a single post component which is given the required data. I am fetching the posts from MySQL with axios, and I have implemented a Redux store in my project.
I have also added post voting. Currently, I am storing all the posts from the DB in the Redux store. If the user upvotes or downvotes, that change is made in the Redux store as well as in the database, and the page re-renders the element without trouble.
Is it feasible to use the Redux store for this, as the data will grow soon, maybe into the millions and beyond?
I previously used the useState hook to store all the data, but with that I had an issue with dynamic re-rendering, as I had to set state every time a user voted.
If anyone has a more efficient way, please help out.
It seems that this question goes far beyond just one topic, so let's break it down into the main pieces:
Client state. You say that you are currently using Redux to store posts and update the number of upvotes as it changes. The thing is that this data is not actually state in your case (or at least most of it isn't). It is a common misconception to treat whatever data comes from the API as state; in most cases it's not state, it's a cache, and you need a tool that makes working with a cache easier. I would suggest trying something like react-query or swr. That way you avoid a lot of boilerplate code and hand server-data cache management off to a library.
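A minimal sketch of what that looks like with react-query (v3-style API). The endpoint, query key, and Post component name are assumptions for illustration; Post stands for your existing single post component.

// Fetch one page of posts through react-query; the hook caches the result,
// deduplicates requests, and handles refetching for you.
import axios from "axios";
import { useQuery } from "react-query";

// Hypothetical API endpoint returning one page of posts.
const fetchPosts = (page) =>
  axios.get(`/api/posts?page=${page}`).then((res) => res.data);

function PostPage({ page }) {
  const { data: posts, isLoading, error } = useQuery(
    ["posts", page],          // cache key: refetches when `page` changes
    () => fetchPosts(page)
  );

  if (isLoading) return <p>Loading…</p>;
  if (error) return <p>Something went wrong</p>;
  return posts.map((post) => <Post key={post.id} {...post} />);
}

Votes would then typically go through react-query's useMutation together with an optimistic update or invalidation of the ["posts", page] cache entry, instead of living in Redux.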
Infinite scrolling. There are a few things to consider here. First, you need to figure out how you are going to detect when to preload more posts. You can do it with an IntersectionObserver, or you can use some fancy library from NPM that does it for you. Second, if you aim for millions of records, you need to think about virtualization. In a nutshell, it removes elements that are outside the viewport from the DOM, so the browser doesn't eat up all its memory and die after some time of doomscrolling (which would arguably be a nice feature). This article is a good starting point: https://levelup.gitconnected.com/how-to-render-your-lists-faster-with-react-virtualization-5e327588c910.
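For the detection part, here is a rough sketch with IntersectionObserver; the hook name and the idea of a sentinel element are mine, not from any particular library.

// Observe a sentinel <div> at the bottom of the feed and load the next
// page when it scrolls into the viewport.
import { useEffect, useRef } from "react";

function useLoadMore(loadNextPage) {
  const sentinelRef = useRef(null);

  useEffect(() => {
    const observer = new IntersectionObserver((entries) => {
      // entries[0] is the sentinel; fire when it becomes at least partly visible.
      if (entries[0].isIntersecting) loadNextPage();
    });
    if (sentinelRef.current) observer.observe(sentinelRef.current);
    return () => observer.disconnect();
  }, [loadNextPage]);

  return sentinelRef; // attach with <div ref={sentinelRef} /> below the posts
}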
Data source. You say that you are storing all posts in a database but don't mention any API layer. If you are shooting for millions of records and this is not just a project for practicing your skills, I would suggest putting an API between the client app and the database. There are some good existing questions explaining why it is not the best idea to connect to the database directly from the client: one, two.
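As an illustration of such a layer, here is a sketch of a paginated posts endpoint. I'm assuming Node with Express and the mysql2 driver, and made-up table and column names, purely for the example; any server stack works the same way.

// Minimal API layer between the React client and MySQL: the client never
// talks to the database directly, it only calls this endpoint.
const express = require("express");
const mysql = require("mysql2/promise");

const app = express();
const pool = mysql.createPool({ host: "localhost", user: "app", database: "feed" });

// GET /api/posts?page=3  ->  one page of posts, newest first.
app.get("/api/posts", async (req, res) => {
  const page = Math.max(parseInt(req.query.page, 10) || 0, 0);
  const pageSize = 20;
  const [rows] = await pool.query(
    "SELECT id, title, body, votes FROM posts ORDER BY created_at DESC LIMIT ? OFFSET ?",
    [pageSize, page * pageSize]
  );
  res.json(rows);
});

app.listen(3000);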
I'm using multiple atoms within a map called app-state, and it's working quite well architecturally so far. The state distributed across those atoms is normalised, reflecting how it is stored in Datomic, and of course what the client is initialised with is a specific subset of what's in Datomic. This is me preparing the way to try out DataScript (which is what gave me the aha moment of why client state is so much better fully normalised, even when not using DataScript).
I have a question at this point. We all know that some state in Reagent is (typically) a reflection of what's in the server's database, but there's also state in Reagent that concerns solely the current condition of the UI. That state vanishes when the page is reloaded, and there's (typically) no need to store it on the server.
So, I'm looking at my list of atoms and realising that some atoms hold database-record-like maps, i.e. they contain exact reflections of Datomic entities (which arrive via Transit), which is great.
But now I notice I also want some UI state per Datomic entity.
So the question arises whether to add some keys to what came from Datomic, holding the UI state that is irrelevant to Datomic but that the client needs (i.e. dump it into the same nested map). That is entirely possible, but it seems wrong to me, and it suggests another idea (my current one): how about a parallel atom per entity, like #<entity-name>-ui, containing a map (or even a vector of maps, if there are multiple entities) with a set of keys for UI state?
That seems an improvement on what I have ended up with by default so far, which is a separate atom for every piece of UI state (I've avoided component-local state up to now). Currently the UI only holds UI state for one record at a time, so these UI atoms need only be concerned with a single current entity.
But if, say, I made a parallel atom (to avoid mixing ephemeral UI and server state), then UI state could perhaps manageably extend deeper. We could hold, say, UI state per entity, so switching current-entity back and forth would remember the UI state.
Since this is Stack Overflow, I have to ask a specific question rather than just open a discussion, so: given what I've described, what are some sensible architectural choices in this case for storing state in Reagent?
If you are already storing your app state in several component-independent Reagent atoms, you can check out https://github.com/day8/re-frame, a widely adopted Reagent-powered framework for exactly your case. Essentially it stores all the application state in a single Reagent atom, but it has a well-developed infrastructure to support coordinated storage and updates. It has brilliant documentation with a great high-level explanation of the idea.
Regarding your initial question about server/UI state separation, I think you should definitely go this way. It gives you a better separation of concerns and an easier way to update server and UI data independently. It is very easy to achieve with re-frame by storing both parts of the state under separate top-level keys in the re-frame application db, e.g.
{:server {:entity-name ...}
:ui {:entity-name ...}}
and then creating suitable subscriptions (see the re-frame docs) to retrieve each part.
I have a hard time understanding how to use vis.js Network with a large amount of dynamically generated data. From what I read in the documentation, there are only two easy ways to import data: from Gephi or in the DOT language, right? Isn't that a bit restrictive?
I have no knowledge of Gephi or the DOT language, so I decided to use my MySQL database, which I am used to working with.
So I query my data with PHP and generate JavaScript to build the nodes and edges for the network.
But so far I only have about 200 nodes and edges (which is about 1/5 of the data I'll have in the end) and it's already very slow to load. It seems like it takes a lot of resources to display the network (my MacBook Pro gets really loud any time I open the network page), when vis.js is supposed to be quick and lightweight.
Is that because all the nodes and edges are "written" in the code of the page? Or is it the fact that I use PHP to query the MySQL data?
I'm not against working with a JSON file or the DOT language, I just have no idea how to do that... but if it can get me better performance, I'd like to learn how. Can anyone explain in detail how it all works? And with either of these methods, can I get different sizes and colors for the nodes and edges according to the data I need to show (right now I do that in PHP after querying the data from the database)?
The format required by Vis Network can be serialized and deserialized using const object = JSON.parse(string); and const string = JSON.stringify(object);. There's no need to use Gephi or DOT simply to store data in the database.
Nodes have a size property to change their size, and both nodes and edges have a color property to change their color. Edges can also inherit their color from the connected nodes. For more details see the docs for nodes at https://visjs.github.io/vis-network/docs/network/nodes.html and for edges at https://visjs.github.io/vis-network/docs/network/edges.html.
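For example, a sketch of feeding the network from JSON produced by your PHP script (the endpoint URL and the field values are assumptions on my part; only the node/edge option names come from the docs):

// Fetch {"nodes": [...], "edges": [...]} from the server and build the network.
const container = document.getElementById("mynetwork");

fetch("/graph.php")                       // hypothetical PHP endpoint returning JSON
  .then((response) => response.json())    // equivalent to JSON.parse on the body
  .then(({ nodes, edges }) => {
    const data = {
      // e.g. {id: 1, label: "A", shape: "dot", size: 25, color: "#97c2fc"}
      // (size applies to shapes like "dot" that don't scale with the label)
      nodes: new vis.DataSet(nodes),
      // e.g. {from: 1, to: 2, color: {inherit: "both"}}
      edges: new vis.DataSet(edges),
    };
    new vis.Network(container, data, {});
  });

This way the page source stays small and the sizes and colors are just values in the JSON, which your PHP can compute from the database exactly as it does now.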
Regarding performance, there is not much I can tell you without some sample code and data to play with. I tried putting more than 200 nodes into https://thomaash.github.io/me/#/canvas, which was built with Vis Network. As I expected, it loads instantly and works just fine, but I have no idea how fast or slow a MacBook Pro is compared to my machine.
I am writing a web application for mapping real-time GPS coordinates on Google Maps coming from a GPS device, for fleet management.
Since the flow of data from the GPS device to the web application is very fast, the load on the database becomes very heavy, and because the database is also queried every 5 seconds (via AJAX from the web browser running the website) it becomes heavier still.
Keeping the updates real-time is becoming very difficult; a lag of 30 to 60 seconds builds up between the actual update and its visibility on the website.
I am using Django + Apache + MySQL on CentOS 6.4 64 bit.
Any advice on what direction I should move in to make the processing/visibility of the data more real-time would be helpful.
I would suggest using a NoSQL database like MongoDB. It would really help you achieve real-time application performance.
Have a look at Django-With-MongoDB.
And if possible, try replacing the default Python interpreter with PyPy.
I think these two are enough to give you the best performance. :)
Understanding Django-using-PyPy
Also, for the front-end you should use KnockoutJS or AngularJS.
Some tips:
Avoid XML, especially a DOM-based XML parser (this blows up the data by a factor of 100). A lat/long coordinate without time needs 8 bytes, not more.
Favor a binary representation of the coordinates and parse it by hand, instead of using slow generated parsing code that probably uses reflection (see the decoding sketch after these tips).
Try to minimize the use of databases, especially relational ones.
Raise the interval at which clients send updates: e.g. every 20 minutes instead of every 5.
If you use a DB, minimize the transactions; try to do all processing in one transaction.
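To illustrate the binary-representation point, here is a sketch of the browser side; the packing scheme (two 32-bit floats, 8 bytes total) and the transport are assumptions, not part of the original advice.

// Decode one packed coordinate: 4-byte float latitude + 4-byte float longitude.
// `buffer` is an ArrayBuffer, e.g. received from a WebSocket with
// socket.binaryType = "arraybuffer".
function decodeCoordinate(buffer) {
  const view = new DataView(buffer);
  return {
    lat: view.getFloat32(0, /* littleEndian */ true),
    lng: view.getFloat32(4, /* littleEndian */ true),
  };
}

// Example: pack and unpack 51.5074 N, 0.1278 W.
const buf = new ArrayBuffer(8);
const out = new DataView(buf);
out.setFloat32(0, 51.5074, true);
out.setFloat32(4, -0.1278, true);
console.log(decodeCoordinate(buf)); // { lat: 51.507..., lng: -0.1277... }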
Is it possible to speed up mapping with OSM by removing features and details (minor roads, bus stops, etc.), or is that somewhat irrelevant to the tile download and rendering process?
In other words, are the SVG details added/removed on the client side or the server side?
Further, how are those 'church: invisible' type instructions set?
TIA
Perhaps this is a general mapping question, given that an engine (possibly) operates in much the same way, at least when it comes to tiles and SVG details. I simply don't know that process.
It is not quite clear which specific process you are talking about.
The tiles you see on http://openstreetmap.org are PNG, not SVG. The share menu allows you to export the current view to SVG and some other formats, and the resulting SVG is created server-side. Of course, using less detail would speed up the SVG creation, but the process involves several other operations, like querying the database, which won't benefit much from reduced detail (ignoring the time to transfer the data between the database and the SVG creation process).
The PNG rendering will also depend somewhat on the amount of detail, but likewise there are a lot more operations necessary for rendering a single tile. I don't expect a large speedup from removing a few features.
Also note that there are several different renderers available and each behaves differently. There is also the possibility of creating vector tiles, which moves some of the tile-creation load from the server side to the client side. There the amount of detail will slightly influence the server side and significantly more the client side, especially on low-end systems.
Still, I have no idea what these things have to do with mapping, i.e. the process of editing maps and adding/updating information.