I have a dashboard application with a PHP backend and a JavaScript frontend. Data is read from multiple sources, and I have access to the databases of all of them.
While designing the application, is it a good idea to store remote data locally instead of hitting the remote database every time the application gets a request?
Store locally? The reasoning: the data is not live. I can write a cron job to update the data every 5 minutes in the background, and the application will always read from the local DB, giving faster load times.
Read from remote every time? Since I have direct database access to all these remote DBs, I don't notice any performance gain from storing data locally over fetching from remote every time.
Which approach scales better?
What you're describing is called "caching." It's a common optimization.
Fetching data remotely is much more expensive than getting it out of a local cache.
You should learn the Latency Numbers Every Programmer Should Know.
The tricky part of caching is knowing when you need to discard the local cached copy of data and re-fetch it from the remote database. This is a hard problem with no single answer.
There's an old joke attributed to Phil Karlton:
“There are only two hard things in Computer Science: cache invalidation and naming things.”
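For the 5-minute cron scenario in the question, the simplest invalidation policy is time-based expiry (TTL): treat the local copy as stale after a fixed interval and re-fetch it. A minimal sketch in JavaScript, assuming a fetchFromRemote() function that queries the remote database (all names here are illustrative, not from the question's code):

const TTL_MS = 5 * 60 * 1000;  // match the 5-minute refresh interval
const cache = new Map();       // key -> { value, fetchedAt }

async function getData(key, fetchFromRemote) {
  const entry = cache.get(key);
  if (entry && Date.now() - entry.fetchedAt < TTL_MS) {
    return entry.value;                        // fresh enough: serve locally
  }
  const value = await fetchFromRemote(key);    // stale or missing: re-fetch
  cache.set(key, { value, fetchedAt: Date.now() });
  return value;
}

This is a lazy variant of the cron idea: instead of refreshing every key on a schedule, each key is re-fetched on first use after it expires, so you never pay to refresh data nobody is reading.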
At work I need to revamp a website that must accept numerous connections at all times. Until now I have been using JSON files to get the data, but now I want to call the DB directly. As I understand it, a cache is the best approach for my site, but concurrent access to the DB will happen often, especially at the start. Any advice on how to handle this situation? I want the site to always serve up-to-date data.
Thanks.
Here are my suggestions:
If you want to use a cache, you have to automate the cache-clearing process so that whenever the particular data you serve is updated, the stale entry is invalidated (see the sketch after this list). In practice this is only feasible if your data is updated infrequently.
If your budget allows, put your DB in a cluster (write to the master, read from the master and slaves).
In the worst case, ensure your DB is properly indexed.
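For the first suggestion, the usual shape is write-through invalidation: every code path that updates a record also drops (or overwrites) the matching cache entry, so readers never see stale data for longer than one request. A rough sketch, where db and cache stand in for whatever database layer and cache client you actually use (none of these names come from a specific library):

async function updateRecord(db, cache, id, fields) {
  await db.update('records', id, fields);   // write to the DB first
  await cache.del('record:' + id);          // then drop the stale copy
}

async function readRecord(db, cache, id) {
  const hit = await cache.get('record:' + id);
  if (hit) return hit;                      // cache hit: no DB round trip
  const row = await db.findById('records', id);
  await cache.set('record:' + id, row);     // repopulate on miss
  return row;
}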
I am trying to use breeze.js in my SPA, but I am having difficulty understanding how it works. I added alerts in the onSuccess function, and from what I can see it fetches all the records from the online database. Is this true? I believed there was some change-tracking mechanism that would identify the records changed online and download only those. I can see that happening in the local cache, but does something similar happen on the server side (remote data) as well? Otherwise this is a big bandwidth overhead, making it unusable on large datasets.
Thanks in advance.
The results of a query are determined entirely by the server's response to the client request. If you query for all customers, that's what you get.
Breeze does not have a mechanism for detecting when and which records have changed in your remote data source. That's something only the server can do. You'll have to come up with a mechanism that fits your business need and is supported by your server technology.
It's a big topic and an interesting one. Too broad for a one-size-fits-all reply here, I'm afraid.
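That said, one common pattern is a delta query: stamp each row on the server with a last-modified timestamp, and have the client ask only for rows changed since its previous sync. Sketched with Breeze's query API; the "Customers" entity, the ModifiedDate property, and the sync bookkeeping functions are assumptions about your model, not built-in Breeze features:

var manager = new breeze.EntityManager('api/myservice'); // hypothetical endpoint
var lastSync = loadLastSyncTime(); // your code, e.g. persisted in localStorage

var query = breeze.EntityQuery
    .from('Customers')
    .where('ModifiedDate', '>', lastSync); // only rows changed since last sync

manager.executeQuery(query).then(function (data) {
    // data.results holds only the changed rows; Breeze merges them
    // into its local cache, leaving previously fetched rows intact.
    saveLastSyncTime(new Date());
});

The server still does the real work: it has to maintain the timestamp column and honor the filter, which is exactly the business-specific mechanism described above.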
So I have a small game in node.js (only the server, of course) with map data and player accounts stored in a MySQL database. Right now I've constructed it in a way that minimizes the number of queries: I load data from the database, keep it in JavaScript objects/arrays (or whatever seems appropriate), and only write to the database when needed.
Now I'm wondering: is this really worth it? In many cases it would be a lot better (the data would be safer and far more up to date) to store hardly any data in the server and just load it from the database when needed (and write it back when it needs to change).
My question is: is it efficient, safe, and advisable to have the server read from and write to the database often, rather than holding data from the database in JavaScript variables on the server?
Additional info:
-The node.js server and my MySQL server are on the same machine, and a query usually takes less than 1 ms, or maybe 3 ms for big queries like loading room data.
-I am using a module simply called mysql.
-If needed I will include extra info, just ask in a comment.
It really depends on your use case. Generally speaking, I would not add another caching layer in node.js; I'd handle that in your DB with a bigger cache and optimized queries.
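If you do go the query-on-demand route, a connection pool keeps the per-query overhead low. A sketch using the mysql module the question mentions; the rooms/players tables and their columns are made up for illustration:

const mysql = require('mysql');

const pool = mysql.createPool({
  host: 'localhost',
  user: 'game',
  password: 'secret',
  database: 'game',
  connectionLimit: 10  // reuse connections instead of caching rows in JS
});

// Read room state on demand instead of holding it in a JS object.
function loadRoom(roomId, callback) {
  pool.query('SELECT * FROM rooms WHERE id = ?', [roomId], function (err, rows) {
    if (err) return callback(err);
    callback(null, rows[0]);
  });
}

// Write changes immediately so the DB is always the source of truth.
function savePlayerPosition(playerId, x, y, callback) {
  pool.query('UPDATE players SET x = ?, y = ? WHERE id = ?',
             [x, y, playerId], callback);
}

With sub-millisecond local queries, this buys you crash safety and always-current data for very little latency; the in-memory approach only starts to win when you read the same data hundreds of times per second.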
As always, it's me and my Access app..
As far as I know, when I execute a SQL statement against my back end (an .accdb file), say
SELECT * FROM tbl WHERE id=1;
It gets filtered on the back end, then just one record is transmitted over the network.
My question is: when I open a form bound to a query (no WHERE clause) using a filter parameter, like
DoCmd.OpenForm "Form",,, strFilter
how many records are transmitted over the network? Do they get filtered like that SQL statement, or are they filtered locally, meaning a big pile of data has to be sent over the network?
I'm concerned about this because I have many subforms bound to queries, which I then open in the main forms with a filter parameter. And of course, the network here is not very good.
EDIT: My app runs at a factory with no local server. All the network/IT infrastructure is at the company's headquarters 300 km away, probably over a WAN.
Short of upgrading to SQL Server or the like, do I have any other options to make it more reliable? I've heard of something called 'Citrix', and I happen to have a 'Citrix Neighborhood Agent Program' in my system tray. Can it host my app to make it faster?
DoCmd.OpenForm "Form",,, strFilter
how many records are transmitted on the network?
As many as match your strFilter condition. So, if WHERE id=1 returns one row in the earlier SELECT query, and strFilter = "id=1", that OpenForm will open the form with that single row as its record source.
The WhereCondition parameter is also available for DoCmd.OpenReport, where it operates the same way as with OpenForm; you may find that useful as well.
Edit: You should have an index to support the WHERE criteria, whether you build it into the query or apply it "ad hoc" with the OpenForm WhereCondition. With an index, the database engine reads the index to find which rows match, then retrieves only those rows. Retrieval is therefore more efficient, and faster, than forcing the engine to read every row to determine which of them match.
When Jet/ACE requests data from a file server, the first thing it needs is the database header information, which contains data structures describing the layout of the data file. This information is requested once per Access session, so it's really only an issue at startup.
When you then request a record, Jet/ACE uses the metadata it has about the file to request the relevant index pages for the table(s) involved, then uses those indexes to determine the minimum number of data pages to request.
With properly structured indexing and filters on primary keys the amount of data retrieved is actually quite minimal.
However, it's still going to be more than will allow proper response times across a WAN. Access was designed for use across a wired LAN, back when the networking standard was 10BaseT (10 Mbps). With anything less than that you'll have problems. WiFi is right out as well, not because of bandwidth but because of the unreliability of the connections.
When you need to support users remotely, the easiest solution is to host the Access application on a Windows Terminal Server. WTS is built on technology licensed from Citrix, so you'll often see the whole concept described as "Citrix," but a default WTS setup is quite different from a Citrix installation. You have to pay extra for Citrix, and it gives you a lot of additional features.
I've used WTS without Citrix in many environments and frankly can't see what the justification would be for Citrix (except when you have to support large numbers of remote users, i.e., in the range of 100 or more). WTS is installed on every Windows Server starting with Windows 2000 and is very easy to set up and configure.
The second easiest solution, in my opinion, is to upsize the back end to a server database and then rewrite for efficiency to ensure you're using the server as much as possible and not pulling too much data across the wire.
A third solution would be SharePoint, but I'm not experienced with it. It is definitely the direction MS is pushing for Access apps in distributed setups, but it's quite complex and has a whole lot of features. I wouldn't recommend plunging into it without a lot of preparation and significant corporate support.
Actually, with Access, there is not really a true back-end as there is with a bona-fide client-server engine like SQL Server or Oracle or Postgres. Access uses a shared-file architecture where the client program itself "owns" chunks of the file on disk, as distinct from a message-passing architecture where the client program sends requests for data to a back-end engine process running on a server where that process "owns" the data. With shared-file, all work occurs on the client, so it is possible for freight-train-loads of data to be brought across the wire if the database file resides on a different machine.
When you ask Access for data, it often reads a lot more from the MDB file on disk, and caches at the local client a lot more, than your statement asked for. Access tries to do this intelligently, anticipating your needs. "Now that I'm here," Access says, "I might as well make the expensive trip to disk worthwhile and grab a sh*tload of data." Don't get me wrong; I'm not an Access basher, and I have been using it for more than 10 years, from back in the days when LAN bandwidth was 10 Mbit/sec. Access is very good for some things. But Access can gobble up bandwidth like you wouldn't believe.
Read up on "keysets" in Access.
P.S. I am not the same Tim as the Tim who left you a comment.
Some useful links:
http://msdn.microsoft.com/en-us/library/dd942824(v=office.12).aspx
http://support.microsoft.com/kb/209126
http://support.microsoft.com/kb/112112
http://support.microsoft.com/kb/128808