Saving a local copy of a text area entry - MySQL

Previously, when covering events, I've typed my reports into an HTML file and FTP'd it to the server. If I lose connectivity or the laptop crashes, I've got the last saved copy.
We just switched our event coverage area to a database holding Textile-formatted entries done via a web form text area.
Is it at all possible to create a mechanism whereby a local copy of the textarea is saved, so I can keep working during a connectivity failure? I'm using Windows on the laptop. I guess the low-tech way would be to type in a word processor and just keep pasting into the web form.

You could take a look at the local storage features that browsers have to offer.
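For example, a small script can mirror the textarea into localStorage as you type. This is a minimal sketch; the element id "report" and the storage key are placeholders:

// Restore any previous draft on page load.
const textarea = document.getElementById('report') as HTMLTextAreaElement;
const KEY = 'report-draft';
const saved = localStorage.getItem(KEY);
if (saved !== null) {
  textarea.value = saved;
}

// Save the current contents on every change, plus a periodic safety net.
textarea.addEventListener('input', () => localStorage.setItem(KEY, textarea.value));
setInterval(() => localStorage.setItem(KEY, textarea.value), 5000);

If the connection (or the browser) goes down, the last saved draft comes back from the same key on the next page load.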

Local storage is a nice idea; cookies might also be a solution (one that even works in older browsers) if the texts are not too long.
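For small texts, a cookie-based draft could look like this (a sketch; the cookie name and lifetime are arbitrary choices, and cookies cap out around 4 KB):

// Save the draft in a cookie that survives for a week.
function saveDraftCookie(text: string): void {
  const expires = new Date(Date.now() + 7 * 24 * 60 * 60 * 1000).toUTCString();
  document.cookie = 'draft=' + encodeURIComponent(text) + '; expires=' + expires + '; path=/';
}

// Read the draft back out of document.cookie, if present.
function loadDraftCookie(): string | null {
  const match = document.cookie.match(/(?:^|;\s*)draft=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}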
However, I'd use server-side backups (created automatically via AJAX requests) and only fall back to local backups if there's no connection to the server. You can never be sure that local backups will persist if the browser or even the whole system crashes.
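A minimal sketch of that fallback logic, using the modern fetch API in place of a classic AJAX call (the /backup endpoint and the storage key are hypothetical):

// Try a server-side backup first; fall back to localStorage when the request fails.
async function backup(text: string): Promise<void> {
  try {
    const res = await fetch('/backup', { method: 'POST', body: text });
    if (!res.ok) throw new Error(`server returned ${res.status}`);
  } catch {
    // No connection (or the server rejected the request): keep a local copy.
    localStorage.setItem('report-backup', text);
  }
}

const textarea = document.getElementById('report') as HTMLTextAreaElement;
setInterval(() => backup(textarea.value), 30000); // back up every 30 seconds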

Related

Will an Access DB be faster if I split it - leaving the tables on the server and the rest local?

I currently have an Access DB stored on a server, so every time a user needs to use it, they go to the network location and open the Access file.
Queries, forms, exports and everything else are really slow. If I save the Access file on my local drive, its speed improves dramatically.
Will the overall experience of my users improve if I split the database, leaving the tables in the network location while everything else is stored locally on each user's hard drive?
UPDATE: Different users are not using the DB at the same time.
If you have multiple users opening the same shared file over a network, you are asking for corruption and many other issues.
In a multi-user setup you should always split your database, and each user should have their own locally saved copy.
This will probably also help with your speed issues, as currently you are dragging all the form information and data across your network. Properly designed forms and queries will only pull in the minimum data required for the task, also reducing network traffic and load times.
As a general rule, no (assuming that in both cases the accDB file is in a server folder).
Being split or not does not really change the "data" that flows down the network pipe.
However, the forms, VBA code etc., when installed on each workstation, do NOT have to traverse the network pipe, so in this fashion you save some network bandwidth and gain some speed. In most cases, though, the form load time is VERY small – pulling the data represents the bottleneck and the slowdown.
So a split system with a front end installed on each workstation will NOT change the data speed or the amount of data pulled over the network. However, since the application part is loaded locally, those parts will load faster because they never traverse the network pipe.
When you use Excel, you install that application on each workstation (you might share some data/document files on the server).
And when you use Word, you again install that application on each workstation (you might share some data/document files on the server).
So for the last 30 years of the computer industry, you have in general installed the application part on EACH desktop. Now that you are writing and building an application, once again you install that application on each computer like everything else. So you want to keep in mind the difference between some data or a data file and the application code you create and develop to run on each workstation.
And the above increases reliability by a large amount, since if one user has a code or form freeze-up, all the other users can continue to work. If all users share the same application code, then one mess-up can cause everyone to stop working.
So from a data point of view, the answer is no. There is some "overhead" with linked tables as opposed to a non-split file, so sometimes you see some slowdown compared to non-split. However, from a maintenance and reliability point of view, splitting is highly recommended. And for form- and code-heavy applications, you do see some speed-up, since forms + code are loaded locally as opposed to being pulled across the network pipe.

HTML files on a remote server don't update after modification

I'm working on an assignment for school; my files for the website are stored on a remote server, which I access via VPN and a remote server connection on macOS.
When I modify my HTML files, the changes aren't reflected immediately; sometimes they only show up after a day or two (in fact quite randomly – it can be an afternoon, or an hour).
It's a bit problematic when you're trying to have long coding sessions. Sometimes one page updates but not the others.
I'm not having any problems with my PHP files; they update immediately.
I've tried several things without any changes:
Emptying the cache
Trying on different web browsers
Disconnecting from the server and VPN
Waiting :)
System info:
macOS 10.12.2
Safari 10.0.2
Thanks for the help. I personally think it's a problem with the server, which I won't be able to change; hopefully it's something I can fix on my end.

Data stored in the browser DB using PouchDB is lost after the browser history is cleared

Background:
My HTML5 offline application stores a lot of data in the local browser database. I use PouchDB 3.3.1 to communicate with the in-browser database for storing data within the browser. Everything works well in the normal scenario: I am able to store data and retrieve it when required.
Issue:
When the browser history is cleared manually by the user, all the data stored in the browser DB is cleared too. This issue happens in IE 11 and Chrome 36 (these are the browsers that I have on my machine).
Is there a way I could retain the data stored within the browser DB when the browser history is cleared?
Nope, users are always able to clear the IndexedDB/WebSQL/LocalStorage/AppCache data. In different browsers it's exposed in different ways (e.g. in Firefox it's hidden under Advanced -> Network -> Offline Web Content and User Data), but the capability is always there.
In general you shouldn't expect to have any control over when users decide to clear their browser data, so the best policy with PouchDB is to always sync to a remote database so that the user's data isn't lost.
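For example, with PouchDB's replication API (a sketch; the database name and remote URL are placeholders):

import PouchDB from 'pouchdb';

const local = new PouchDB('reports');
const remote = new PouchDB('https://example.com/db/reports');

// live + retry keeps syncing in the background and reconnects after outages,
// so data cleared from the browser can be pulled back down from the server.
local.sync(remote, { live: true, retry: true })
  .on('error', (err) => console.error('sync error', err));

If the user clears their browser data, the local database can then be repopulated from the remote copy the next time the application starts.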
No. (thirty character minimum answer length).
I'll re-post this from my presentation about client side storage:
Data stored on the client side can be lost at any time!

WebSockets on PHP shared hosting

I've been doing some research on the best way to show a "users online" counter that updates to the second, while trying to avoid continuous AJAX polling.
Obviously WebSockets seem to be the best option. Since this is an intranet, I will make it a requirement to use Chrome or Safari, so there shouldn't be compatibility issues.
I've been reading some articles about WebSockets, since I'm new to them, and I think I pretty much understand how they work.
What I'm not so sure about is how to implement them with PHP. Node.js seems the natural choice for this because of its "always running" nature, but that's not an option.
What I'm most confused about is the fact that PHP runs and, when it's done, it ends. If PHP ended, wouldn't the socket connection be lost? Or, when the PHP script re-runs, would it look up the user by IP? (I don't see that as likely.)
Then I found this library
http://code.google.com/p/phpwebsocket/
but it seems to be a little old (it mentions that only Chrome nightly is compatible with WebSockets).
At one point it says "From the command line, run the server.php program to listen for socket connections," which means I need SSH, something many shared hosting plans don't have.
My other doubt is about this line in the source of that library:
set_time_limit(0);
Does that mean the PHP file will run continuously? Is that allowed on shared hosting? From what I know, all hosts kill PHP after a timeout of one or two minutes.
I have a MySQL table of online users, and I want to use PHP to broadcast the number of logged-in users to those online users via WebSocket. Can someone please help me, or point me somewhere with better information on how this could be achieved?
Thanks
WebSockets would require a lot of things even on dedicated hosting, let alone shared hosting.
For your requirement, Server-Sent Events (SSE) are the correct choice, since only the server will be pushing data to the client.
SSE simply calls a server script, very much like AJAX, but the client side receives the data and can process it part by part as it comes in. DOM events are generated whenever some data arrives.
But IE does not support SSE, even in version 10, so for IE you have to use some fallback technique like the "forever iframe".
As far as hosting is concerned, ordinary shared hosts (at least the ones that are not very cheap) will allow PHP scripts to run for a long time, as long as they are not seen as a problem.
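On the client side, the built-in EventSource object is all you need. This is a sketch; the endpoint path is a placeholder, and the PHP script behind it would print "data: <count>" lines with a text/event-stream content type:

// EventSource reconnects automatically if the connection drops.
const source = new EventSource('/online-count.php');

source.onmessage = (event: MessageEvent) => {
  // Each message carries the current "users online" count from the server.
  const counter = document.getElementById('online-counter');
  if (counter) counter.textContent = event.data;
};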

Access 2007 and Terminals

I work for a small company and they asked me to build a simple Access database. They only have terminals in the office that I work in (Ottawa), while the server is in Toronto (Windows Server 2003). When I load Access 2007, the whole program is extremely slow compared to the normal speed of the terminal. Only when I am in any form of Design View does my terminal speed up. My question is: is there a way to increase the "speed" of Access while I'm trying to build the database, and secondly, will this affect the end users once the database is built? (Everyone uses terminals.)
Thanks in advance.
The use of the word terminal can mean many things here, but it does sound like you have a decent setup that should be able to work with good performance.
Also, the fact that the application seems to speed up when you are in design mode suggests that the use of what is called a persistent connection may very well solve your problem.
Given that you're using some type of remote desktop technology here, network speed should not really come into play and slow down the operation of this application by any noticeable amount.
First of all, if there are multiple users using this application, as a general rule you should split the database into two parts: a front-end part and a so-called back-end part. Because you're using a terminal technology, the front ends and back end will remain on the server, but each individual user logging into the system could have their OWN copy of the front end.
The next thing to do is to check what is called the persistent connection. In fact, Access is quite sensitive to local network printers. In your case, when a user logs into this terminal system, often a local printer is "created" that is part of your local terminal, but you're still running Access on the server, and Access will attempt to "talk" to that local network printer. This forces additional communication between Access on the server and your default printer, which is local.
I would try setting a default printer that is NOT local to your workstation and see if that helps. There's also a great list of other things to check in terms of performance slowdowns, and a great FAQ you want to look at is here:
http://www.granite.ab.ca/access/performancefaq.htm
In the above the persistent connection idea is also suggested.