Google Realtime API - when to persist changes to database? - google-drive-api

Scenario:
I have multiple browser clients whose internet connections vary from very fast to super slow. Because of that, they might not all see the same state of a document.
I'm using a Google shortcut file, since the document is actually stored in a database.
Saving the document to the database is triggered from the client side.
Question: how do I know which client has the most up-to-date document that should be saved to the database?

You are right that you can't rely on any particular client being the most up to date at a particular time. There is no easy way to determine that, since that can change at any given instant. (Although you can make sure that you don't have any unsaved changes in a particular client by looking at the document save state.)
Rather than trying to do this based on client state, you can use the export capability that is part of the Drive API, which will give you a valid snapshot of the data with a revision number so you can track what version you have.
Note that this is a brand new feature, so it's not yet well documented. The response is a JSON object with the appId, the revision number, and a data field which contains a JSON version of the document. It looks something like this, for a document that has a collaborative list "list" and a collaborative string "text" in the root:
{"appId":"788242802491","revision":17,
"data":{"id":"root","type":"Map",
"value":{
"list":{"id":"gde9s8z5khjarls7o","type":"List","value":[]},
"text":{"id":"gdef98qdhiq679af","type":"EditableString","value":"This is a test 2."}}}}

Related

How to add another property to big JSON object in firebase real time database

I'm looking to add a new property at a higher level (the main level; I forget the terminology) of the JSON tree in my Firebase Realtime Database. I usually make my edits through the console. I've been able to add new properties at deeper levels, but going up the JSON tree the console doesn't let me, saying: Read-only & non-realtime mode activated in the data viewer to improve browser performance
Select a key with fewer records to edit or view in realtime
I thought about exporting, adding, then importing again (I've done this before at lower levels), but this seems a little scary, having to essentially reimport the database just to add a new property. I've read the docs and they suggest using the set method. How is this normally done?
I was able to figure this out.
You can use the browser URL to create a new node in the JSON tree if you cannot do it in the console because of its offline mode.
Just enter the node name into the URL as if you were navigating to an already existing node, and it'll get added.
example: http://www..com/newnode
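
If the console keeps refusing, another option (a sketch, not from the original answer) is the Realtime Database REST API: a PUT to <path>.json writes only at that path, so sibling keys are untouched and nothing has to be exported and re-imported. The database URL and auth token below are placeholders:

import json
import requests

DATABASE_URL = "https://your-project-id.firebaseio.com"  # placeholder
AUTH_TOKEN = "YOUR_DATABASE_SECRET_OR_ID_TOKEN"          # placeholder

def set_top_level_node(key, value):
    # PUT writes only at the given path; the rest of the tree is unaffected.
    url = DATABASE_URL + "/" + key + ".json"
    resp = requests.put(url, params={"auth": AUTH_TOKEN}, data=json.dumps(value))
    resp.raise_for_status()

# Example: create /newnode without touching the rest of the database.
set_top_level_node("newnode", {"created": True})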

Get history of Document modifications/revisions in Azure Cosmos DB

I am looking for sample code to get the history of document modifications, in the form of a list of document versions, in Azure Cosmos DB.
By selecting a document version from the list, that version will be displayed.
I also want to restore a previous document version.
I get the last version using the _ts tag.
However, I want a complete history of document revisions.
1. Maybe using Cosmos DB change feed.
2. Reading the ReadDocumentFeed
I have done some research; however, I did not come across any code for this.
Any pointers in this direction are most welcome.
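
One common pattern (a sketch, not the only option) is to keep the history yourself: every time you write a document, also copy it into a separate history container keyed by the document id plus its _ts, since the standard change feed only exposes the latest image of each item and intermediate versions can be missed. The account details, container names, and id scheme below are assumptions:

from azure.cosmos import CosmosClient

# Placeholders: account URL, key, database and container names are assumptions.
client = CosmosClient("https://your-account.documents.azure.com:443/", "your-key")
database = client.get_database_client("mydb")
docs = database.get_container_client("documents")
history = database.get_container_client("documents_history")

def save_with_history(doc):
    # Write/replace the live document, then append a snapshot to the history container.
    current = docs.upsert_item(doc)
    snapshot = dict(current)
    snapshot["docId"] = current["id"]                            # original id, for querying
    snapshot["id"] = current["id"] + ":" + str(current["_ts"])   # unique per revision
    history.upsert_item(snapshot)
    return current

def list_versions(doc_id):
    # Newest first; pick one and copy it back into "docs" to restore it.
    query = "SELECT c.id, c._ts FROM c WHERE c.docId = @docId ORDER BY c._ts DESC"
    return list(history.query_items(
        query=query,
        parameters=[{"name": "@docId", "value": doc_id}],
        enable_cross_partition_query=True,
    ))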

UI Testing (casperjs) with good known status of data (mysql database)

I'm using CasperJS for automated UI tests. I've done the basic UI testing and validation with some random data, as a kind of POC. I've set up this automation using a bash script which starts the web server, loads MySQL data from an SQL file, starts the CasperJS test cases, stops the web server, and checks the log files.
Now, I want to start testing with a good known state of data stored in MySQL, so that I can test the list data and form data, with detailed field information, against a known database state. How should I know the state of the data in the database at a given moment?
1) Should I use a pre-populated JSON dump file which has the state and details of all the data?
2) Should I use the web service API? (the web service APIs are already used to show/save/delete data from the web page)
Let's take an example. I have 5 users in the Users table. When I open the home page, it shows the 5 users with some rough details. When I click on any record in the list of users, it shows a form with detailed information about that user. The web page asks the web application for the details of that user, using the user_id, to populate the form. Now I want to check that all the data in that form is populated correctly. So at the next step, what would be the preferred way: should I read the content from a JSON dump file, or should I use the web service API (like the web page does)?
Searching for this problem online, I also found the MySQL HTTP plugin. Should I consider this as well, and how safe is it to use? (I know from the docs that this plugin is not for production; it is for testing purposes only. :) )
For the main question: in cases like this I would change the database connection string to point to your testing database (which is a clone).
In your case, use your bash script to change the connection string (a file copy?) automatically before you run the tests, and change it back when completed.
Your testing database is a direct clone of your dev/live database, but with ONLY the test data you want. The downside is that you need to keep the schema in sync with DEV/LIVE.
Another point to take into consideration is whether your tests change state (via POSTs). If so, your testing data might get out of sync. One way to get around this is to drop foreign keys, truncate the data, and load in a dump file.
HTH
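
A minimal sketch of that reset step (database name, credentials, fixture path, and the test command are placeholders): reload the known fixture into the dedicated test database before every CasperJS run, then assert against that fixture rather than against whatever happens to be in the live data.

import subprocess

DB_NAME = "app_test"              # clone of the dev/live schema, test data only
FIXTURE = "fixtures/users_5.sql"  # known state, e.g. exactly 5 users

def reset_test_database():
    # Recreate the schema and load the known fixture before each run.
    subprocess.run(
        ["mysql", "-u", "test_user", "-ptest_password", "-e",
         "DROP DATABASE IF EXISTS " + DB_NAME + "; CREATE DATABASE " + DB_NAME + ";"],
        check=True,
    )
    with open(FIXTURE, "rb") as dump:
        subprocess.run(
            ["mysql", "-u", "test_user", "-ptest_password", DB_NAME],
            stdin=dump, check=True,
        )

def run_ui_tests():
    reset_test_database()
    subprocess.run(["casperjs", "test", "tests/"], check=True)

if __name__ == "__main__":
    run_ui_tests()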

as3 store an external variable and keep it

I don't know if this is possible, but I have an SWF file which I want to get info from an XML file only one time and then store (keep) it there until newer data is pushed into it.
I know it sounds stupid, but maybe there is a solution I don't know of...
1) I don't want to load the XML every time because of the amount of traffic we have (a lot... it will cost a lot to refresh it every time from Amazon S3),
but how do we get newer data without checking the external XML file?
2) If there were a way to broadcast (to "ping") to the SWF that an update of the XML is ready to load...
If anything, I believe there should be an AS3 script for that.
Thanks!
but how do we get newer data without checking the external XML file?
You could implement data file versioning. For example: urlToXmlData + "?version=" + myCurrentVersion. As soon as you ship a new version of the application with extended data, it will work. It's a global solution, so every new version of your product will work with the latest data.
As for the ping, you could create an update strategy, for example: a link is valid for 1 day. The idea is the same: concatenate something onto the link so the browser will evaluate it as new: urlToXmlData + "?stamp=" + timestampForToday. Tomorrow there will be another timestamp, and the browser will download the updated version.
Use a second XML file which contains the version of the first XML.
I think you're missing the real issue.
You can't update a Flash project without using some external update platform.
You cannot save all the new XML data to a shared object, because of the per-object size limitation, so Flash is going to lose all the loaded data when the application is closed.
You cannot make a "permanent update" in Flash for "big files"...
BUT
You can update your main SWF file with "EMBEDDED XML" files. Only that keeps your data available when the application is closed.
BUT that's not enough alone; you also need a "pre-check for the version of the SWF" file, and that's what you can't do alone. You need a web platform offering that.
Mochi's "Live Updates" offered that, but I couldn't make it work. And Mochi is shut down now... so forget about it...
I understand that your XML is BIG and you want to save traffic.
But how about small calls?
You can have a call just to get the MD5 of your xml which is stored on the server.
Something like: myserver.com/getMD5
Once the server returns an MD5 different from the one you have already stored, you reload the XML and save the new MD5.
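
A minimal sketch of that check, in Python rather than AS3 (the /getMD5 endpoint and the XML URL are placeholders): ask for the hash first, and only download the large XML when it has changed.

import hashlib
import requests

XML_URL = "https://myserver.com/data.xml"  # placeholder
MD5_URL = "https://myserver.com/getMD5"    # placeholder

def fetch_xml_if_changed(stored_md5):
    # Small call first: compare hashes before paying for the big download.
    server_md5 = requests.get(MD5_URL).text.strip()
    if server_md5 == stored_md5:
        return stored_md5, None            # nothing new, keep the cached XML
    xml_bytes = requests.get(XML_URL).content
    # Sanity check that the download matches the advertised hash.
    if hashlib.md5(xml_bytes).hexdigest() != server_md5:
        raise ValueError("downloaded XML does not match the server MD5")
    return server_md5, xml_bytes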

What is the correct "Document Name" of an Access ADP/MDB to use in the GetObject("Document Name") call?

According to http://support.microsoft.com/kb/288902/en-us
You can attach to a specific instance if you know the name of an open document in that instance. For example, if an instance of Excel is running with an open workbook named Book2, the following code attaches successfully to that instance even if it is not the earliest instance that was launched:
Set xlApp = GetObject("Book2").Application
The example works for Excel, mainly because the "Document Name" is nearly the same as the filename. I need to get this to work for Access.
I have users running multiple instances of different Access Applications (.ADP) and I need to get one with a specific name. I do NOT know the complete filepath, otherwise I could do a simple
Set app = GetObject("c:\my\app\myapp.adp", "Access.Application")
Currently, I go with calling
Set app = GetObject(, "Access.Application")
and check the returned Application.Name. If it's good, I use it to call some functions on it; if not, it fails. I have heard about the Running Object Table, but since I have to get the object inside a VBScript, that's a bit too many API calls.
Bottom Line:
What is the correct "Document Name" of an Access ADP to use in the GetObject("Document Name") call and what type of object does it return?
Coming back to this question because #phoog posted an answer, it occurs to me that since Access can have only one database open at a time, you'd need to look at each instance of Access and check its CurrentDB.Name. Now, with an ADP, I don't know what you'd check instead, but perhaps it's CurrentProject.Path.
I don't believe any of that actually addresses the original question, which seems to want to find the instance with just the filename, but I don't think that even if that worked it would be a good idea. You could easily have independent Access apps that return the same application name and filename but are stored in different locations in the file system.
You probably want to read this: http://support.microsoft.com/kb/147816
Unfortunately, it looks like "Document Name" has to be the mdb file, so if you don't know the file path, you're not going to be able to use "Document Name".
In addition to the fact that Excel can have multiple documents per instance, Excel also prevents you from opening two documents with the same name, even if they are in different folders and you are opening them in different instances of Excel.
On the other hand (to address David-W-Fenton's comment), Access does allow you to open multiple instances of the same database (not to mention different databases with the same name), as long as you don't open any instance in exclusive mode.
This is further evidence that the way Excel registers its documents in the ROT is very different from the way Access does so.