How to add another property to a big JSON object in Firebase Realtime Database

I'm looking to add a new property at a higher level (the top level, I forget the terminology) of the JSON tree in my Firebase Realtime Database. I usually make my edits through the console. I've been able to add new properties at deeper levels, but going up the JSON tree the console doesn't let me, saying: Read-only & non-realtime mode activated in the data viewer to improve browser performance
Select a key with fewer records to edit or view in realtime
I thought about exporting, adding, then importing again (I've done this before at lower levels), but this seems a little scary, having to essentially reimport the database just to add a new property. I've read the docs and they suggest using the set method. How is this normally done?

I was able to figure this out.
You can use the browser URL to create a new node in the JSON tree if you cannot do it in the console because of its read-only mode.
Just enter the node name into the URL as if you were navigating to an already existing node and it will get added.
example: http://www..com/newnode
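If you'd rather not rely on the console at all, the set-style write the docs mention maps onto the Realtime Database REST API: append .json to a path under your database URL and issue a PUT to create the node. A minimal sketch, assuming a placeholder database URL and payload (add an auth token if your security rules require one):

// Create a new top-level node via the Firebase REST API.
// 'your-project.firebaseio.com' and the payload are placeholders.
const baseUrl = 'https://your-project.firebaseio.com';

async function createTopLevelNode() {
  const response = await fetch(baseUrl + '/newnode.json', {
    method: 'PUT', // PUT writes (or creates) the value at this exact path
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ createdAt: Date.now() })
  });
  if (!response.ok) {
    throw new Error('Firebase write failed: ' + response.status);
  }
  return response.json(); // echoes the written value
}

createTopLevelNode().then(console.log).catch(console.error);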

Related

Changing a value in a .config file based on a user's selection in an InstallShield 2013 install

Sorry - I'm a total newbie with InstallShield. I've inherited an InstallShield 2013 project that presents the user with a dialog that lets the user select a SQL Server and, based on their selection, sets a value in a config file. That's not working, so I opened the project in IS and looked in the Text File Changes under System Configuration, and there's nothing there that would do this. So how do I figure out where this is happening (or not happening, in my case), and then how do I get it to work? I need to set both the data source and the initial catalog in a file called server.config.
So how do I determine what the user selected and then save that in this file? It looks like I can set up a Text File Change, but how do I access the values selected by the user? And how can I figure out where the "code" is that is supposed to be doing this?
Thanks,
Ben
I would try to track this from the dialog and controls in question, or by following the value through a verbose log. Since you say it doesn't work today, there will probably be an interruption in the flow I describe below, and since you don't know the full state of the installation project, it may be hard to identify. So search from what you know.
Top down: what gets configured
First, find the dialog that you fill out as a user making the selection. Then figure out the property that the particular control is associated with. Now you've got a thread; pull on it.
Search in the direct editor for references to the property. If the property is named MYCONFIG search for just that: MYCONFIG. You'll probably find some sort of use that looks like [MYCONFIG] instead, which is typically a format string specifying to use the value of MYCONFIG. You may also have to search all the files related to your project, as Custom Action implementations can be code stored outside of your InstallShield project.
The use may be in a ControlEvent, CustomAction, or some other table. If it's in a ControlEvent, it may be used to set another property. Ditto if it's in a CustomAction that sets properties (type 51) which may be easier to understand in the Custom Actions and Sequences view. In that case, also search for the property that gets set.
If you find it in a table like ISSearchReplace* or ISXml*, or IniFile, it's probably part of the Text Files Changes, XML File Changes, or INI File Changes, and that view should make it easier to understand.
Maybe that thread dead-ends somewhere. A property gets set, but never referenced. So try to search from the other end.
Bottom up: what gets written
If there are text file changes, xml file changes, ini file changes, or custom actions that reference the file you need updated, see where they get their information. Try to follow it back. If they're well written, you should be able to identify the property (noting that one called CustomActionData comes from a property matching the name of the custom action it's used in), and then trace that further back using the same ideas as above, but in the other direction.
Where's the problem?
If the threads don't connect, that's probably the problem. It's also possible that a custom action lacks permissions but doesn't report a failure, or that the file name or path got misconfigured somewhere along the way. Look for small things like that if things look like they should work but don't.
It turns out that I misunderstood the problem and the project was never set up to change that value, so all I had to do was set up a Text File Change and it works perfectly. Thanks @Michael Urman for the thorough response - I really appreciate it!

AS3: store an external variable and keep it

I don't know if this is possible, but I have a SWF file that I want to get info from an XML file only one time and then store (keep) it there until newer data is pushed into it.
I know it sounds stupid, but maybe there is a solution I don't know of...
1) I don't want to load the XML every time because of the amount of traffic we have (a lot... it will cost a lot to refresh every time from Amazon S3),
but how do we get newer data without checking the external XML file?
2) If only there were a way to broadcast (to "ping") to the SWF that an update of the XML is ready to load....
If anything, I believe there should be an AS3 script for that.
thanks!
but how do we get newer data without checking the external XML file?
You could implement data-file versioning. For example: urlToXmlData + "?version=" + myCurrentVersion. As soon as you ship a new version of the application with extended data, it will work. It's a global solution, so every new version of your product will work with the latest data.
As for the ping, you could create an update strategy, for example: a link is valid for 1 day. The idea is the same: concatenate something to the link so the browser will evaluate it as new: urlToXmlData + "?stamp=" + timestampForToday. Tomorrow will bring another timestamp, and the browser will download the updated version.
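A rough sketch of both ideas in plain JavaScript (urlToXmlData, myCurrentVersion and timestampForToday are the placeholder names from above; an AS3 client would build the same strings for a URLRequest):

// Placeholder location of the data file.
const urlToXmlData = 'https://example.com/data.xml';

// Version-pinned URL: ships with the app, so each release points at matching data.
const myCurrentVersion = 3; // assumed to be bumped on every release
const versionedUrl = urlToXmlData + '?version=' + myCurrentVersion;

// Time-based cache busting: the URL changes once per day, so the cached copy
// is reused within the day and refetched afterwards.
const timestampForToday = new Date().toISOString().slice(0, 10); // e.g. '2024-01-31'
const dailyUrl = urlToXmlData + '?stamp=' + timestampForToday;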
Use a second XML file which will contain the version of the first XML file.
I think you're missing the real issue.
You can't update a Flash project without using some external update platform,
because you cannot save all the new XML data to a shared object (because of its storage size limit), so Flash is going to delete all the loaded data when the application is closed.
You cannot make a "permanent update" in Flash for big files...
BUT
You can update your main SWF file with embedded XML files. Only that keeps your data available after the application is closed.
BUT that's not enough on its own; you also need a pre-check of the SWF file's version, and that's something you can't do alone. You need a web platform that offers it.
Mochi's "Live Updates" offered that, but I couldn't make it work. And Mochi is shut down now... so forget about it...
I understand that your XML is BIG and you want to save traffic.
But how about small calls?
You can have a call just to get the MD5 of your XML as stored on the server.
Something like: myserver.com/getMD5
Once the server returns a different MD5 than the one you have already stored, you reload the XML and save the new MD5.
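A sketch of that flow, written here in JavaScript with fetch and localStorage standing in for the URLLoader and SharedObject an AS3 client would use; myserver.com/getMD5 is the hypothetical endpoint from above and data.xml is a placeholder:

// Ask the server only for the checksum; reload the big XML only when it changed.
async function loadXmlIfChanged() {
  const latestMd5 = (await (await fetch('https://myserver.com/getMD5')).text()).trim();
  const storedMd5 = localStorage.getItem('xmlMd5');

  if (latestMd5 === storedMd5 && localStorage.getItem('xmlData')) {
    return localStorage.getItem('xmlData'); // cached copy is still current
  }

  const xml = await (await fetch('https://myserver.com/data.xml')).text(); // the big download
  localStorage.setItem('xmlData', xml);
  localStorage.setItem('xmlMd5', latestMd5);
  return xml;
}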

Google Realtime API - when to persist changes to database?

Scenario:
I have multiple browser clients whose internet connection varies from very fast to super slow. Because of that, they might not all see the same state of a document.
I'm using a Google shortcut file, since the document is actually being stored in a database.
Saving the document to the database is triggered from the client side.
Question: how do I know which client got the most up to date document that should be saved to the database?
You are right that you can't rely on any particular client being the most up to date at a particular time. There is no easy way to determine that, since that can change at any given instant. (Although you can make sure that you don't have any unsaved changes in a particular client by looking at the document save state.)
Rather than trying to do this based on client state, you can use the export capability that is part of the Drive API, which will give you a valid snapshot of the data with a revision number so you can track what version you have.
Note that this is a brand new feature, so it's not yet well documented. The response is a JSON object with the appId, revision number, and a data field which contains a JSON version of the document. It looks something like this, for a document that has a collab list "list" and a collab string "text" in the root:
{"appId":"788242802491","revision":17,
"data":{"id":"root","type":"Map",
"value":{
"list":{"id":"gde9s8z5khjarls7o","type":"List","value":[]},
"text":{"id":"gdef98qdhiq679af","type":"EditableString","value":"This is a test 2."}}}}

Django database watchdog: save signal outside Django

I have the following problem:
I am using the Django framework.
One part of the system (non-Django) writes to the database, the same database that Django is using.
I want to get a signal when an object is saved. It's a Django model object, but it isn't saved via Django; it's written directly to the MySQL database.
Is there a way Django can watch for save actions in its database when the saving isn't done by Django?
The neatest way would be to create an API and let the save action run through it. The save signal could then be the Django default. (But this depends on work by external parties... so it's not the preferred route... though for future development it surely is.)
Another option is to implement Celery and create a task that frequently checks whether one of the saved objects has had no follow-up..... (also quite a puzzle, I guess, to get this up and running)
But there might be an easier way... unknown to me?
I saw Django watchdog solutions for file systems... but not for databases (probably because Django has this built in... when saving is properly done through Django).
To complicate things: I test and develop locally with SQLite... but I can put the save signal in my tests without needing to get this working locally... as long as it works in MySQL, I am happy.
You can try this solution:
Create a new table 'django_watch' with one column 'object_id' (add other columns like 'created_datetime' etc. according to your standards).
Let's say your main table is 'object'. Add a MySQL trigger for the INSERT event on this table.
You should add an extra insert query inside the trigger to insert the object_id into the 'django_watch' table.
Now you can have a cronjob that will be inspecting the new table 'django_watch' (for updates to Django objects) and perform the necessary actions. You can run this cronjob continuously with a delay of about 1 minute (up to you). See the sketch below.
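A rough sketch of the trigger plus the poller, using the table and column names suggested above ('object', 'django_watch', 'object_id', 'created_datetime'); the poller is shown with Node's mysql2 client purely as an illustration, and in a Django project the same SQL would more likely live in a migration with the poll in a cron-driven management command:

// Sketch only: a single-statement trigger, so no DELIMITER juggling is needed.
const mysql = require('mysql2/promise');

const TRIGGER_SQL =
  'CREATE TRIGGER object_after_insert AFTER INSERT ON object ' +
  'FOR EACH ROW INSERT INTO django_watch (object_id, created_datetime) ' +
  'VALUES (NEW.id, NOW())';

async function main() {
  const db = await mysql.createConnection({ host: 'localhost', user: 'root', database: 'mydb' });
  // await db.query(TRIGGER_SQL); // run once when setting things up

  // The cron-style poll: pick up rows written by the trigger and act on them.
  const [rows] = await db.query('SELECT object_id FROM django_watch ORDER BY created_datetime');
  for (const row of rows) {
    console.log('object saved outside Django:', row.object_id);
    // ...fire your own handling here, then delete or mark the row as processed.
  }
  await db.end();
}

main().catch(console.error);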
In the end, I wrote an API that can be called by the third-party module. I delivered code (C code) to the third party for logging on to Django and calling the GET of this API (using Django REST Framework). This API just saves the object (the id given in the URL), and from there on it's default Django. The only thing the third party had to do was build my code into their module to call the API as well....
Maybe not the best solution, but the best to implement for my problem....

Test if local database (websql) contains desired new fields, and add them if not

I'm building a cross-platform HTML/JavaScript app for iOS and Android using PhoneGap and jQuery Mobile, and I am upgrading my app with (among other things) a few new fields in one table of the local database (local database/WebSQL).
The challenge
I want to make sure that when the database is expanded with the new table fields, the existing user data will not be removed or become locked in an inaccessible older version of the database.
The background:
My app has a local database of the user's data (incomes and expenses, plus a few settings). These data need to be persistent, and the way to go, back when I started, was using the HTML5 local database functionality, since that is both persistent and available for the iOS and Android browsers as well as for most desktop browsers.
I am using a JavaScript plugin/library/thingy called persistenceJS to make dealing with the local db a little easier. But my question is not really specific to persistenceJS.
I am working on a new version of the app, which makes use of a few new fields in the Settings table. So when users download the new app and run it, it must test whether their Settings table contains these fields or not, and if not it must create them.
How do I do this testing? I see two lines of thought:
Use the database label... that's used in the openDatabase function. This seems to be used by some developers to store a version number.
My trouble with this option is I only know how to use openDatabase to, well, open a database (and create a new one if none exists), and run a callback specifically if the database did not yet exist.
So if I open the table while specifying something like "v2" in the label, will it create a new table? If so, will it copy the old table's values into the new one?
Check for the existence of the table fields...
I could use openDatabase and then test for the existence of the table fields. If they don't exist, I could add them. The test would be run every time a user opens their app, which seems a little primitive.
By the way:
I know webSQL/localDb has been deprecated by the overlords, but it's still my tool and I want to stick to it for now.
I've found the answer here: http://blog.maxaller.name/2010/03/html5-web-sql-database-intro-to-versioning-and-migrations/.
Basically, you just apply the changeVersion method with the old and the new version label. If you didn't have a label, the old label is "". While relabeling, WebSQL quietly applies the new schema to the old database, which in my case means adding the new fields.
The tutorial I linked to is really awesome (and so is the functionality).
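A minimal sketch of that migration, assuming a Settings table and a single new column (the table and column names are placeholders):

var db = openDatabase('mydb', '', 'My database', 5 * 1024 * 1024);

// Migrate only if we are still on the old (empty) version label.
if (db.version === '') {
  db.changeVersion('', '2', function (tx) {
    // Schema changes for the new app version.
    tx.executeSql('ALTER TABLE Settings ADD COLUMN newField TEXT');
  }, function (err) {
    console.log('Migration failed: ' + err.message);
  }, function () {
    console.log('Database migrated to version 2');
  });
}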
I'm adding another answer because I've learned more about localDb opendatabase and migrating it.
As a reminder, openDatabase takes these parameters:
name - (string) name of the database
version label - (string) the version you want to open
display label - (string) a pretty useless display name that seems to be used nowhere
max size - (int) largest safe size is 5 * 1024 * 1024
newly created - (function) to be fired if the db did not previously exist
It's wisest to assign the output of openDatabase to a variable. I.e.
myapp.db = openDatabase('mydb','','My database',5*1024*1024,newlyCreatedCallback);
First off, it seems wise to make use of the 'newly created' callback that's available as the fifth argument of openDatabase. It will fire only if there was no database with the parameters you specified. To prevent this callback from firing when your database did already exist, make sure you have the name, display label and maximum size set to exactly the values that were used to first create the database.
The reason to do this is that if the database was first created, you know for sure that you will not need to do any migrations. You can go straight to a function that adds tables and fields. I recommend using persistenceJS, a tool that helps you read and manipulate the local database.
Before calling openDatabase, it's wise to use jQuery to create a custom event 'dbopen' whose handler will execute migrations. This handler can be triggered by two events. The first is the 'newly created' callback we just discussed. The second is a setInterval that you define after calling openDatabase. The interval must check for the existence of the myapp.db variable that you assigned the openDatabase output to.
The reason to create the dbopen custom event is that if you added a 'newly created' callback which triggers a whole bunch of events and continues the flow of your code afterwards, you will want a similar process for the 'not newly created' scenario. There is no callback for openDatabase that does this, so you will have to manually detect the creation of the local database and trigger 'dbopen' as soon as it has come into existence.
I use a window.setInterval for this. Make sure that you bind the custom 'dbopen' handler using jQuery's .one() function, so it fires at most once. Otherwise, if the database was newly created, you will fire the open event once when the 'newly created' callback fires, and once when the myapp.db variable comes into existence.
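Putting those pieces together, a sketch of the pattern (myapp, newlyCreatedCallback and migrateOrCreateTables are placeholder names):

var myapp = {};

// Runs exactly once, whichever of the two triggers below fires first.
$(document).one('dbopen', function () {
  migrateOrCreateTables(myapp.db);
});

function newlyCreatedCallback(db) {
  // Fresh database: no migrations needed, go straight to creating tables/fields.
  $(document).trigger('dbopen');
}

myapp.db = openDatabase('mydb', '', 'My database', 5 * 1024 * 1024, newlyCreatedCallback);

// Existing database: there is no 'not newly created' callback, so poll for the
// variable and fire the same custom event once it exists.
var poll = window.setInterval(function () {
  if (myapp.db) {
    window.clearInterval(poll);
    $(document).trigger('dbopen');
  }
}, 50);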