I just uploaded my last project to Heroku, and it includes a JSON file that behaves like a database. You can add and delete data through the UI.
If I want to work on the project and push it again, and meanwhile the JSON file was modified on the server, won't it be overwritten by the version in my development environment?
At first I thought that if I don't modify it manually, it won't be staged, but what if I do want to update it manually?
In that case I assume you need to clone, modify and push it again, but if someone updated it in the meantime, their data will be deleted.
So maybe you should block the UI from updating the JSON file while you modify it manually?
Did I get it wrong?
Is there a better way to do this?
I have an Angular app where I am loading some static data from a file under assets.
import { HttpClient } from '@angular/common/http';

urlDataFile = './assets/data.json';
data: any = null;

constructor(
  private _http: HttpClient
) {
  // Load the file once on startup and cache the result on the component.
  this.loadData().subscribe((res) => {
    this.data = res;
  });
}

private loadData() {
  return this._http.get(this.urlDataFile);
}
This works absolutely fine for me for truly static data.
When I build my app for distribution, the data gets packaged into the app.
However, once deployed, I want to be able to publish an updated data file by just (manually) dropping a single new file into the deployment location.
In an ideal world, I would like to develop with a dummy or sample data file (held in source control etc.), exclude that file from deployment, and, once deployed, push a new data file into the deployed app.
What is the standard convention for accomplishing this?
What you have there should work just fine?
There are two ways of doing JSON stuff: one is what you're doing, dynamically requesting the file; the other is to literally import dataJson from './assets/data.json' in your file directly.
The second option, I was surprised to find out, actually gets compiled into your code, so that the JSON values are literally part of your app files, e.g. main.js.
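As a quick sketch of that import style (assuming resolveJsonModule is enabled in tsconfig.json; the variable name is illustrative):

import dataJson from './assets/data.json';

// The parsed JSON is baked into the bundle at build time, so replacing
// the file on the server after deployment changes nothing.
console.log(dataJson);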
So, yours is good for not being part of your app: it will request that file on every app load (or whenever you tell it to).
That means it will load your local debug file in development, because that's what's there, and the prod file when deployed; it's just requesting a file, after all.
What I foresee you needing to contend with is two things:
Live updates
Unless your app keeps requesting the file periodically, it won't magically pick up new data from any new file that you push. Until someone hits F5 or freshly browses to the site, they won't get that new data.
Caching
Even if you occasionally check that file for new data, you need to handle the fact that browsers try to be nice and cache files for you to make things quicker. I guess this would be handled with various cache headers and things that I know exist but have never had to touch in detail myself.
Otherwise, the browser will just return the old, cached data.json instead of actually going and retrieving the new one.
After that, though, I can't see anything wrong with doing what you're doing.
Slap your file request in an interval and put no-cache headers on the file itself and... good enough, probably?
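Something like this minimal sketch of an Angular service, say (the service name, interval, and header choice are all illustrative, not from the question):

import { HttpClient, HttpHeaders } from '@angular/common/http';
import { Injectable } from '@angular/core';
import { Observable, timer } from 'rxjs';
import { switchMap } from 'rxjs/operators';

@Injectable({ providedIn: 'root' })
export class DataPollingService {
  constructor(private http: HttpClient) {}

  // Re-request the file every intervalMs; timer(0, n) fires immediately,
  // then every n ms, and switchMap swaps each tick for a fresh HTTP request.
  pollData(intervalMs: number = 60000): Observable<any> {
    const headers = new HttpHeaders({ 'Cache-Control': 'no-cache' });
    return timer(0, intervalMs).pipe(
      switchMap(() => this.http.get('./assets/data.json', { headers }))
    );
  }
}

A subscriber then sees every fresh copy of the file as it arrives.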
You are already following the right convention, which is to call the data with the HTTP client, not importing the file.
Now you can just gitignore the file, and replace it in a deployment step or whatever suits you.
Just watch out for caching. You might want to add a dummy query string with a value based on the time or similar, to ensure the server sends a fresh file; how aggressive to be depends on how often you update the file.
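For instance, reusing the loadData() method from the question (the v parameter name is made up; any throwaway name works):

private loadData() {
  // A value that changes on every call defeats stale caches.
  return this._http.get(this.urlDataFile + '?v=' + Date.now());
}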
I'm new to coding and it's a lot of trial and error. Now I'm struggling with HTML tables.
For explanation: I am building an Electron desktop application for stocks. I am able to input values via the GUI into an HTML table, and also export this as an Excel file. But every time I reload the app, all the data from the table is gone. It would be great to save this data permanently, and simply add new data to the existing table after an application restart.
What's the best way to achieve this?
In my mind, the best way would be to overwrite the existing Excel file with the new work (old and new data from the table), because it would be easy to install the tool on a new PC and simply import the Excel file to have all the data there. I don't have access to a web server, so I think a local Excel file would be better than a PHP solution.
Thank you.
<table class="table" id="tblData">
  <tr>
    <th>Teilenummer</th>
    <th>Hersteller</th>
    <th>Beschreibung</th>
  </tr>
</table>
This is the actual table markup.
Your question has two parts, it seems to me:
1. data representation and manipulation
2. data persistence
For #1, I'd suggest taking a look at Tabulator, in particular its methods of importing and exporting data. In my projects, I use the JSON format with Tabulator and save the data locally so it persists between sessions.
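For illustration, here is a minimal Tabulator setup using JSON rows, assuming the tabulator-tables npm package and the column titles from the question's markup (the field names are made up):

import { TabulatorFull as Tabulator } from 'tabulator-tables';

const table = new Tabulator('#tblData', {
  data: [], // start empty; load persisted JSON rows here on startup
  columns: [
    { title: 'Teilenummer', field: 'teilenummer' },
    { title: 'Hersteller', field: 'hersteller' },
    { title: 'Beschreibung', field: 'beschreibung' },
  ],
});

// table.getData() hands back the rows as a JSON-serialisable array,
// and table.setData(rows) replaces them - handy for load/save.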
So for #2, how and where to save the data? Electron has built-in methods for getting the paths to common user directories. See app.getPath(name). Since it sounds like you have just one file to save, which does not need to be directly accessible to the user, appData is probably a good place to store it.
As for the "how" to store it – you can just write a file to that path using Node fs, though I like fs-jetpack too. Tabulator can save data as well.
Another way to store data is with electron-store. It works very well, though I've only used it with small amounts of data.
So the gist is that when your app starts, it loads the data, and when the app quits, it saves the data along with any changes that have been made, though I'd suggest saving after every change.
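As a minimal sketch of that load/save cycle in Electron's main process (the folder name, file name, and function names are all illustrative):

import { app } from 'electron';
import * as fs from 'fs';
import * as path from 'path';

// app.getPath('appData') is the per-user application data directory.
const dataFile = path.join(app.getPath('appData'), 'my-stock-app', 'table-data.json');

function loadTableData(): unknown[] {
  try {
    return JSON.parse(fs.readFileSync(dataFile, 'utf8'));
  } catch {
    return []; // first run: no file saved yet
  }
}

function saveTableData(rows: unknown[]): void {
  fs.mkdirSync(path.dirname(dataFile), { recursive: true });
  fs.writeFileSync(dataFile, JSON.stringify(rows, null, 2));
}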
So, there are lots of options depending on your needs.
I am using a JSON file to load some delivery addresses in my project. This file gets updated by the client from time to time. But whenever I put the new JSON file into my project location, it takes many refreshes before the newly added addresses show up in the browser; sometimes I even need to clear the browser cache for them to appear properly.
I am using ASP.NET MVC4. Could I control the version of this JSON file using the bundling and minification feature so that changes get reflected whenever the file is modified?
Or, without using bundling, is there another approach so that changes show up with a single refresh?
I found that the following code fragment worked best for me. Since it uses require to load package.json, it works regardless of the current working directory.
var pjson = require('./package.json');
console.log(pjson.version);
Is there such a thing as an event, hook, or trigger that gets fired when a new file or file version gets added to HDFS? I'm working on a system where we would like to write JSON files to HDFS. The clients writing to HDFS should be ignorant of the fact that a trigger gets fired that does downstream work with the JSON file.
Please feel free to tweak my question if the language I'm using doesn't fit the correct Hadoop terms.
I think you can use Oozie for this. Check out its input events.
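Oozie coordinators with input events are the Hadoop-native way to do this. Just to show the shape of the idea, here is a hedged sketch of a much cruder alternative: polling the WebHDFS REST API for new files (the host, port, path, and interval are all made up; assumes Node 18+ for the global fetch):

const LIST_URL = 'http://namenode:9870/webhdfs/v1/data/json?op=LISTSTATUS';
const seen = new Set<string>();

async function checkForNewFiles(): Promise<void> {
  const res = await fetch(LIST_URL);
  const body = await res.json();
  // WebHDFS returns { FileStatuses: { FileStatus: [...] } }
  for (const f of body.FileStatuses.FileStatus) {
    if (!seen.has(f.pathSuffix)) {
      seen.add(f.pathSuffix);
      console.log('new file detected:', f.pathSuffix); // kick off downstream work here
    }
  }
}

setInterval(checkForNewFiles, 60_000);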
I have a CGI program I have written using Perl. One of its functions is to upload pics to the server.
All of it is working well, including adding all kinds of info to a MySQL db. My question is: How can I get the uploaded pic files location and names added to the db?
I would rather not change the script to actually upload the pics into the db. I have heard horror stories of uploading binary files to databases.
Since I am new to all of this, I am at a loss. I have tried doing research and web searches for 3 weeks now with no luck. Any suggestions or answers would be greatly appreciated. I would really hate to have to manually add all the locations/names to the db.
I am using: a Perl CGI script, a MySQL db, and a Linux server, and the files are being uploaded to the server. I am NOT looking to add the actual files to the db, just their location(s).
It sounds like your method is already complete: you take the upload, read it in as a string (much like reading any file in as a string), and toss it into MySQL. However, since CGI gives you a filehandle rather than a filename to read, you are wondering where that file actually is.
If you're using CGI.pm, then upload(), uploadInfo(), the upload's param() value, and CGI.pm's private temporary upload files will help you deal with the uploaded file's source. Where uploads are stashed after the remote client and the CGI script are done is usually not permanent, and at a minimum should be treated as volatile.
You've got a bunch of uploaded files that need to be added to the db? It should be trivial to dash off a one-off script that loops through all the files and inserts the details into the DB. If they're all in one spot, then a simple opendir()/readdir() loop would catch them all; otherwise, you can make a list of file paths and loop over that.
If you're talking about recording new uploads on the server, then it would be something along these lines (sketched in code after the list):
user uploads file to server
script extracts any wanted/needed info from the file (name, size, mime-type, checksums, etc...)
start database transaction
insert file info into database
retrieve ID of new record
move uploaded file to final resting place, using the ID as its filename
if everything goes fine, commit the transaction
Using the ID as the filename solves the worries of filename collisions and new uploads overwriting previous ones. And if you store the uploads somewhere outside of the site's webroot, then the only access to the files will be via your scripts, providing you with complete control over downloads.
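The question is about Perl, but here is a language-agnostic sketch of the transaction-then-move flow above, written in TypeScript with the mysql2 client; the table, columns, paths, and credentials are all made up:

import { createConnection } from 'mysql2/promise';
import { promises as fs } from 'fs';
import * as path from 'path';

async function recordUpload(tmpPath: string, origName: string, size: number): Promise<number> {
  const db = await createConnection({ host: 'localhost', user: 'app', database: 'uploads_db' });
  try {
    await db.beginTransaction();
    const [result] = await db.execute(
      'INSERT INTO uploads (orig_name, size) VALUES (?, ?)', [origName, size]
    );
    const id = (result as any).insertId; // the new record's ID becomes the filename
    // note: fs.rename fails across filesystems; copy-then-unlink is the fallback
    await fs.rename(tmpPath, path.join('/var/site-uploads', String(id)));
    await db.commit();
    return id;
  } catch (err) {
    await db.rollback(); // nothing half-done: no orphan row pointing at a missing file
    throw err;
  } finally {
    await db.end();
  }
}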