What's the best way to deploy predefined data via API calls? - JSON

I have several JSON files that represent the payloads for different APIs (I can map which API to call based on the file name, but other methods could be applied as well).
What is the best practice for populating the application with data using those JSON files?
My first thought was to use an automation framework (REST Assured, for example) to accomplish the task, but I think it might be overkill for my scenario.
P.S. Taking a DB snapshot or querying the DB directly is not an option because of the nature of the application.
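For context, here is a rough sketch of the kind of lightweight script I have in mind instead of a full framework (Node.js 18+ assumed for the built-in fetch; the directory, base URL, and file-to-endpoint mapping are hypothetical):

    // deploy-data.js - a minimal sketch, not a production tool.
    // Assumes each JSON file name maps to an API path, e.g. users.json -> POST /users.
    const fs = require('fs/promises');
    const path = require('path');

    const BASE_URL = 'https://app.example.com/api'; // hypothetical base URL
    const DATA_DIR = './payloads';                  // hypothetical folder of JSON files

    async function deployAll() {
      const files = await fs.readdir(DATA_DIR);
      for (const file of files.filter((f) => f.endsWith('.json'))) {
        const payload = JSON.parse(await fs.readFile(path.join(DATA_DIR, file), 'utf8'));
        const endpoint = `${BASE_URL}/${path.basename(file, '.json')}`;
        const res = await fetch(endpoint, {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(payload),
        });
        if (!res.ok) throw new Error(`POST ${endpoint} failed: ${res.status}`);
        console.log(`Deployed ${file} -> ${endpoint}`);
      }
    }

    deployAll().catch((err) => { console.error(err); process.exit(1); });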

Related

Creating a REST API for static hosting

I know this sounds crazy, but I had a thought and I was willing to try it out. I use GitLab Pages for all my online projects, but a lot of them are ASP.NET MVC, which is an issue as I don't think you can run ASP.NET MVC sites on GitLab Pages. I then thought, what if I make a site using something like Angular or Node.js, and have a central API for all my web projects? I thought that was a great idea, until I realized I couldn't use a database either. I guess what I'm asking is: would it be possible to create a REST API that uses JSON files for storage and Node.js to serve the requests, so that I get an API without a database?
Of course.
If you think about a database from the perspective of your application code, it is basically just a place to store and retrieve data.
Imagine the database library you are using has two simple methods, store and retrieve. In your application code, you could write db.store('here is the item') and then later on, db.retrieve().
However, those store and retrieve methods could be implemented in many different ways to provide the same effective behavior from the perspective of your application. Some examples:
Send/query the data to/from an external data store, such as PostgreSQL
Write it to a file on disk and read it back later
Store the data in memory
Make HTTP requests to an external system to store the data
Some of these options will be more or less appropriate depending on your exact requirements; however, the general idea is that, given a database API, you could implement the exact same method signatures with a completely different approach, as in the sketch below.
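As a sketch of that idea (the function names below are illustrative, not a real library), the same store/retrieve interface could be backed by memory or by a JSON file on disk:

    const fs = require('fs');

    // In-memory backend
    function memoryDb() {
      const items = [];
      return {
        store: (item) => items.push(item),
        retrieve: () => [...items],
      };
    }

    // JSON-file backend: the "database" is just a file on disk
    function fileDb(filePath) {
      const read = () =>
        fs.existsSync(filePath) ? JSON.parse(fs.readFileSync(filePath, 'utf8')) : [];
      return {
        store: (item) => fs.writeFileSync(filePath, JSON.stringify([...read(), item])),
        retrieve: () => read(),
      };
    }

    // Application code does not care which backend it gets.
    const db = fileDb('./data.json'); // or memoryDb()
    db.store('here is the item');
    console.log(db.retrieve()); // ['here is the item']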

GAS. Storing data for a specific domain

I'm working on an add-on in GAS. I know that with Google's PropertiesService it is possible to store data for a particular user, document, or script, but I would like to store data specific to a domain. Is this possible? And if it is, how can I do this?
Spreadsheets are quite good data containers: easily shareable across your domain, large, and pretty fast if used the right way (see this "hit" post to confirm).
You can store and retrieve data as arrays and easily convert them to JavaScript objects as well.
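A possible Apps Script sketch of that (the spreadsheet ID, sheet name, and column layout are assumptions): read a domain-shared sheet and turn its rows into objects.

    function readDomainSettings() {
      var sheet = SpreadsheetApp.openById('SPREADSHEET_ID') // hypothetical ID of a domain-shared spreadsheet
        .getSheetByName('Settings');                        // hypothetical sheet name
      var values = sheet.getDataRange().getValues();        // 2D array; first row holds the headers
      var headers = values.shift();
      return values.map(function (row) {
        var obj = {};
        headers.forEach(function (h, i) { obj[h] = row[i]; });
        return obj;
      });
    }

Writing goes the other way: build a 2D array from your objects and pass it to sheet.getRange(...).setValues(...).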

Dynamic JSON file vs API

I am designing a system with 30,000 objects or so and can't decide between the two: either have a JSON file pre-computed for each one and get the data by pointing to the URL of the file (I think Twitter does something similar), or have a PHP/Perl/whatever-else script that produces the JSON object on the fly when requested, say from a database, and sends it back. Is one approach more suitable than the other? I guess if it takes a long time to generate the JSON data, it is better to have pre-generated JSON files. What if generating it is as quick as accessing the database? Although I suppose one would have a dedicated table in the database specifically for that. The data doesn't change very often, so updating is not a constant thing. In that respect the data is static for all intents and purposes.
Anyway, any thoughts would be much appreciated!
Alex
You might want to try MongoDB, which retrieves objects as JSON and is highly scalable and easy to set up.
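If you go the generate-on-the-fly route, a minimal sketch of such an endpoint backed by MongoDB might look like this (the database name, collection, field, and URL path are assumptions):

    const express = require('express');
    const { MongoClient } = require('mongodb');

    const client = new MongoClient('mongodb://localhost:27017'); // hypothetical connection string
    const app = express();

    app.get('/objects/:slug', async (req, res) => {
      // Documents are stored as JSON-like BSON, so no extra conversion step is needed.
      const doc = await client.db('mydb').collection('objects').findOne({ slug: req.params.slug });
      if (!doc) return res.status(404).json({ error: 'not found' });
      res.json(doc); // sent back to the client as JSON
    });

    client.connect().then(() => app.listen(3000));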

automatic web crawler

I'm writing a crawler which needs to get data from many websites. The problem is that every website has a different structure. How can I easily write a crawler which correctly downloads data from many different websites? If the structure of a website changes, will I need to rewrite the crawler, or are there other methods?
What tools and techniques can be used to improve the quality of data mined by an automatic web crawler when many websites with different structures are involved?
Thank you!
I presume you want to query it in some way, in which case you should store the data in a flexible data store. A relational database would not be fit for purpose, as it has a strict schema, but something like MongoDB lets you store semi-structured data without having to define a schema up front, while still providing a powerful query language.
The same goes for how you represent the data in the crawler code. Don't map the data to classes where the structure is defined up front; use flexible data structures that can change at runtime. If you are using Java, de-serialise the data into HashMaps. In other languages these might be called dictionaries or hashes.
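In JavaScript terms (the Java equivalent being the HashMap approach mentioned above), that might look like the following; the field names are made up:

    // Keep crawled records as plain parsed objects rather than fixed classes,
    // so new or missing fields do not break the pipeline.
    const raw = '{"title": "Some page", "price": "9.99", "addedLater": true}';
    const record = JSON.parse(raw); // just a flexible map of whatever the site provided

    // Downstream code reads fields defensively instead of assuming a fixed schema.
    const title = record.title !== undefined ? record.title : '(untitled)';
    const price = record.price !== undefined ? Number(record.price) : null;
    console.log(title, price, Object.keys(record)); // unknown keys are preserved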
If you're scraping data from websites that actually want to allow you to do that, chances are they will provide some sort of webservice to allow you to query their data in a structured way.
Otherwise, you're on your own, and you might even be violating their terms of use.
If the websites provide no APIs, then you're out of luck and you have to write a separate extraction module for each data format you encounter. If a website changes its format, you have to update that module. A standard approach is to have a plugin for every website you're crawling and a testing framework which does regression testing against data you've already collected. When a test fails you know something went wrong, and you can investigate whether you have to update your format plugin or whether there is another issue.
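A sketch of what that plugin-plus-regression-test setup could look like (the site names, extraction rules, and fixture paths are invented):

    const fs = require('fs');

    // One extraction plugin per site; each takes raw HTML and returns structured data.
    const plugins = {
      'example.com': (html) => ({ title: (/<h1>(.*?)<\/h1>/.exec(html) || [])[1] || null }),
      'other-site.org': (html) => ({ title: (/<title>(.*?)<\/title>/.exec(html) || [])[1] || null }),
    };

    function extract(site, html) {
      const plugin = plugins[site];
      if (!plugin) throw new Error('No plugin registered for ' + site);
      return plugin(html);
    }

    // Regression test: re-run a plugin against HTML captured earlier and compare
    // the result with what was recorded at capture time.
    function regressionTest(site, fixtureHtmlPath, expectedJsonPath) {
      const actual = extract(site, fs.readFileSync(fixtureHtmlPath, 'utf8'));
      const expected = JSON.parse(fs.readFileSync(expectedJsonPath, 'utf8'));
      return JSON.stringify(actual) === JSON.stringify(expected);
    }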
Without knowing what kind of data you're collecting, it is very difficult to hypothesize about ways to improve the "quality" of the mined data.
Maybe you could find out whether the website allows you to access the data through an API; if so, you could use that structured data on your website directly. If not, you may need plugins for that. Or you could turn to web crawlers that expose an API, such as Octoparse, and access their API from your own web crawler.

Is localstorage the right choice for this webapp?

I'm interested in building a small offline web app and I'm looking for some advice. Here are the basics of what I want it to do:
Create reports that, initially, will just have a name and text field
List, edit, and delete these notes
Ideally I'd like to add more fields to the reports later
Is localStorage a good option for storing this type of data locally? If so, can anybody direct me to a complete list of the commands for interacting with it in JavaScript, e.g. setItem, getItem, etc.?
Thanks.
localStorage will work just fine for this, but don't think of it as a robust solution. It's just a basic key/value store and won't be very performant with thousands of complex things going on.
Check out the excellent Dive into HTML5 guide on localstorage:
http://diveintohtml5.info/storage.html
Link to the localStorage APIs
Yes, localStorage would be perfect for your application. It would allow your application to have no need to connect to a server at all. Keep in mind that localStorage does have limits on the amount of data that can be stored.
EDIT:
Using JSON.stringify() you can convert complex JavaScript objects to JSON, which can be stored and retrieved with ease inside localStorage.
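For example, the reports from the question could be kept under a single key as a JSON array (the key and field names below are just examples):

    // Load and save the whole list of reports as one JSON value.
    function loadReports() {
      return JSON.parse(localStorage.getItem('reports') || '[]');
    }

    function saveReports(reports) {
      localStorage.setItem('reports', JSON.stringify(reports));
    }

    // Create a report
    const reports = loadReports();
    reports.push({ id: Date.now(), name: 'First report', text: 'Some notes' });
    saveReports(reports);

    // Edit or delete: modify the array, then call saveReports() again.
    // Other calls: localStorage.removeItem('reports'), localStorage.clear().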