Creating a REST API for static hosting - JSON

I know this sounds crazy, but I had a thought and I was willing to try it out. I use GitLab Pages for all my online projects, but a lot of them are ASP.NET MVC, which is an issue as I don't think you can run ASP.NET MVC sites on GitLab Pages. I then thought, what if I make a site using something like Angular or Node.js, and have a central API for all my web projects? I thought that was a great idea, until I realized I couldn't use a database either. I guess what I'm asking is: would it be possible to create a REST API that uses JSON files for storage and Node.js to serve the requests, so that the API works without a database?

Of course.
If you think about a database from the perspective of your application code, it is basically just a place to store and retrieve data.
Imagine the database library you are using has two simple methods, store and retrieve. In your application code, you could write db.store('here is the item') and then, later on, db.retrieve().
However, those store and retrieve methods could be implemented in many different ways to provide the same effective behavior from the perspective of your application. Some examples:
Send/query the data to/from an external data store, such as PostgreSQL
Write it to a file on disk and read it back later
Store the data in memory
Make HTTP requests to an external system to store the data
Some of these options will be more or less appropriate depending on your exact requirements; the general idea, however, is that given a database API, you could implement the exact same method signatures with a completely different approach, as sketched below.
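As a rough illustration of the JSON-file option (assuming a Node.js environment where you can actually run server code; the db.json file name and the function names are made up for this sketch), store and retrieve could be backed by a single file on disk:

```javascript
// Minimal sketch of a file-backed "database" in Node.js.
// The db.json file name and the store/retrieveAll names are illustrative only.
const fs = require('fs');

const DB_FILE = 'db.json';

function retrieveAll() {
  // Treat a missing file as an empty data store.
  if (!fs.existsSync(DB_FILE)) return [];
  return JSON.parse(fs.readFileSync(DB_FILE, 'utf8'));
}

function store(item) {
  const items = retrieveAll();
  items.push(item);
  // Rewrite the whole file on every store; fine for small data sets.
  fs.writeFileSync(DB_FILE, JSON.stringify(items, null, 2));
}

store('here is the item');
console.log(retrieveAll()); // [ 'here is the item' ]
```

The application code only ever sees store and retrieveAll, so you could later swap the file-based implementation for PostgreSQL, an in-memory array, or HTTP calls without touching the callers.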

Related

What's the best way to deploy predefined data via API calls?

I have several JSON files that represent the payloads for different APIs (I can map which API to call based on the file name, but other methods could be applied as well).
What is the best practice for populating my data in the application with the help of those JSON files? (A minimal sketch of this mapping appears below.)
My first thought was to use some automation framework (REST Assured, for example) to accomplish my task, but I think it might be overkill for my scenario.
P.S. A snapshot of the DB or querying the DB directly is not an option because of the nature of the application.
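As a rough Node.js sketch of that file-name-to-endpoint mapping (the file names and URLs are made up, and the built-in fetch assumes Node 18 or newer):

```javascript
// Hypothetical seeding script: map each JSON file to an API endpoint by file
// name and POST its contents. File names and URLs are made up for illustration.
const fs = require('fs');
const path = require('path');

const endpointByFile = {
  'users.json': 'https://example.com/api/users',
  'products.json': 'https://example.com/api/products',
};

async function seed(dir) {
  for (const [file, url] of Object.entries(endpointByFile)) {
    const payload = JSON.parse(fs.readFileSync(path.join(dir, file), 'utf8'));
    const res = await fetch(url, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    console.log(`${file} -> ${url}: ${res.status}`);
  }
}

seed('./payloads').catch(console.error);
```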

Connecting to MySQL directly through a Flutter application

Can someone explain to me why you can't connect to a MySQL DB directly through Dart, from a security point of view?
There is no hard guideline on whether to connect frontend directly to backend or not. It is just a design practice that has been widely accepted and evolved over many years.
A typical app structure consists of:
FRONTEND -> SOME MIDDLE LAYER -> BACKEND
Where your middle layer handles all the interactions with, and processing of, the database, and the frontend uses this functionality through some sort of API. Having this layer is extremely helpful when the application scales; it gives the frontend an added layer of abstraction (a rough sketch of such a layer follows this answer).
It is not advisable to connect your frontend (your Flutter app) directly to the DB (MySQL), because a reasonably skilled attacker could use a basic man-in-the-middle attack to learn your DB structure, connections, and queries (there are some pretty effective decompilers out there) and alter your data, and you might not even find out what caused the data to change unless you've applied some checks at the DB layer.
Also, your frontend logic should be more end-user centric than concerned with handling the user's data. Any backend system (Java, Node, etc.) gives you added functionality and the freedom to parse and present the data for either side.
You can use the sqlite package to store basic data, like your session tokens and app configuration, but it is advisable to keep the main user data, such as logins, in a separate place; or, better yet, you can use the Firebase plugin to store data as documents in the cloud.
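A rough sketch of what such a middle layer could look like, assuming Node.js with the express and mysql2 packages (the answer mentions Java or Node as options); the table name, route, and credentials below are made up:

```javascript
// Hypothetical middle layer (Node.js + Express) that the Flutter app calls over
// HTTP instead of connecting to MySQL directly. Assumes the express and mysql2
// packages; table name, route, and credentials are illustrative.
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({
  host: 'localhost',
  user: 'app_user',
  password: process.env.DB_PASSWORD, // credentials never ship inside the app
  database: 'app_db',
});

app.get('/api/users/:id', async (req, res) => {
  // Parameterised query: the client never sees the schema or the SQL.
  const [rows] = await pool.query(
    'SELECT id, name FROM users WHERE id = ?',
    [req.params.id]
  );
  if (rows.length === 0) return res.status(404).json({ error: 'not found' });
  res.json(rows[0]);
});

app.listen(3000);
```

The Flutter app then only talks to /api/users/:id over HTTPS, and any validation or access checks live on the server rather than in the client.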

How do I incorporate Node.js/Passport into my website?

I'm new to webdev and I'm trying to use passport for registration/authentication on a site I'm setting up. I'm also going to write an application in node later on that will be using some of the user data (users will need to provide an API key for an account on another site that I will use to pull data into the application).
At the moment, the main issue I'm having is figuring out what goes where. I've found plenty of resources that explain how to create an app using passport, but nothing shows how it would be incorporated into your website or where the files should be in relation to your website. I'm relatively new to Node.js, and while I've written a few small applications I have never hosted them anywhere.
Bonus question: I'm using MongoDB with Passport and I was also planning to use it to store some JSON my application will be receiving from API calls. However, I wanted to use MySQL to store some data as well. More specifically, I'm planning to save the raw JSON, then create a relational database out of the data I need from the JSON, and keep the rest in MongoDB for easy access. Is this common/smart, or should I focus on keeping everything in MongoDB? I'm relatively new to NoSQL.
Thanks in advance for any help.
I would reference this tutorial; I just recently used it to help myself with a new application. There is also an example of the same thing, but in SQL, here. I'm not sure what you mean by "where the files should be in relation to your website"; the information related to authentication should go in your database.
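As a rough sketch of where the Passport wiring typically goes in an Express app (the in-memory user list, route names, and session secret below are made-up stand-ins for your own MongoDB-backed lookups):

```javascript
// Minimal sketch of Passport wiring in an Express app. The in-memory "users"
// array stands in for a real database (e.g. MongoDB via Mongoose); routes and
// the session secret are illustrative only.
const express = require('express');
const session = require('express-session');
const passport = require('passport');
const LocalStrategy = require('passport-local').Strategy;

// Stand-in for a users collection; replace with real DB lookups.
const users = [{ id: 1, username: 'alice', password: 'secret' }];
const findByName = (name) => users.find((u) => u.username === name);
const findById = (id) => users.find((u) => u.id === id);

passport.use(new LocalStrategy((username, password, done) => {
  const user = findByName(username);
  // In real code, compare a hashed password, not plain text.
  if (!user || user.password !== password) return done(null, false);
  return done(null, user);
}));

// Store only the user id in the session, and look the user up again per request.
passport.serializeUser((user, done) => done(null, user.id));
passport.deserializeUser((id, done) => done(null, findById(id)));

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));
app.use(passport.initialize());
app.use(passport.session());

app.post('/login', passport.authenticate('local', {
  successRedirect: '/dashboard',
  failureRedirect: '/login',
}));

app.listen(3000);
```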
To your "bonus question" you can use two databases. The key here is to ask yourself why and what are the true needs for data, and how is this data accessed and used. From ground up I would like one and stick with it. If at some point later you realize a certain type of data would be better in a different database then you can add it.
Side note: look into an IDE such as WebStorm to help you out.

Automatic web crawler

I'm writing a crawler which needs to get data from many websites. The problem is that every website has a different structure. How can I easily write a crawler which (correctly) downloads data from (many) different websites? If the structure of a website changes, will I need to rewrite the crawler, or are there other methods?
What tools and techniques can be used to improve the quality of data mined by an automatic web crawler (when many websites with different structures are involved)?
Thank you!
I presume you want to query it in some way, in which case you should store the data in a flexible data store. A relational database would not be fit for purpose, as it has a strict schema, but something like MongoDB lets you store semi-structured data without having to define a schema up front, while still providing a powerful query language.
The same goes for how you represent the data in the crawler code. Don't map the data to classes where the structure is defined up front; use flexible data structures that can change at runtime. If you are using Java, deserialise the data into HashMaps. In other languages these might be called dictionaries or hashes.
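As a rough Node.js illustration of the same idea (plain objects are the JavaScript analogue of a HashMap), assuming the official mongodb driver; the database, collection, and field names are made up:

```javascript
// Sketch: keep scraped records as plain objects and insert them into MongoDB
// as-is, so new fields from a changed page layout don't require a schema
// migration. Assumes the official mongodb driver; names are illustrative.
const { MongoClient } = require('mongodb');

async function saveScraped(records) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const pages = client.db('crawler').collection('pages');
    // Each record can have a different shape, e.g. some have "price", some don't.
    await pages.insertMany(records);
  } finally {
    await client.close();
  }
}

saveScraped([
  { url: 'https://example.com/a', title: 'Page A', price: 9.99 },
  { url: 'https://example.org/b', headline: 'Page B', author: 'someone' },
]).catch(console.error);
```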
If you're scraping data from websites that actually want to allow you to do that, chances are they will provide some sort of web service to allow you to query their data in a structured way.
Otherwise, you're on your own, and you might even be violating their terms of use.
If the websites provide no APIs, then you're out of luck and you have to write a separate extraction module for each data format you encounter. If a website changes its format, you have to update that format module. A standard thing to do is to have a plugin for every website you're crawling and a testing framework which does regression testing with data you've already collected (see the sketch after this answer). When a test fails, you know something went wrong and you can investigate whether you have to update your format plugin or whether there is another issue.
Without knowing what kind of data you're collecting, it is very difficult to hypothesize about ways to improve the "quality" of the data that was mined.
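A rough sketch of that per-site plugin plus regression-test idea in Node.js; the site name, the regex-based extraction, and the fixture paths are made up, and a real setup would likely use a proper HTML parser and test framework:

```javascript
// Each plugin knows how to extract fields from one site's HTML; a tiny
// regression check replays previously saved pages to detect layout changes.
const fs = require('fs');

const plugins = {
  'example.com': {
    extract(html) {
      // Naive regex extraction, for illustration only.
      const m = html.match(/<h1>(.*?)<\/h1>/);
      return { title: m ? m[1] : null };
    },
  },
};

function regressionTest(site, fixtureHtmlPath, expectedJsonPath) {
  const html = fs.readFileSync(fixtureHtmlPath, 'utf8');
  const expected = JSON.parse(fs.readFileSync(expectedJsonPath, 'utf8'));
  const actual = plugins[site].extract(html);
  const ok = JSON.stringify(actual) === JSON.stringify(expected);
  console.log(`${site}: ${ok ? 'OK' : 'layout may have changed, update the plugin'}`);
}

regressionTest('example.com', 'fixtures/example.com.html', 'fixtures/example.com.json');
```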
Maybe you could find out whether the website allows you to access the data via an API; if so, you could use that kind of structured data in your website directly. If not, you may need plugins for that. Or you could turn to other web crawlers with API access, like Octoparse, and access their API from your own web crawler.

Do I need to use Core Data with a web server database?

I am developing an iPad game. The game will need an online database for storage, because it will need other players' data for multi-player features.
I have been reading Core Data tutorials, but so far, everything I have read is about internal iPhone storage (using the internal SQLite store, etc.).
My question is:
If I am using an online web server database (connecting/reading/writing/updating via PHP), do I need to use internal Core Data?
More details for question 1: for example, when I fetch a player's data such as username, level, gold, HP, EXP, etc., do I need to wrap it in Core Data, or can I simply create an NSObject to store the player information and use a shared manager to share it with the other classes that need it?
What are some tips and techniques for developing iPad games with web services (MySQL via PHP over HTTP POST)? (FYI, I found the ASIHTTPRequest library, I find it quite useful, and I am using it.)
Core Data isn't primarily for storage. Instead, it is a means of creating the model layer of a Model-View-Controller app (the design the Apple APIs use). Persisting the model to disk is really just an option.
Core Data handles both size and complexity in models. If your app just fetches dumb data from the web server e.g. a list of static values, then you probably don't need Core Data. However, if your app fetches data from the web server and then manipulates it in a complex manner, then Core Data will provide you many benefits.
A lot of times, if you don't use Core Data, you end up essentially rewriting Core Data just to manage all the relationships between your data objects and the rest of the API.
If you plan on working a lot with the Apple APIs, you should learn Core Data regardless of the source or destination of your data. It will save you a lot of time in the long run. The important thing to remember is that it's not a database wrapper.