I have to develop an application for smartphones using HTML/CSS/JS (for PhoneGap) and I have to store data somewhere.
After some research, I found TaffyDB (http://www.taffydb.com/), which does exactly what I need except on one point: security.
I don't want someone to be able to take all my data just by saving the JS file, so is there a way to protect it?
Or, if I want to keep my data private, do I have to use a conventional database (like MySQL) coupled with a PHP script that I call via Ajax?
Thanks for the help.
TaffyDB can be used server-side with a number of server-side solutions, but you will have to control the output of your application so that it includes only the data the client is allowed to see.
In general, unless you plan to use a JavaScript server-side solution, I would say you cannot make it "secure". Even if you only expose non-sensitive data on your front end, I would highly recommend you go through the OWASP guide before writing any code to determine whether it is secure or not.
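For illustration, here is a minimal sketch of keeping TaffyDB on the server (Node + Express) and sending only whitelisted fields to the app. The `taffydb` package name, the field names and the route are assumptions, not part of the original question.

```js
// Minimal sketch: TaffyDB lives on the server, the client only ever
// receives the whitelisted fields. Package name, fields and route are
// placeholders; adapt them to your data.
var TAFFY = require('taffydb').taffy;   // assumed npm package name
var express = require('express');
var app = express();

// The full data set never leaves the server.
var products = TAFFY([
  { id: 1, name: 'Widget', price: 9.99, costPrice: 4.00 },
  { id: 2, name: 'Gadget', price: 19.99, costPrice: 11.50 }
]);

app.get('/api/products', function (req, res) {
  // Strip the private field (costPrice) before responding.
  var publicData = products().get().map(function (p) {
    return { id: p.id, name: p.name, price: p.price };
  });
  res.json(publicData);
});

app.listen(3000);
```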
I'm designing a website where users can post comments on pages, and other users should see those comments. I've reached the stage where the comments are stored in a database, I know where they're supposed to go in the HTML, and I need to connect those two things somehow.
I'm using Express and Node.js on the server side, and Postgres on the database side.
As far as I can tell, it's very bad practice to let the user access the database directly. So I think the server needs to query the database based on the user's request, fill the specific comment data into the generic HTML, save that to a file, and send it to the user. To do this I was thinking of writing an "HTML generator function" on the server side that takes the specific comment information and puts it into the generic HTML, but that doesn't seem to scale well, and I'm concerned that storing the intermediate file would be inefficient.
Is that the correct approach? Can you tell me known ways of doing this that aren't so hacky?
If you suggest using PHP, isn't there a problem where PHP connects to the database and disconnects every time it is used? I would prefer the server to connect once when it boots and do all the fetching as needed, instead of reconnecting on every request. It seems to me that this would involve far less overhead (correct me if I'm wrong...).
See Amadan's comment for the full solution: what I was looking for is called a "template engine".
Edit:
I highly recommend learning React. I learned EJS and found it difficult to scale. React is far easier to program with for only a little more up-front investment, and it is much more declarative than the old approach (EJS included).
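For reference, here is a minimal sketch of the template-engine approach with Express, EJS and node-postgres; the table, column and route names (comments, author, body, page_id) are invented for the example.

```js
// Minimal sketch of the "template engine" approach: the server queries the
// DB, fills the template and streams it per request; no intermediate file.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool();               // reads connection info from env vars
app.set('view engine', 'ejs');         // templates live in ./views/*.ejs

app.get('/pages/:id', async (req, res) => {
  // The user never touches the database directly.
  const { rows } = await pool.query(
    'SELECT author, body FROM comments WHERE page_id = $1',
    [req.params.id]
  );
  res.render('page', { comments: rows });
});

app.listen(3000);

/* views/page.ejs
<ul>
  <% comments.forEach(function (c) { %>
    <li><strong><%= c.author %></strong>: <%= c.body %></li>
  <% }); %>
</ul>
*/
```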
Good Morning,
As the title says, I'd like to create my own database to be integrated into a TYPO3 website.
I'd like some advice on which is the best solution:
- Is it possible to create tables directly from TYPO3?
- Is it better to create a database, for example with MySQL, and then integrate it?
In the second case, how could that be done?
Are there other options?
I hope this is not an already-answered topic; if it is, please point me to it (I could not find much information).
Thanks in advance.
If I understand your question correctly, you want to add a custom Extension to TYPO3, containing custom tables. From a content side, this is perceived as a "database", right?
TYPO3 has a framework for that called Extbase. You can "kickstart" a TYPO3 extension with the "Extension Builder" https://typo3.org/extensions/repository/view/extension_builder by entering the "Model" (the data structure) via a GUI, and all tables etc. are then set up automatically.
After that (aside from general TYPO3 knowledge), there is some coding involved. In theory, it's possible to make a "round trip" back to the extension builder from the code, but I've never done that.
You need to know / learn the specifics of Extbase / PHP, which is based on some "convention over configuration" rules and has some additional tweaks on top of plain PHP (functional comments). Here's a great resource: http://www.extbase-book.org/.
With that, you have great flexibility and powerful tooling to build almost anything inside TYPO3.
From a TYPO3 point of view it is best if you can hold your data in the TYPO3 database. You need to create an extension to handle your data. In TYPO3 an extension can define its own tables, and when the extension is updated, changes to the data structure are handled automatically.
Since version 8 there is a new database layer (Doctrine), so it is possible to assign individual tables to additional databases. With some restrictions you can even use different database systems for different tables.
Alternatively, you could program your own database interface to fetch and store your data independently of any TYPO3 restrictions, but then you need to handle everything on your own.
Using the TYPO3 core API will help you in multiple ways to handle your data without programming everything anew.
Especially if you use Extbase (and EXT:extension_builder) you get complete backend data handling, frontend plugins with Fluid templates to present your data, and even data management from the frontend can be generated for you just by defining the data structure. Of course versioning, workspaces and timed visibility are also supported if you use the TYPO3 structures, which include some (mostly invisible) fields besides uid, hidden and deleted.
At the moment I am working on a CRUD app that I am going to deploy (someday) and use for my own startup company. However, I am nowhere near finishing this product, and I stumbled upon a question that I can't seem to figure out.
I am using Express to serve my Angular app data from my MySQL database. To do this I created '/api/' routes. However, if I go (for example) to '/api/clients' I can see the entire list of clients as an ugly array. In this case that does not really matter, since it's just the data the user was able to see anyway.
My question is: is it important to block these kinds of routes from users? Will problems arise when a user goes to '/api/createClient'? Could this result in a SQL injection that could ruin my database?
My project can be found here: https://github.com/mickvanhulst/BeheerdersOmgevingSA
The server-side routing code can be found at: server > Dao > clientDao.js
Controllers, HTML & client-side routing can be found in the 'public' folder.
I hope my question is clear enough and someone will be able to answer my question. If not, please state why the question is not clear and I will try to clarify.
Thanks!
Looking at the code, it appears your URLs can be accessed directly from a browser, and if so, this does pose a security concern.
Performing database transactions with user-provided fields or values is a major security concern if that data is not validated and sanitised before the database call is made.
I would recommend the following minimum steps before crafting APIs that are internal but can be accessed from a browser:
If the API is internal, do not send a permissive Access-Control-Allow-Origin header from the server, or keep it confined to your own domain. This prevents Ajax calls to your APIs from other domains.
Sanitise and validate all data thoroughly before doing any kind of database transaction. There is plenty of material available on how to do this.
If these APIs are meant for internal use, add some kind of authentication to them, with the help of middleware, before doing the actual work in your routes. You can use cookie authentication for very simple API authentication management, or JSON Web Tokens if you want an extra level of security.
If your APIs manipulate the database, then I would highly recommend using some kind of authentication. Of course, point number 2 is a must. A rough sketch of points 2 and 3 follows below.
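As an illustration only, here is what a JWT middleware plus a parameterised MySQL query could look like in Express; the secret, table and route names are placeholders, not taken from the linked project.

```js
// Sketch of points 2 and 3: reject unauthenticated requests in a middleware,
// and use placeholders in queries instead of string concatenation.
const express = require('express');
const jwt = require('jsonwebtoken');
const mysql = require('mysql');

const app = express();
const db = mysql.createConnection({ /* host, user, password, database */ });

// Runs before every /api route: reject requests without a valid token.
function requireAuth(req, res, next) {
  const token = (req.headers.authorization || '').replace('Bearer ', '');
  try {
    req.user = jwt.verify(token, process.env.JWT_SECRET);
    next();
  } catch (err) {
    res.status(401).json({ error: 'Not authenticated' });
  }
}

app.use('/api', requireAuth);

app.get('/api/clients/:id', (req, res) => {
  // The ? placeholder lets the driver escape the value: no SQL injection.
  db.query('SELECT * FROM clients WHERE id = ?', [req.params.id], (err, rows) => {
    if (err) return res.status(500).json({ error: 'DB error' });
    res.json(rows);
  });
});

app.listen(3000);
```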
I have an editable HTML5 page and I store new elements in localStorage.
I want to synchronize my page with the server.
I want to know if I can do it without a server-side script, or whether there are some tips for doing something like this in a good way.
Thank you :)
You can pull information from the server quite easily using jQuery and then just put it in localStorage. But if you want to upload local information to the server there is no way around it: you have to use some kind of server-side script. It's not that difficult, though; there are many languages (PHP, C#, Python...) and tools you can use.
Keep in mind that when you upload information to the server you have to sanitize it; that is a very important security measure.
Basically, the way to go is:
Post the information to the server (using AJAX or an HTML form; either way will do)
Use some server-side script to capture the variables posted.
Sanitize your data (check format, discard non-valid characters, etc)
Store it in the database (do not, ever, concatenate your data into a SQL query, OK? That can make you vulnerable to a SQL injection attack), compute something, or do other work.
Return some status to the client (some confirmation maybe?)
You may want to take that confirmation and show a message to the user ("Your info was saved properly" or something like that). A rough sketch of the whole round trip is shown after this list.
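Purely as an illustration, here is what the round trip could look like with fetch on the client and Node/Express on the server; the /sync endpoint and the pageElements localStorage key are invented names, and any server-side language would do just as well.

```js
// --- client side: push the locally stored elements to the server ---
function syncToServer() {
  const elements = JSON.parse(localStorage.getItem('pageElements') || '[]');
  fetch('/sync', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(elements)
  })
    .then(res => res.json())
    .then(result => alert(result.ok ? 'Your info was saved properly' : 'Save failed'));
}

// --- server side (Node/Express): capture, validate, store, confirm ---
const express = require('express');
const app = express();
app.use(express.json());               // capture the posted variables

app.post('/sync', (req, res) => {
  const elements = Array.isArray(req.body) ? req.body : [];
  // Validate / sanitize here, then store with a parameterised query,
  // never by concatenating values into SQL.
  res.json({ ok: true, count: elements.length });
});

app.listen(3000);
```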
Is a JavaScript timer not sufficient for this purpose? Or jQuery?
The question should really describe the problem rather than just ask a question. If you're updating based on a server's variables, then you could use AJAX, I believe; but if it's more like "increment said variable every X seconds", I would focus on using a JavaScript timer.
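To make the two options concrete, here is a minimal sketch; the #counter element and the /api/counter endpoint are hypothetical, and in practice you would use only one of the two intervals.

```js
let counter = 0;

// Option 1: purely client-side, increment locally every 5 seconds.
setInterval(() => {
  counter += 1;
  document.getElementById('counter').textContent = counter;
}, 5000);

// Option 2: the value lives on the server, so poll it with AJAX.
setInterval(async () => {
  const res = await fetch('/api/counter');
  const data = await res.json();
  document.getElementById('counter').textContent = data.value;
}, 5000);
```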
I'm writing a crawler that needs to get data from many websites. The problem is that every website has a different structure. How can I write a crawler that (correctly) downloads data from many different websites without too much effort? If the structure of a website changes, will I need to rewrite the crawler, or are there other methods?
What tools or approaches, conceptual or existing, can be used to improve the quality of data mined by an automatic web crawler when many websites with different structures are involved?
Thank You!
I presume you want to query the data in some way, in which case you should store it in a flexible data store. A relational database would not be fit for purpose, as it has a strict schema, but something like MongoDB lets you store semi-structured data without having to define a schema up front while still providing a powerful query language.
The same goes for how you represent the data in the crawler code. Don't map the data to classes whose structure is defined up front; use flexible data structures that can change at runtime. If you are using Java, deserialise the data into HashMaps; in other languages these might be called dictionaries, hashes or plain objects.
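As a small illustration (assuming a Node crawler and the official MongoDB driver; the connection string, database and field names are placeholders), pages with completely different structures can be stored side by side:

```js
// Store semi-structured crawl results without an up-front schema.
const { MongoClient } = require('mongodb');

async function saveCrawlResult(doc) {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  // Each site can produce different fields; MongoDB accepts them as-is.
  await client.db('crawler').collection('pages').insertOne(doc);
  await client.close();
}

// Two pages with completely different structure go into the same collection.
saveCrawlResult({ url: 'https://site-a.example', title: 'Hello', price: 9.99 });
saveCrawlResult({ url: 'https://site-b.example', headline: 'World', tags: ['news'] });
```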
If you're scraping data from websites that actually want to allow you to do that, chances are they will provide some sort of webservice to allow you to query their data in a structured way.
Otherwise, you're on your own, and you might even be violating their terms of use.
If the websites provide no APIs, then you're out of luck and you have to write a separate extraction module for each data format you encounter. If a website changes its format, you have to update that format module. A standard approach is to have a plugin for every website you're crawling, plus a testing framework that does regression testing against data you've already collected. When a test fails you know something went wrong, and you can investigate whether you have to update your format plugin or whether there is another issue.
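This is only a sketch of the idea, not a prescribed design: each site gets an extractor with the same interface, and saved HTML fixtures are replayed through it to catch regressions. The cheerio parser, the selectors and the file names are assumptions.

```js
const cheerio = require('cheerio');    // any HTML parser would do
const fs = require('fs');

// plugins/site-a.js: one small extractor per site, all sharing one interface.
const siteA = {
  matches: url => url.includes('site-a.example'),
  extract(html) {
    const $ = cheerio.load(html);
    return { title: $('h1.title').text(), price: $('.price').text() };
  }
};

const plugins = [siteA /*, siteB, ... */];

function extract(url, html) {
  const plugin = plugins.find(p => p.matches(url));
  if (!plugin) throw new Error('No plugin for ' + url);
  return plugin.extract(html);
}

// Regression test: replay HTML captured earlier and compare with the data
// extracted at the time. A failure means the site (or the plugin) changed.
const savedHtml = fs.readFileSync('fixtures/site-a/page1.html', 'utf8');
const expected = JSON.parse(fs.readFileSync('fixtures/site-a/page1.json', 'utf8'));
const actual = extract('https://site-a.example/page1', savedHtml);
console.assert(JSON.stringify(actual) === JSON.stringify(expected),
  'site-a plugin regressed');
```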
Without knowing what kind of data you're collecting it will be very difficult to try to hypothesize about ways to improve the "quality" of the data that was mined.
Maybe you could find out whether the website allows you to access its data through an API; if so, you could use that structured data directly. If not, you may need site-specific plugins. Or you could turn to another web crawler with API access, like Octoparse, and access its API from your own crawler.