How to read from a database, add some of that information to HTML, and send that HTML to a user?

I'm designing a website where users can post comments on pages, and other users should be able to see those comments. I've reached the stage where the comments are stored in a database and I know where they're supposed to go in the HTML, and I need to connect those two things somehow.
I'm using Express and Node.js on the server side, and Postgres on the database side.
As far as I can tell, it's very bad practice to let the user access the database directly. So I think the server needs to query the database based on the user's request, fill the generic HTML's comment section with the specific comments, save that to a file, and send it to the user. To do this I was thinking of writing an "HTML generator function" on the server side that takes the comment data and inserts it into the generic HTML, but that seems like it doesn't scale well, and I'm concerned that storing the intermediate file would be inefficient.
Is that the correct approach? Can you tell me known ways of doing this that aren't so hacky?
If you suggest using PHP, isn't there a problem where PHP connects to the database and disconnects every time a script runs? I would prefer the server to connect once when it boots and do all the fetching when needed, instead of reconnecting on every request. It seems to me that would involve far less overhead (correct me if I'm wrong...).

See Amadan's comment for the full solution: what I was looking for is called a "template engine".
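For concreteness, here is a minimal sketch of that approach using Express, EJS and node-postgres; the table and column names (comments, page_id, author, body) are placeholders for illustration, not part of the original question. A single connection pool is created at startup and reused, which also addresses the "connect once at boot" concern:

// server.js - minimal sketch (assumed schema: comments(page_id, author, body, created_at))
const express = require('express');
const { Pool } = require('pg');

const app = express();
app.set('view engine', 'ejs'); // templates live in ./views/*.ejs

// One pool created when the server boots; connections are reused across requests.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

app.get('/pages/:id', async (req, res, next) => {
  try {
    // Parameterised query: never interpolate user input into SQL.
    const { rows } = await pool.query(
      'SELECT author, body FROM comments WHERE page_id = $1 ORDER BY created_at',
      [req.params.id]
    );
    // Render views/page.ejs with the comments; nothing is written to an intermediate file.
    res.render('page', { comments: rows });
  } catch (err) {
    next(err);
  }
});

app.listen(3000);

And the matching template, views/page.ejs:

<ul>
  <% comments.forEach(function (c) { %>
    <li><strong><%= c.author %></strong>: <%= c.body %></li>
  <% }) %>
</ul>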
Edit:
I highly recommend learning React. I learned EJS and found it difficult to scale. React is far easier to program with for only a little more upfront investment, largely because it is much more declarative than EJS and the older server-rendered approach.
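For comparison, a minimal sketch of the same comment list as a React component; it assumes the comments arrive as an array of objects with id, author and body fields (illustrative names only):

function CommentList({ comments }) {
  // Declarative: the markup is derived directly from the data.
  return (
    <ul>
      {comments.map(c => (
        <li key={c.id}><strong>{c.author}</strong>: {c.body}</li>
      ))}
    </ul>
  );
}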

Related

HTML page with xlsm file as backend

I was wondering whether it is possible to have an xlsm file as the backend while having HTML as the frontend? If yes, how can I achieve this?
Thanks in advance.
Since the question suggests a gap in understanding of how applications are structured, I will post this as an answer in the hope of clarifying a few things.
First of all I don't think you understand what the term "back-end" means.
Please read https://en.wikipedia.org/wiki/Front_and_back_ends and http://blog.teamtreehouse.com/i-dont-speak-your-language-frontend-vs-backend; hopefully these will clarify a few things for you.
Just to explain these concepts shortly:
In an application, front-end and back-end refer to two parts that communicate with each other and exchange data in some form. This separation is made when the program and the user are in separate places (such as when you have a server and a client, as in distributed programming). This is only one of many programming patterns today: although rare in today's world, there are programs that do not separate functionality this way and instead put everything into a single core program installed on the client's computer. For the cases that do separate them, here is what the terms front-end and back-end mean:
Reason why such separation is necessary:
In today's world many applications (such as web applications and mobile applications) are deployed on common servers to provide wider and faster access, better support, and a lower cost of access for the client (no disk space required, no download time, etc.). In such cases, since the client doesn't have the program locally, it has to be reached over network protocols such as TCP (which underlies today's HTTP). The problem is that the front-end files are served every time the application is loaded and cannot keep track of the state of the data (they are stateless), excluding the edge cases of cookies and caches.
Front End:
The sole reason the front end exists is for the user to interact with the application and for the application to collect data from the user, such as login information (the user interface).
Back end:
Now, the back-end is a little more complicated. There are two major components to a good back-end design:
Logic
Data
The back-end is responsible for processing the data from the user (the front-end) in a correct and meaningful way. For example, in a really simple program which adds two numbers, the front end would be responsible for asking the user for the two numbers, and the back-end would carry out the actual addition and send the result back to the front end to be displayed.
If the data has state, the back-end also needs to save the last state of the data somewhere on the server. This is where the second component comes in. The most common practice is to have ".db" file(s) representing a database, but there is no obligation to do so: when necessary, your back-end could read its data from anywhere, from a plain text file to STDIN.
Why do we use databases? The queries. The query languages that come with databases make it much easier to extract and isolate the relevant data.
After processing and modifying the data, the back-end sends it back to the front end to be displayed to the user. Common data transfer formats are JSON, XML and S-expressions.
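As a concrete sketch of that split (written here in Node/Express purely for illustration): the front end collects the two numbers, the back end does the addition and replies with JSON:

// Back end: receives { a, b } as JSON, returns { sum } as JSON.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/add', (req, res) => {
  const { a, b } = req.body;
  res.json({ sum: Number(a) + Number(b) });
});

app.listen(3000);

// Front end: sends the numbers and displays the result.
// fetch('/add', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify({ a: 2, b: 3 })
// }).then(r => r.json()).then(({ sum }) => console.log(sum)); // 5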
So following this short lecture, back to your question:
Can I have an xlsm file in the backend?
Yes. You can keep the data on the back-end (server) in any form that you want. The only thing you need to make sure of is that the endpoint the front end communicates with reads from this file and writes back to it. (CSV files are sometimes used in a similar way to xlsm files.)
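As a rough sketch of what that could look like in a Node back end, using the SheetJS library ("xlsx" on npm); the file name and sheet layout here are assumptions for illustration:

const XLSX = require('xlsx');
const express = require('express');
const app = express();

app.get('/records', (req, res) => {
  const workbook = XLSX.readFile('data.xlsm');           // assumed file name
  const sheet = workbook.Sheets[workbook.SheetNames[0]]; // first worksheet
  res.json(XLSX.utils.sheet_to_json(sheet));             // rows served as JSON
});

app.listen(3000);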
Is it a good idea?
No. Databases exist for a reason. Use them.
Hope this sheds light on a few things. I highly advise understanding the application stack before writing any code.

Ruby on Rails - Database or Excel

I am currently doing a project in Ruby on Rails and I have been presented with a dilemma.
The dilemma is that the users of my system will be uploading an Excel spreadsheet. The issue is whether I should read straight from this Excel spreadsheet into my front-end, or load the spreadsheet into my MySQL database and serve it to the front-end from there.
I have asked numerous people about this issue and have researched on-line to no avail.
Any help would be much appreciated.
The Excel file is not a database. If you need to allow it as source input, parse it, copy the data into a real database and connect to it.
The database is more flexible and efficient for querying and processing information.
I can think of two benefits, or rather options, of having them upload the Excel spreadsheet for processing by your back end.
1) For your tracking purposes (who sent what, and what the back-end did with it). Also consider that other formats/versions could be introduced later; would it be important to keep the original files in order to identify what went wrong, or to answer "how can we handle this new format?"
2) On the other side (the front-end way), you offload processing from the back-end, but that means the browser app could get fairly complex, and depending on your spreadsheet (if it has many relationships), sending that data up to the server could be complex too. However, if it is simply a flat spreadsheet, say plain rows without totals/tax calculations/etc., then loading it into the browser and sending those rows up to the server might be an advantage, if offloading processing is of any importance.
However, point 2 is really diluted by point 1, which to me is of greater importance for any future migration of this service. So I would personally choose uploading the file and processing it on the back end.
Update
As you clarified in the comments, you are asking about using Excel on the back end as a database. I would agree with Simone Carletti's answer here, and just add that a real database gives you much more flexibility, more tools, and more performance. With Excel you are loading a file, parsing it into some structure, then saving it again; a database (MySQL, MongoDB, ...) gives you much more flexibility in structuring and querying, and the overhead of managing DB connections is a minor headache in comparison. You might just want to write a sample with both to evaluate; the DB solution will probably win you over.

Protect client side database like TaffyDB

I have to develop an application for smartphones using HTML/CSS/JS (for PhoneGap) and I have to store data somewhere.
After some research, I found TaffyDB (http://www.taffydb.com/), which does the job exactly, except on one point: security.
I don't want someone to be able to take all my data just by saving the JS file, so is there a way to protect it?
Or, if I want to keep my data private, do I have to use a regular database (like MySQL) coupled with a PHP script that I call via Ajax?
Thanks for the help.
TaffyDB can be used server-side with a number of server-side solutions, but you will have to control your application's output so that it includes only the data.
In general, unless you plan to use a JavaScript server-side solution, I would say you cannot make it "secure". Even if you only keep non-sensitive data on your front-end, I would highly recommend going through the OWASP guide before writing any code, to determine whether what you are doing is secure or not.

easy way to create a username / password login

I have built a website using HTML and CSS (in Dreamweaver CS4) on which I would like to create a section that is only accessible to registered users: users would have to submit their email address and create a password to access the area. I am prepared to take the time to learn from tutorials, but I'm a beginner with limited knowledge of HTML, so I would really appreciate some advice on the easiest way of doing something like this. Drupal? jQuery? I have tried searching online for tutorials, but I am getting hundreds of different answers using different solutions and would really appreciate your opinions on how to do this in the easiest possible way.
Many thanks in advance :)
Just pick a tutorial for a scripting language that your web hosting supports. PHP is pretty common: http://phpeasystep.com/phptu/6.html
I would suggest using server side scripting for your login.
For this you would need:
A place to store user data
A script that can validate the user data
Use whatever scripting language your host supports for this.
You can either use a flat file (a text file) to store the user data, encrypting it, or you can use a database (best).
You can write a small script that is called when the user logs in and sets a cookie in the browser.
On the pages that only logged-in users can view, you can add a small piece of code to verify the cookie: if it validates, display the data; otherwise display something like "Authorized users only".
This is very basic functionality, but if that is all you want, it should do the job.
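As an illustration of that flow (sketched in Node/Express rather than PHP, since any server-side language works; findUserByEmail and the route names are placeholders, and passwords are assumed to be stored as bcrypt hashes rather than plain text):

const express = require('express');
const session = require('express-session'); // cookie-backed sessions
const bcrypt = require('bcrypt');

const app = express();
app.use(express.urlencoded({ extended: false }));
app.use(session({ secret: 'change-me', resave: false, saveUninitialized: false }));

app.post('/login', async (req, res) => {
  const user = await findUserByEmail(req.body.email); // placeholder lookup (flat file or database)
  if (user && await bcrypt.compare(req.body.password, user.passwordHash)) {
    req.session.userId = user.id; // session cookie is set in the browser
    return res.redirect('/members');
  }
  res.status(401).send('Invalid email or password');
});

app.get('/members', (req, res) => {
  if (!req.session.userId) return res.status(403).send('Authorized users only');
  res.send('Members-only content');
});

app.listen(3000);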
Well, you'd need a database on the server to store the username/password combinations. That means you'd need some server side language to interact with the database to check for valid username/password combinations, as well as using the server side language to know -when- to check for username/pw (e.g. which page(s) are password protected).
If you're on a Windows server, MS Access is generally considered a good starting point for a database, but I'd recommend MySQL or SQL Server for the long run.
For language, there's a ton to choose from. ASP, PHP, ColdFusion, etc. I'm a ColdFusion person, so take this with the bias that implies, but I think CF is the easiest for a beginner to learn.

Can I run an HTTP GET directly in SQL under MySQL?

I'd love to do this:
UPDATE table SET blobCol = HTTPGET(urlCol) WHERE whatever LIMIT n;
Is there code available to do this? I know this should be possible, as the MySQL docs include an example of adding a function that does a DNS lookup.
MySQL / Windows / preferably without having to compile stuff, but I can if needed.
(If you haven't heard of anything like this but would expect to have if it existed, a "proly not" would be nice.)
EDIT: I know this would open a whole can of worms regarding security; however, in my case the only access to the DB is via the mysql console app. It is not a world-accessible system. It is not a web back end. It is only a local data-logging system.
No, thank goodness — it would be a security horror. Every SQL injection hole in an application could be leveraged to start spamming connections to attack other sites.
You could, I suppose, write it in C and compile it as a UDF. But I don't think it really gets you anything in comparison to just SELECTing in your application layer and looping over the results doing HTTP GETs and UPDATEing. If we're talking about making HTTP connections, the extra efficiency of doing it in the database layer will be completely dwarfed by the network delays anyway.
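A sketch of that application-layer loop in Node, using the mysql2 package and the built-in fetch (Node 18+); the id column and connection settings are assumptions, while the table and column names are the ones from the question:

const mysql = require('mysql2/promise');

async function fillBlobs() {
  const db = await mysql.createConnection({ host: 'localhost', user: 'root', database: 'test' });
  // Pick the rows whose blob column still needs filling.
  const [rows] = await db.execute('SELECT id, urlCol FROM `table` WHERE blobCol IS NULL LIMIT 10');
  for (const row of rows) {
    const body = await (await fetch(row.urlCol)).arrayBuffer(); // the HTTP GET
    await db.execute('UPDATE `table` SET blobCol = ? WHERE id = ?', [Buffer.from(body), row.id]);
  }
  await db.end();
}

fillBlobs().catch(console.error);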
I don't know of any function like that as part of MySQL.
Are you just trying to retrieve HTML data from many URLs?
An alternative solution might be to use Google spreadsheet's importHtml function.
Google Spreadsheets Lets You Import Online Data
Proly not. Best practice in a web environment is to have database servers isolated from the outside, both ways, meaning that the DB server wouldn't be allowed to fetch stuff from the internet.
Proly not.
If you're absolutely determined to get web content from within an SQL environ, there are as far as I know two possibilities:
Write a custom MySQL UDF in C (as bobince mentioned). This could potentially be a huge job, depending on your experience with C, how much security you want, and how complete you want the UDF to be: e.g. just GET requests? What about POST? HEAD? etc.
Use a different database which can do this. If you're happy with SQL, you could probably do it with PostgreSQL and one of its snap-in languages such as Python or PHP.
If you're not too fussed about sticking with SQL you could use something like eXist. You can do this type of thing relatively easily with XQuery, and would benefit from being able to easily modify the results to fit your schema (rather than just lumping it into a blob field) or store the page "as is" as an xhtml doc in the DB.
Then you can run queries very quickly across all documents to, for instance, get all the links or quotes or whatever. You could even apply XSL to such a result with very little extra work. Great if you're storing the pages for reference and want to adapt the results into a personal "intranet"-style app.
Also, since eXist is document-centric, it has lots of great methods for fuzzy-text searching and near-word searching, and it has a great full-text index (much better than MySQL's). Perfect if you want to do some data mining on the content, e.g. find all documents where a word like "burger" appears within 50 words of "hotdog" and isn't inside a UL list. Try doing that natively in MySQL!
As an aside, and with no malice intended: I often wonder why eXist is overlooked when people build CMSs. It's a database that can store content in its native format (XML, or its subset (X)HTML), query it with ease in that format, and translate it from that format with a powerful templating language which looks and acts like that format. Sometimes SQL is just plain wrong for the job!
Sorry. Didn't mean to waffle! :-$