I was wondering whether it is possible to have an xlsm file as the back end while having HTML as the front end? If so, how can I achieve this?
Thanks in advance.
Since the question suggests some confusion about how applications are structured in programming, I will post this as an answer in the hope of clarifying a few things.
First of all, I don't think you understand what the term "back end" means.
Please read https://en.wikipedia.org/wiki/Front_and_back_ends and http://blog.teamtreehouse.com/i-dont-speak-your-language-frontend-vs-backend
Hopefully these will clarify a few things for you.
Just to explain these concepts shortly:
In an application, front end and back end refer to two interfaces that communicate with each other and exchange data in some form. Such a separation is made when the program and the user are separate, such as when you have a server and a client, as in distributed programming. This, however, is only one of many programming patterns today. Although rare in today's world, there are programs that do not separate functionality in this way and instead delegate everything to a core program statically installed on the client's computer. In all other cases, though, here is what the terms front end and back end mean:
The reason why such a separation is necessary:
In today's world, many applications (such as web and mobile applications) are deployed on shared servers to provide wider and faster access, better support, and a lower cost of access for the client (no disk space required, no download time, etc.). In such cases, however, since the client doesn't have access to the program locally, they need to access it over internet protocols such as TCP (on which today's HTTP runs). The problem is that the front-end files are served every time the application is loaded and cannot keep track of the state of the data; they are stateless (excluding the edge cases of cookies and caches).
Front End:
The sole reason the front end exists is for the user to interact with the application and for it to collect data from the user, such as login information (the user interface).
Back end:
Now, the back end is a little more complicated. There are two major components to a good back-end design:
Logic
Data
The back end is responsible for processing the data from the user (the front end) in a correct and meaningful way. For example, in a really simple program that adds two numbers, the front end would be responsible for asking the user for two numbers, and the back end would carry out the actual addition and send the result back to the front end to be displayed.
If the data has state, the back end also needs to save the last state of the data somewhere on the server. This is where the second component comes in. The most common practice is to have ".db" file(s) representing a database. However, there is no obligation to do so; when necessary, your back end could read data from anywhere, from a plain text file to STDIN.
Why do we use databases? The queries. The query languages that come with databases make it much easier for us to extract and isolate the relevant data.
After processing and modifying the data, the back end sends it back to the front end to be displayed to the user. Common data transfer formats are JSON, XML, and S-expressions.
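To make the addition example concrete, here is a minimal sketch of such a back end using Node.js and Express (the stack mentioned later in this thread); the /add route and the a and b parameter names are purely illustrative:

```javascript
// Minimal Express back end for the add-two-numbers example.
// The route name (/add) and query parameters (a, b) are illustrative only.
const express = require('express');
const app = express();

app.get('/add', (req, res) => {
  const a = Number(req.query.a);
  const b = Number(req.query.b);
  if (Number.isNaN(a) || Number.isNaN(b)) {
    return res.status(400).json({ error: 'a and b must be numbers' });
  }
  // The back end does the actual work and returns the result as JSON.
  res.json({ sum: a + b });
});

app.listen(3000, () => console.log('Back end listening on port 3000'));
```

The front end would then request GET /add?a=2&b=3 and display the sum field of the JSON response.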
So following this short lecture, back to your question:
Can I have an xlsm file in the backend?
Yes. You can store the data on the back end (server) in any way that you want. The only thing you need to make sure of is that the endpoint the front end communicates with reads data from this file and writes back to this file. (CSV files are sometimes used in a way that is similar to xlsm files.)
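For illustration, here is a rough sketch in Node.js using the SheetJS xlsx package; the file name data.xlsm, the sheet name Sheet1, and the row shape are all assumptions (and note that preserving the macros in an xlsm file has its own caveats):

```javascript
// Rough sketch: treating an .xlsm workbook as the back end's data store.
// Assumes the SheetJS "xlsx" package; file, sheet, and row names are examples.
const XLSX = require('xlsx');

function readRows(path, sheetName) {
  const workbook = XLSX.readFile(path);        // load the workbook from disk
  const sheet = workbook.Sheets[sheetName];
  return XLSX.utils.sheet_to_json(sheet);      // rows as plain JS objects
}

function writeRows(path, sheetName, rows) {
  const workbook = XLSX.readFile(path);
  workbook.Sheets[sheetName] = XLSX.utils.json_to_sheet(rows);
  XLSX.writeFile(workbook, path);              // rewrite the file on disk
}

const rows = readRows('data.xlsm', 'Sheet1');
rows.push({ name: 'example', value: 42 });
writeRows('data.xlsm', 'Sheet1', rows);
```

Every request has to reread and rewrite the entire workbook, which is part of why the next point matters.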
Is it a good idea?
No. Databases exist for a reason. Use them.
Hope this sheds light on a few things. I highly advise you to understand the application stack before writing any code.
Related
I'm designing a website where users can upload comments on pages, and other users should see those comments. I reached the stage where I have the comments stored in a database, and I know the place they're supposed to go in the html, and I need to connect those two things somehow.
I'm using express and Node.js on the server side, and postgres on the db side.
As of the time I'm asking this, it seems to me that it's very bad practice to have the user access the database directly. So I think the server needs to access the database based on the user's request, insert the specific comments' information into the generalized HTML, save that to a file, and send it to the user. To do this I was thinking of creating an "HTML generator function" on the server side that takes in specific comment information and puts it into the generalized HTML, but that seems like it doesn't scale well, and I'm concerned that storing the intermediate file would be inefficient.
Is that the correct approach? Can you tell me known ways of doing this that aren't so hacky?
If you suggest using PHP, isn't there a problem where PHP connects to the database server and disconnects every time we use it? I would prefer it if the server connected once when it booted and did all the fetching as needed, instead of connecting every time. It seems to me that would involve far less overhead (correct me if I'm wrong...).
See Amadan's comment for the full solution: it's called a "template engine".
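For the record, here is a minimal sketch of that approach with Express, EJS, and the node-postgres (pg) driver; the comments table, its columns, and the comments.ejs template name are assumptions:

```javascript
// Minimal template-engine sketch: Express + EJS + node-postgres (pg).
// Table name "comments", its columns, and the template name are illustrative.
const express = require('express');
const { Pool } = require('pg');

const app = express();
const pool = new Pool();       // connects once at boot; connections are pooled
app.set('view engine', 'ejs'); // looks for templates under ./views

app.get('/pages/:id', async (req, res) => {
  const { rows } = await pool.query(
    'SELECT author, body FROM comments WHERE page_id = $1',
    [req.params.id]
  );
  // No intermediate file: the engine renders the HTML in memory per request.
  res.render('comments', { comments: rows });
});

app.listen(3000);
```

The comments.ejs template just loops over comments with EJS's <% %> tags. This also addresses the PHP concern: the pool connects when the server boots and reuses connections, rather than reconnecting on every request.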
Edit:
I highly recommend learning React. I learned EJS and it's difficult to scale. React is infinitely easier to program with for just a little more investment. The old web is much less declarative (and EJS even less so).
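For comparison, a minimal sketch of the same comments list as a React component; the shape of the comments prop ({ id, author, body }) is an assumption for illustration:

```javascript
// The same comments list, written declaratively as a React component.
// The shape of the "comments" prop is an assumption for illustration.
function CommentList({ comments }) {
  return (
    <ul>
      {comments.map((c) => (
        <li key={c.id}>
          <strong>{c.author}</strong>: {c.body}
        </li>
      ))}
    </ul>
  );
}
```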
Can someone explain to me, from a security point of view, why you can't connect to a MySQL DB directly from Dart?
There is no hard guideline on whether to connect frontend directly to backend or not. It is just a design practice that has been widely accepted and evolved over many years.
Typical app structure consists of
FRONTEND -> SOME MIDDLE LAYER -> BACKEND
where your middle layer handles all the interaction with and processing of the database, and the frontend uses this functionality through some sort of API. Having this layer is extremely helpful when the application scales; it gives the frontend an added layer of abstraction.
It is not advisable to wire your frontend (your Flutter app) directly to the DB (MySQL), because any capable attacker could use a basic man-in-the-middle attack to learn your DB structure, connections, and queries (there are some pretty effective decompilers out there) and alter your data, and you might not even know what caused the data to change unless you've applied some checks at the DB layer.
Also, your frontend logic should be centred on the end user rather than on handling the user's data. Any backend system (Java, Node, etc.) gives you added functionality and the freedom to parse and present the data from either side.
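As an illustration, here is a rough sketch of such a middle layer in Node.js with Express and the mysql2 driver; the Flutter app would call this HTTP endpoint instead of opening a database connection itself (the route, table, and credential names are assumptions):

```javascript
// Rough middle-layer sketch: the app talks HTTP to this API, and only this
// server holds the MySQL credentials. Assumes the "mysql2" package;
// route, table, and credential names are illustrative.
const express = require('express');
const mysql = require('mysql2/promise');

const app = express();
const pool = mysql.createPool({
  host: 'localhost',
  user: 'api_user',                 // credentials never ship in the app binary
  password: process.env.DB_PASSWORD,
  database: 'appdb',
});

app.get('/api/users/:id', async (req, res) => {
  // Parameterized query: the client never sees or injects raw SQL.
  const [rows] = await pool.query(
    'SELECT id, name FROM users WHERE id = ?',
    [req.params.id]
  );
  res.json(rows[0] ?? null);
});

app.listen(3000);
```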
You can use the sqlite package to store basic data on the device, like session tokens and app configuration, but it is advisable to keep the main user data, such as logins, in a separate place; or better yet, you can use the firebase plugin to store data in a document structure in the cloud.
I'm looking to use SSRS for multi-tenant reporting, and I'd like the ability to have runtime-chosen Shared Data Sources for my reports. What do I mean by this? I could be flexible, but I think the two most likely possibilities are the following (I'm open to others as well):
The Shared Data Source is dictated by the client's authentication. In my case, the "client" is a .NET application and not the user, so if this is a viable path then I'd like to somehow have the MainDB (that's what I'm calling it) Shared Data Source selected by the Service Account that the client logs in as.
Pass the name of the Shared Data Source as a parameter and let that dictate which one to use. Given that all of my clients are "trusted players", I am comfortable with this approach. While each client will have its own representative Service Account, it's just for good measure and should not be important. So instead of just calling the data source MainDB, we could instead have Client1DB and Client2DB, etc. It's okay if a new data source means a new deployment but I need this to scale easily enough as well to ~50 different data sources over time.
Why? Because we have multiple/duplicate copies of our production application for multiple customers, but we don't want to duplicate everything, just the web apps and databases. We're fine with some common "back-end" things. And for SSRS, because of how expensive licenses are (and how rarely reports are run by our users), we really want to have just a single back end for all of our customers. (I actually have a second one on standby for manual disaster-recovery situations; we don't need to be too fancy here, as reports are the least important DR concern we have.)
I have seen this question which points to this post but I was really hoping there was a better way than this. Because of all of those additional steps/efforts/limitations/etc, I'd rather just use PowerShell to script duplicate deployments of the reports with tweaked hardcoded data sources instead of standardizing on the steps in that post. That solution feels WAY too hacky to me and doesn't seem to scale very well at all.
I've done this a bunch of terrible ways (usually hardcoded in a dynamic script), and then I discovered it's actually quite simple.
Instead of using a Shared Data Source, use an embedded connection and build your connection string from parameters (or any string-manipulation expression)...
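For example, here is a sketch of an expression-based embedded connection string; the server name myReportServer and the ClientDB parameter are assumptions for illustration:

```
="Data Source=myReportServer;Initial Catalog=" & Parameters!ClientDB.Value
```

The ClientDB report parameter then selects the database at run time, which scales to ~50 tenants without duplicating report deployments.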
I am about to:
create a web interface where the user can type in email addresses, which are then sent up to the server in one JSON payload, which is then used for messaging those users.
I also have the requirement to be able to upload a CSV file with a long list of email addresses. The problem is that the number of email addresses can be very large. We're talking thousands or even more.
Theoretically, I can either parse the CSV file in the front end and send the email addresses up in a JSON object (as I already have the API for the first use case, where the email addresses are typed in and sent as JSON), or I could upload the CSV files to our DB and do the parsing on the server side.
Should I consider processing the CSV files in the front end at all?
What would be a "safe" number of items to process in the front end without breaking anything, or ending up with a heavily compromised user experience?
Can anyone comment from experience? Thanks
What would be a "safe" number of items to process in the front end without breaking anything, or ending up with a heavily compromised user experience?
This depends on the user's machine.
No one here would be able to give you a definitive answer on your question.
Anyway, you can use the Web Workers API.
Web Workers let you run long, asynchronous work on background threads without heavily affecting or freezing your UI. You can show a spinner indicating that the CSV is being processed; meanwhile, your users can interact with the UI just fine.
That's your best bet where supported.
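A minimal sketch of that setup, assuming a separate csv-worker.js file and some UI helpers (showSpinner, hideSpinner, sendToServer) that are placeholders for your own code:

```javascript
// main.js: hand the CSV text to a worker so the UI stays responsive.
const worker = new Worker('csv-worker.js');

document.querySelector('#csv-input').addEventListener('change', async (e) => {
  const text = await e.target.files[0].text(); // read the file in the page
  showSpinner();                               // placeholder UI helper
  worker.postMessage(text);                    // parse off the main thread
});

worker.onmessage = (e) => {
  hideSpinner();                               // placeholder UI helper
  sendToServer({ emails: e.data });            // your existing JSON endpoint
};
```

```javascript
// csv-worker.js: deliberately naive parsing, one email address per line.
self.onmessage = (e) => {
  const emails = e.data
    .split(/\r?\n/)
    .map((line) => line.trim())
    .filter(Boolean);
  self.postMessage(emails);
};
```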
Should I consider processing the CSV files in the front end at all?
String parsing is a process that modern browsers often optimise. If you move the computation to the server, you need to scale your server to meet the demand for these calculations as more and more users use your web app.
You could get playful with it and detect the processing capabilities of the user's machine: if it's capable, use Web Workers; if not, use the server to do this.
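For example, a simple feature-detection sketch, where parseWithWorker and uploadToServer stand in for your own front-end and server-side paths:

```javascript
// Feature-detect Web Workers and fall back to server-side parsing.
// parseWithWorker and uploadToServer are assumed application helpers.
if (typeof Worker !== 'undefined') {
  parseWithWorker(file);  // parse the CSV in a background thread
} else {
  uploadToServer(file);   // let the back end do the parsing instead
}
```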
The most comprehensive way to do this is by defining a browser test matrix and testing for yourself.
You can even emulate the specific bandwidths you want to target/test using network throttling in Chrome DevTools.
I am currently doing a project in Ruby on Rails and I have been presented with a dilemma.
The dilemma is that the users of my system will be uploading an Excel spreadsheet. The issue is whether I should read straight from this Excel spreadsheet into my front end, or load the spreadsheet into my MySQL database and serve it to my front end from there.
I have asked numerous people about this issue and have researched online, to no avail.
Any help would be much appreciated.
An Excel file is not a database. If you need to accept it as source input, parse it, copy the data into a real database, and connect to that.
The database is more flexible and efficient for querying and processing information.
I can think of two benefits, or rather options, of having them upload the Excel spreadsheet for processing by your back end:
1) It would be for your tracking purposes (who sent what, and what the back end did with it). In fact, consider that other formats/versions could be introduced: would it be important to keep the originals to identify what went wrong, or to ask "how can we handle this new format"?
2) On the other side (the front-end way, that is), you offload processing from the back end, but that means the browser app could get fairly complex, and depending on your Excel file (that is, if it has many relationships), sending that data up to the server could be complex. However, if it is simply a flat spreadsheet, say simple rows without totals or tax calculations, then it might be an advantage to load it in the browser and then send those rows up to the server, if offloading processing is of any importance.
However, point 2 is really diluted by point 1, which to me is of greater importance for the future migration of this service. So I would personally choose uploading the file and processing it on the back end.
Update
As you clarified in the comments, you are asking about using Excel on the back end as a database. I would agree with Simone Carletti's answer here; I would just add that a real database gives you much more flexibility, more tools, and more performance. With Excel, the difference is that you are loading a file, parsing it into some structure, and then saving it again (even if some .NET framework helps with this), whereas a database (MySQL, MongoDB, ...) gives you much more flexibility in structuring and querying, well worth the small headache of managing DB connections. You might just want to write a sample of both to evaluate them; the DB solution will probably win you over.