I was trying to learn Perl and ended up writing a script that tries to find all possible schedules for a given list of course names, where a possible schedule means there are no clashes between course times, by iterating through all sections.
I crawled my university's schedule of classes and placed it in a messy data structure: a hash of hashes of 2D arrays, where the first hash key is the subject, the second hash key is the course number, and the value is an array of sections, each section being an array of all its data (not the most appealing data structure).
I then processed all schedule combinations by iterating through every possible combination and returning all schedules that didn't have a clash as a 3D array (where each entry is a schedule, each schedule holds courses, and each course holds its specific data).
Right now I hard-code the input in the script as a 2D array where each element consists of a subject name and a course number.
What I want to do now is to transform this into a website.
I took an online course on databases, but I don't have a clue how to handle databases from Perl or whether that is even a good approach.
I don't know how to store the crawled data permanently so it can be used for further computations.
I know basic HTML, CSS, and JavaScript, but I have no idea how to integrate the script with them and take input from the user (I only know how to do that in JavaScript). Google led me towards "CGI scripts", but I don't know anything about servers except that they are responsible for the computation done by a website and that one of them is called Apache or AJAX. I am not sure whether this is true or not, but I want to give you an idea of my level of expertise.
Could you please point me in the right direction by telling me what I need to learn in order to be able to make this website?
I took an online course on databases, but I don't have a clue how to handle databases from Perl or whether that is even a good approach.
Database access in Perl is done via DBI. You can use DBIx::Class to get a nice OO abstraction for it.
I don't know how to store the crawled data permanently so it can be used for further computations.
Databases are a good choice.
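As a minimal sketch (assuming SQLite as the database; the table and column names are only placeholders for your crawled course data), storing and reading sections with DBI could look roughly like this:

use strict;
use warnings;
use DBI;

# Connect to (and create, if missing) a local SQLite file.
my $dbh = DBI->connect("dbi:SQLite:dbname=schedule.db", "", "",
                       { RaiseError => 1, AutoCommit => 1 });

# Placeholder schema for crawled sections.
$dbh->do(q{
    CREATE TABLE IF NOT EXISTS sections (
        subject   TEXT,
        course    TEXT,
        section   TEXT,
        days      TEXT,
        start_min INTEGER,
        end_min   INTEGER
    )
});

# Insert one crawled row.
my $sth = $dbh->prepare(
    "INSERT INTO sections (subject, course, section, days, start_min, end_min)
     VALUES (?, ?, ?, ?, ?, ?)"
);
$sth->execute('MATH', '101', '01', 'MW', 600, 650);

# Later, read everything back for the clash computation.
my $rows = $dbh->selectall_arrayref(
    "SELECT subject, course, section, days, start_min, end_min FROM sections",
    { Slice => {} }
);

This needs DBD::SQLite installed; any other DBD driver works the same way through the DBI interface.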
I know basic HTML, CSS, and JavaScript, but I have no idea how to integrate the script with them and take input from the user (I only know how to do that in JavaScript).
Use a <form>. Set the action to the URL of a server-side program. Submit the form.
Google led me towards "CGI scripts", but I don't know anything about servers except that they are responsible for the computation done by a website and that one of them is called Apache or AJAX. I am not sure whether this is true or not, but I want to give you an idea of my level of expertise.
An HTTP server listens for HTTP requests and provides HTTP responses. Browsers (and search engines, and other clients) make HTTP requests to servers that host websites. The servers respond with the data (HTML, CSS, JavaScript, Images, etc) needed to render the site and the client renders it (or indexes it, or whatever).
Apache HTTPD is one of the most commonly used HTTP servers.
CGI is a means by which an HTTP server can determine what to respond with by running a program instead of just handing over a static file. It is very simple but not very efficient. Some alternatives are described in this answer.
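As a rough sketch of the <form> advice above (using the CGI module from CPAN; the "courses" field name is made up for illustration), one script can both show the form and handle its submission:

#!/usr/bin/perl
use strict;
use warnings;
use CGI;

my $q = CGI->new;
print $q->header('text/html');    # HTTP headers must come first

# 'courses' is just an illustrative field name.
my $courses = $q->param('courses');

if (defined $courses && length $courses) {
    # A real script would run the schedule search here.
    print "<p>You asked for: ", $q->escapeHTML($courses), "</p>";
}
else {
    print <<'HTML';
<form method="post" action="">
  <input name="courses" placeholder="e.g. MATH 101, PHYS 102">
  <button>Find schedules</button>
</form>
HTML
}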
Ajax has nothing to do with this. It means "Using JavaScript, in a web page, to tell the browser to make a new HTTP request (without leaving the page) and make the response available to the JavaScript".
For a pure Perl setup, the HTTP::Daemon and HTTP::Response modules are your best friends. I tried to write a web server using nothing but IO::Socket and nearly drove myself crazy.
Getting started is pretty easy.
use strict;
use warnings;
use HTTP::Daemon;

# Where the server should listen.
my %opt = (
    'listen-host' => 'localhost',
    'listen-port' => 8808,
);

# Create the listening socket.
my $d = HTTP::Daemon->new(
    LocalPort => $opt{'listen-port'},
    LocalAddr => $opt{'listen-host'},
    Reuse     => 1,
) or die "HTTP listener failed at $opt{'listen-host'}:$opt{'listen-port'} - $!";

print "Started HTTP listener!\n";

# Block until a browser connects.
my $c = $d->accept;
Now your script will sit there until you get a connection from a browser. Of course you still need to send a response, so see HTTP::Response on how to send data back.
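A minimal, hedged sketch of answering one request on that connection (the page content here is obviously just a placeholder) could look like this:

# Handle requests on the accepted connection.
while (my $request = $c->get_request) {
    if ($request->method eq 'GET') {
        my $response = HTTP::Response->new(200);
        $response->header('Content-Type' => 'text/html');
        $response->content('<html><body>Hello from Perl!</body></html>');
        $c->send_response($response);
    }
    else {
        $c->send_error(405);    # method not allowed
    }
}
$c->close;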
This is going to be a partial/vague answer..
For the database, what you want to do is learn to use DBI. It is a database-independent API for talking to databases (it can even write to CSV files!). You will also need a driver for your database of choice.
As for the website, it is somewhat beyond my skills, and there are many ways to do it. Perl would be used server-side via something called CGI. JavaScript, on the other hand, is typically processed on the client side and is used to add dynamic elements to your site. Apache is web server software; it takes care of talking to your browser and passing it the relevant HTML pages. You might need to use it, but you would not need to code anything for it in basic use cases.
For Perl web pages, you can start with this tutorial to understand things better, and then look to PerlMonks for a better (and more up-to-date) answer. This post will also give you more practical advice, such as using Dancer.
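To give a feel for that last suggestion, here is a tiny, hedged sketch using Dancer2 (the routes, parameter name, and data are invented for illustration):

use Dancer2;

# A hypothetical route returning the list of known courses as JSON.
get '/courses' => sub {
    # A real app would pull these from a database.
    return to_json([ 'MATH 101', 'PHYS 102' ]);
};

# A hypothetical route receiving a form submission and running the clash check.
post '/schedules' => sub {
    my $courses = body_parameters->get('courses');
    # ... call the existing schedule-finding code here ...
    return "Looking for schedules for: $courses";
};

dance;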
Related
I am creating a webpage in ReactJS for a post feed (with text, images, videos), just like Reddit, with infinite scrolling. I have created a single post component which will be provided with the required data. I am fetching the multiple posts from MySQL with axios. Also, I have implemented a redux store in my project.
I have also added post voting. Currently, I am storing all the posts from the DB in the redux store. If a user upvotes or downvotes, that change goes to the redux store as well as to the database, and the web page re-renders the element with ease.
Is it feasible to use the redux store for this, as the data will grow soon, maybe to millions of posts or more?
I previously used the useState hook to store all the data, but with that I had issues with dynamic re-rendering, as I had to set state every time a user voted.
If anyone has any efficient way, please help out.
Seems that this question goes far beyond just one topic. Let's break it down to the main pieces:
Client state. You say that you are currently using redux to store posts and update the number of upvotes as it changes. The thing is that this state is not actually state in your case (or at least most of it isn't). It is a common misconception to treat whatever data comes from the API as state. In most cases it's not state, it's a cache. And you need a tool that makes working with a cache easier. I would suggest trying something like react-query or swr. This way you will avoid a lot of boilerplate code and hand off server data cache management to a library.
Infinite scrolling. There are a few things to consider here. First, you need to figure out how you are going to detect when to preload more posts. You can do it by using the IntersectionObserver. Or you can use some fancy library from NPM that does it for you. Second, if you aim for millions of records, you need to think about virtualization. In a nutshell, it removes elements that are outside of the viewport from the DOM so browsers don't eat up all memory and die after some time of doomscrolling (that would be a nice feature, though). This article would be a good starting point: https://levelup.gitconnected.com/how-to-render-your-lists-faster-with-react-virtualization-5e327588c910.
Data source. You say that you are storing all posts in a database but don't mention any API layer. If you are shooting for millions and this is not a project just for practicing your skills, I would suggest having an API between the client app and the database. Here are some good questions where you can find out why it is not the best idea to connect to the database directly from the client: one, two.
I have a question about RESTful services. In REST, the POST method is used to create an entity.
And GET is used to query entities. Right?
As I read in other posts, it is not allowed in HTTP to send a GET request with a body.
But when I want to send JSON to make a query, what is the best way? Are there any best practices, or how do you solve such JSON queries?
Thanks for your answers
In REST the POST method is used to create an entity. And GET is used to query entities. Right?
Not really. GET is used to fetch representations of resources. POST is deliberately vague -- anything not worth standardizing can use POST.
when I want to send JSON to make a query, what is the best way?
There is no best way to do it, just trade-offs.
The basic plot of HTTP is that you GET representations of resources. If the resource you want doesn't exist, you create a new one. So the "REST" flow would look something like sending a request to the server to create an "answer to my query" resource, and then using GET to obtain the current representation of that resource. Which is great, because we can fetch the latest representation of that resource any time we're worried that our copy is out of date. Other people with the same query can use the same resource, so we can use a general-purpose cache to take over a lot of the work. The end result is "web scale".
OK, not that great, because we learned that sending information over insecure channels is a bad idea; but we can put a general-purpose caching proxy in front of our server, and get some scale that way.
But "create a new resource" is a lot of ceremony when you only expect to need the query once.
Creating a new resource would use POST in this situation anyway, so why not return a representation of the solution right away? And the answer is: go right ahead! That works great... but it doesn't give you any cache support at all. You are effectively performing a remote call under the guise of modifying a resource.
Also, POST doesn't promise idempotent semantics -- on an unreliable network, requests can get lost, and general purpose components won't know that in this particular case it is harmless to just repeat the same request.
PUT has idempotent semantics... but it also has very specific opinions about the contents of the payload that don't match "query" at all.
You can dig through other standardized methods, but there aren't really any good fits. The only methods that are close are SEARCH and REPORT, which are coupled to WebDAV semantics.
You can invent your own non standard method; but general purpose components won't understand it.
You can standardize a new method with the semantics you need, but that's a lot of work.
Or you can just use POST.
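For example, in Perl (the endpoint URL and the query fields are made up for illustration), a client that just uses POST for a JSON query could look like this:

use strict;
use warnings;
use LWP::UserAgent;
use JSON::PP qw(encode_json decode_json);

my $ua = LWP::UserAgent->new;

# POST the query document; the server decides what "searching" means.
my $res = $ua->post(
    'https://api.example.com/searches',    # illustrative endpoint
    'Content-Type' => 'application/json',
    Content        => encode_json({ status => 'open', tag => 'perl' }),
);

die "Query failed: ", $res->status_line unless $res->is_success;

my $results = decode_json($res->decoded_content);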
Remember, the web took over the world using nothing more than GET and POST. So it's probably fine.
Back when I first started developing client/server apps that needed to use HTTP to send data to the server, I was pretty naive when it came to HTTP methods. I literally used GET requests for EVERYTHING.
I later learned that I should use POST for sending data and GET for requesting data; however, I was slightly confused as to why this is best practice. From a functionality perspective, I was able to use either GET or POST to achieve the exact same thing.
Why is it important to use specific HTTP methods rather than using the same method for everything?
I understand that POST is more secure than GET (GET makes the data visible in the HTTP URL); however, couldn't we just use POST for everything then?
I'm going to take a stab at giving a short answer to this.
GET is used for reading information. It's the 'default' method, and everything uses this to jump from one link to the next. This includes browsers, but also crawlers.
GET is 'safe'. This means that if you do a GET request, you are guaranteed that you will never change something on the server. If a GET request could cause something to be deleted on the server, this can be very problematic, because a spider/crawler/search engine might assume that following links is safe and automatically delete things.
This is why we have a couple of different methods. GET is meant to allow you to 'get' things from the server. Likewise, PUT allows you to set something new on a server and DELETE allows you to remove something.
POST's biggest original purpose is submitting forms. You post a form to the server and ask the server to do something with it.
Any client (a human/browser or machine/crawler) knows that POST is 'unsafe'. It won't do POST requests automatically on your behalf unless it really knows that it's what you (the user) want. It's also used for things that are kind of similar to submitting forms.
So when you design your website, make sure you use GET only for getting things from the server, and use POST if your Ajax request will cause 'something' to change on the server.
Fun fact: there are a lot of official HTTP methods. At least 30. You'll probably only use a very few of them though.
So to answer the question in the title more precisely:
Why are there multiple HTTP Methods available?
Different HTTP methods have different rules and restrictions. If everyone agrees on those rules, we can start making assumptions about what the intent is. Because these guarantees exist, HTTP servers, clients, and proxies can make smart decisions without understanding your specific application.
Suppose you have a task app in which you can store data and delete data, and suppose the route of your web page is /xx. To get the web page, to store data using the add button, or to delete data using the delete button, you always send a request to /xx. How will the web server know whether you are asking for the web page, want to add data, or want to delete data, when /xx is the same for all requests? That is why we have different request methods: the browser always sends the method name (GET, POST, PUT, DELETE) with the request so the server can understand what you need.
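To make that concrete, here is a rough Perl/Dancer2 sketch (the path and handlers are invented for illustration) of one path behaving differently per method:

use Dancer2;

# Same path, different behaviour depending on the HTTP method.
get '/tasks' => sub {                       # read the task list
    return to_json([ 'buy milk', 'write report' ]);
};

post '/tasks' => sub {                      # add a task
    my $task = body_parameters->get('task');
    return "added: $task";
};

del '/tasks/:id' => sub {                   # remove a task
    return "deleted task " . route_parameters->get('id');
};

dance;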
I'm new to service workers. I want to integrate service workers into my site. My motive is to improve the performance of my website, not to make the website work offline. It's a real estate website.
So what I have done till now is create modular templates of my site and store them in the cache.
For example, template1:
<div>
  <p>#data</p>
</div>
Whenever a fetch occurs on my page, I first call an API through Ajax to get the data, replace the #data variable in the cached response with the actual API response, and then return the new response to the browser.
Question 1: Is that the right approach for HTML template caching?
With the above approach I run into challenges like loops and conditional statements in my HTML.
Question 2: Is there any way that I can cache the templates with loops and change them at run time?
Question 3: If I show the cached app shell to the user initially, is that going to affect my site's SEO ranking?
Question 4: I have to write new templates for the existing code, which means I have to maintain two codebases, one for service workers and one for normal browsers that don't support service workers. Any solution to this?
Regards
That's a lot of questions and I don't think service workers are necessarily your best bet.
Question 1: Personally I recommend using a framework such as KnockoutJS, Angular, Polymer, etc. for your HTML templates. These often have template caching built in.
Question 2: Instead of your current approach, whereby you replace the variables before 'sending them to the browser', most frameworks would use some form of data binding which takes care of iterations and conditions within the browser.
Question 3: Caching the app-shell would have no effect on SEO and Google has been parsing JavaScript for a while; however personally I would recommend that the website's content loads without JavaScript and then JavaScript is only used to enhance the experience. This would be the same whether using the app-shell model or not.
Question 4: I do not understand your current setup and this would not be an ordinary scenario so you might have something wrong.
Service workers and the Cache API are ordinarily used to cache your static assets, usually fonts, CSS, JavaScript and HTML templates, and should result in improved performance as there are fewer HTTP requests; but there are other ways to improve performance that will address all of your questions without the use of service workers.
Tasked to detail a simple API, I did a little research and suspect everything I know about the Internet is wrong.
I've Googled for far longer than I care to admit, reading a number of articles, StackOverflow questions and websites that all seem to vehemently disagree. I recognize every developer does things differently, but I still suspect there is an official standard somewhere, or at least a general best practice (although, admittedly, not everyone follows whatever the best practice is).
This API would use JSON. That I cannot change.
What my local peers do/have told me (very likely incorrect):
HTTP is a complex, antiquated beast that we should deal with as little as possible. It's simply a vehicle for moving chunks of JSON back and forth, where the magic happens. All data and metadata should be in the JSON, and you can set it up exactly as you like.
Use a 200 status code for everything, even if there is an application-level error or problem with the user's input. The other error codes mean something went wrong with the HTTP operation--a catastrophic unexpected server error, using the wrong URL, that kind of thing.
"Envelope" the JSON data for messages from the server; have JSON properties for metadata and include the actual JSON object/array inside a consistent property, like "data"
HTTPS is "nice to have" but not important for minor projects
Use PUT requests for everything
Log in to get a randomized string of characters as an access token from the server. The server stores information on when the token expires, what account it is for and what IP address used it. Pass that access token to the server for every other call; the client does not store the password.
URLs tend to be verbs, like /register or /checkout or /changepassword. All other needed data is in the JSON. Each operation has its own URL
What I THINK might be right based on my reading, but not sure
HTTP is the divine data structure. Header information and server return codes can encompass any possible metadata and is, indeed, designed for that purpose. The contents need only contain the actual JSON object(s) the applications are acting upon. Put nothing in the JSON body that could possibly be part of the HTTP metadata.
ALWAYS use HTTPS, for everything
For any possible error (a form field didn't validate, the user's session expired, their game character is dead), send an HTTP status code. Try to pick what seems closest based on the W3C descriptions, but all that really matters is that you use it consistently in your system. The code should be enough to tell the client app what to do (show user validation errors and make them fix form input, make user log in again, take user back to main screen). The body, in case of errors, contains extra details about the error, if necessary.
The client app should pass login info with every request, in the HTTP header. This means it needs to use basic auth, which means it needs to remember the user's password.
The JSON data should never be in an "envelope". There is no standard format, because the contents directly represent the object(s) needed for the given operation as indicated by the combination of URL, GET/POST/PUT/DELETE
URLs tend to be nouns, like /user or /shoppingcart. Subdirectories of the URL refer to the object ID being acted on: /user/johndoe or /shoppingcart/12359. A URL could be used for different operations via GET (retrieve data), POST (update data), PUT (create new data), and DELETE (remove data).
I'm not even sure that either of these is fully right--can you tell me the rules for what is the official, or most recommended way to structure such an API?
You should read the relevant part of the Fielding dissertation, that defines what REST is: http://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
There is also an additional dissertation from Markus: http://www.markus-lanthaler.com/research/third-generation-web-apis-bridging-the-gap-between-rest-and-linked-data.pdf He started the work on creating a standard REST implementation: the http://www.hydra-cg.com/ RDF REST vocabulary and the http://json-ld.org/ RDF JSON format. Currently we don't have a standard solution for describing the uniform interface of an arbitrary REST service. This is as if we did not have an HTML standard. That's why we are not able to write REST browsers, only application-specific clients.
(Hydra is not production-ready; I guess they'll need another 2-3 years to standardize it and start building Hydra-specific tools. Until then we cannot really talk about real REST, because most APIs define an implementation-specific format or use a non-standard, more or less common format, like HAL.)