Why is HashRouter in react-router v6 not recommended?

On every react-router v6 documentation page which mentions HashRouter there is a short warning text stating that this kind of routing is not recommended. There is no explanation why.
Are there any major disadvantages? Does it break any API somehow?

Short answer: some devs think hash routing produces "ugly" URLs, but really, hash routing serves a purpose where the server environment isn't set up to handle the app's routes, or otherwise needs to serve all page requests from a single static URL.
This is about as much explanation as the docs provide.
HashRouter
<HashRouter> is for use in web browsers when the URL should not (or cannot) be sent to the server for some reason. This may happen in some shared hosting scenarios where you do not have full control over the server. In these situations, <HashRouter> makes it possible to store the current location in the hash portion of the current URL, so it is never sent to the server.
It's basically, "Only use hash routing if that's what you need and you know what you are doing." Generally, if you don't know whether you need hash routing, you probably just need the BrowserRouter.
Are there any major disadvantages? Does it break any API somehow?
I wouldn't say there are major disadvantages to using the HashRouter; it just serves a different purpose, like the NativeRouter on native mobile devices, or the MemoryRouter in node environments. I don't know if you are asking whether the HashRouter breaks any specific APIs, but I'm inclined to say no: it still works with redux, fetch/axios, and just about anything else I can think of that I've used along with hash routing.
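For reference, the two routers are drop-in replacements for each other at the top of the component tree. A minimal sketch (the routes are illustrative):

    import { createRoot } from "react-dom/client";
    import { HashRouter, Routes, Route } from "react-router-dom";

    // With HashRouter the location lives after the "#", e.g.
    // https://example.com/#/about -- the server only ever sees "/".
    // Swapping in BrowserRouter would give https://example.com/about instead.
    function App() {
      return (
        <HashRouter>
          <Routes>
            <Route path="/" element={<p>Home</p>} />
            <Route path="/about" element={<p>About</p>} />
          </Routes>
        </HashRouter>
      );
    }

    createRoot(document.getElementById("root")).render(<App />);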

Short answer: If your site is static, use whichever; it does not matter. But if you have a backend, hash routing is the recommended approach, and not just for React.
Explanation: When the hash is used, only your frontend application sees the route; no calls are made to the backend. This is important for production environments, where you have some backend and/or a reverse proxy (like NGINX), an API gateway, etc. Without the hash, the request has to be handled by them first, and only if no endpoint is found is it forwarded to the frontend. This creates unnecessary calls, which leads to performance issues, unhandled paths, and so on. And in modern cloud environments, it means more money.
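To make the difference concrete, here is a rough sketch of the catch-all a Node/Express backend needs when the app uses BrowserRouter; with HashRouter this route can be dropped entirely, because paths like /settings never reach the server:

    const express = require("express");
    const path = require("path");

    const app = express();
    app.use(express.static("build")); // the compiled SPA assets

    // BrowserRouter only: any unknown path must fall through to index.html
    // so the client-side router can take over after a refresh or deep link.
    // With HashRouter the hash part is never sent, so this is unnecessary.
    app.get("*", (req, res) => {
      res.sendFile(path.join(__dirname, "build", "index.html"));
    });

    app.listen(3000);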


Launch a desktop application from a browser

I'm trying to find a way to launch a desktop application from a browser for os x. This application will be for customers only and should only be used for the hardware we provide.
I'd like to start off by saying I think this is a stupid idea. I'm being forced to use this approach by our CEO. I understand security policies could be an issue, as well as glaring vulnerabilities.
Since they can only run this on a single device, I don't know that JWS would be the right solution. I haven't used it, but based on what I've read it doubles as a distribution method (which we don't want). If it were JWS, it would somehow have to recognize the device we provided them, to ensure it is being placed on the appropriate hardware, possibly based on the serial # (which I don't believe you can get from the browser).
Additionally, the browser would call methods and pass arguments to the application.
Is this even possible? If so, what tool would you recommend? Again, I'm only the messenger for this terrible idea.
You probably have Chrome or Safari configured to handle http://<uri> URIs, but many other types exist. Have you ever seen custom URI schemes like itunes://<uri>, steam://<uri>, or skype://<uri>?
Just like for http, when your OS tries to fetch the resource, it will attempt to handle the request in the application that registered a handler for that scheme.
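For example, once the desktop app has registered a handler for a made-up scheme (myapp:// here is purely hypothetical), the page can trigger it and pass arguments in the URI:

    // "myapp" is a hypothetical scheme the desktop application registers
    // with the OS; arguments travel as ordinary URI components.
    function launchDesktopApp(action, serial) {
      window.location.href =
        "myapp://" + encodeURIComponent(action) +
        "?serial=" + encodeURIComponent(serial);
    }

    // e.g. launchDesktopApp("calibrate", "SN-12345");
    // If no handler is registered, the browser shows an error or silently
    // does nothing -- there is no reliable way to detect that from the page.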
If that's the kind of thing you're looking for, this question has already been answered.
My suspicion is that you were unaware of the term. If that answer works for you, we can mark this question as a duplicate.
To complement #naomik's answer (which I believe is the right one), there are projects like AppJS, Fluid or Electron which aim to bring web-based apps to the desktop.
As for getting the app and your browser (or should I say your server?) talking, you could use an approach built on message queues and WebSockets. It is certainly a sizeable effort of orchestration and workarounds, but in the end it can get you there.
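For the browser side of that orchestration, a minimal sketch, assuming the desktop app exposes a local WebSocket endpoint (the port and message shape are made up):

    // Hypothetical: the desktop app listens on a local WebSocket port.
    const socket = new WebSocket("ws://127.0.0.1:9000");

    socket.addEventListener("open", () => {
      // Pass a "method call" with arguments to the app as a JSON message.
      socket.send(JSON.stringify({ method: "openDocument", args: ["report.pdf"] }));
    });

    socket.addEventListener("message", (event) => {
      const reply = JSON.parse(event.data);
      console.log("app replied:", reply);
    });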
(Posted on behalf of the OP).
This does not appear to be possible. If you are considering this, please don't; there are better solutions. I have finally convinced my CEO to use Angular 2 inside Electron, for example.

How to keep backend session information in Polymer SPA

I'd like to log in to a RESTful back-end server written in Laravel 5, with the single-page front-end application leveraging Polymer's custom elements.
In this system, the persistence (CRUD) layer lives on the server. So authentication should be done at the server, in response to the client's API request. When a request is valid, the server returns a User object in JSON format, including the user's role for access control in the client.
My question is how I can keep the session even when a user refreshes the front-end page. Thanks.
This is an issue beyond Polymer, or even just single page apps. The question is how you keep session information in a browser. With SPAs it is a bit easier, since you can keep authentication tokens in memory, but traditional Web apps have had this issue since the beginning.
You have two things you need to do:
Tokens: You need a user token that indicates that this user is authenticated. You want it to be something that cannot be guessed, else someone could spoof it. So the token had better not be "jimsmith" but something more reliable. You have two choices. Either you have a randomly generated token which the server stores, so that when it is presented on future requests the server can validate it; this is how most session managers work in app servers, like nodejs sessions or Jetty sessions. Or you do something cryptographic, so that the server only needs to validate the token mathematically, without checking a store to see if it is valid. I did that for node in http://github.com/deitch/cansecurity but there are various options for it.
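Both flavors are a few lines in node; a rough sketch (the secret is a placeholder, and this is illustrative rather than production-grade):

    const crypto = require("crypto");

    // Option 1: random opaque token. The server must store it (e.g. in a
    // session table) and look it up on every request.
    function randomToken() {
      return crypto.randomBytes(32).toString("hex");
    }

    // Option 2: self-validating token. An HMAC signature lets the server
    // verify the token mathematically, with no session store lookup.
    const SECRET = "change-me"; // placeholder; load from config in practice

    function signedToken(username) {
      const payload = username + "." + Date.now();
      const sig = crypto.createHmac("sha256", SECRET).update(payload).digest("hex");
      return payload + "." + sig;
    }

    function verifyToken(token) {
      const i = token.lastIndexOf(".");
      if (i < 0) return false;
      const given = token.slice(i + 1);
      const expected = crypto.createHmac("sha256", SECRET)
        .update(token.slice(0, i)).digest("hex");
      return given.length === expected.length &&
        crypto.timingSafeEqual(Buffer.from(given), Buffer.from(expected));
    }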
Storage: You need some way to store the tokens client-side that does not depend on JS memory, since you expect to reload the page.
There are several ways to do client-side storage. The most common by far is cookies. Since the browser stores them without your trying too hard, and presents them whenever you access the domain that the cookie is registered for, it is pretty easy to do. Many client-side and server-side auth libraries are built around them.
An alternative is html5 local storage. Depending on your target browsers and support, you can consider using it.
There are also ways you can play with URL parameters, but then you run the risk of losing them when someone switches pages. It can work, but I tend to avoid it.
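For illustration, the two workable options look like this in the browser (names are arbitrary):

    const token = "abc123"; // whatever token the server returned at login

    // Cookie: the browser sends it automatically with every request
    // to this domain.
    document.cookie = "authToken=" + encodeURIComponent(token) +
      "; path=/; max-age=86400";

    // HTML5 local storage: survives reloads, but you must attach the
    // token to requests yourself (e.g. in an Authorization header).
    localStorage.setItem("authToken", token);
    const saved = localStorage.getItem("authToken");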
I have not seen any components that handle cookies directly, but it shouldn't be too hard to build one.
Here is the gist for the cookie management code I used for a recent app. Feel free to wrap it to build a Web component for cookie management... as long as you share alike!
https://gist.github.com/deitch/dea1a3a752d54dc0d00a
UPDATE:
component.kitchen has a storage component here http://component.kitchen/components/TylerGarlick/core-resource-storage
The simplest way, if you use PHP, is to keep the user in a PHP session (like a normal non-SPA application).
PHP will store the user info on the server and automatically generate a cookie that the browser will send with every request. With a single server and no load balancing, the session data is local and very fast.

How Would I Go About Using Node.js For Frontend And Wordpress As The Backend?

I've had the thought of using Wordpress as a CMS backend, because a lot of people know it and it is easy to use, and then using Node.JS as the front-end. You're probably thinking: why would I want to do that in the first place, and what is the advantage?
I want to use websockets, and the wonderful Socket.io library for Node.JS provides beautiful cross-browser websocket support. Essentially, I want a user to come to the site, a websocket to be created, and content to be fed to the frontend asynchronously as JSON and decoded there, all without page refreshes.
Effectively I am making Wordpress a real-time CMS. You visit a site, but every link you click fetches the page as JSON and returns it via a websocket, to save multiple requests and, of course, page size.
How do I go about getting Node.JS talking to a MySQL database, pulling out info and then showing it? Any tutorials, resources and other useful tips would be greatly appreciated. A few of my colleagues have wondered the same thing, so I think the answers will be a big help to everyone.
To be exact, you can't use Node.js for a front-end solution, since it runs on the server, not the browser (think of it like any other server-side language such as PHP, JSP etc).
You can, however, create the described solution with jQuery or any other JavaScript library; you just have to implement the data transfer with Socket.IO. On the server side you'd need something to handle websockets, so the most natural choice would be Node.js. But since you want to use Wordpress, it gets really complicated: Wordpress is not meant to be used the way you described, so I'm afraid you'd have to write your CMS from the ground up in Node.
Also, the way you described has a huge flaw. Search engine crawlers are still unable to parse and run Javascript, so if all of your content is loaded dynamically, the site would seem empty to Google and others. It would be impossible to ever make it into the search results, rendering your site pretty much useless.
For MySQL and other modules for Node, you should check NPM registry and the Node modules page.
EDIT
After Dwayne explained his solution in comments, this is how I'd do it:
I'd use jQuery for the front-end: bind a delegated handler on the document with .on(), setting the selector to 'a', so that every anchor on the page fires the handler.
The handler parses the a.href attribute and figures out whether it's an external link, which shouldn't be handled by Javascript, or a link to the next page, to an article, etc. You can prevent the default action by calling e.preventDefault() in the handler, which stops the browser from redirecting to the location.
Then the handler gets the content as JSON by calling .getJSON() with a URL based on the article. The easiest way would be to have a certain pattern (such as all URLs under www.domain.com/api) redirect to the Node service via .htaccess, to prevent cross-domain problems.
Node then sees the request, extracts the parameters and figures out what the user wants. It connects to the MySQL database with this module (it's as simple as it gets) and returns the corresponding content formatted as JSON. Don't forget to set the Content-Type header to 'application/json'.
jQuery gets the response, figures out the type of the request and updates the content accordingly. Profit.
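Put together, the front-end half of that flow is only a handful of lines. A rough sketch (the /api prefix and response shape are assumptions from the steps above):

    // Delegated handler: catches clicks on every anchor, present or future.
    $(document).on("click", "a", function (e) {
      // Let links to other domains behave normally.
      if (this.hostname !== location.hostname) return;

      e.preventDefault();

      // Assumes .htaccess routes /api/* to the Node service, which answers
      // with JSON like { title: "...", content: "..." }.
      $.getJSON("/api" + this.pathname, function (data) {
        $("#title").text(data.title);
        $("#content").html(data.content);
      });
    });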
As you can see, I wouldn't use WebSockets in this case, since you wouldn't really benefit much from them. They are mostly meant for small real-time updates (no huge HTTP headers, to reduce bandwidth) that flow both ways, meaning the server can also push data into the browser without the browser asking for it. In a blog context this is not required, and you won't have that many requests, so the difference in bandwidth wouldn't be noticeable anyway. If you would like to use them for educational purposes, basically just replace the getJSON part with Socket.IO; I'm not sure whether Apache supports proxying WebSockets, though. Extra information about Socket.IO basics is here.
Edit: I overlooked the part about 'using Node.js on the front-end'. As Vahur Roosimaa said, Node.js is on the server side (think of it as the Nginx/Apache + PHP combination). Node isn't a frontend library like jQuery.
If you want, you can use it just for the websockets functionality (I suggest using Socket.IO).
Nice tutorials about Node.js and MySQL:
http://www.giantflyingsaucer.com/blog/?p=2596
http://mclear.co.uk/2011/01/26/very-simple-nodejs-mysql-select-query-example/
http://www.hacksparrow.com/using-mysql-with-node-js.html
This SO question might also help: MySQL with Node.js
Also check the examples from the github repo of node-mysql.
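For a taste of what those examples boil down to, a minimal sketch using node-mysql (credentials are placeholders; wp_posts is the standard Wordpress posts table):

    var mysql = require("mysql");

    var connection = mysql.createConnection({
      host: "localhost",
      user: "wp_user",      // placeholder credentials
      password: "secret",
      database: "wordpress"
    });

    connection.connect();

    // Fetch the ten most recent published posts.
    connection.query(
      "SELECT ID, post_title FROM wp_posts WHERE post_status = ? " +
      "ORDER BY post_date DESC LIMIT 10",
      ["publish"],
      function (err, rows) {
        if (err) throw err;
        rows.forEach(function (row) {
          console.log(row.ID, row.post_title);
        });
        connection.end();
      }
    );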
If you want something more advanced like an ORM, I recommend Sequelize.
Another good question from SO: Which ORM should I use for Node.js and MySQL?
You should check out Wordscript, to which I recently added a Node JS example that can act as a simple front end for basic post retrieval from a Wordpress database.
It uses a common mysql library for node, generates MySQL queries from GET parameters, and renders the data as it is retrieved from the database, including tags.
Wordscript aims to free backend/frontend developers from being forced to work with the Wordpress PHP codebase, while still allowing Wordpress's administrative interface to be used when needed (and prudent to do so). APIs have been written in Ruby and PHP that both return JSON feeds and work generally the same way the node version does, so that's an additional option where a scripting language is available.
One option you have, if you want to have wordpress as the CMS and keep its admin UI, is to write your wordpress templates to output JSON instead of HTML.
In contrast to Wordscript, this is more solution specific, since you will need to write your JSON output for every template/data you want. The upside is that you can create the JSON specifically for your needs.
On the node side, you write a small server that consumes the JSON, letting you use whatever JavaScript templating language you want. Node.js will also help with performance, since you can keep the rendered content and/or the JSON output in memory, saving you round trips to the wordpress templates.
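A rough sketch of that small server, assuming the Wordpress templates expose JSON at a URL like /?json=1 (the endpoint and response shape are made up):

    var http = require("http");

    var cache = {}; // rendered output, keyed by path

    http.createServer(function (req, res) {
      if (cache[req.url]) {
        res.writeHead(200, { "Content-Type": "text/html" });
        return res.end(cache[req.url]);
      }

      // Hypothetical endpoint: a Wordpress template that prints JSON.
      http.get("http://wordpress.local" + req.url + "?json=1", function (wpRes) {
        var body = "";
        wpRes.on("data", function (chunk) { body += chunk; });
        wpRes.on("end", function () {
          var post = JSON.parse(body);
          var html = "<h1>" + post.title + "</h1>" + post.content;
          cache[req.url] = html; // save the round trip next time
          res.writeHead(200, { "Content-Type": "text/html" });
          res.end(html);
        });
      });
    }).listen(3000);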
I wrote a blog about this, which describes more of the benefits of using nodejs and wordpress together.
http://www.1001.io/improve-wordpress-with-nodejs/

HTTP DELETE request with extra authentication

I was searching for a solution to the following problem, so far without success: I'm planning a RESTful web service where certain actions (e.g. DELETE) should require a special authentication.
The idea is, that users have a normal username/password login (session based or Basic Auth, doesn't really matter here) using which they can access the service. Some actions require an additional authentication in form of a PIN code or maybe even a one-time password. Including the extra piece of authentication into the login process is not possible (and would miss the point of the whole exercise).
I thought about special headers (something like X-OTP-Authentication), but that would make it impossible to access the service via a standard HTML page (there is no way to include a custom header in a link).
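(To be clear, sending such a header from script is easy; it's only plain links and forms that can't carry one. Roughly, with a made-up URL:)

    fetch("/api/items/42", {   // hypothetical resource
      method: "DELETE",
      headers: {
        "Authorization": "Basic " + btoa("user:pass"), // the normal login
        "X-OTP-Authentication": "123456"               // the extra PIN/OTP
      }
    }).then(function (res) {
      if (!res.ok) throw new Error("delete rejected: " + res.status);
    });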
Another option was HTTP query parameters, but that seems to be discouraged, especially for DELETE.
Any ideas how to tackle this problem?
From REST Web Service Security with jQuery Front-End
If you haven't already, I'd recommend some reading on OAuth 1.0 and 2.0. They are both used by some of the bigger APIs, such as Facebook, Netflix, Twitter, and more. 2.0 is still a draft, but that hasn't stopped anyone from implementing and using it, as it is simpler for a client to use. It sounds like you want something more complicated and more secure, so you might want to focus on 1.0.
I always found Netflix's Authentication Overview to be a good explanation for clients.

How do I get a verified location using HTML5?

I've been playing with HTML5 location lookups recently, and it's relatively straightforward to pull someone's location from a device like an iPhone.
I want to write an app that uses location data, but it's important that the location be factual. In other words, I need to prevent people from authoring a fake post to the backing website / web service with mocked-up GPS coordinates.
Is there any way to collect GPS coordinates from a mobile device using the HTML5 geolocation APIs and securely transmit them back to a web service, in a way that someone wouldn't be able to author a post with the same data and "game the system", so to speak?
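For reference, the collection side I'm talking about is just a few lines; a rough sketch (the endpoint is made up):

    navigator.geolocation.getCurrentPosition(function (pos) {
      // Anything sent from here can be replayed or forged by the client.
      fetch("/api/checkin", { // hypothetical endpoint
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          lat: pos.coords.latitude,
          lng: pos.coords.longitude,
          accuracy: pos.coords.accuracy
        })
      });
    }, function (err) {
      console.error("geolocation failed:", err.message);
    });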
Not without some serious encryption of the payload on the client. And if there is money involved, someone will reverse engineer it and figure out how to create valid payloads themselves. Remember: if there is money or fame involved, somebody will think the effort of doing something like this is "worth it". If your web service is public and not using some kind of encryption, nothing on the client will ensure that someone with a network connection can't sniff your protocol and fake whatever data they want. And SSL won't cut it: anyone can proxy the SSL connection on their local network, decrypt the payload, and inspect it to their heart's content.
No. Completely agree with the answer from fuzzy lollipop. If you’re talking to a remote machine, the data can always be faked. Always always. What makes you certain you’re even talking to a mobile device at all? The User-Agent string? Pfft, it can be faked. Talking to a GPS? Pfft, could be coming from a predefined path. Talking to a web browser? Pfft, could be a bot, or some other malware.
And don’t think encryption (i.e. HTTPS) is going to help you. The client could edit any of your HTML, CSS, or JavaScript on-the-fly — take Firebug or Greasemonkey for example.
The reasons why you can’t trust the client are the same as the reasons why exploits such as SQL or HTML injection are so common. Ever heard the phrase “the customer is always right”? Well, the customer may be right, but the client is always untrustworthy.
The system is there to be gamed. As flaws are discovered, you patch them one by one. It’s more like leapfrog, rather than achieving the holy grail. Bruce Schneier’s quip “security is a process, not a product” comes to mind. Asking for a system that “can’t be gamed” is missing the point. What you need to be doing is creating a system where the server sanitises the data, and/or rejects bad data — fuzz testing is not a bad idea, either.
That’s about the best you can do without shipping custom untamperable mobiles to your customers with the OS in ROM, and the inside sealed with epoxy.