Universal routing with Express and React Router: understanding history behaviour - react-router

I am using React Router 4.0 and Express 4.14 to create an app that mixes single-page-app (SPA) and multi-page-app (MPA) behaviour. I don't know if that's good practice, but that is not the point; I am doing it to learn rather than for a real-world app. The idea comes from the scenario where you have strongly separated sections inside an app, for example a blog and a portfolio.
Client side
So, when I want to navigate as a SPA, I use the Link component from react-router-dom, like <Link to="/reactrouter-route">. If I want to make a request to a route handled by the server, I use <a href="/server-route">.
Server side
I have a middleware logging the path of every request received by my server. I define two routes, each serving a complete SPA. To keep with the blog/portfolio example, imagine I have the following:
const express = require('express');
const path = require('path');
const app = express();

// Log the path of every request that reaches the server.
app.use((req, res, next) => {
  console.log(req.path);
  next();
});

// Each route serves a complete, self-contained SPA.
app.get('/', (req, res) => {
  res.sendFile(path.join(__dirname, 'blog.html')); // sendFile needs an absolute path
});

app.get('/portfolio', (req, res) => {
  res.sendFile(path.join(__dirname, 'portfolio.html'));
});
Behaviour
When I go to /, the blog gets loaded as a SPA, and I can visit the different posts, navigating back to / whenever I want. Everything works as expected. All this navigation inside the SPA is managed by React Router, and the server only gets the first request to /.
Imagine that from a specific post, say /posts/some-post, I have a link to the portfolio. If I click it, I get a request at the server, and it responds with the portfolio SPA. I can navigate inside the portfolio SPA, but I cannot go back to /posts/some-post. I get the following error:
Cannot GET /posts/some-post
I thought the error was thrown by the server, but surprisingly the server doesn't get any request when I go back. I only get requests at the server when going forward through a link (and only with <a>).
I kept testing, and there is no problem if I go back from /portfolio to /; that works as expected.
It gets interesting
I defined a route on my server with the same pattern as one of my React Router routes: /posts/:postid. I set this route to redirect to /. Now I get the same error if I go from /posts/some-post to /portfolio and try to come back. This is not strange, since the server doesn't get a request. It's also normal that I reach / if I go straight to /posts/some-post by typing the URL into the browser.
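In Express terms, the extra route described above would look something like this (a sketch of the described setup, not code from the original post):
// Mirror of the React Router pattern /posts/:postid on the server,
// redirecting to the blog SPA's entry point as described above.
app.get('/posts/:postid', (req, res) => {
  res.redirect('/');
});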
But, once I go to /posts/some-post manually, I can go from /portfolio back to /posts/some-post without the error. Now it behaves as if the server were called. In fact, I do get a request on the server to fetch the static files. However, I don't get a request to /posts/some-post or /.
Even then, I would get an error if I go from /posts/some-other-post to /portfolio and try to go back.
Question
I guess this has to do with the cache, but I don't know what is going on there. When is React Router handling going back? When is the server handling it? How is the cache involved in this process?

It sounds like you need a clearer mental model of the roles of the server and the client in an SPA. "Single Page" is the important part.
The client, built with React, should never load pages from the server. It should be a "single page". In other words, you should not be using <a href="/server-route"> in your client app at all. The client should only get (JSON) data from the server, using something like fetch (https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API).
I highly suggest you check out Create React App, which also explains how to integrate with a Node API backend during development. Basically, you want all your client routes to be something like /post/:postid, handled by React Router; the matching React component would then use fetch to get the data from something like /api/posts/10. If you prefix all your requests to the server with /api, it should help your mental model.
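A minimal sketch of that pattern, assuming React Router 4 and a hypothetical /api/posts/:postid JSON endpoint on the Express side (all names are illustrative):
import React from 'react';

// Rendered by a client route such as <Route path="/post/:postid" component={Post} />.
class Post extends React.Component {
  constructor(props) {
    super(props);
    this.state = { post: null };
  }

  componentDidMount() {
    // React Router 4 exposes the matched params via props.match.
    fetch(`/api/posts/${this.props.match.params.postid}`)
      .then(res => res.json())
      .then(post => this.setState({ post }));
  }

  render() {
    const { post } = this.state;
    return post ? <article>{post.body}</article> : <p>Loading...</p>;
  }
}

export default Post;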

Related

How do I use Nodejs for backend using SQL workbench?

I am a beginner and this is my first full-stack web development project. I have completed the front-end part and created the related tables using MySQL, and now I want to link the tables to the front end using Node.js. How do I proceed further? Is it proper to use Workbench in the first place for a full-stack project? Please guide me.
MySQL Workbench is just an IDE for MySQL, so that's fine for building out your DB, setting permissions, etc.
Your question is not a simple one to answer, simply because the steps involved in creating and setting up a full web app are not that simple to explain.
There are a few things you will need to do to hook this up:
Ensure you have a MySQL driver package installed (e.g. the mysql package)
Ensure you have the routes created
Use a templating engine like EJS
Once you have the basic flow working (meaning you can hit your route and it returns the correct page), you will want to hook into the DB before sending the response object back to the browser. A typical flow would be: in your 'get' handler, you perform the SQL 'select'.
This should be promise-based, but that will depend on the actual package you install. I don't use MySQL, but a Postgres command is something like:
query.pool("SELECT is, name, des from table where id = '10' ").then(results => {
//put in your response code here to send back to the page
}).catch( e => {console.error(e)})
The response code portion is where you would send things back to the page; an EJS template would then be able to access the response and display the data.
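To make that concrete, here is a minimal sketch of the whole flow using Express, the promise-based mysql2 package, and an EJS template (connection details, table, and column names are made up):
const express = require('express');
const mysql = require('mysql2/promise'); // promise-based MySQL driver

const app = express();
app.set('view engine', 'ejs');

// Shared connection pool (credentials are placeholders).
const pool = mysql.createPool({
  host: 'localhost',
  user: 'app',
  password: 'secret',
  database: 'mydb',
});

app.get('/items/:id', async (req, res) => {
  try {
    // Perform the SELECT inside the 'get' handler, then render the template.
    const [rows] = await pool.query('SELECT id, name FROM items WHERE id = ?', [req.params.id]);
    res.render('item', { item: rows[0] }); // views/item.ejs can read `item`
  } catch (e) {
    console.error(e);
    res.status(500).send('Database error');
  }
});

app.listen(3000);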
I know this is a bit light on full explanations, and that is because a proper response would be huge!
Judging by the question, I would guess you are a bit new to Node and databases (sorry if you are not). I think it would be very helpful to watch a few YouTube videos on setting up Node and EJS (or any templating engine, for that matter).
That should give you a basic understanding and setup of the project.

Websocket communication with multiple Chrome Docker containers

I have a Chrome container (deployed using this Dockerfile) that renders pages on request from an App container.
The basic flow is:
App sends an http request to Chrome and in response receives a websocket url to use (e.g. ws://chrome.example.com:9222/devtools/browser/13400ef6-648b-4618-8e4c-b5c73db2a122)
App then uses that websocket url to communicate further with Chrome, and to receive the rendered page. I am using the puppeteer library to connect to and communicate with the Chrome instance, via puppeteer.connect({ browserWSEndpoint: webSocketUrl });
For a single Chrome container this works really well.
But I'm trying to scale things up to have multiple Chrome containers in a Docker swarm.
The problem, I think, is that the websocket url received by App is specific to the instance running in that particular Chrome container, so now that there are multiple Chrome containers, the websocket requests from App will not necessarily be routed to the right one.
What is the best way of dealing with this?
You’ve got the basic design correct, but the issue you’re experiencing is with session “stickiness”. However, instead of trying to re-route subsequent requests back to the appropriate machine, we should look for a way to avoid the "pre" request altogether.
The best way to do that is to have your Chrome docker image man-in-the-middle all HTTP “upgrade” requests. This HTTP action is what every WebSocket connection emits prior to changing protocols, including those from the puppeteer library (which is just a WebSocket client under the hood). Doing this also obviates the need for a pre-connect call, since the proxying to Chrome happens on upgrade instead of exposing a URL for the app to use. Here's a pretty basic example of doing this with the http-proxy module:
const http = require('http');
const httpProxy = require('http-proxy');
const puppeteer = require('puppeteer');

const proxy = httpProxy.createProxyServer();

http
  .createServer()
  .on('upgrade', async (req, socket, head) => {
    // Launch a Chrome instance and proxy the WebSocket upgrade
    // straight through to its DevTools endpoint.
    const browser = await puppeteer.launch();
    const target = browser.wsEndpoint();
    proxy.ws(req, socket, head, { target });
  })
  .listen(3000);
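On the App side, the connection would then target the proxy port rather than a container-specific DevTools URL. A sketch (the host name is an assumption):
const puppeteer = require('puppeteer');

(async () => {
  // Any Chrome container behind the swarm's load balancer can accept
  // this upgrade request, since each one proxies to its own local Chrome.
  const browser = await puppeteer.connect({
    browserWSEndpoint: 'ws://chrome.example.com:3000',
  });
  const page = await browser.newPage();
  // ... drive the page as before ...
})();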
There are other benefits to this approach as well: you can limit things like concurrency, and even inject scripts to be run at a later time. Those require a little more thought and preparation, but the overall idea remains the same. This also makes load balancing trivial, since there's no need to make routing sticky.
If this is something you're interested in implementing, all that work is largely done for you in the browserless repo. It even allows for things like concurrency limitations and session time limitations, and includes a feature-rich IDE. You can find more docs on that project here.

Web API call not returning

I have a RESTful Web API that is running properly as I can test it with Fiddler. I see calls going through, I see responses coming back.
I am developing a tablet application that needs to use the Web API in order to fetch data or make updates in the repository.
My calls do not return, and there is not a single trace in Fiddler to show that my calls even reach the server.
The first call I need to make is to login. The URI would be this:
http://localhost:53060/api/user
This call would normally return some information about the user (such as group membership, level of authorization and so on). The Web API uses Windows Authentication, so the repository is able to resolve all these fields based on the credentials passed in. As I said, in Fiddler I see the three calls made to the URI as the authentication is negotiated between the caller and the server. The third call returns with a JSON object that contains all information generated from the repository as expected.
Now, moving to my client I have the following:
var webApiClient = new HttpClient(new HttpClientHandler()
{
    UseDefaultCredentials = true
})
{
    BaseAddress = new Uri("http://localhost:53060/")
};
webApiClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
HttpResponseMessage response = await webApiClient.GetAsync("api/user");
var userLoginInfo = await response.Content.ReadAsAsync<UserLoginInformation>();
My call to "GetAsync" never returns and, like I said, I see no trace of it in Fiddler.
Any idea of what I'm doing wrong?
Changing the URL where the Web API was exposed fixed the problem. Thanks to @Nkosi for the suggestion.
For anyone stumbling onto this question and wondering how to change the URL of the Web API, there are two ways. If the simulator is running on the same machine as the Web API, the change has to be made in the "applicationhost.config" file for IIS Express. You can locate this file by right-clicking the IIS Express icon in the notification area (the bottom right corner) and selecting "Show All Websites". Highlight the desired Web API and it will show where the application host configuration file is located. In there, locate the following section:
<bindings>
  <binding protocol="http" bindingInformation="*:53060:localhost" />
</bindings>
and replace the "localhost" name with the IP address of the machine where the Web API is running.
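For example, the edited binding might look like this (the IP address is hypothetical):
<bindings>
  <binding protocol="http" bindingInformation="*:53060:192.168.1.42" />
</bindings>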
However, this approach will not work once you start testing your tablet app with a real device. IIS Express must be coerced into exposing the Web API to the outside world. I found an excellent Node.js package that can help with that, called IISExpress-proxy.
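If memory serves, usage is along the lines of the following, proxying the local IIS Express port to one reachable from other devices (the port numbers here are assumed from the example above):
npm install -g iisexpress-proxy
iisexpress-proxy 53060 to 3000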

Nodejs on Server Ajax/http calls not reading new data Mongodb

I've recently started pushing my locally tested Node, Mongo, and AngularJS sites to live environments hosted on DigitalOcean.
I'm seeing inconsistent behaviour with AJAX/HTTP calls. On my local machine, I can make an HTTP request that updates an AngularJS variable, which in turn populates the HTML on the front end. All works great! Now, testing this on my server with the same environment setup, the variable only loads new data when I refresh the page.
For example (not my actual code):
Node.js - app.js:
app.get('/getlist', requiredAuthentication, function(req, res) {
  list.find({ 'username': req.session.user.username }, function(err, list) {
    res.send(list);
  });
});
AngularJS - angular_app.js:
$scope.onClick = function (points, evt) {
  $http.get('/getlist').then(function(response) {
    $rootScope.list = response.data; // $http resolves with a response object; the payload is in .data
  });
};
Jade - home:
li(ng-repeat="row in list")
So like I said, this works perfectly on my local machine, but on my server I must refresh the page to load new data; it's as though my variable gets cached on the server.
Any ideas would help.
Thanks!
------- UPDATE - testing v0.1 --------
So after some intensive testing, here is what I've found, but still no fix.
If I add new data via an HTTP POST and then go look in my Mongo DB, I see the new data. But when I click the ng-click handler to retrieve the new data via HTTP, it doesn't return the new data; it's stuck on the old.
If I leave the page open for 10 minutes and then click the button, it retrieves the new data. This is such a schlep.
Sounds like cache, but why does it work perfectly on my local machine?
Looking at the console > Network > Status, the response code is 304, which means nothing changed?
------- UPDATE - testing v0.2 --------
I've now tested the return data with a log in the console, and I did the GET with jQuery AJAX. I'm getting the same issue/behaviour; it's stuck on the same collection of data, so my conclusion must be that Node.js is causing the issue.
------- UPDATE - testing v0.3 --------
Okay, so I've completely stopped Mongo and switched everything to MySQL using node-mysql. Once again, on my local machine it works like a machine, and on my actual server it's laggy with reading new data.
I used Sequel Pro to access MySQL and started adding new entries to a table.
Opening my web URL in the browser immediately showed the new entries. But after that, adding new entries or deleting entries only showed an effect after 10 minutes or so.
So my conclusion is that Node.js is caching like a mother. Anyone know more about this? Am I really the only one ever to experience this?
Try res.json to return data from Node:
app.get('/getlist', requiredAuthentication, function(req, res) {
  list.find({ 'username': req.session.user.username }, function(err, list) {
    res.json(list);
  });
});
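The 304 responses in the question suggest conditional-GET caching. If that is the culprit, one way to rule it out in Express is to disable ETag generation or mark the route as uncacheable (a sketch of that check, not the poster's eventual fix):
app.set('etag', false); // stop generating ETags, so conditional GETs can't match

app.get('/getlist', requiredAuthentication, function(req, res) {
  res.set('Cache-Control', 'no-store'); // tell the browser not to reuse a cached copy
  // ... query and res.json(...) as before ...
});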
My conclusion to this issue was that port 80 was somehow caching the content of a page and would only load new data with a page refresh.
I upgraded Node and used the latest Express. I'm now running my web app on a custom port, and all is working.

The matched route does not include a 'controller' route value on AJAX call (Sitecore)

I am having an issue with an AJAX call to a controller action, using POST, sending 3 parameters, and returning JSON data. Whenever I call it on the development server, it throws a 500 (Internal Server Error) with the description:
The matched route does not include a 'controller' route value, which is required.
The main problem with this is that it actually works on my local machine. The websites I am working on are structured without any App_Start files (so there is no custom route config).
Is there anyone who could help me with this? I have been struggling with this for days now. Please keep in mind that I am trying this in Sitecore 7.1 (MVC) and IIS 7.5.
Thank you.
Best regards,
Marius.
The absence of App_Start files in your solution doesn't guarantee there aren't any custom routes. Compare Global.asax in your local environment with the one on your development server, as this is another place where custom routes may have been set up.