NestJS exception filter does not work when delegating to a service

I am using a gateway to emit events.
For clarity, I created files that make the link between the services and this gateway.
For example, in the gateway:
@SubscribeMessage('createRoom')
async createRoom(client: Socket, channel: newChannelDto) {
  this.chatService.createRoom(client, channel)
}
In the chat service:
async createRoom(client: Socket, channel: newChannelDto)
{
  await this.channelService.createChannel(channel.chanName, client.data.user, channel.password, channel.private)
  client.join(channel.chanName)
  for (let [allUsers, socket] of this.gateway.activeUsers.entries())
    this.gateway._server.to(socket.id).emit('rooms', " get rooms ", await this.channelService.getChannelsForUser(allUsers));
}
where:
gateway is the injected gateway, and _server, declared as
@WebSocketServer()
public _server : Server
is the server contained in the gateway.
I made a WS/HTTP/query-failed exception filter, included on top of my gateway via @UseFilters(new MyCustomExceptionsFilter()).
Before I moved the functions into the "children" files, my filter was able to catch everything, but since I moved them out of the gateway it does not work anymore; I can still catch the exceptions manually, but they are no longer sent to the front end.
Example output in my terminal that I was once able to catch and send as a user-friendly error:
api | Error: this is still thrown if caught manually
api | at /api/src/websocket/chat.service.ts:67:28
api | at processTicksAndRejections (node:internal/process/task_queues:95:5)
api | at ChatService.createRoom (/api/src/websocket/chat.service.ts:66:3)

Without the await in the gateway call, the promise is not properly handled, so the lifecycle of the request, as Nest sees it, ends; when an error then happens, it is outside of the exception zone that Nest is responsible for. Add await to the this.chatService.createRoom(client, channel) call and all should be good from there.
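For reference, a minimal sketch of the gateway with the await in place, reusing the names from the question (the constructor injection is an assumption, and imports are omitted as in the question's snippets):
@UseFilters(new MyCustomExceptionsFilter())
@WebSocketGateway()
export class ChatGateway {
  constructor(private readonly chatService: ChatService) {}

  @SubscribeMessage('createRoom')
  async createRoom(client: Socket, channel: newChannelDto) {
    // Awaiting keeps the service's rejected promise inside the request
    // lifecycle, so the filter sees anything thrown in ChatService.
    await this.chatService.createRoom(client, channel);
  }
}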

Why is my RESTful web service returning error 401?

Update
Thanks to being nudged towards enabling debug logs, I can now see that the issue is that Spring is reporting an invalid CSRF token for the notices controller. So far I've checked the headers Postman generates and compared them to the ones generated through the fetch requests, and found no difference. The token that was generated is successfully placed into the header of the request. Unfortunately there is nothing discernible in the Spring logs, so the debugging continues.
I'm working on learning Spring Security and am currently connecting the React frontend to the Spring backend. I'm having trouble because when the POST request is made to the desired endpoint, it returns a 401 error. This is confusing to me because I believe I have correctly configured CORS and also marked the endpoints as permitAll.
In short, the process calls an endpoint /token to get a CSRF token, then calls /notices and passes the token in as a header. If done with Postman, the process works as expected, so I had thought it was a CORS issue; however, I've tried running the frontend on a different port and it was blocked by CORS, so I think the issue is somewhere else.
Some additional info:
/notices and /token are both POST operations.
Both the Spring backend and React frontend run on the same local machine.
A 401 error code is received.
The code for the frontend JavaScript call is:
const debugNotices = () => {
  let tokenData: any;
  fetch('http://localhost:8080/token', { method: "POST" })
    .then((response) => response.json())
    .then((data) => tokenData = data)
    .then(() => fetch("http://localhost:8080/notices", {
      method: "POST",
      headers: {
        "X-XSRF-TOKEN": tokenData.token
      }
    }))
}
Spring security config:
@Bean
SecurityFilterChain defaultSecurityFilterChain(HttpSecurity http) throws Exception {
    http
        .cors()
        .configurationSource(new CorsConfigurationSource() {
            @Override
            public CorsConfiguration getCorsConfiguration(HttpServletRequest request) {
                CorsConfiguration config = new CorsConfiguration();
                config.setAllowedOrigins(Collections.singletonList("http://localhost:3000"));
                config.setAllowedMethods(Collections.singletonList("*"));
                config.setAllowCredentials(true);
                config.setAllowedHeaders(Collections.singletonList("*"));
                config.setMaxAge(3600L);
                return config;
            }
        })
        .and()
        .csrf()
        .ignoringRequestMatchers("/contact", "/register", "/token")
        .csrfTokenRepository(CookieCsrfTokenRepository.withHttpOnlyFalse())
        .and()
        .securityContext()
        .requireExplicitSave(false)
        .and()
        .sessionManagement()
        .sessionCreationPolicy(SessionCreationPolicy.IF_REQUIRED)
        .and()
        .authorizeHttpRequests()
        .requestMatchers("/myAccount", "/myBalance", "/myLoans", "/myCards", "/user").authenticated()
        .requestMatchers("/notices", "/contact", "/register", "/test", "/token").permitAll()
        .and()
        .httpBasic()
        .and()
        .formLogin();
    return http.build();
}
I've tried including credentials: 'include' in the fetch options; however, it causes a login prompt, which I don't believe is the direction I'm looking for.
I've also tried manually inserting the CSRF token instead of requesting the data from the server, with the same failed results.
I've also tested CORS as far as I know how to: accessing the endpoints from anything other than localhost:3000 gets denied with a CORS error, as expected.
This issue only happens when the React frontend is accessed via localhost:portNumber; I was able to work around it completely by using my local IP address in place of localhost, 192.168.0.105:3000 for example.
I'm still unsure why there is an issue using localhost in the URL and would love to hear why this is happening if you know.
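For reference, a minimal sketch of the fetch variant that usually goes with CookieCsrfTokenRepository, assuming /token returns { token: ... } as in the code above: both calls opt in to cookies so the XSRF-TOKEN cookie travels along with the X-XSRF-TOKEN header.
const debugNoticesWithCookies = () => { // hypothetical name
  // credentials: 'include' lets the browser store and send the session and
  // XSRF-TOKEN cookies across origins (localhost:3000 -> localhost:8080).
  fetch('http://localhost:8080/token', { method: "POST", credentials: "include" })
    .then((response) => response.json())
    .then((tokenData) => fetch("http://localhost:8080/notices", {
      method: "POST",
      credentials: "include",
      headers: { "X-XSRF-TOKEN": tokenData.token }
    }));
}
As for the login prompt seen with credentials: 'include': httpBasic() makes a rejected request come back as 401 with a WWW-Authenticate header, and the browser reacts with its built-in dialog, so the prompt likely points back at the CSRF rejection rather than at the credentials option itself.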

Service Worker not caching API content on first load

I've created a service-worker-enabled application that is intended to cache the response from an AJAX call so it's viewable offline. The issue I'm running into is that the service worker caches the page, but not the AJAX response, the first time it's loaded.
If you visit http://ivesjames.github.io/pwa and switch to airplane mode after the SW toast appears, it shows no API content. If you go back online, load the page, and do it again, it will load the API content offline on the second load.
This is what I'm using to cache the API response (taken from the Polymer docs):
(function(global) {
  global.untappdFetchHandler = function(request) {
    // Attempt to fetch(request). This will always make a network request, and will include the
    // full request URL, including the search parameters.
    return global.fetch(request).then(function(response) {
      if (response.ok) {
        // If we got back a successful response, great!
        return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
          // First, store the response in the cache, stripping away the search parameters to
          // normalize the URL key.
          return cache.put(stripSearchParameters(request.url), response.clone()).then(function() {
            // Once that entry is written to the cache, return the response to the controlled page.
            return response;
          });
        });
      }
      // If we got back an error response, raise a new Error, which will trigger the catch().
      throw new Error('A response with an error status code was returned.');
    }).catch(function(error) {
      // This code is executed when there's either a network error or a response with an error
      // status code was returned.
      return global.caches.open(global.toolbox.options.cacheName).then(function(cache) {
        // Normalize the request URL by stripping the search parameters, and then return a
        // previously cached response as a fallback.
        return cache.match(stripSearchParameters(request.url));
      });
    });
  };
})(self);
And then I define the handler in the sw-import:
<platinum-sw-import-script href="scripts/untappd-fetch-handler.js"></platinum-sw-import-script>
<platinum-sw-fetch handler="untappdFetchHandler"
                   path="/v4/user/checkins/jimouk?client_id=(apikey)&client_secret=(clientsecret)"
                   origin="https://api.untappd.com">
</platinum-sw-fetch>
<paper-toast id="caching-complete"
             duration="6000"
             text="Caching complete! This app will work offline.">
</paper-toast>
<platinum-sw-register auto-register
                      clients-claim
                      skip-waiting
                      base-uri="bower_components/platinum-sw/bootstrap"
                      on-service-worker-installed="displayInstalledToast">
  <platinum-sw-cache default-cache-strategy="fastest"
                     cache-config-file="cache-config.json">
  </platinum-sw-cache>
</platinum-sw-register>
Is there somewhere I'm going wrong? I'm not quite sure why it works on load #2 instead of load #1.
Any help would be appreciated.
While the skip-waiting + clients-claim attributes should cause your service worker to take control as soon as possible, it's still an asynchronous process that might not kick in until after your AJAX request is made. If you want to guarantee that the service worker will be in control of the page, then you'd need to either delay your AJAX request until the service worker has taken control (following, e.g., this technique), or alternatively, you can use the reload-on-install attribute.
Equally important, though, make sure that your <platinum-sw-import-script> and <platinum-sw-fetch> elements are children of your <platinum-sw-register> element, or else they won't have the intended effect. This is called out in the documentation, but unfortunately it's just a silent failure at runtime.
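A minimal sketch of the first option, under the assumption that clients-claim stays enabled; loadApiContent() is a hypothetical stand-in for whatever issues the AJAX request:
// Resolve once a service worker controls this page: either immediately,
// or on the controllerchange event fired when clients-claim takes effect.
function whenControlled() {
  return new Promise(function(resolve) {
    if (navigator.serviceWorker.controller) {
      resolve();
      return;
    }
    navigator.serviceWorker.addEventListener('controllerchange', function() {
      resolve();
    });
  });
}

whenControlled().then(function() {
  loadApiContent(); // hypothetical: kicks off the Untappd AJAX call
});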

What is the difference between a Service Worker's `pushManager.subscribe` and `pushManager.getSubscription`?

PushManager.getSubscription()
Retrieves an existing push subscription. It returns a Promise that resolves to a PushSubscription object containing details of an existing subscription. If no existing subscription exists, this resolves to a null value.
[...]
PushManager.subscribe()
Subscribes to a push service. It returns a Promise that resolves to a PushSubscription object containing details of a push subscription. A new push subscription is created if the current service worker does not have an existing subscription.
According to MDN's PushManager documentation, these methods are pretty much the same, except that getSubscription() may resolve with a null value.
I basically understand that I can simply use subscribe() and the Service Worker will return the existing subscription if one is available, or create a new one if it isn't.
But I was trying to do something else: I want to get the subscription first, and if it resolves with null, then try to subscribe.
navigator.serviceWorker.register('./worker.js')
  .then(function(reg) {
    // Subscribe push manager
    reg.pushManager.getSubscription()
      .then(function(subscription) {
        if (subscription) {
          // TODO... get the endpoint here
        } else {
          reg.pushManager.subscribe()
            .then(function(sub) {
              // TODO... get the endpoint here
            });
        }
      }, function(error) {
        console.error(error);
      });
  });
But then I ended up with the error:
Uncaught (in promise) DOMException: Subscription failed - no active Service Worker
It is confusing, and I suspect this is either a limitation of Chrome's implementation of the Push API in Service Workers or possibly a bug. Does anyone have any information about this strange behavior?
The problem is that your service worker is registered, but it isn't active yet.
You can use navigator.serviceWorker.ready instead of subscribing right after registering the service worker.
If you want to make the service worker active as soon as possible, you can use skipWaiting and Clients.claim, as described in this ServiceWorker Cookbook recipe.
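A minimal sketch of that approach; the userVisibleOnly option is an assumption on my part, since Chrome requires it when subscribing:
navigator.serviceWorker.register('./worker.js');

// ready resolves once the registration has an active worker, so the
// pushManager calls below cannot fail with "no active Service Worker".
navigator.serviceWorker.ready
  .then(function(reg) {
    return reg.pushManager.getSubscription()
      .then(function(subscription) {
        // Reuse the existing subscription, or create a new one.
        return subscription || reg.pushManager.subscribe({ userVisibleOnly: true });
      });
  })
  .then(function(sub) {
    console.log(sub.endpoint);
  })
  .catch(function(error) {
    console.error(error);
  });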

Why does DeferredResult end on setResult() when trying to use SSE?

I am trying to implement a Server-Sent Events (SSE) webpage powered by Spring. My test code does the following:
The browser uses EventSource(url) to connect to the server. Spring accepts the request with the following controller code:
@RequestMapping(value = "myurl", method = RequestMethod.GET, produces = "text/event-stream")
@ResponseBody
public DeferredResult<String> subscribe() throws Exception {
    final DeferredResult<String> deferredResult = new DeferredResult<>();
    resultList.add(deferredResult);
    deferredResult.onCompletion(() -> {
        logTimer.info("deferedResult " + deferredResult + " completion");
        resultList.remove(deferredResult);
    });
    return deferredResult;
}
So mainly it puts the DeferredResult in a list and registers a completion callback so that I can remove it from the list on completion.
Now I have a timer method that periodically outputs the current timestamp to all registered "browsers" via their DeferredResults.
@Scheduled(fixedRate = 10000)
public void processQueues() {
    Date d = new Date();
    log.info("outputting to " + LoginController.resultList.size() + " connections");
    LoginController.resultList.forEach(deferredResult -> deferredResult.setResult("data: " + d.getTime() + "\n\n"));
}
The data is sent to the browser and the following client code works:
var source = new EventSource('/myurl');
source.addEventListener('message', function (e) {
  console.log(e.data);
  $("#content").append(e.data).append("<br>");
});
Now the problem:
The completion callback on the DeferredResult is called on every setResult() call in the timer thread. So for some reason the connection is closed after the setResult() call. SSE in the browser reconnects as per spec, and then the same thing happens again. So on the client side I have polling behavior, but I want a kept-open request where I can push data through the same DeferredResult over and over again.
Am I missing something here? Is DeferredResult not capable of sending multiple results? I put in a 10-second delay in the timer thread to see if the request only terminates after setResult(). In the browser the request is kept open until the timer pushes the data, but then it is closed.
Thanks for any hint on that. One more note: I added async-supported to all filters/servlets in Tomcat.
Indeed, DeferredResult can be set only once (notice that setResult returns a boolean). It completes processing with the full range of Spring MVC processing options; that is, everything you know about what happens during a Spring MVC request remains more or less the same, except that the return value is produced asynchronously.
What you need for SSE is something more focused, i.e. writing each value to the response using an HttpMessageConverter. I've created a ticket for that: https://jira.spring.io/browse/SPR-12212.
Note that Spring's SockJS support does have an SSE transport, which takes care of a few extras such as cross-domain requests with cookies (important for IE). It's also used on top of a WebSocket API and WebSocket-style messaging (even if WebSocket is not available on either the client or the server side), which fully abstracts the details of HTTP long polling.
As a workaround you can also write directly to the Servlet response using an HttpMessageConverter.
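On the client side, the polling you observed is the browser's standard EventSource reconnect: each completed response fires an error event, then a new connection attempt. A small sketch to watch the loop:
var source = new EventSource('/myurl');
source.addEventListener('message', function (e) {
  console.log('got: ' + e.data);
});
// Fires each time the server completes the response, i.e. after every
// setResult(); readyState drops to CONNECTING while the browser retries.
source.addEventListener('error', function () {
  console.log('stream closed, readyState = ' + source.readyState);
});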

How to fix cross-origin policy issues between a server and a website

I'm using Dropwizard, which I'm hosting, along with a website, on Google Cloud (GCE). This means that there are two locations currently active:
Some.IP.Address - UI
Some.IP.Address:8080 - Dropwizard server
When the UI tries to call anything on my Dropwizard server, I get cross-origin errors, which is understandable. However, this is posing a problem for me. How do I fix this? It would be great if I could somehow spoof the addresses so that I don't have to fully qualify the resource in the UI.
What I'm looking to do is this:
$.get('/provider/upload/display_information')
Or, if I have to fully qualify
$.get('http://Some.IP.Address:8080/provider/upload/display_information')
I tried setting origin filters in Dropwizard per this Google Groups thread (https://groups.google.com/forum/#!topic/dropwizard-user/ybDOTOxjlLI), but it doesn't seem to work.
In index.html, which is served by the server at http://Some.IP.Address, you might have a jQuery script that looks as follows.
$.get('http://Some.IP.Address:8080/provider/upload/display_information', data, callback);
Of course your browser will not allow accessing http://Some.IP.Address:8080 due to the Same-Origin Policy (SOP): the protocol (http, https), the host, and the port all have to be the same.
To achieve Cross-Origin Resource Sharing (CORS) on Dropwizard, you have to add a CrossOriginFilter to the servlet environment. This filter will add Access-Control-* headers to every response the server sends. In the run method of your Dropwizard application, write:
import java.util.EnumSet;

import javax.servlet.DispatcherType;
import javax.servlet.FilterRegistration;

import org.eclipse.jetty.servlets.CrossOriginFilter;

public class SomeApplication extends Application<SomeConfiguration> {
    @Override
    public void run(SomeConfiguration config, Environment environment) throws Exception {
        FilterRegistration.Dynamic filter = environment.servlets().addFilter("CORS", CrossOriginFilter.class);
        filter.addMappingForUrlPatterns(EnumSet.allOf(DispatcherType.class), true, "/*");
        filter.setInitParameter("allowedOrigins", "http://Some.IP.Address"); // allowed origins comma separated
        filter.setInitParameter("allowedHeaders", "Content-Type,Authorization,X-Requested-With,Content-Length,Accept,Origin");
        filter.setInitParameter("allowedMethods", "GET,PUT,POST,DELETE,OPTIONS");
        filter.setInitParameter("preflightMaxAge", "5184000"); // 2 months
        filter.setInitParameter("allowCredentials", "true");
        // ...
    }
    // ...
}
This solution works for Dropwizard 0.7.0 and can be found at https://groups.google.com/d/msg/dropwizard-user/xl5dc_i8V24/gbspHyl4y5QJ.
This filter will add the Access-Control-* headers to every response. Have a look at http://www.eclipse.org/jetty/documentation/current/cross-origin-filter.html for a detailed description of the initialisation parameters of the CrossOriginFilter.
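For completeness, a minimal sketch of the corresponding client call once the filter is registered; withCredentials is only needed if the request relies on cookies, matching allowCredentials=true above:
// Fully qualified call from the UI at http://Some.IP.Address to the
// Dropwizard server; the browser permits it once the CORS headers are present.
$.ajax({
  url: 'http://Some.IP.Address:8080/provider/upload/display_information',
  method: 'GET',
  xhrFields: { withCredentials: true }, // only when cookies/auth are involved
  success: function(data) { console.log(data); }
});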