RStudio with node reverse proxy not working - json

If I use nginx as the reverse proxy for RStudio, it works great following the instructions on https://support.rstudio.com/hc/en-us/articles/200552326-Running-RStudio-Server-with-a-Proxy?mobile_site=true
However, if I write my proxy using node-http-proxy, I get the error "code 6: Invalid json-rpc request".
I am using express and http-proxy. Users with similar issues report that node middleware modifies the JSON request body.

Related

POSTMAN client send correct JSON response but the Chrome browser receives HTML

This sounds absurd, but I opened up an old React project of mine and the view does not render. Upon inspection, I found it is receiving an HTML response from the Flask server, even though I am sending a valid JSON response to the frontend. This is evident from the Postman client too, which shows the JSON response.
So to summarise: the Postman client receives the expected JSON response from the Flask server, but the Chrome browser does not! How/why is this happening?
Attaching the screenshot below
The server and client are running on different ports. The Postman requests are sent to port 5000, and any endpoint is relative to localhost:5000, so the response in Postman is as expected. However, for the React frontend to redirect its calls to the backend running on port 5000, we need a way to tell the frontend to do so, since endpoints are relative to the address they are served from. An endpoint /questions therefore means localhost:5000/questions to the server but localhost:3000/questions to the frontend. This connection is provided by setting "proxy" in the package.json of your project; its value should be the server address.
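Concretely, for a Create React App project the dev-server proxy is a single field in package.json (the project name here is a placeholder):

```json
{
  "name": "my-react-app",
  "proxy": "http://localhost:5000"
}
```

With this in place, a fetch("/questions") from the frontend on port 3000 is forwarded to localhost:5000/questions during development.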

What App server to use for json and soap API

I am trying to select an app server platform to service JSON and SOAP POST requests from a client app on desktops. The app server runs on Linux and uses PHP code.
I searched and found PHP frameworks, but they don't give you easy-to-use user/password authentication, so you have to implement auth in application code. What should I use for a simple request/response app server with secure auth: user/password on the first request, then a token, like normal web servers?
Currently I am using the Drupal 7 REST services module to handle request auth, but SOAP has security issues. I am looking for a better, lighter-weight framework.
For your API: https://www.drupal.org/project/jsonapi (it is going to be part of core in 8.7 anyway).
If you have an external authentication provider (like Keycloak or Azure AD): https://www.drupal.org/project/oauth2_jwt_sso
If you don't: https://www.drupal.org/project/simple_oauth

webmethods IS pub.client http not following http client standards?

The issue: the webMethods HTTP client is calling the wrong endpoint on my Apache server, which is configured with multiple name-based virtual hosts.
What I think is happening: the webMethods HTTP client may be resolving the hostname and then using the IP address to perform HTTP operations instead of the DNS name, which causes Apache to route the request to the main virtual server rather than the desired one.
Question: So, how can I make webMethods use the DNS name instead of the IP? Is my theory about the webMethods HTTP client correct? As far as I can tell, this is a very non-standard approach to HTTP client design.
Here is how it is configured to help you better understand:
Apache ->
host.example.com => /var/www/host/html
host2.example.com => /var/www/host2/html
curl -v http://host.example.com and curl -v http://host2.example.com appropriately return documents from their respective directories.
Configuring pub.client:http with http://host2.example.com causes the webmethods IS server to request http://host.example.com documents (obviously leading to a 404: Not Found).
Note that the system is not actually serving static HTML documents but dynamic content.
The comment from Progman is the clue here: to make Apache route to the right virtual host, the request must carry the Host header with the expected value; in my example, that would be Host: host2.example.com. I had webMethods IS copy over the headers exactly as I was posting them from curl, so it was sending Host: localhost:5555 to the proxied server. I simply created a pipeline Map operation, hardcoded the header, and it works fine now.
The oddity to me is that pub.client:http didn't auto-set the Host header based on the 'url' value, which is what I would have expected.

Kubernetes pod exec API exception: Response must not include 'Sec-WebSocket-Protocol' header if not present in request

I am trying to set up a websocket connection to the Kubernetes Pod Exec API, based on the suggestions given in this SO post: How to execute command in a pod (kubernetes) using API?.
Here's what I have done so far -
Installed Simple Web Socket Client extension in Chrome.
Started kubectl proxy --disable-filter=true to run proxy with WS connections allowed. kubectl.exe version is 1.8.
Used address ws://localhost:8001/api/v1/namespaces/default/pods/nginx-3580832997-26zcn/exec?container=nginx&stdin=1&stdout=1&stderr=1&tty=1&command=%2Fbin%2Fsh in the Chrome extension to connect to the exec api.
When I click connect, Chrome reports back an error with the message -
Error during WebSocket handshake: Response must not include 'Sec-WebSocket-Protocol' header if not present in request
Apparently, kubectl sends back an empty Sec-WebSocket-Protocol header in the response, and Chrome rejects that.
I tried changing the Simple Web Socket Client's open method to pass an empty protocols parameter to the WebSocket constructor, like ws = new WebSocket(url, []);, to coax Chrome into sending an empty header in the request, but Chrome doesn't send one.
So what can be done to connect directly to exec from Chrome?
This is a known issue; kubectl proxy does not support websockets. (You can verify this easily by starting kubectl proxy and then attempting kubectl --server=http://127.0.0.1:8001 exec ...; you will receive the message error: unable to upgrade connection: <h3>Unauthorized</h3> if the filter is enabled, and Error from server (BadRequest): Upgrade request required if it is disabled.)
The confusion might come from the fact that the kube-apiserver proxy does support websockets, but that proxy is different from the kubectl proxy.
As I see it, you have 3 options now (in order of difficulty):
Access kube-apiserver directly. You will likely need the authentication that kubectl proxy is currently handling for you.
Use SockJS, this is what Kubernetes Dashboard does for the exec feature
Fix #25126
After reading the code in https://github.com/kubernetes-ui/container-terminal/blob/master/container-terminal.js, I found that exec uses the base64.channel.k8s.io subprotocol. The Simple Web Socket Client code wouldn't have worked because of this, and also because the stream communication is base64-encoded, not plain text.
Leaving this as an answer for other folks trying to implement a WS-based terminal emulator... as #janos-lenart mentioned, the code is pretty new and there may be issues using it in different browsers; the best bet at this point is to read the example code and start from there.
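A sketch of what decoding that stream looks like, assuming the base64.channel.k8s.io framing used by container-terminal.js: each message is one channel digit ('0' stdin, '1' stdout, '2' stderr) followed by a base64-encoded payload. Buffer is used here for Node; a browser client would use atob() instead.

```javascript
// Decode one message under the base64.channel.k8s.io subprotocol:
// first character = channel digit, remainder = base64 payload.
function decodeFrame(msg) {
  const channel = msg.charCodeAt(0) - '0'.charCodeAt(0);
  const data = Buffer.from(msg.slice(1), 'base64').toString('utf8');
  return { channel, data };
}

// The subprotocol must also be requested by the client, so that the
// server's Sec-WebSocket-Protocol response header is legal in Chrome:
//   const ws = new WebSocket(execUrl, ['base64.channel.k8s.io']);
//   ws.onmessage = (e) => console.log(decodeFrame(e.data));
```

Requesting the subprotocol explicitly in the WebSocket constructor also sidesteps the original handshake error, since the server is then echoing back a protocol the client actually offered.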

Creating a http web service in message broker

I need to create an HTTP POST request that will put a message on a message queue. So far I am able to do this successfully within the test framework, i.e. using the integration node.
My question is: how can I test this from an external browser?
Do I need to deploy it on an external server?
Any links or suggestions would be really helpful.
I often use curl for testing web services deployed to IIB. You can use the -d parameter to specify a file containing the POST data, and this works well for both HTTP and SOAP.
I don't think browsers are meant to call web services directly; try SoapUI for testing.
Message Broker applications need to be deployed on a Broker and Execution Group (an Integration Node and Integration Server after V9).