Sails.js: compression doesn't seem to work on JSON

I am trying to activate gzip compression on all JSON output on sails.js.
I added this in config/http.js:
order: [
  'startRequestTimer',
  'cookieParser',
  'session',
  'myRequestLogger',
  'bodyParser',
  'handleBodyParserError',
  'compress',
  'methodOverride',
  'poweredBy',
  '$custom',
  'router',
  'www',
  'favicon',
  '404',
  '500'
],
compress: require('compression')(),
I know the compress: require('compression')() line is picked up, because when I try it with an invalid value, Sails crashes.
I restarted Sails, but the response headers do not show gzip compression.
The request headers show that I accept gzip compression:
Accept: application/json, text/plain, */*
Accept-Encoding: gzip, deflate
Thank you for your help!

I was struggling with the same thing. Then I went through the Sails source code and found that the compress middleware is only activated if the app is run in the production environment (i.e. NODE_ENV === 'production').
Could it be that you're doing this locally? I bet it will work if you set NODE_ENV to production.
This should at least apply to the default compress middleware, so maybe try removing the one you added yourself.
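A quick way to verify this, as a sketch (the route is a placeholder; Sails listens on port 1337 by default):
# lift the app in production mode so the default compress middleware is active
NODE_ENV=production sails lift

# in another terminal, look for Content-Encoding: gzip in the response headers
curl -s -D - -o /dev/null -H "Accept-Encoding: gzip, deflate" "http://localhost:1337/some/json/route"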

Related

Google Drive API: we're sorry but your computer or network may be sending automated queries

I know this question has been asked multiple times in the last year but I can't seem to get an answer that works. I am trying to download a file from Google Drive using this url:
https://www.googleapis.com/drive/v3/files/12BeD3I6JoRMfgeEJfZpZGEGew4Ncpw4i?alt=media&access_token=ya29.a0AfH6SMDyh3TTrbXZxSxQkuwj
(token shortened for brevity here)
The response is 403 Forbidden with the explanation "We're sorry but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.".
We are a verified application, and there is no problem accessing file lists and uploading files. We do indeed intend to download files in an automated fashion, but I see this error on the very first try.
I did find another question on this topic where the solution is to use an authorization header. This is what our headers look like:
Host: www.googleapis.com
User-Agent: comaxis-agent/1.0
Accept: */*
Content-Type: application/json
Authorization: bearer ya29.a0AfH6SMDyh3TTrbXZxSxQkuwj
(Again, token shortened)
This does not work; there is no change. Can anybody help with this?
Modification points:
Since January 2020, the access token can no longer be passed as a query parameter like access_token=###. Ref I think this is the reason for your issue, so at this stage the access token has to be sent in the request header. This has already been mentioned in your question.
Regarding your request:
Host: www.googleapis.com
User-Agent: comaxis-agent/1.0
Accept: */*
Content-Type: application/json
Authorization: bearer ya29.a0AfH6SMDyh3TTrbXZxSxQkuwj
A request to https://www.googleapis.com/drive/v3/files/###?alt=media uses the GET method, so in this case the Content-Type header is not required. Also, please change bearer to Bearer.
When the above points are reflected in the curl command, it becomes the following.
Sample curl command:
curl \
-H "Authorization: Bearer ###" \
"https://www.googleapis.com/drive/v3/files/12BeD3I6JoRMfgeEJfZpZGEGew4Ncpw4i?alt=media"
When the file is a binary file, the -o filename option might be required.
Note:
When you want to download Google Docs files (Documents, Spreadsheets, Slides and so on), please use the export method instead. Ref
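As a sketch, an export request might look like the following (the file ID and target MIME type are placeholders you would replace):
# export a Google Docs file by converting it to a concrete format, e.g. PDF
curl \
-H "Authorization: Bearer ###" \
"https://www.googleapis.com/drive/v3/files/###/export?mimeType=application/pdf" \
-o exported.pdf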
Reference:
Download files

HAProxy 1.5 - Serving static json file on 504 error

I'm trying to set up HAProxy to serve a static JSON file on 504 errors. To test, we've set up the configuration file to time out after 10 seconds and to use the errorfile option:
defaults
    log global
    mode http
    retries 3
    timeout client 10s
    timeout connect 10s
    timeout server 10s
    option tcplog
    balance roundrobin

frontend https
    maxconn 2000
    bind 0.0.0.0:9000
    errorfile 504 /home/user1/test/error.json
    acl employee-api-service path_reg /employee/api.*
    use_backend servers-employee-api if employee-api-service

backend servers-employee-api
    server www.server.com 127.0.0.1:8000
Effectively, I'm trying to serve JSON instead of HTML on a timeout, so the backend service can fail gracefully. However, when testing, we could not get anything back, neither HTML nor JSON. Looking at the response, it simply says the request failed, with no status code. Is my setup correct for errorfile? Does HAProxy 1.5 support this?
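A timeout can be provoked against the frontend with a plain client call, as a sketch (the path is a placeholder for a route whose backend stalls):
# -i prints the response status line and headers, if any come back at all
curl -i "http://localhost:9000/employee/api/some-slow-endpoint"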
According to the documentation of errorfile:
<file> designates a file containing the full HTTP response. It is
recommended to follow the common practice of appending ".http" to
the filename so that people do not confuse the response with HTML
error pages, and to use absolute paths, since files are read
before any chroot is performed.
So, the file should contain a complete HTTP response but you're trying to serve JSON only.
The documentation further says that:
For better HTTP compliance, it is
recommended that all header lines end with CR-LF and not LF alone.
The example configuration in the documentation,
errorfile 503 /etc/haproxy/errorfiles/503sorry.http
shows the common practice of using a .http extension for the error file.
You can find samples of some default error files here.
Sample (504.http):
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html

<html><body><h1>504 Gateway Time-out</h1>
The server didn't respond in time.
</body></html>
So, in your scenario, 504.http would be like this:
HTTP/1.0 504 Gateway Time-out
Cache-Control: no-cache
Connection: close
Content-Type: application/json

{
"message": "Gateway Timeout"
}
Also, you need to keep the file size under the limit, i.e. BUFSIZE (8 or 16 KB), as described in the documentation.
There might also be error logs explaining why your JSON file is not being served, so it's worth looking through HAProxy's logs thoroughly, just to be sure.
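As a sketch, assuming the file will live at /etc/haproxy/errorfiles/504.http, you could create it with CR-LF line endings and confirm it fits within BUFSIZE:
# write the complete HTTP response, headers terminated by CR-LF, blank line before the body
printf 'HTTP/1.0 504 Gateway Time-out\r\nCache-Control: no-cache\r\nConnection: close\r\nContent-Type: application/json\r\n\r\n{"message": "Gateway Timeout"}\r\n' \
> /etc/haproxy/errorfiles/504.http

# verify the file stays under BUFSIZE (commonly 8 or 16 KB)
wc -c /etc/haproxy/errorfiles/504.http

Then point the frontend's errorfile directive at that file instead of error.json.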

Why is wget POST giving 400 Bad Request?

I am trying to use wget to POST a .json file to a site using the REST API. Specifically, I am trying to create an issue in JIRA from my command line. I am aware that the easiest way to do this is using cURL; however, Solaris' native compiler has an issue with cURL versions before 7.49.0, so I can't connect via HTTPS, which is a requirement, and I can't update cURL as I'm working on a company machine.
Every time I try to use wget to POST, I get an ERROR: 400 Bad Request. It occurs to me that usually this is a problem with the JSON I'm trying to send, but I can't for the life of me figure out what is wrong.
OS: Solaris 10
Version: wget 1.9.1
Command:
wget --post-file=data.json --http-user=[userID] --http-passwd=[userPass] --header="Content-Type: application/json" -i "uri.txt"
data.json:
{
  "fields": {
    "project": {
      "id": "10200"
    },
    "summary": "Creating issue remotely.",
    "description": "Creating of an issue using project keys and issue type names using the REST API",
    "issuetype": {
      "name": "Story"
    },
    "customfield_11600": {
      "id": "11303"
    }
  }
}
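To rule out malformed JSON, the file can be checked locally first, as a sketch (assuming a Python interpreter is available on the machine):
# json.tool parses the file and fails loudly if the JSON is invalid
python -m json.tool data.json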
UPDATE:
I think my header is not correct when I send out my HTTP POST. In my wget command above I set the Content-Type to application/json; however, looking at the debug information, there are two Content-Type entries in my header:
Content-Type: application/x-www-form-urlencoded
Content-Length: 175
Content-Type: application/json
Could it be that I'm setting my Content-Type incorrectly and the POST is being sent with the wrong encoding?
UPDATE 2
I think I've ruled out the issue in the update above, as removing the application/json Content-Type gives a 415 Unsupported Media Type error, implying that adding it makes the POST send the correct media type.
I've gotten a curl command to work properly on my Windows machine; it connects via HTTP/1.1 automatically, while wget connects via HTTP/1.0. I believe that when connecting via HTTP/1.0 the data being sent is appended to the end of the URI. Could this be why the server believes the JSON to be incorrect?
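One way to see exactly what wget puts on the wire is to point the same command at a dummy listener and read the raw request, as a sketch (assuming a netcat binary is available; the flags vary between nc variants):
# terminal 1: listen locally and dump whatever arrives
nc -l -p 8080

# terminal 2: send the same POST to the listener instead of the real server
wget --post-file=data.json --header="Content-Type: application/json" "http://localhost:8080/"

This makes duplicated headers or a mangled request line immediately visible.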

Cyrillic encoding issue in the Play Framework

In the local environment everything is fine with the encoding, but when I make a dist, run my app on the server (Ubuntu) and do a POST, the Cyrillic characters of the JSON in the request body turn into └я▀п╡я└п╟я▀я└' (as it turned out, this was only a terminal display issue) in controllers:
def editUser = SecuredAction(WithRole(ADMIN)).async(parse.json) { implicit request =>
log.debug(request.body) // here I have └я▀п╡я└п╟я▀я└' instead of cyrillic characters
I checked request headers:
Accept:application/json
Accept-Encoding:gzip, deflate
Accept-Language:ru-RU,ru;q=0.8,en-US;q=0.6,en;q=0.4
Connection:keep-alive
Content-Length:192
Content-Type:application/json; charset=UTF-8
Maybe some of you have encountered this. Thanks!
The problem was in MySQL; here is the answer:
I added useUnicode=true&characterEncoding=UTF-8:
db.default.url="jdbc:mysql://localhost:3306/mydb?useUnicode=true&characterEncoding=UTF-8"
but it didn't help. So I also added this to my.cnf on the server:
[mysql]
default-character-set=utf8
[mysqld]
character-set-server=utf8
OK!
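To confirm the settings took effect, the character set variables can be inspected after restarting MySQL, as a sketch (credentials are placeholders):
# all server-side character_set_* variables should now report utf8
mysql -u root -p -e "SHOW VARIABLES LIKE 'character_set%';"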

POST some JSON through TELNET

I am told to upload an image to a server by sending JSON as the request.
The JSON is something like this:
{"action":"setMap","data":{"mapName":"myMapName","mapURL":"http://tinypic.com/myimg"}}
I do not know how to use Telnet to POST JSON.
I guess I should write something like this:
terminal>telnet my.ip.num.ber port
POST /setMap HTTP/1.1
but I don't know how to continue.
Should I write
DATA : {"action":"setMap","data":{"mapName":"myMapName","mapURL":"http://tinypic.com/myimg"}}
How can I get the JSON sent?
I can't understand why you want to use Telnet. Telnet can be useful for quickly testing chatty protocols, and even though HTTP is chatty to some degree, it's very cumbersome to upload an image this way (plus, judging from the service name, setMap, I guess the service doesn't really let you upload an image, but just inserts a record in the database pointing to an image accessible on another service).
What you are asking is something like:
$ telnet example.com 80
> POST /setMap HTTP/1.1
> Host: www.example.com
> Content-Type: application/json; charset=utf-8
> Content-Length: 1234
>
> {"mapName":"myMapName","mapURL":"http://tinypic.com/myimg"}
>
Note that it's just an example. You have to replace the connection parameters (host, port), the Content-Type, the Content-Length and the actual JSON data; we can't know these because they depend on the actual service implementation.
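Because the Content-Length must be the exact byte count of the body, typing the request by hand is error-prone. As a sketch, the same request can be scripted instead (assuming netcat is available; host and port are placeholders):
# build the body and compute its exact byte count
BODY='{"action":"setMap","data":{"mapName":"myMapName","mapURL":"http://tinypic.com/myimg"}}'
LEN=$(printf '%s' "$BODY" | wc -c)

# assemble the raw request with CR-LF line endings and pipe it to the server
printf 'POST /setMap HTTP/1.1\r\nHost: example.com\r\nContent-Type: application/json; charset=utf-8\r\nContent-Length: %d\r\nConnection: close\r\n\r\n%s' \
"$LEN" "$BODY" | nc example.com 80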