How to bypass Cloudflare 503 Please turn JavaScript on and reload the page - google-chrome

I know this might be a frequently asked question, but I believe this is a different one.
Cloudflare blocks programmatically sent requests by responding with status code 503 and the message "Please turn JavaScript on and reload the page.". Both Python's requests module and the curl command hit this error. However, browsing the same URL on the same host with the Chrome browser works fine, even in Incognito mode.
I have made these attempts, but all of them failed to bypass it:
Using the cloudscraper module
Copying all the headers, including user-agent and cookie, from an open browser page
Using the mechanize module
Using requests_html to run the JS scripts on the page
From my inspection, I found that in a newly opened Chrome Incognito window, visiting https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069 triggers the following sequence of requests:
The browser sends a request to the URL with no cookies. The server responds 302, redirecting to the same URL with a cookieSet=1 query param, i.e. https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069?cookieSet=1. The response also contains set-cookie headers and has no body.
The browser sends a request to the redirected URL, with the newly set cookies. The server responds 302, redirecting to the original URL with no query param. The response contains no set-cookie header and has no body.
The browser sends a request to the original URL, with the previously set cookies. The server responds 200 with the HTML content we'd like to see as its body.
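If the cookie handshake were the whole story, the three steps above could be reproduced with a plain cookie-aware client. The sketch below shows the flow; cookie_set_url is a helper name of my own that just rebuilds the URL the server's first 302 redirects to — it is not part of any library, and (as the failed attempts show) replaying the flow alone is not enough to pass the challenge.

```python
from urllib.parse import urlencode, urlparse, parse_qsl, urlunparse

URL = "https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069"

def cookie_set_url(url: str) -> str:
    """Append the cookieSet=1 query parameter, as the server's first 302 does."""
    parts = urlparse(url)
    query = parse_qsl(parts.query) + [("cookieSet", "1")]
    return urlunparse(parts._replace(query=urlencode(query)))

# With requests, a Session would keep the step-1 cookies for steps 2 and 3:
#   import requests
#   session = requests.Session()
#   resp = session.get(URL)   # follows both 302s, re-sending cookies each time
#   print(resp.status_code)   # still 503 here, because of the JS challenge
```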
However, with a curl request without redirection enabled (i.e. without the -L flag), I get status code 503 and an HTML response body saying "Please turn JavaScript on and reload the page.".
curl -i -v 'https://onlinelibrary.wiley.com/doi/abs/10.1111/jvs.13069' \
--header 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
--header 'accept-encoding: gzip, deflate, br' \
--header 'accept-language: en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,ja;q=0.6' \
--header 'cache-control: no-cache' \
--header 'cookie: MAID=k4Rf/MejFqG1LjKdveWUPQ==; I2KBRCK=1; rskxRunCookie=0; rCookie=i182bjtmkm3tmujm7wb4xl6fx8wuv; osano_consentmanager_uuid=35ffb0d0-e7e0-487a-a6a5-b35cad9e589f; osano_consentmanager=EtuJH5neWpR-w0VyI9rGqVBE85dQA-2D4f3nUxLGsObfRMLPNtojj-WolqO0wrIvAr3wxlwRPXQuL0CCFvMIDZxXehUBFEicwFqrV4kgDwBshiqbbfh1M3w3V6WWcesS8ZLdPX4iGQ3yTPaxmzpOEBJbeSeY5dByRkR7P2XyOEDAWPT8by2QQjsCt3y3ttreU_M3eV_MJCDCgknIWOyiKdL_FBYJz-ddg8MFAb1N8YBTRQbQAg8r-bSO1vlCqPyWlgzGn-A5xgIDWlCDIpej0Xg2rjA=; JSESSIONID=aaaFppotKtA-t7ze73Rjy; SERVER=WZ6myaEXBLGhNb4JIHwyZ2nYKk2egCfX; MACHINE_LAST_SEEN=2022-08-05T00%3A52%3A30.362-07%3A00; __cf_bm=d9mhQ_ZtETjf41X0VuxDl6GkIZbQtNLJnNIOtDoIPuA-1659685954-0-AXLwPXO1kJb2/IQc+zIesAsL71FoLTgRJqS5M5fxizuFMTw92mMT/yRv5cIq6ZMiRcZE1DchGsO2ZZMdv+/P4JSdUDMAcepY/oXIKFQgauELPNrwiwG/7XYXFRy91+qreazjYASX6Fq0Ir90MNfJ8EcWc10KJyGvSN7QtledQ6Lu9B5S1tqHoxlddPAMOtdL6Q==; lastRskxRun=1659686676640' \
--header 'pragma: no-cache' \
--header 'sec-ch-ua: ".Not/A)Brand";v="99", "Google Chrome";v="103", "Chromium";v="103"' \
--header 'sec-ch-ua-mobile: ?0' \
--header 'sec-ch-ua-platform: "macOS"' \
--header 'sec-fetch-dest: document' \
--header 'sec-fetch-mode: navigate' \
--header 'sec-fetch-site: none' \
--header 'sec-fetch-user: ?1' \
--header 'upgrade-insecure-requests: 1' \
--header 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
* Trying 162.159.129.87...
* TCP_NODELAY set
* Connected to onlinelibrary.wiley.com (162.159.129.87) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /Users/cosmo/anaconda3/ssl/cacert.pem
CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.; CN=sni.cloudflaressl.com
* start date: Apr 17 00:00:00 2022 GMT
* expire date: Apr 17 23:59:59 2023 GMT
* subjectAltName: host "onlinelibrary.wiley.com" matched cert's "onlinelibrary.wiley.com"
* issuer: C=US; O=Cloudflare, Inc.; CN=Cloudflare Inc ECC CA-3
* SSL certificate verify ok.
> GET /doi/abs/10.1111/jvs.13069 HTTP/1.1
> Host: onlinelibrary.wiley.com
> accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
> accept-encoding: gzip, deflate, br
> accept-language: en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,ja;q=0.6
> cache-control: no-cache
> cookie: MAID=k4Rf/MejFqG1LjKdveWUPQ==; I2KBRCK=1; rskxRunCookie=0; rCookie=i182bjtmkm3tmujm7wb4xl6fx8wuv; osano_consentmanager_uuid=35ffb0d0-e7e0-487a-a6a5-b35cad9e589f; osano_consentmanager=EtuJH5neWpR-w0VyI9rGqVBE85dQA-2D4f3nUxLGsObfRMLPNtojj-WolqO0wrIvAr3wxlwRPXQuL0CCFvMIDZxXehUBFEicwFqrV4kgDwBshiqbbfh1M3w3V6WWcesS8ZLdPX4iGQ3yTPaxmzpOEBJbeSeY5dByRkR7P2XyOEDAWPT8by2QQjsCt3y3ttreU_M3eV_MJCDCgknIWOyiKdL_FBYJz-ddg8MFAb1N8YBTRQbQAg8r-bSO1vlCqPyWlgzGn-A5xgIDWlCDIpej0Xg2rjA=; JSESSIONID=aaaFppotKtA-t7ze73Rjy; SERVER=WZ6myaEXBLGhNb4JIHwyZ2nYKk2egCfX; MACHINE_LAST_SEEN=2022-08-05T00%3A52%3A30.362-07%3A00; __cf_bm=d9mhQ_ZtETjf41X0VuxDl6GkIZbQtNLJnNIOtDoIPuA-1659685954-0-AXLwPXO1kJb2/IQc+zIesAsL71FoLTgRJqS5M5fxizuFMTw92mMT/yRv5cIq6ZMiRcZE1DchGsO2ZZMdv+/P4JSdUDMAcepY/oXIKFQgauELPNrwiwG/7XYXFRy91+qreazjYASX6Fq0Ir90MNfJ8EcWc10KJyGvSN7QtledQ6Lu9B5S1tqHoxlddPAMOtdL6Q==; lastRskxRun=1659686676640
> pragma: no-cache
> sec-ch-ua: ".Not/A)Brand";v="99", "Google Chrome";v="103", "Chromium";v="103"
> sec-ch-ua-mobile: ?0
> sec-ch-ua-platform: "macOS"
> sec-fetch-dest: document
> sec-fetch-mode: navigate
> sec-fetch-site: none
> sec-fetch-user: ?1
> upgrade-insecure-requests: 1
> user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
>
< HTTP/1.1 503 Service Temporarily Unavailable
< Date: Fri, 05 Aug 2022 08:56:14 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: close
< X-Frame-Options: SAMEORIGIN
< Permissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=()
< Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Expires: Thu, 01 Jan 1970 00:00:01 GMT
< Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
< Set-Cookie: __cf_bm=Z8oUUTMhz8K.._yzicdZVzO49fmFKCtgS2CDTlnFvpU-1659689774-0-ARUAfH3m6VNwz09gKVsRECZkXJf5BdqNsW+oIPcy1oKzvppiMWxz7HGFkEwMuGHGzrHRDy5nV+VVj74AxTN8ThozSiHa/8sYH0IwMMe62woC; path=/; expires=Fri, 05-Aug-22 09:26:14 GMT; domain=.onlinelibrary.wiley.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< Strict-Transport-Security: max-age=15552000
< Server: cloudflare
< CF-RAY: 735e5184085e52cb-LAX
<
<!DOCTYPE HTML>
<html lang="en-US">
...... (HTML body saying "Please turn JavaScript on and reload the page")
Rendered in Postman, the HTML shows the same "Please turn JavaScript on and reload the page" message. And no, Postman cannot visit the URL either.
Based on these observations, I believe the site behaves differently when receiving the first request from a browser versus from curl. But I don't know how Cloudflare tells a human being (using a browser) apart from a bot (using curl). As described above, the two kinds of clients have no difference in:
IP address (they are tested on the same host)
context (both requests are the very first request)
headers (headers are copied from browser to the command line)

I ran into the same problem and solved it by adding headers to one of the solutions already mentioned in the initial post, cloudscraper.
import cloudscraper

url = 'https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069'
headers = {
    'Wiley-TDM-Client-Token': 'your-Wiley-TDM-Client-Token',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36'
}

scraper = cloudscraper.create_scraper()
r = scraper.get(url, headers=headers)
print(r.status_code)
This code snippet returns status code 200.
It seems that Cloudflare looks for the presence of the 'Wiley-TDM-Client-Token' header rather than for the actual token: removing the header results in a 'Detected a Cloudflare version 2 challenge' error. The code posted here works without adding your own TDM key, but I expect that for paid content a TDM token with the right licenses is needed.

Related

The repository does not have a GitHub Pages site. See https://docs.github.com/v3/repos/pages/

I have a Jekyll site published to GitHub Pages. I am trying to use the GitHub API to be able to schedule posts.
I can "Get a GitHub Pages site" with the API: https://docs.github.com/en/rest/pages?apiVersion=2022-11-28#get-a-github-pages-site
When I try to test the GitHub API for "Request a GitHub Pages build", it gives me the following error:
curl \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $Classic_Personal_Access_Token_with_all_permissions_for_testing" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/aykchoe/aykchoe.github.io/pages/builds -vv
Trying 192.30.255.116:443...
Connected to api.github.com (192.30.255.116) port 443 (#0)
ALPN: offers h2
ALPN: offers http/1.1
CAfile: /etc/ssl/cert.pem
CApath: none
(304) (OUT), TLS handshake, Client hello (1):
(304) (IN), TLS handshake, Server hello (2):
(304) (IN), TLS handshake, Unknown (8):
(304) (IN), TLS handshake, Certificate (11):
(304) (IN), TLS handshake, CERT verify (15):
(304) (IN), TLS handshake, Finished (20):
(304) (OUT), TLS handshake, Finished (20):
SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
ALPN: server accepted h2
Server certificate:
subject: C=US; ST=California; L=San Francisco; O=GitHub, Inc.; CN=*.github.com
start date: Mar 16 00:00:00 2022 GMT
expire date: Mar 16 23:59:59 2023 GMT
subjectAltName: host "api.github.com" matched cert's "*.github.com"
issuer: C=US; O=DigiCert Inc; CN=DigiCert TLS Hybrid ECC SHA384 2020 CA1
SSL certificate verify ok.
Using HTTP2, server supports multiplexing
Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
h2h3 [:method: POST]
h2h3 [:path: /repos/aykchoe/aykchoe.github.io/pages/builds]
h2h3 [:scheme: https]
h2h3 [:authority: api.github.com]
h2h3 [user-agent: curl/7.85.0]
h2h3 [accept: application/vnd.github+json]
h2h3 [authorization: Bearer $Classic_Personal_Access_Token_with_all_permissions_for_testing]
h2h3 [x-github-api-version: 2022-11-28]
Using Stream ID: 1 (easy handle 0x15600d800)
POST /repos/aykchoe/aykchoe.github.io/pages/builds HTTP/2
Host: api.github.com
user-agent: curl/7.85.0
accept: application/vnd.github+json
authorization: Bearer $Classic_Personal_Access_Token_with_all_permissions_for_testing
x-github-api-version: 2022-11-28
HTTP/2 403
server: GitHub.com
date: Sat, 14 Jan 2023 03:49:29 GMT
content-type: application/json; charset=utf-8
content-length: 203
x-oauth-scopes: admin:enterprise, admin:gpg_key, admin:org, admin:org_hook, admin:public_key, admin:repo_hook, admin:ssh_signing_key, audit_log, codespace, delete:packages, delete_repo, gist, notifications, project, repo, user, workflow, write:discussion, write:packages
x-accepted-oauth-scopes:
github-authentication-token-expiration: 2023-02-13 03:44:38 UTC
x-github-media-type: github.v3; format=json
x-github-api-version-selected: 2022-11-28
x-ratelimit-limit: 5000
x-ratelimit-remaining: 4996
x-ratelimit-reset: 1673671506
x-ratelimit-used: 4
x-ratelimit-resource: core
access-control-expose-headers: ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset
access-control-allow-origin: *
strict-transport-security: max-age=31536000; includeSubdomains; preload
x-frame-options: deny
x-content-type-options: nosniff
x-xss-protection: 0
referrer-policy: origin-when-cross-origin, strict-origin-when-cross-origin
content-security-policy: default-src 'none'
vary: Accept-Encoding, Accept, X-Requested-With
x-github-request-id: C127:4B9A:5325FBA:5629D14:63C22649
{
"message": "The repository does not have a GitHub Pages site. See https://docs.github.com/v3/repos/pages/",
"documentation_url": "https://docs.github.com/rest/pages#request-a-github-pages-build"
}
Connection #0 to host api.github.com left intact
From Googling, it seems like a permissions error (HTTP 403), but I'm not sure what I could be missing.
I tried both fine-grained and classic tokens.
I noticed my git user was different from the repo owner, so I added it as a collaborator.
I tried switching the git user to the repo owner.
I tried making a new Jekyll repo with GitHub Pages and running the API call.

Adding HSTS header to my internal website doesn't work as expected

I have created a website where I am trying to add the HSTS security header via httpd.conf
<IfModule mod_headers.c>
Header always set Strict-Transport-Security 'max-age=4000; includeSubDomains'
</IfModule>
After adding the above configuration, I can see the Strict-Transport-Security header in my HTTPS response headers:
host> curl -v https://172.21.218.67 --insecure
* About to connect() to 172.21.218.67 port 443 (#0)
* Trying 172.21.218.67... connected
* Connected to 172.21.218.67 (172.21.218.67) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
* subject: ****************************************
* start date: Oct 21 06:42:49 2019 GMT
* expire date: Nov 20 06:42:49 2019 GMT
* common name: Insights
* issuer: *****************************************
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 172.21.218.67
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 21 Oct 2019 10:50:54 GMT
< Server: Apache
< Strict-Transport-Security: max-age=4000; includeSubDomains
< Last-Modified: Mon, 21 Oct 2019 08:58:58 GMT
< ETag: "8f3-59567e4f07362"
< Accept-Ranges: bytes
< Content-Length: 2291
< Content-Type: text/html
But this has no effect on my website's behavior in the browser (the browser does not redirect to HTTPS when the user accesses my website via HTTP).
I cannot even see my website listed in Chrome's HSTS checklist:
chrome://net-internals/#hsts
Do I need to add any other configuration in order to make it work?
As suggested by IMSoP, my test server was not trusted by the browser, which prevented HSTS from taking effect.
Solved: I made my test server a trusted source for the browser by adding the self-signed certificate to the browser's trust store.
Now HSTS works as expected.
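A browser only honours Strict-Transport-Security when it arrives over a trusted HTTPS connection (RFC 6797), which is why the certificate trust mattered here. For sanity-checking the header value itself, a small parser helps; parse_hsts below is a hypothetical helper of my own, not part of any library:

```python
def parse_hsts(header: str) -> dict:
    """Split a Strict-Transport-Security value into its directives."""
    directives = {}
    for part in header.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        # Value-less directives like includeSubDomains become True.
        directives[name.lower()] = value or True
    return directives

print(parse_hsts("max-age=4000; includeSubDomains"))
# {'max-age': '4000', 'includesubdomains': True}
```

Note that max-age=4000 (about an hour) is quite short for production; sites usually pin much longer lifetimes once HTTPS is known to work.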

CURL does not work with URLs with curly braces in parameters

Some URLs with curly braces don't work with cURL but work in Chrome and Firefox.
For example, this URL: https://rdtrkr.com/mg.php?voluum_id=d51b17bc-c537-4f3e-9879-2e373341ae5a&widget_id={widget_id}&campaign_id={campaign_id}&teaser_id={teaser_id}&geo={geo}&img=guy18.jpg&txt=german&lp=de&click_price={click_price}&click_id={click_id}&{click_id} works in Chrome and Firefox, but when called with cURL it gives a 404 error.
curl \
-H "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.80 Safari/537.36" \
-v "https://rdtrkr.com/mg.php?voluum_id=d51b17bc-c537-4f3e-9879-2e373341ae5a&widget_id={widget_id}&campaign_id={campaign_id}&teaser_id={teaser_id}&geo={geo}&img=guy18.jpg&txt=german&lp=de&click_price={click_price}&click_id={click_id}&{click_id}"
Produces the result:
< HTTP/2 404
< server: nginx
< date: Thu, 13 Dec 2018 16:53:45 GMT
< content-type: text/html; charset=UTF-8
< content-length: 0
But with Chrome developer tools in "Preserve log" mode, I can see that Chrome gets a 302 redirect.
cURL receives a 404 instead of the 302 redirect. Is this related to the fact that cURL might be URL-encoding the braces? I don't know what is going wrong here.
PS: I am not the owner of the website used in the example.
Curly braces are unsafe in URLs. cURL (unlike Google Chrome) tries to do you a favor and automatically encodes the URL.
In other words, it transforms { to %7B and } to %7D.
To prevent that behavior, you can pass the query string parameters using -d instead. Since -d changes the request to a POST, you'll also need -G to force the request back to a GET.
So instead of doing
curl "http://example.com?param1=xxx&param2=yyy"
you can do
curl "http://example.com" -G -d "param1=xxx&param2=yyy"
In your particular case, for some reason the webserver you're targeting will still return 404 unless you supply an Accept-Language header:
curl -v "http://rdtrkr.com/mg.php" \
-G -d "voluum_id=d51b17bc-c537-4f3e-9879-2e373341ae5a&widget_id={widget_id}&campaign_id={campaign_id}&teaser_id={teaser_id}&geo={geo}&img=guy18.jpg&txt=german&lp=de&click_price={click_price}&click_id={click_id}&{click_id}" \
-H "Accept-Language: en-US,en;q=0.9,fr;q=0.8,ru;q=0.7,es;q=0.6"
gives
* Trying 34.192.193.118...
* Connected to rdtrkr.com (34.192.193.118) port 80 (#0)
> GET /mg.php?voluum_id=d51b17bc-c537-4f3e-9879-2e373341ae5a&widget_id={widget_id}&campaign_id={campaign_id}&teaser_id={teaser_id}&geo={geo}&img=guy18.jpg&txt=german&lp=de&click_price={click_price}&click_id={click_id}&{click_id} HTTP/1.1
> Host: rdtrkr.com
> User-Agent: curl/7.47.0
> Accept: */*
> Accept-Language: en-US,en;q=0.9,fr;q=0.8,ru;q=0.7,es;q=0.6
>
< HTTP/1.1 302 Found
< Server: nginx
< Date: Thu, 13 Dec 2018 17:39:18 GMT
< Content-Type: text/html; charset=UTF-8
< Content-Length: 0
< Connection: keep-alive
< Location: https://rotronica-premarity.com/d51b17bc-c537-4f3e-9879-2e373341ae5a?widget_id={widget_id}&campaign_id={campaign_id}&teaser_id={teaser_id}&geo={geo}&img=guy18.jpg&txt=german&lp=de&click_price={click_price}&click_id={click_id}
<
* Connection #0 to host rdtrkr.com left intact
Use this flag (from man curl):
-g/--globoff
This option switches off the "URL globbing parser". When you set this option, you can
specify URLs that contain the letters {}[] without having them being interpreted by curl
itself. Note that these letters are not normal legal URL contents but they should be
encoded according to the URI standard.
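The percent-encoding discussed above is easy to confirm from Python's standard library:

```python
from urllib.parse import quote, unquote

# Curly braces are not in the URL-safe character set, so they percent-encode:
assert quote("{") == "%7B"
assert quote("}") == "%7D"

encoded = quote("{widget_id}")
print(encoded)                          # %7Bwidget_id%7D
assert unquote(encoded) == "{widget_id}"
```

Whether a server treats {widget_id} and %7Bwidget_id%7D as the same string is up to the server, which is presumably why the encoded form 404s here.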

httpd-2.4.18 mod_http2 works with curl and nghttp but doesn't work with the browser

I have installed httpd-2.4.18 with nghttp2 1.6.0 and curl 7.46 to serve HTTP/2. The server works over HTTP/2 when I test it with the curl and nghttp commands (as you can see below), but when I use the browser (Google Chrome 47.0.2526.106) the response headers are HTTP/1.1 instead of HTTP/2, and the SPDY indicator is grey (it should be blue). Does anybody know why?
Commands Used
The curl command output tells me that HTTP/2 works properly:
eloy@eloy-OptiPlex-745:/usr/local/apache2/logs$ curl --http2 -I http://localhost
HTTP/1.1 101 Switching Protocols
Upgrade: h2c
Connection: Upgrade
HTTP/2.0 200
date:Thu, 07 Jan 2016 21:38:06 GMT
server:Apache/2.4.18 (Unix) OpenSSL/1.0.2e
last-modified:Mon, 11 Jun 2007 18:53:14 GMT
etag:"2d-432a5e4a73a80"
accept-ranges:bytes
content-length:45
content-type:text/html
Similarly, nghttp2 output suggests that the HTTP/2 server is working properly:
eloy@eloy-OptiPlex-745:/usr/local/apache2/logs$ nghttp -uv http://localhost
[ 0.000] Connected
[ 0.000] HTTP Upgrade request
GET / HTTP/1.1
host: localhost
connection: Upgrade, HTTP2-Settings
upgrade: h2c
http2-settings: AAMAAABkAAQAAP__
accept: */*
user-agent: nghttp2/1.6.0
[ 0.001] HTTP Upgrade response
HTTP/1.1 101 Switching Protocols
Upgrade: h2c
Connection: Upgrade
[ 0.001] HTTP Upgrade success
[ 0.001] recv SETTINGS frame <length=6, flags=0x00, stream_id=0>
Response headers from browser:
HTTP/1.1 304 Not Modified
Date: Thu, 07 Jan 2016 21:49:40 GMT
Server: Apache/2.4.18 (Unix) OpenSSL/1.0.2e
Connection: Upgrade, Keep-Alive
Keep-Alive: timeout=5, max=100
ETag: "2d-432a5e4a73a80"
Request Headers from browser:
GET / HTTP/1.1
Host: localhost
Connection: keep-alive
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.106 Safari/537.36
Accept-Encoding: gzip, deflate, sdch
Accept-Language: es-ES,es;q=0.8
If-None-Match: "2d-432a5e4a73a80"
If-Modified-Since: Mon, 11 Jun 2007 18:53:14 GMT
Browsers do not support HTTP/1.1 to HTTP/2 upgrade requests.
The only way to use HTTP/2 from browsers is via TLS and ALPN.
Having said that, your "Request headers from browser" above are actually response headers and vice versa, so it's difficult to tell what you are actually doing. The request headers lack the necessary upgrade bits.
If you make a clear-text request from a browser (i.e. using the http scheme), then the browser will not try to upgrade and you will stay in HTTP/1.1 mode.
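Since browsers only negotiate HTTP/2 via ALPN inside the TLS handshake, the client side of that negotiation can be sketched with Python's ssl module. The localhost/443 target below is just the test server from the question, assumed to have TLS enabled:

```python
import ssl

# Offer h2 first and fall back to HTTP/1.1, as a browser does during ALPN.
context = ssl.create_default_context()
context.set_alpn_protocols(["h2", "http/1.1"])

# Against a live TLS server this would reveal the negotiated protocol:
#   import socket
#   with socket.create_connection(("localhost", 443)) as sock:
#       with context.wrap_socket(sock, server_hostname="localhost") as tls:
#           print(tls.selected_alpn_protocol())  # "h2" if the server offers HTTP/2
```

If selected_alpn_protocol() comes back as "http/1.1" (or None), the server is not advertising h2 over TLS, and the browser will stay on HTTP/1.1 no matter what h2c upgrade works on clear text.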

Error when posting data with cURL

I have written the following curl command:
curl -v -X POST -H 'Accept: application/json' \
  -H 'Content-Type: application/json' -u username:password \
  -d 'Id=5&Email=add#ress.com&OptInType=0&EmailType=0&DataFields=null&Status=0}' \
  https://api.dotmailer.com/v2/contacts
However, when I run it in PowerShell, it returns the following error:
"message":"Could not parse the body of the request based on the
content type\"application/json\"
ERROR_BODY_DOES_NOT_MATCH_CONTENT_TYPE"}
This is the verbose server response
Hostname was NOT found in DNS cache
Trying 94.143.104.204...
Connected to api.dotmailer.com (94.143.104.204) port 443 (#0)
successfully set certificate verify locations:
CAfile: C:\Program Files\cURL\bin\curl-ca-bundle.crt CApath: none
SSLv3, TLS handshake, Client hello (1):
SSLv3, TLS handshake, Server hello (2):
SSLv3, TLS handshake, CERT (11):
SSLv3, TLS handshake, Server finished (14):
SSLv3, TLS handshake, Client key exchange (16):
SSLv3, TLS change cipher, Client hello (1):
SSLv3, TLS handshake, Finished (20):
SSLv3, TLS change cipher, Client hello (1):
SSLv3, TLS handshake, Finished (20):
SSL connection using TLSv1.0 / AES128-SHA
Server certificate:
subject: C=GB; ST=England; L=London; OU=DDG; O=dotMailer Ltd; CN=*.dotmailer.com
start date: 2012-01-16 15:51:40 GMT
expire date: 2015-01-16 15:51:40 GMT
subjectAltName: api.dotmailer.com matched
issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign Organization Validation CA - G2
SSL certificate verify ok.
Server auth using Basic with user 'username'
*POST /v2/contacts HTTP/1.1
*Authorization: Basic XXXpdXNlci0zNGFmNWU0NTdmYTJAYXBpY29ubmVjdG9yLmNvbTpzaW1vbmUxMjM=
User-Agent: curl/7.37.0
Host: api.dotmailer.com
Accept: application/json
Content-Type: application/json
Content-Length: 72
upload completely sent off: 72 out of 72 bytes
HTTP/1.1 400 Bad Request
Cache-Control: no-cache
Pragma: no-cache
Content-Type: application/json; charset=utf-8
Expires: -1
Date: Thu, 24 Jul 2014 08:46:46 GMT
Content-Length: 139
{"message":"Could not parse the body of the request based on the
content type \"application/json\"
ERROR_BODY_DOES_NOT_MATCH_CONTENT_TYPE"}
Connection #0 to host api.dotmailer.com left intact
Can anyone point me in the right direction for getting my command right?
Thanks.
Your data in -d isn't JSON — it's a form-encoded key=value&... string (with a stray } at the end), which doesn't match the application/json Content-Type you declared. Try jsonlint to validate your JSON. Or, if you are trying to post plain text data, use text/plain as the Content-Type instead.
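Building the body with a JSON serializer avoids this class of mismatch entirely. The field names and values below are taken from the question's -d string; whether the API accepts this exact shape is for the dotmailer documentation to say:

```python
import json

# The fields from the question's -d string, expressed as a real JSON object.
payload = {
    "Id": 5,
    "Email": "add#ress.com",
    "OptInType": 0,
    "EmailType": 0,
    "DataFields": None,   # serializes to JSON null
    "Status": 0,
}
body = json.dumps(payload)
print(body)
```

Passing the printed string to curl's -d (with Content-Type: application/json, as in the original command) should then parse on the server side.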