The repository does not have a GitHub Pages site. See https://docs.github.com/v3/repos/pages/

I have a Jekyll site published to GitHub Pages, and I am trying to use the GitHub API to schedule posts.
I can "Get a GitHub Pages site" with the API: https://docs.github.com/en/rest/pages?apiVersion=2022-11-28#get-a-github-pages-site
When I try to test the GitHub API for "Request a GitHub Pages build," it gives me the following error:
curl \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $Classic_Personal_Access_Token_with_all_permissions_for_testing" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/aykchoe/aykchoe.github.io/pages/builds -vv
*   Trying 192.30.255.116:443...
* Connected to api.github.com (192.30.255.116) port 443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
* ALPN: server accepted h2
* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=GitHub, Inc.; CN=*.github.com
*  start date: Mar 16 00:00:00 2022 GMT
*  expire date: Mar 16 23:59:59 2023 GMT
*  subjectAltName: host "api.github.com" matched cert's "*.github.com"
*  issuer: C=US; O=DigiCert Inc; CN=DigiCert TLS Hybrid ECC SHA384 2020 CA1
*  SSL certificate verify ok.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* h2h3 [:method: POST]
* h2h3 [:path: /repos/aykchoe/aykchoe.github.io/pages/builds]
* h2h3 [:scheme: https]
* h2h3 [:authority: api.github.com]
* h2h3 [user-agent: curl/7.85.0]
* h2h3 [accept: application/vnd.github+json]
* h2h3 [authorization: Bearer $Classic_Personal_Access_Token_with_all_permissions_for_testing]
* h2h3 [x-github-api-version: 2022-11-28]
* Using Stream ID: 1 (easy handle 0x15600d800)
> POST /repos/aykchoe/aykchoe.github.io/pages/builds HTTP/2
> Host: api.github.com
> user-agent: curl/7.85.0
> accept: application/vnd.github+json
> authorization: Bearer $Classic_Personal_Access_Token_with_all_permissions_for_testing
> x-github-api-version: 2022-11-28
>
< HTTP/2 403
< server: GitHub.com
< date: Sat, 14 Jan 2023 03:49:29 GMT
< content-type: application/json; charset=utf-8
< content-length: 203
< x-oauth-scopes: admin:enterprise, admin:gpg_key, admin:org, admin:org_hook, admin:public_key, admin:repo_hook, admin:ssh_signing_key, audit_log, codespace, delete:packages, delete_repo, gist, notifications, project, repo, user, workflow, write:discussion, write:packages
< x-accepted-oauth-scopes:
< github-authentication-token-expiration: 2023-02-13 03:44:38 UTC
< x-github-media-type: github.v3; format=json
< x-github-api-version-selected: 2022-11-28
< x-ratelimit-limit: 5000
< x-ratelimit-remaining: 4996
< x-ratelimit-reset: 1673671506
< x-ratelimit-used: 4
< x-ratelimit-resource: core
< access-control-expose-headers: ETag, Link, Location, Retry-After, X-GitHub-OTP, X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Used, X-RateLimit-Resource, X-RateLimit-Reset, X-OAuth-Scopes, X-Accepted-OAuth-Scopes, X-Poll-Interval, X-GitHub-Media-Type, X-GitHub-SSO, X-GitHub-Request-Id, Deprecation, Sunset
< access-control-allow-origin: *
< strict-transport-security: max-age=31536000; includeSubdomains; preload
< x-frame-options: deny
< x-content-type-options: nosniff
< x-xss-protection: 0
< referrer-policy: origin-when-cross-origin, strict-origin-when-cross-origin
< content-security-policy: default-src 'none'
< vary: Accept-Encoding, Accept, X-Requested-With
< x-github-request-id: C127:4B9A:5325FBA:5629D14:63C22649
<
{
  "message": "The repository does not have a GitHub Pages site. See https://docs.github.com/v3/repos/pages/",
  "documentation_url": "https://docs.github.com/rest/pages#request-a-github-pages-build"
}
* Connection #0 to host api.github.com left intact
From Googling, it seems like a permissions error (HTTP 403), but I'm not sure what I could be missing.
I tried both a fine-grained token and a classic one.
I noticed my git user was different from the repo owner, so I added it as a collaborator.
I tried switching the git user to the repo owner.
I tried making a new Jekyll repo with GitHub Pages and running the same API call.

Related

How to bypass Cloudflare 503 Please turn JavaScript on and reload the page

I know this might be a frequently asked question, but I believe this is a different one.
Cloudflare blocks programmatic requests by responding with status code 503 and the message "Please turn JavaScript on and reload the page." Both the Python requests module and the curl command hit this error. However, browsing the same pages from the same host with the Chrome browser works fine, even in Incognito mode.
I have made the following attempts, but all of them failed to bypass it:
Use cloudscraper module. Like this
Copy all the headers including user-agent, cookie from opened browser page. Like this
Use mechanize module. Like this
Use requests_html to run JS scripts on the page. Like this
According to my inspections, I found that in a newly opened Chrome Incognito window, when visiting https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069, the following sequence of requests happens:
The browser sends a request to the URL with no cookies. The server responds 302, redirecting to the same URL with a cookieSet=1 query param, i.e. https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069?cookieSet=1. The response also contains Set-Cookie headers and has no body.
The browser sends a request to the redirected URL, with the newly set cookies. The server responds 302, redirecting to the original URL with no query param. The response contains no Set-Cookie header and has no body.
The browser sends a request to the original URL, with the previously set cookies. The server responds 200 with the HTML content we'd like to see as its body.
However, in a curl request without redirection enabled (i.e. without the -L arg), I get status code 503 and an HTML response body that says "Please turn JavaScript on and reload the page.".
curl -i -v 'https://onlinelibrary.wiley.com/doi/abs/10.1111/jvs.13069' \
--header 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
--header 'accept-encoding: gzip, deflate, br' \
--header 'accept-language: en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,ja;q=0.6' \
--header 'cache-control: no-cache' \
--header 'cookie: MAID=k4Rf/MejFqG1LjKdveWUPQ==; I2KBRCK=1; rskxRunCookie=0; rCookie=i182bjtmkm3tmujm7wb4xl6fx8wuv; osano_consentmanager_uuid=35ffb0d0-e7e0-487a-a6a5-b35cad9e589f; osano_consentmanager=EtuJH5neWpR-w0VyI9rGqVBE85dQA-2D4f3nUxLGsObfRMLPNtojj-WolqO0wrIvAr3wxlwRPXQuL0CCFvMIDZxXehUBFEicwFqrV4kgDwBshiqbbfh1M3w3V6WWcesS8ZLdPX4iGQ3yTPaxmzpOEBJbeSeY5dByRkR7P2XyOEDAWPT8by2QQjsCt3y3ttreU_M3eV_MJCDCgknIWOyiKdL_FBYJz-ddg8MFAb1N8YBTRQbQAg8r-bSO1vlCqPyWlgzGn-A5xgIDWlCDIpej0Xg2rjA=; JSESSIONID=aaaFppotKtA-t7ze73Rjy; SERVER=WZ6myaEXBLGhNb4JIHwyZ2nYKk2egCfX; MACHINE_LAST_SEEN=2022-08-05T00%3A52%3A30.362-07%3A00; __cf_bm=d9mhQ_ZtETjf41X0VuxDl6GkIZbQtNLJnNIOtDoIPuA-1659685954-0-AXLwPXO1kJb2/IQc+zIesAsL71FoLTgRJqS5M5fxizuFMTw92mMT/yRv5cIq6ZMiRcZE1DchGsO2ZZMdv+/P4JSdUDMAcepY/oXIKFQgauELPNrwiwG/7XYXFRy91+qreazjYASX6Fq0Ir90MNfJ8EcWc10KJyGvSN7QtledQ6Lu9B5S1tqHoxlddPAMOtdL6Q==; lastRskxRun=1659686676640' \
--header 'pragma: no-cache' \
--header 'sec-ch-ua: ".Not/A)Brand";v="99", "Google Chrome";v="103", "Chromium";v="103"' \
--header 'sec-ch-ua-mobile: ?0' \
--header 'sec-ch-ua-platform: "macOS"' \
--header 'sec-fetch-dest: document' \
--header 'sec-fetch-mode: navigate' \
--header 'sec-fetch-site: none' \
--header 'sec-fetch-user: ?1' \
--header 'upgrade-insecure-requests: 1' \
--header 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36'
* Trying 162.159.129.87...
* TCP_NODELAY set
* Connected to onlinelibrary.wiley.com (162.159.129.87) port 443 (#0)
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:#STRENGTH
* successfully set certificate verify locations:
* CAfile: /Users/cosmo/anaconda3/ssl/cacert.pem
CApath: none
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-AES128-GCM-SHA256
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: C=US; ST=California; L=San Francisco; O=Cloudflare, Inc.; CN=sni.cloudflaressl.com
* start date: Apr 17 00:00:00 2022 GMT
* expire date: Apr 17 23:59:59 2023 GMT
* subjectAltName: host "onlinelibrary.wiley.com" matched cert's "onlinelibrary.wiley.com"
* issuer: C=US; O=Cloudflare, Inc.; CN=Cloudflare Inc ECC CA-3
* SSL certificate verify ok.
> GET /doi/abs/10.1111/jvs.13069 HTTP/1.1
> Host: onlinelibrary.wiley.com
> accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
> accept-encoding: gzip, deflate, br
> accept-language: en-US,en;q=0.9,zh-CN;q=0.8,zh;q=0.7,ja;q=0.6
> cache-control: no-cache
> cookie: MAID=k4Rf/MejFqG1LjKdveWUPQ==; I2KBRCK=1; rskxRunCookie=0; rCookie=i182bjtmkm3tmujm7wb4xl6fx8wuv; osano_consentmanager_uuid=35ffb0d0-e7e0-487a-a6a5-b35cad9e589f; osano_consentmanager=EtuJH5neWpR-w0VyI9rGqVBE85dQA-2D4f3nUxLGsObfRMLPNtojj-WolqO0wrIvAr3wxlwRPXQuL0CCFvMIDZxXehUBFEicwFqrV4kgDwBshiqbbfh1M3w3V6WWcesS8ZLdPX4iGQ3yTPaxmzpOEBJbeSeY5dByRkR7P2XyOEDAWPT8by2QQjsCt3y3ttreU_M3eV_MJCDCgknIWOyiKdL_FBYJz-ddg8MFAb1N8YBTRQbQAg8r-bSO1vlCqPyWlgzGn-A5xgIDWlCDIpej0Xg2rjA=; JSESSIONID=aaaFppotKtA-t7ze73Rjy; SERVER=WZ6myaEXBLGhNb4JIHwyZ2nYKk2egCfX; MACHINE_LAST_SEEN=2022-08-05T00%3A52%3A30.362-07%3A00; __cf_bm=d9mhQ_ZtETjf41X0VuxDl6GkIZbQtNLJnNIOtDoIPuA-1659685954-0-AXLwPXO1kJb2/IQc+zIesAsL71FoLTgRJqS5M5fxizuFMTw92mMT/yRv5cIq6ZMiRcZE1DchGsO2ZZMdv+/P4JSdUDMAcepY/oXIKFQgauELPNrwiwG/7XYXFRy91+qreazjYASX6Fq0Ir90MNfJ8EcWc10KJyGvSN7QtledQ6Lu9B5S1tqHoxlddPAMOtdL6Q==; lastRskxRun=1659686676640
> pragma: no-cache
> sec-ch-ua: ".Not/A)Brand";v="99", "Google Chrome";v="103", "Chromium";v="103"
> sec-ch-ua-mobile: ?0
> sec-ch-ua-platform: "macOS"
> sec-fetch-dest: document
> sec-fetch-mode: navigate
> sec-fetch-site: none
> sec-fetch-user: ?1
> upgrade-insecure-requests: 1
> user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36
>
< HTTP/1.1 503 Service Temporarily Unavailable
< Date: Fri, 05 Aug 2022 08:56:14 GMT
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Connection: close
< X-Frame-Options: SAMEORIGIN
< Permissions-Policy: accelerometer=(),autoplay=(),camera=(),clipboard-read=(),clipboard-write=(),fullscreen=(),geolocation=(),gyroscope=(),hid=(),interest-cohort=(),magnetometer=(),microphone=(),payment=(),publickey-credentials-get=(),screen-wake-lock=(),serial=(),sync-xhr=(),usb=()
< Cache-Control: private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0
< Expires: Thu, 01 Jan 1970 00:00:01 GMT
< Expect-CT: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
< Set-Cookie: __cf_bm=Z8oUUTMhz8K.._yzicdZVzO49fmFKCtgS2CDTlnFvpU-1659689774-0-ARUAfH3m6VNwz09gKVsRECZkXJf5BdqNsW+oIPcy1oKzvppiMWxz7HGFkEwMuGHGzrHRDy5nV+VVj74AxTN8ThozSiHa/8sYH0IwMMe62woC; path=/; expires=Fri, 05-Aug-22 09:26:14 GMT; domain=.onlinelibrary.wiley.com; HttpOnly; Secure; SameSite=None
< Vary: Accept-Encoding
< Strict-Transport-Security: max-age=15552000
< Server: cloudflare
< CF-RAY: 735e5184085e52cb-LAX
<
<!DOCTYPE HTML>
<html lang="en-US">
...... (HTML saying "Please turn JavaScript on and reload the page")
The HTML as rendered by Postman shows the same challenge page; Postman cannot visit the URL either.
According to these observations, I believe that the site behaves differently when it receives its first request from a browser versus from curl. But I don't know how Cloudflare tells a human (using a browser) apart from a bot (using curl). As described above, the two kinds of clients have no difference in:
IP address (they are tested on the same host)
context (both requests are the very first request)
headers (headers are copied from the browser to the command line)
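(For completeness: the three-step redirect flow above can be replayed from the command line with a cookie jar, roughly as below, yet the challenge may still be served; Cloudflare's bot detection is known to also fingerprint the TLS handshake itself, which headers alone cannot disguise.)
# Replay the redirect chain, persisting cookies between hops (a sketch):
curl -L -c cookies.txt -b cookies.txt \
  -A 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36' \
  'https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069'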
I ran into the same problem and solved it by adding headers to one of the solutions already mentioned in the initial post: cloudscraper.
import cloudscraper

url = 'https://onlinelibrary.wiley.com/doi/full/10.1111/jvs.13069'
headers = {
    # The header only needs to be present; see the note below about its value.
    'Wiley-TDM-Client-Token': 'your-Wiley-TDM-Client-Token',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/106.0.0.0 Safari/537.36'
}

# cloudscraper wraps requests and solves the Cloudflare JS challenge transparently
scraper = cloudscraper.create_scraper()
r = scraper.get(url, headers=headers)
print(r.status_code)
This code snippet returns a status code of 200.
It seems that Cloudflare checks for the presence of the 'Wiley-TDM-Client-Token' header rather than for a valid token: removing it from the headers results in a 'Detected a Cloudflare version 2 challenge' error. The code posted here already works without adding your own TDM key, but I expect that paid content requires a TDM token with the right licenses.

Adding HSTS header to my internal website doesn't work as expected

I have created a website and am trying to add the HSTS security header via httpd.conf:
<IfModule mod_headers.c>
Header always set Strict-Transport-Security 'max-age=4000; includeSubDomains'
</IfModule>
With the above configuration, I can see the Strict-Transport-Security header in my HTTPS response headers:
host> curl -v https://172.21.218.67 --insecure
* About to connect() to 172.21.218.67 port 443 (#0)
* Trying 172.21.218.67... connected
* Connected to 172.21.218.67 (172.21.218.67) port 443 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* warning: ignoring value of ssl.verifyhost
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
* Server certificate:
* subject: ****************************************
* start date: Oct 21 06:42:49 2019 GMT
* expire date: Nov 20 06:42:49 2019 GMT
* common name: Insights
* issuer: *****************************************
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.27.1 zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: 172.21.218.67
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 21 Oct 2019 10:50:54 GMT
< Server: Apache
< Strict-Transport-Security: max-age=4000; includeSubDomains
< Last-Modified: Mon, 21 Oct 2019 08:58:58 GMT
< ETag: "8f3-59567e4f07362"
< Accept-Ranges: bytes
< Content-Length: 2291
< Content-Type: text/html
But this has no effect on how the browser treats my website: the browser does not redirect to HTTPS when the user accesses the site via HTTP.
I could not even see my website listed in Chrome's HSTS list at
chrome://net-internals/#hsts
Do I need to add any other configuration in order to make it work?
As suggested by IMSoP, my test server's certificate was not trusted by the browser, which prevented the HSTS header from taking effect.
Solved: made my test server trusted by the browser by adding a self-signed certificate to the browser's trust store.
Now HSTS works as expected.
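Roughly, the fix looked like this (a sketch; the paths and subject are placeholders, not the exact commands I ran):
# Generate a self-signed certificate for the test server:
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/pki/tls/private/insights.key \
  -out /etc/pki/tls/certs/insights.crt \
  -subj '/CN=172.21.218.67'
# Point Apache at it (SSLCertificateFile / SSLCertificateKeyFile in httpd.conf),
# then import insights.crt into the browser's trust store.
Browsers deliberately ignore Strict-Transport-Security headers received over an untrusted connection, which is why the header showed up in curl but never reached Chrome's HSTS list.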

Connectivity problems between FILAB VMs and Cosmos global instance

I have the same kind of connectivity problem discussed in the question "Cygnus can not persist data on Cosmos global instance". However, I have found no solution after reading it.
I have recently deployed two virtual machines in FILAB (both VMs contain Orion Context Broker 0.26.1 and Cygnus 0.11.0).
When I try to persist data on Cosmos via Cygnus, I get the following error message (the same in both VMs) :
2015-12-17 19:03:00,221 (SinkRunner-PollingRunner-DefaultSinkProcessor)
[ERROR - com.telefonica.iot.cygnus.sinks.OrionSink.process(OrionSink.java:305)]
Persistence error (The /user/rmartinezcarreras/def_serv/def_serv_path/room1_room
directory could not be created in HDFS. Server response: 503 Service unavailable)
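For reference, the directory creation Cygnus attempts corresponds roughly to this WebHDFS call (a sketch; op=MKDIRS per the WebHDFS REST API, token masked as in the question):
curl -i -X PUT \
  -H "X-Auth-Token: XXXXXXX" \
  "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/def_serv/def_serv_path/room1_room?op=MKDIRS&user.name=rmartinezcarreras"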
On the other hand, when I fire a request from the command line of either VM, I get the following response:
[root@orionlarge centos]# curl -v -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" -H "X-Auth-Token: XXXXXXX"
* About to connect() to cosmos.lab.fiware.org port 14000 (#0)
* Trying 130.206.80.46... connected
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> GET /webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: cosmos.lab.fiware.org:14000
> Accept: */*
> X-Auth-Token: XXXXX
>
* Closing connection #0
* Failure when receiving data from the peer
curl: (56) Failure when receiving data from the peer
Nevertheless, from an external VM (outside FILAB):
[root@dsieBroker orion]# curl -v -X GET "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" -H "X-Auth-Token: XXXXX"
* About to connect() to cosmos.lab.fiware.org port 14000 (#0)
* Trying 130.206.80.46... connected
* Connected to cosmos.lab.fiware.org (130.206.80.46) port 14000 (#0)
> GET /webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras HTTP/1.1
> User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.19.1 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2
> Host: cosmos.lab.fiware.org:14000
> Accept: */*
> X-Auth-Token: XXXXXX
>
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: HEAD, POST, GET, OPTIONS, DELETE
< Access-Control-Allow-Headers: origin, content-type, X-Auth-Token, Tenant-ID, Authorization
< server: Apache-Coyote/1.1
< set-cookie: hadoop.auth="u=rmartinezcarreras&p=rmartinezcarreras&t=simple&e=XXXXXX&s=XXXXhD8="; Version=1; Path=/
< Content-Type: application/json; charset=utf-8
< transfer-encoding: chunked
< date: Thu, 17 Dec 2015 18:52:46 GMT
< connection: close
< Content-Length: 243
< ETag: W/"f3-NL9+bYJLweyFpoJfNgjQrg"
<
{"FileStatuses":{"FileStatus":
[{"pathSuffix":"def_serv","type":"DIRECTORY","length":0,"owner":
"rmartinezcarreras","group":"rmartinezcarreras","permission":"740",
"accessTime":0,"modificationTime":1450349251833,"blockSize":0,
"replication":0}]}}
* Closing connection #0
I also get good results from my Cosmos account.
How can I solve this? It seems to be a connectivity problem. Could you help me?
Thank you in advance.
Finally, this was a problem with the OAuth2 proxy we are using for authentication and authorization. The underlying Express module it is based on was adding a Content-Length header when a Transfer-Encoding: chunked header was already present. As researched in this other question, this combination is forbidden by the RFC (a sender must not include Content-Length in a message that has a Transfer-Encoding header), and it was causing certain fully compliant client implementations to reset the connection.
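The conflicting pair is in fact visible in the successful response above (transfer-encoding: chunked together with Content-Length: 243). A quick way to spot it from the command line, as a sketch:
# Dump only the response headers and grep for the illegal combination:
curl -s -D - -o /dev/null \
  -H "X-Auth-Token: XXXXXXX" \
  "http://cosmos.lab.fiware.org:14000/webhdfs/v1/user/rmartinezcarreras/?op=liststatus&user.name=rmartinezcarreras" \
  | grep -iE 'content-length|transfer-encoding'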

Keyrock doesn't accept user, even when using admin

I have two users created by me: admin, with admin permissions, and another user that now has admin permissions too but was initially community (I'll refer to this account as community).
I've registered an application with the community user and associated the admin user later. As the callbackUrl, I've registered the address below in my Keyrock instance:
<keystone ip>:/oauth2/token
The request I am making to get an OAuth2 token follows below; it uses https://raw.githubusercontent.com/Bitergia/fiware-chanchan-docker/master/images/pep-wilma/4.3.0/auth-token.sh as a guideline. I've changed the user, password, host, client ID and app secret:
curl -s --insecure -i \
  --header "Authorization: Basic NmJjODMyMWMzNDQwNGVlYzkwYzNhNzhlYTU0ZTE2NjY6M2YwMzQyZjE4ZTM1NGI0ZDg5YjhlYWVkNTZmNGI5Mjc=" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  -X POST http://<keyrock IP>/oauth2/token \
  -d 'grant_type=password&username=<user>&password=<pass>&client_id=<clientID>&client_secret=<secret>'
The request reaches Keystone and it replies with a 404 (access token not found).
When I try to get OAuth2 tokens from Keyrock for both the admin and the community user, it says:
Error: Root - User access-token not authorized
I can log in to Horizon with both users.
What did I miss in order to get an OAuth2 token from the IdM?
Edit: code used to create the users:
users_default_pass = '...'
user0 = _register_user(keystone, "user0", passwd=users_default_pass)
# grant the 'community' role on the user's default project
keystone.roles.grant(user=user0.id, role=keystone.roles.find(name='community'), project=user0.default_project_id)
Edit 2: raw request and response from Keystone, captured with tcpflow.
request:
POST /oauth2/token HTTP/1.1
User-Agent: curl/7.35.0
Host: 130.206.118.xxx:5000
Accept: */*
Authorization: Basic ZWU2YmFjMWNjOTQ3NDdhNmI4MTU3NDdiNDk5NmVhZjQ6NTRkY2NjMjgxODhhNDMxYTk4OTY3MjkwN2UxYjIxYzY=
Content-Type: application/x-www-form-urlencoded
Content-Length: 143
grant_type=password&username=admin&password=admin&client_id=ee6bac1cc94747a6b815747b4996eaf4&client_secret=54dccc28188a431a989672907e1b21c6
response:
HTTP/1.1 404 Not Found
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 93
Date: Wed, 09 Sep 2015 09:46:19 GMT
{"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}
Took me a while to find it :)
In KeyRock, OAuth2 is implemented in Horizon. Looking at your request, I found a couple of things:
You are using HTTP instead of HTTPS
Requests are being sent to port 5000 (usually Keystone)
That made me think that your requests are going against Keystone.
By default, KeyRock handles OAuth2 requests in Horizon, which means: use HTTPS and port 443. As you said, doing requests against Keystone fails:
HTTP/1.1 404 Not Found
Vary: X-Auth-Token
Content-Type: application/json
Content-Length: 93
Date: Wed, 09 Sep 2015 15:36:34 GMT
{"error": {"message": "The resource could not be found.", "code": 404, "title": "Not Found"}}
Make sure you do the request against Horizon with HTTPS and port 443 and everything should work!
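In other words, something along these lines (a sketch; the host and Basic credentials are placeholders, as in the question):
# Same request as before, but over HTTPS to Horizon on port 443:
curl -s --insecure -i \
  --header "Authorization: Basic <base64(client_id:client_secret)>" \
  --header "Content-Type: application/x-www-form-urlencoded" \
  -X POST https://<keyrock IP>:443/oauth2/token \
  -d 'grant_type=password&username=<user>&password=<pass>&client_id=<clientID>&client_secret=<secret>'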

Error when posting data with cURL

I have written the following curl command.
curl -v -X POST -H 'Accept: application/json' \
  -H 'Content-Type: application/json' -u username:password \
  -d 'Id=5&Email=add#ress.com&OptInType=0&EmailType=0&DataFields=null&Status=0}' \
  https://api.dotmailer.com/v2/contacts
However, when I run it in PowerShell, it returns the following error:
{"message":"Could not parse the body of the request based on the content type \"application/json\" ERROR_BODY_DOES_NOT_MATCH_CONTENT_TYPE"}
This is the verbose server response:
* Hostname was NOT found in DNS cache
*   Trying 94.143.104.204...
* Connected to api.dotmailer.com (94.143.104.204) port 443 (#0)
* successfully set certificate verify locations:
*   CAfile: C:\Program Files\cURL\bin\curl-ca-bundle.crt
    CApath: none
* SSLv3, TLS handshake, Client hello (1):
* SSLv3, TLS handshake, Server hello (2):
* SSLv3, TLS handshake, CERT (11):
* SSLv3, TLS handshake, Server finished (14):
* SSLv3, TLS handshake, Client key exchange (16):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSLv3, TLS change cipher, Client hello (1):
* SSLv3, TLS handshake, Finished (20):
* SSL connection using TLSv1.0 / AES128-SHA
* Server certificate:
*   subject: C=GB; ST=England; L=London; OU=DDG; O=dotMailer Ltd; CN=*.dotmailer.com
*   start date: 2012-01-16 15:51:40 GMT
*   expire date: 2015-01-16 15:51:40 GMT
*   subjectAltName: api.dotmailer.com matched
*   issuer: C=BE; O=GlobalSign nv-sa; CN=GlobalSign Organization Validation CA - G2
* SSL certificate verify ok.
* Server auth using Basic with user 'username'
> POST /v2/contacts HTTP/1.1
> Authorization: Basic XXXpdXNlci0zNGFmNWU0NTdmYTJAYXBpY29ubmVjdG9yLmNvbTpzaW1vbmUxMjM=
> User-Agent: curl/7.37.0
> Host: api.dotmailer.com
> Accept: application/json
> Content-Type: application/json
> Content-Length: 72
>
* upload completely sent off: 72 out of 72 bytes
< HTTP/1.1 400 Bad Request
< Cache-Control: no-cache
< Pragma: no-cache
< Content-Type: application/json; charset=utf-8
< Expires: -1
< Date: Thu, 24 Jul 2014 08:46:46 GMT
< Content-Length: 139
<
{"message":"Could not parse the body of the request based on the content type \"application/json\" ERROR_BODY_DOES_NOT_MATCH_CONTENT_TYPE"}
* Connection #0 to host api.dotmailer.com left intact
Can anyone point me in the right direction for getting my command right?
Thanks.
Your data in -d isn't in JSON format: it is URL-encoded form data (with a stray closing brace), while the Content-Type header promises application/json. Try jsonlint to validate your JSON. Or, if you are trying to post plain text, use text/plain as the Content-Type.
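For example, the same fields expressed as actual JSON (a sketch; field names are taken from your command rather than checked against the dotmailer API docs, the # in the address is assumed to be a mistyped @, and the quoting shown is for a POSIX shell, since PowerShell quoting differs):
curl -v -X POST -H 'Accept: application/json' \
  -H 'Content-Type: application/json' -u username:password \
  -d '{"Id":5,"Email":"add@ress.com","OptInType":0,"EmailType":0,"DataFields":null,"Status":0}' \
  https://api.dotmailer.com/v2/contacts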