PEP proxy config file for integration of IDM GE, PEP proxy and Cosmos big data - fiware

I have a question regarding PEP proxy file.
My keystone service is running on 192.168.4.33:5000.
My horizon service is running on 192.168.4.33:443.
My WebHDFS service is running on 192.168.4.180:50070,
and I intend to run the PEP Proxy on 192.168.4.180:80.
What I don't get is: what should I put in config.account_host?
Inside the MySQL database for the KeyRock manager there is an "idm" user with password "idm", and every request I make to the Identity Manager via curl works.
But with this config:
config.account_host = 'https://192.168.4.33:443';
config.keystone_host = '192.168.4.33';
config.keystone_port = 5000;
config.app_host = '192.168.4.180';
config.app_port = '50070';
config.username = 'idm';
config.password = 'idm';
When I start the PEP proxy with:
sudo node server.js
I get the following error:
Starting PEP proxy in port 80. Keystone authentication ...
Error in keystone communication {"error": {"message": "The request you
have made requires authentication.", "code": 401, "title":
"Unauthorized"}}

First, I wouldn't put the port in config.account_host, as it is not required there, although this doesn't interfere with operation.
My guess is that you are using your own KeyRock FIWARE Identity Manager with the default provision of roles.
If you check the code, the PEP Proxy sends a Domain Scoped request to KeyRock, as specified in the Keystone v3 API.
So the issue is that the idm user you are using to authenticate the PEP Proxy probably doesn't have any domain roles. To check it, work through the following steps:
Try the Domain Scoped request:
curl -i \
  -H "Content-Type: application/json" \
  -d '
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "idm",
          "domain": { "id": "default" },
          "password": "idm"
        }
      }
    },
    "scope": {
      "domain": { "id": "default" }
    }
  }
}' \
http://192.168.4.33:5000/v3/auth/tokens ; echo
If you get a 401 code, you are not authorized to make Domain Scoped requests.
Check if the user has any role in this domain. For this you will need to get an Auth token using the Default Scope request:
curl -i \
  -H "Content-Type: application/json" \
  -d '
{
  "auth": {
    "identity": {
      "methods": ["password"],
      "password": {
        "user": {
          "name": "idm",
          "domain": { "id": "default" },
          "password": "idm"
        }
      }
    }
  }
}' http://192.168.4.33:5000/v3/auth/tokens ; echo
This will return an X-Subject-Token header that you will need for the rest of the workaround.
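If you want to script the workaround, the token can be pulled out of the `curl -i` response headers like this (a minimal sketch; the sample token value below is made up):

```python
def extract_subject_token(raw_headers):
    """Return the X-Subject-Token value from a raw HTTP response header block."""
    for line in raw_headers.splitlines():
        name, _, value = line.partition(":")
        if name.strip().lower() == "x-subject-token":
            return value.strip()
    return None

# Headers in the shape of a Keystone v3 /auth/tokens response.
sample = (
    "HTTP/1.1 201 Created\r\n"
    "X-Subject-Token: gAAAAABexample\r\n"
    "Content-Type: application/json\r\n"
)
print(extract_subject_token(sample))  # gAAAAABexample
```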
With that token, we send a request about the default domain and the idm user we selected before, to check whether it has any roles assigned there:
curl -i \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/domains/default/users/idm/roles
And probably, this request will give you a response like:
{"links": {"self": "http://192.168.4.33:5000/v3/domains/default/users/idm/roles", "previous": null, "next": null}, "roles": []}
In that case, you will need to assign a role to the user idm in the default domain. For that, you first need to retrieve the ID of the role you want to assign, which you can do with the following request:
curl -i \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/roles
It will return a JSON document with all the available roles and their IDs.
Then assign one of them to the user idm in the default domain. There are six available: member, owner, trial, basic, community and admin. As idm is the main administrator, I would choose the admin ID. So finally, with the admin role ID, we assign the role by doing:
curl -s -X PUT \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/domains/default/users/idm/roles/<role_id>
Now you can retry Step 1, and if everything works, you should be able to start the PEP proxy:
sudo node server.js
Let me know how it goes!

Related

Bitcoin RPC authentication issue - regtest

I am currently developing a Bitcoin application that involves running a full Bitcoin node.
Since I am testing my source code, I decided to use Bitcoin's regtest mode.
This is how I start my Bitcoin node:
./bitcoind -regtest -rpcuser=a -rpcpassword=b -server -bind=0.0.0.0
This is how I interact with my regtest node:
./bitcoin-cli -regtest -rpcuser=a -rpcpassword=b getnewaddress
Output:
2N152jpoD9u52cpswsN7ih8RZ3P4DszaUGg
This example works as expected... BUT!
As soon as I try to interact with the Bitcoin node using curl or Python instead of bitcoin-cli, I get stuck:
curl --user a --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getnewaddress", "params": [] }' -H 'content-type: text/plain;' http://192.168.178.200:18444/
I get asked for the password, so I enter b,
and then it says:
curl: (52) Empty reply from server
The same happens for:
curl --user a:b --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getnewaddress", "params": [] }' -H 'content-type: text/plain;' http://192.168.178.200:18444/
and:
curl --data-binary '{"jsonrpc": "1.0", "id":"curltest", "method": "getnewaddress", "params": [] }' -H 'content-type: text/plain;' http://a:b@192.168.178.200:18444/
I also looked for a cookie file to authenticate with, but there was none.
I have already researched the problem, e.g.
https://bitcoin.stackexchange.com/questions/22335/bitcoin-daemon-sends-empty-reply-from-server-when-in-test-net
and various other sites, but none helped.
I am running version 0.18.0.
Well, I have described my problem in detail and mentioned what I already tried over two days.
Any suggestions?
Thanks and greetings!
Since Bitcoin Core 0.16.0, the default regtest RPC port is 18443, not 18444.
So I just changed the port from 18444 to 18443, and it worked.
Example:
curl --user username:password --data-binary '{"jsonrpc":"1.0","id":"curltext","method":"getblockhash","params":[0]}' -H 'content-type:text/plain;' http://127.0.0.1:18443
Ref: https://github.com/ruimarinho/bitcoin-core/issues/60
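The same call can also be made from Python; a minimal sketch, assuming the node from the question is running on port 18443 with rpcuser a / rpcpassword b (`requests` is a third-party dependency, not part of Bitcoin Core):

```python
import json

def rpc_payload(method, params=None):
    """Build a Bitcoin Core JSON-RPC 1.0 request body."""
    return json.dumps({
        "jsonrpc": "1.0",
        "id": "curltest",
        "method": method,
        "params": params if params is not None else [],
    })

def call_node(method, params=None, user="a", password="b",
              url="http://192.168.178.200:18443/"):
    """POST a JSON-RPC request to the regtest node (port 18443 on >= 0.16.0)."""
    import requests  # third-party: pip install requests
    resp = requests.post(url, auth=(user, password),
                         data=rpc_payload(method, params),
                         headers={"content-type": "text/plain;"})
    return resp.json()["result"]

# Example (needs the node from the question running):
# call_node("getnewaddress")
```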

How to use Google Natural Language processing from Google Cloud Storage?

I have a sample request here, as JSON:
{
"document":{
"type":"PLAIN_TEXT",
"content":"Joanne Rowling, who writes under the pen names J. K. Rowling and Robert Galbraith, is a British novelist and screenwriter who wrote the Harry Pott$
},
"encodingType":"UTF8"
}
I found a tutorial in Google's Natural Language processing documentation on reading from Google Cloud Storage.
curl -X POST \
  -H "Authorization: Bearer "$(gcloud auth application-default print-access-token) \
  -H "Content-Type: application/json; charset=utf-8" \
  --data "{
    'document':{
      'type':'PLAIN_TEXT',
      'gcsContentUri':'gs://reubucket/textData'
    }
  }" "https://language.googleapis.com/v1/documents:analyzeEntitySentiment"
And the error that I got is
ERROR: (gcloud.auth) Invalid choice: '*************-_m6csS1Wzlj1pyC_J7vzC0'.
Usage: gcloud auth [optional flags] <group | command>
group may be application-default
command may be activate-service-account | configure-docker | list |
login | revoke
How do I call the command with my API key?
I also need a way to change the "content" to entries from my CSV file.
Thank you.
Here is an example of the error that I am receiving please help:
mufaroshumba@reucybertextmining:~/myFolder$ gcloud auth activate-service-account --key-file="/home/mufaroshumba/myFolder/reucybertextmining-74fa66372251.json"
Activated service account credentials for: [starting-*******[CENSORED]@reucybertextmining.iam.gserviceaccount.com]
mufaroshumba@reucybertextmining:~/myFolder$ curl "https://language.googleapis.com/v1/documents:analyzeSentiment?key=${API_KEY}" \ -s -X POST -H "Content-Type: application/json" --data-binary @request.json
{
"error": {
"code": 401,
"message": "Permission to access the GCS address is denied.",
"status": "UNAUTHENTICATED",
"details": [
{
"@type": "type.googleapis.com/google.rpc.BadRequest",
"fieldViolations": [
{
"field": "document.gcs_content_uri",
"description": "Permission to access the GCS address is denied."
}
]
}
]
}
}
curl: (6) Could not resolve host: -s
mufaroshumba@reucybertextmining:~/myFolder$
I then used this website trying to get
It looks like your auth is not set up correctly. If you just run this command:
gcloud auth application-default print-access-token
it should give you a token, but it seems it's not. Please follow the steps here to make sure this command works first:
https://cloud.google.com/natural-language/docs/quickstart#quickstart-analyze-entities-cli
Then, as long as you have permission to access the GCS bucket, you should be able to get content out of it. Note that the API expects the actual text content in the GCS file, not a CSV.
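For reference, once auth works, the request body for a GCS-backed document is just this JSON (a sketch reusing the bucket path from the question):

```python
import json

def gcs_document_request(gcs_uri):
    """Build the analyzeEntitySentiment request body for a file stored in GCS."""
    return json.dumps({
        "document": {
            "type": "PLAIN_TEXT",
            "gcsContentUri": gcs_uri,
        },
        "encodingType": "UTF8",
    })

body = gcs_document_request("gs://reubucket/textData")
# POST `body` to https://language.googleapis.com/v1/documents:analyzeEntitySentiment
# with: Authorization: Bearer $(gcloud auth application-default print-access-token)
```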

Fiware keystone api create user and access with horizon

I'm using the Keystone API to create a user (as in Fiware Keystone API Create User).
My steps:
create project with:
curl -s -H "X-Auth-Token:17007fe11124bd71eb60" \
  -H "Content-Type: application/json" \
  -d '{"tenant": {"description":"Project1", "name":"proyecto1", "enabled": true}}' \
  http://localhost:35357/v2.0/tenants -X POST | python -mjson.tool
Create a role:
curl -s -H "X-Auth-Token:17007fe11124bd71eb60" \
  -H "Content-Type: application/json" \
  -d '{"role":{"name":"Project1Admin", "description":"Role Admin for project1"}}' \
  http://localhost:35357/v3/roles | python -mjson.tool
Create user:
curl -s -H "X-Auth-Token:17007fe11124bd71eb60" \
  -H "Content-Type: application/json" \
  -d '{"user": {"default_project_id": "d0f384973b9f4a57b975fcd9bef10c6e", "description":"admin1", "enabled":true, "name":"admin", "password":"admin", "email":"admin@gmail.com"}}' \
  http://localhost:35357/v2.0/users | python -mjson.tool
Last step: create the user-role-tenant relationship:
curl -s -X PUT -H "X-Auth-Token:17007fe11124bd71eb60" \
  http://localhost:35357/v2.0/tenants/d0f384973b9f4a57b975fcd9bef10c6e/users/admin1/roles/OS-KSADM/0c10f475076345368724a03ccd1c3403
If I check the user:
curl -s -H "X-Auth-Token:17007fe11124bd71eb60" http://localhost:5000/v3/users/admin1 | python -mjson.tool
response:
{
"user": {
"default_project_id": "d0f384973b9f4a57b975fcd9bef10c6e",
"description": "admin1",
"domain_id": "default",
"email": "admin1@gmail.com",
"enabled": true,
"id": "admin1",
"links": {
"self": "http://localhost:5000/v3/users/admin1"
},
"name": "admin1",
"username": null
}
}
I think that's good, but when I try to connect with Horizon I get an "Invalid user or password" error. The result I'm getting in the logs is the following:
keystone.log
2016-04-20 07:56:03.949 2150 WARNING keystone.common.wsgi [-] Could not find user: admin1@gmail.com
2016-04-20 07:56:03.967 2150 INFO eventlet.wsgi.server [-] 127.0.0.1 - - [20/Apr/2016 07:56:03] "HEAD /v3/OS-TWO-FACTOR/two_factor_auth?user_name=admin1%40gmail.com&domain_name=Default HTTP/1.1" 404 159 0.077033
horizon.log:
[Wed Apr 20 07:59:41.934935 2016] [:error] [pid 5963:tid
140154061260544] Login failed for user "admin1@gmail.com".
Does anyone know why this user can't connect with Horizon?
Thanks.
In KeyRock, we use the name field to store the user's email, and the username field to store the username. When creating a user, all attributes provided in the request except name, username, default_project_id, domain_id and enabled are serialized and stored in a field called extra. Therefore, your email attribute is stored in the extra field.
After registering, when logging in to Horizon with the user email, Horizon sends a request to Keystone that searches for that email in the name field. Since you are entering admin1@gmail.com, but the name you actually provided is admin1, logging into Horizon fails.
Registering the user again with admin1@gmail.com as the name (not the email) should fix your problem, but you can also enter admin1 in the email field of the login form if you can't afford to recreate the user.
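The serialization behaviour described above can be sketched like this (an illustrative model, not KeyRock's actual code):

```python
# Core Keystone user columns in this model; everything else lands in "extra".
CORE_FIELDS = {"name", "username", "default_project_id", "domain_id", "enabled"}

def serialize_user(attrs):
    """Split a user-creation request into core columns plus the extra blob."""
    core = {k: v for k, v in attrs.items() if k in CORE_FIELDS}
    core["extra"] = {k: v for k, v in attrs.items() if k not in CORE_FIELDS}
    return core

user = serialize_user({
    "name": "admin1",             # Horizon matches the login email against this
    "email": "admin1@gmail.com",  # ends up inside "extra", so email login fails
    "enabled": True,
})
```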
Hope this solves your issue!

Returning status of Bluemix container IP allocation as JSON

I'm trying to better automate my deployment of containers using the containers service available from IBM Bluemix. I'm currently at the point where I'd like to create a script to assign an IP address and then hit a REST endpoint to update the DNS entry.
Management of IP addresses can be done using the IBM Containers plug-in with commands such as cf ic ip bind. However, before executing that command, I'd like to know which IP addresses are available. This is commonly done with the cf ic ip list command which has output that looks like this:
Number of allocated public IP addresses: 8
Listing the IP addresses in this space...
IP Address Container ID
1.1.1.1
1.1.1.2
2.1.1.1
2.1.1.2 deadbeef-aaaa-4444-bbbb-012345678912
2.1.1.3
2.1.1.4
1.1.1.3
2.1.1.5
This output is useful to a human, but requires a lot of extra munging for a script to handle. Is there a way to have this command return the JSON that is presumably coming from the API? For regular Cloud Foundry commands we can use cf curl and get usable output, but there doesn't appear to be an analog here.
You can use the IBM Containers REST API for that:
curl -X GET --header "Accept: application/json" --header "X-Auth-Token: xxxxxxxx" --header "X-Auth-Project-Id: xxxxxxxx" "https://containers-api.ng.bluemix.net/v3/containers/floating-ips?all=true"
Example output (modified below for privacy):
[
{
"Bindings": {
"ContainerId": null
},
"IpAddress": "111.111.1111.1111",
"MetaId": "607c9e7afaf54f89b4d1c926",
"PortId": null,
"Status": "DOWN"
},
{
"Bindings": {
"ContainerId": "abcdefg-123"
},
"IpAddress": "111.111.1111.1112",
"MetaId": "607c9e7afaf54f89b4d1c9262d",
"PortId": "8fa30c31-1128-43da-b709",
"Status": "ACTIVE"
},
{
"Bindings": {
"ContainerId": "abcdefg-123"
},
"IpAddress": "111.111.1111.1113",
"MetaId": "607c9e7afaf54f89b4d1c9262",
"PortId": "6f698778-94f6-43d0-95d1",
"Status": "ACTIVE"
},
{
"Bindings": {
"ContainerId": null
},
"IpAddress": "111.111.1111.1114",
"MetaId": "607c9e7afaf54f89b4d1c926",
"PortId": null,
"Status": "DOWN"
}
]
To get the token for X-Auth-Token and the space ID for X-Auth-Project-Id:
$ cf oauth-token
$ cf space <space-name> --guid

Problems with cosmos auth and Identity manager integration

I want to integrate cosmos-auth with the IdM GE.
The config for the Node.js application is:
{
"host": "192.168.4.180",
"port": 13000,
"private_key_file": "key.pem",
"certificate_file": "cert.pem",
"idm": {
"host": "192.168.4.33",
"port": "443",
"path": "/oauth2/token"
},
"cosmos_app": {
"client_id": "0434fdf60897479588c3c31cfc957b6d",
"client_secret": "a7c3540aa5de4de3a0b1c52a606b82df"
},
"log": {
"file_name": "/var/log/cosmos/cosmos-auth/cosmos-auth.log",
"date_pattern": ".dd-MM-yyyy"
}
}
When I send an HTTP POST request directly to the IdM GE at
https://192.168.4.33:443/oauth2/token
with the required parameters, I get an OK result:
{
  "access_token": "LyZT5DRGSn0F8IKqYU8EmRFTLo1iPJ",
  "token_type": "Bearer",
  "expires_in": 3600,
  "refresh_token": "XiyfKCHrIVyludabjaCyGqVsTkx8Sf"
}
But when I curl the cosmos-auth Node.js application:
curl -X POST "https://192.168.4.180:13000/cosmos-auth/v1/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&username=idm&password=idm" -k
I get the following result:
{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"}
Has anyone encountered something similar?
What could be the problem?
The error I made was using an unsigned certificate. How clumsy of me.
So either sign the certificate or add an extra property to the options object (rejectUnauthorized: false):
var options = {
host : host,
port : port,
path : path,
method : method,
headers: headers,
rejectUnauthorized: false
};
or insert this at the beginning of the file:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
Of course, this is only a temporary solution until we use a properly signed certificate.
In any case, the error handling and logs in the cosmos-auth Node.js app should reveal a bit more.
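For anyone scripting the same token request outside Node, the form body is plain x-www-form-urlencoded (a sketch using the credentials from the question; while the cert is self-signed you would also disable verification, the equivalent of curl's -k):

```python
from urllib.parse import urlencode

def token_request_body(username, password):
    """Build the x-www-form-urlencoded body for the cosmos-auth token endpoint."""
    return urlencode({
        "grant_type": "password",
        "username": username,
        "password": password,
    })

body = token_request_body("idm", "idm")
# POST `body` to https://192.168.4.180:13000/cosmos-auth/v1/token with
# Content-Type: application/x-www-form-urlencoded; while the certificate is
# self-signed, skip verification (requests' verify=False, curl's -k, or
# Node's rejectUnauthorized: false) -- temporarily only.
```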