I am using the Box API integration in my app and I am facing an issue with fetching a single user's data. I am an enterprise admin, and when I use the GET /users API I get the list of all users. How can I get a single user out of this by passing the login field of the User object? Any ideas?
You can fetch a single user by using the filter_term query parameter to match all or part of the user's login (docs):
curl "https://api.box.com/2.0/users?filter_term=prats" \
  -H "Authorization: Bearer ACCESS_TOKEN"
Response:
{
  "total_count": 1,
  "entries": [
    {
      "type": "user",
      "id": "123456",
      "name": "prats",
      "login": "prats@stackoverflow.com",
      ...
    }
  ]
}
Be aware that filter_term matches the beginning of the login string. If you have multiple users whose logins start the same way, e.g. prats and prats2, the above request will return both of them. To prevent this, specify the entire login, or simply append an @ to the end of the filter_term value:
curl "https://api.box.com/2.0/users?filter_term=prats@" \
  -H "Authorization: Bearer ACCESS_TOKEN"
Hi, I am developing a Google OAuth2 login and I don't know how to get the user info using the access token or ID token with these URLs.
https://www.googleapis.com/oauth2/v3/userinfo — passing the access token in the Authorization header returns this JSON:
"sub": "100366573866312827626",
"picture": "https://lh3.googleusercontent.com/a/default-user=s96-c"
}
https://www.googleapis.com/oauth2/v3/tokeninfo?id_token={id_token} returns this JSON:
{
"iss": "https://accounts.google.com",
"azp": "375890336523-u4rpach5688ltoc0v4mof1r7j70gem95.apps.googleusercontent.com",
"aud": "375890336523-u4rpach5688ltoc0v4mof1r7j70gem95.apps.googleusercontent.com",
"sub": "100366573866312827626",
"at_hash": "htkn2tQqV4Ff4EmrtPDh9w",
"iat": "1636631006",
"exp": "1636634606",
"alg": "RS256",
"kid": "27c72619d0935a290c41c3f010167138685f7e53",
"typ": "JWT"
}
But neither JSON contains any user information like username or email.
Thanks for your time, and sorry for my English; it is not my native language.
Make sure that when you authorized the user you requested the profile scope.
Using the access token returned when your user authorized your application, you can get the user's profile information by making a call to the People.get method (note that the personFields parameter is required to say which fields you want):
curl \
'https://people.googleapis.com/v1/people/me?personFields=names,emailAddresses&key=[YOUR_API_KEY]' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--compressed
The userinfo endpoint will give you a little bit of info, but not as much as the People API.
Google does not always return the user claims in the ID token, so I would not rely on that.
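For reference, a rough sketch of what the authorization URL might look like when the profile and email scopes are requested (the client ID and redirect URI below are placeholders):
https://accounts.google.com/o/oauth2/v2/auth?client_id=YOUR_CLIENT_ID&redirect_uri=YOUR_REDIRECT_URI&response_type=code&scope=openid%20profile%20email
With those scopes granted, the userinfo endpoint should also return fields such as name and email alongside sub and picture.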
I have 3 Forge viewers that I use, and I have no access to buckets from one viewer in particular. The only difference of this viewer is its retention policy: persistent.
When I want to delete or see the details of an object from a bucket, I first get a 2-legged token:
curl -v 'https://developer.api.autodesk.com/authentication/v1/authenticate' \
  -X 'POST' \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'client_id=...&client_secret=...&grant_type=client_credentials&scope=bucket:create%20bucket:read%20bucket:delete%20data:write%20data:read%20account:read%20viewables:read'
and then use this token in:
curl -v "https://developer.api.autodesk.com/oss/v2/buckets/apptestbucket/objects?limit=1" \
  -X GET \
  -H "Authorization: Bearer ..." \
  -H "Content-Type: application/json"
but when this last call is made, I only receive:
* Connection #0 to host developer.api.autodesk.com left intact
{"reason":"No access"}
Can it be because of the retention policy, or am I missing something? Thank you.
Retention policy will not affect your access/permission to a bucket. There are pretty much only two things that would: whether your Forge app is granted access (owner or authorized via bucket permissions) and the scope of your token.
To view, update or delete a bucket object, make sure your token is granted the scopes below:
GET bucket(s)/details - bucket:read
GET object(s) - data:read
DELETE object - data:write
PUT object - data:write
And to determine whether your current client credentials have access to a bucket, use GET buckets to list all your buckets.
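A hedged sketch of that request, reusing the host and token from the calls above (the limit parameter is optional):
curl -v "https://developer.api.autodesk.com/oss/v2/buckets?limit=100" \
  -H "Authorization: Bearer ..."
It returns something like: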
{
"items" : [ {
"bucketKey" : "00001fbf-8505-49ab-8a42-44c6a96adbd0",
"createdDate" : 1441329298362,
"policyKey" : "transient"
}, {
"bucketKey" : "0003114d",
"createdDate" : 1440119769765,
"policyKey" : "transient"
}, {
"bucketKey" : "0003fbc1-389a-4194-915a-38313797d753",
"createdDate" : 1453886285506,
"policyKey" : "transient"
}, {
...
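From there, a quick way to check programmatically whether the bucket from the question is actually owned by this app (a sketch assuming jq is installed):
curl -s "https://developer.api.autodesk.com/oss/v2/buckets?limit=100" \
  -H "Authorization: Bearer ..." \
  | jq -r '.items[].bucketKey' | grep -x 'apptestbucket'
If nothing is printed, the bucket was created under a different Forge app (different client ID), which is the most common cause of the "No access" response.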
I want to create a version for an ML Engine model via the REST API and set it as the default. Kindly help me and suggest what mistake I am making. I am sending the request below and hitting the POST API below.
I am trying it via the Google OAuth Playground.
POST URL: https://ml.googleapis.com/v1/projects//models//versions
Request body:
{
"name": "v4",
"description": "This is test Version created by API",
"isDefault": True,
"deploymentUri": "gs://car-hertz/vans-uk-hertz/output/v1/F0/export/exporter/1531390162/",
"runtimeVersion": "1.4",
"framework": enum(TENSORFLOW),
"pythonVersion": "2.7"
}
In the REST API docs for the Version resource you can see in the description for the framework field that:
Valid values are TENSORFLOW, SCIKIT_LEARN, and XGBOOST
enum(Framework) is just the field's type. Also, from that same link, the isDefault field is output only, so you shouldn't include it in the request to create a model version. From the docs for the create method:
If you want a new version to be the default, you must call
projects.models.versions.setDefault.
So, to create a new model version and set it as default via REST API:
Put the request payload in a JSON file:
{
"name": "v4",
"description": "This is test Version created by API",
"deploymentUri": "gs://car-hertz/vans-uk-hertz/output/v1/F0/export/exporter/1531390162/",
"runtimeVersion": "1.4",
"framework": "TENSORFLOW",
"pythonVersion": "2.7"
}
Create the version by running in a shell the following (I like to use a gcurl alias):
alias gcurl='curl -H "Authorization: Bearer $(gcloud auth application-default print-access-token)" -H "Content-Type: application/json" '
gcurl -X POST -T "$REQUEST_FILEPATH" https://ml.googleapis.com/v1/projects/$PROJECT/models/$MODEL/versions
Set the above as the default:
gcurl -X POST https://ml.googleapis.com/v1/projects/$PROJECT/models/$MODEL/versions/v4:setDefault
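To double-check the result, you can fetch the model itself and inspect its defaultVersion (a sketch using the same gcurl alias; this assumes the projects.models.get method, which returns the Model resource):
gcurl https://ml.googleapis.com/v1/projects/$PROJECT/models/$MODEL
The defaultVersion.name in the response should now end in /versions/v4.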
I'm trying to better automate my deployment of containers using the containers service available from IBM Bluemix. I'm currently at the point where I'd like to create a script to assign an IP address and then hit a REST endpoint to update the DNS entry.
Management of IP addresses can be done using the IBM Containers plug-in with commands such as cf ic ip bind. However, before executing that command, I'd like to know which IP addresses are available. This is commonly done with the cf ic ip list command which has output that looks like this:
Number of allocated public IP addresses: 8
Listing the IP addresses in this space...
IP Address Container ID
1.1.1.1
1.1.1.2
2.1.1.1
2.1.1.2 deadbeef-aaaa-4444-bbbb-012345678912
2.1.1.3
2.1.1.4
1.1.1.3
2.1.1.5
This is useful output to a human, but requires a lot of extra munging for a script to handle. Is there a way to simply have this command return the JSON output that is probably coming from the API? For regular CloudFoundry commands we can use cf curl and get usable output, but there doesn't appear to be an analog here.
You can use the IBM Containers REST API for that:
curl -X GET --header "Accept: application/json" --header "X-Auth-Token: xxxxxxxx" --header "X-Auth-Project-Id: xxxxxxxx" "https://containers-api.ng.bluemix.net/v3/containers/floating-ips?all=true"
Example output (for privacy purposes I have modified the output below):
[
{
"Bindings": {
"ContainerId": null
},
"IpAddress": "111.111.1111.1111",
"MetaId": "607c9e7afaf54f89b4d1c926",
"PortId": null,
"Status": "DOWN"
},
{
"Bindings": {
"ContainerId": "abcdefg-123"
},
"IpAddress": "111.111.1111.1112",
"MetaId": "607c9e7afaf54f89b4d1c9262d",
"PortId": "8fa30c31-1128-43da-b709",
"Status": "ACTIVE"
},
{
"Bindings": {
"ContainerId": "abcdefg-123"
},
"IpAddress": "111.111.1111.1113",
"MetaId": "607c9e7afaf54f89b4d1c9262",
"PortId": "6f698778-94f6-43d0-95d1",
"Status": "ACTIVE"
},
{
"Bindings": {
"ContainerId": null
},
"IpAddress": "111.111.1111.1114",
"MetaId": "607c9e7afaf54f89b4d1c926",
"PortId": null,
"Status": "DOWN"
}
]
To get the token for X-Auth-Token and the space ID for X-Auth-Project-Id:
$ cf oauth-token
$ cf space <space-name> --guid
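Since the goal is scripting, here is a minimal sketch of pulling only the unbound addresses out of that JSON (assuming jq is available; $TOKEN and $SPACE_GUID are placeholders for the values obtained from the two commands above):
curl -s --header "Accept: application/json" \
  --header "X-Auth-Token: $TOKEN" \
  --header "X-Auth-Project-Id: $SPACE_GUID" \
  "https://containers-api.ng.bluemix.net/v3/containers/floating-ips?all=true" \
  | jq -r '.[] | select(.Bindings.ContainerId == null) | .IpAddress'
Each line of output is a public IP that is allocated to the space but not currently bound to a container.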
I have a question regarding the PEP Proxy config file.
My keystone service is running on 192.168.4.33:5000.
My horizon service is running on 192.168.4.33:443.
My WebHDFS service is running on 192.168.4.180:50070
and I intend to run PEP Proxy on 192.168.4.180:80.
But what I don't get is: what should I put in place of config.account_host?
Inside the MySQL database for the KeyRock manager there is an "idm" user with the "idm" password, and every request I make via curl against the Identity Manager works.
But with this config:
config.account_host = 'https://192.168.4.33:443';
config.keystone_host = '192.168.4.33';
config.keystone_port = 5000;
config.app_host = '192.168.4.180';
config.app_port = '50070';
config.username = 'idm';
config.password = 'idm';
when I start pep-proxy with:
sudo node server.js
I get the following error:
Starting PEP proxy in port 80. Keystone authentication ...
Error in keystone communication {"error": {"message": "The request you
have made requires authentication.", "code": 401, "title":
"Unauthorized"}}
First, I wouldn't include the port in your config.account_host, as it is not required there, but this doesn't interfere with the operation.
My guess is that you are using your own KeyRock FIWARE Identity Manager with the default provision of roles.
If you check the code, PEP Proxy sends a Domain Scoped request to KeyRock, as specified in the Keystone v3 API.
So the thing is, the idm user you are using to authenticate PEP probably doesn't have any domain roles. The workaround to check and fix it would be:
Try the Domain Scoped request:
curl -i \
-H "Content-Type: application/json" \
-d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "idm",
"domain": { "id": "default" },
"password": "idm"
}
}
},
"scope": {
"domain": {
"id": "default"
}
}
}
}' \
http://192.168.4.33:5000/v3/auth/tokens ; echo
If you get a 401 code, you are not authorized to make Domain Scoped requests.
Check if the user has any role in this domain. For this you will need to get an Auth token using the Default Scope request:
curl -i -H "Content-Type: application/json" -d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "idm",
"domain": { "id": "default" },
"password": "idm"
}
}
}
}
}' http://192.168.4.33:5000/v3/auth/tokens ; echo
This will return an X-Subject-Token header that you will need for the rest of the workaround.
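As a small convenience, since the token comes back as a response header rather than in the body, you can capture it directly in a shell variable (a sketch; auth_request.json is assumed to hold the JSON payload from the command above):
TOKEN=$(curl -si -H "Content-Type: application/json" -d @auth_request.json \
  http://192.168.4.33:5000/v3/auth/tokens \
  | awk 'tolower($1) == "x-subject-token:" {print $2}' | tr -d '\r')
echo "$TOKEN"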
With that token, we will send a request to the default domain for the user we selected before, idm, to check whether it has any roles assigned there:
curl -i \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/domains/default/users/idm/roles
And probably, this request will give you a response like:
{"links": {"self": "http://192.168.4.33:5000/v3/domains/default/users/idm/roles", "previous": null, "next": null}, "roles": []}
In that case, you will need to assign a role to the user idm in the default domain. For that, you first need to retrieve the ID of the role you want to assign, which you can do by sending the following request:
curl -i \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/roles
It will return a JSON document with all the available roles and their IDs.
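If jq is available, a quick sketch for printing just the IDs and names from that response:
curl -s \
  -H "X-Auth-Token:<retrieved_token>" \
  -H "Content-type: application/json" \
  http://192.168.4.33:5000/v3/roles \
  | jq -r '.roles[] | "\(.id)  \(.name)"'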
Assign a role to the user idm in the default domain. There are 6 available: member, owner, trial, basic, community and admin. As idm is the main administrator, I would choose the admin ID. So finally, with the admin ID, we assign the role by doing:
curl -s -X PUT \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/domains/default/users/idm/roles/<role_id>
Now you can try Step 1 again, and if everything works, you should be able to start the PEP Proxy:
sudo node server.js
Let me know how it goes!