I have an entity represented in the Context Broker with several attributes (such as temperature, humidity, etc.). I have a MySQL database that persists the values from that entity by row, so in order to persist that info I have to make a subscription from Cygnus. The problem is that the information is being persisted, but when I run the command (curl http://localhost:1026/v2/subscription) I get the output [], as if no subscription had been made. If I run echo 'db.csubs.count()' | mongo orion --quiet it even shows the output 0.
Running OS: Centos 6
My Orion Context Broker Version: 0.26.0
Orion runs as a service:
/usr/bin/contextBroker -port 1026 -logDir /var/log/contextBroker -pidpath /var/run/contextBroker/contextBroker.pid -dbhost localhost -db orion -multiservice
So, from the first step, let's assume I don't have any subscription made to the database. I run the command:
/usr/cygnus/bin/cygnus-flume-ng agent --conf /usr/cygnus/conf/ -f /usr/cygnus/conf/agent_a1.conf -n cygnusagent -Dflume.root.logger=INFO,console
to ensure that a flume agent is running.
python2.7 SetSubscription.py bus temperature http://188.???.??.???:5050/notify
The Python script is the one provided in the fiware-figway folder from the GitHub repository.
import requests, json, re
from collections import Counter
import ConfigParser
import io
import sys
CONFIG_FILE = "../config.ini"
NUM_ARG=len(sys.argv)
COMMAND=sys.argv[0]
if NUM_ARG==4:
    ENTITY_ID=sys.argv[1]
    ENTITY_ATTR=sys.argv[2]
    SERVER_URL=sys.argv[3]
else:
    print 'Usage: '+COMMAND+' [ENTITY ID] [ATTRIBUTE] [SERVER URL]'
    print ' ENTITY ID = Entity you want to watch for changes/updates.'
    print ' ATTRIBUTE = Attribute whose change will trigger notifications. In this example script only this attribute will be notified.'
    print ' SERVER URL = (Local) Server listening for notifications.Example: http://myserver.domain.com:10000'
    print ' It has to be a reachable address:port for the ContextBroker. If you are behind a NAT/Firewall contact your network admin.'
    print
    sys.exit(2)

# Load the configuration file
with open(CONFIG_FILE,'r+') as f:
    sample_config = f.read()
config = ConfigParser.RawConfigParser(allow_no_value=True)
config.readfp(io.BytesIO(sample_config))

CB_HOST=config.get('contextbroker', 'host')
CB_PORT=config.get('contextbroker', 'port')
CB_FIWARE_SERVICE=config.get('contextbroker', 'fiware_service')
CB_AAA=config.get('contextbroker', 'OAuth')
if CB_AAA == "yes":
    TOKEN=config.get('user', 'token')
    TOKEN_SHOW=TOKEN[1:5]+"**********************************************************************"+TOKEN[-5:]
else:
    TOKEN="NULL"
    TOKEN_SHOW="NULL"
NODE_ID=config.get('local', 'host_id')
f.close()
CB_URL = "http://"+CB_HOST+":"+CB_PORT
MIN_INTERVAL = "PT5S"
DURATION = "P1M"
ENTITY_TYPE = ""
ENTITY_ATTR_WATCH = ENTITY_ATTR
ENTITY_ATTR_NOTIFY = ENTITY_ATTR
PAYLOAD = '{ \
"entities": [ \
{ \
"type": "'+ENTITY_TYPE+'", \
"isPattern": "false", \
"id": "'+ENTITY_ID+'" \
} \
], \
"attributes": [ \
"'+ENTITY_ATTR_NOTIFY+'" \
], \
"reference": "'+SERVER_URL+'", \
"duration": "'+DURATION+'", \
"notifyConditions": [ \
{ \
"type": "ONCHANGE", \
"condValues": [ \
"'+ENTITY_ATTR_WATCH+'" \
] \
} \
], \
"throttling": "'+MIN_INTERVAL+'" \
}'
HEADERS = {'content-type': 'application/json', 'accept': 'application/json' , 'Fiware-Service': CB_FIWARE_SERVICE ,'X-Auth-Token' : TOKEN}
#HEADERS = {'content-type': 'application/json', 'Fiware-Service': CB_FIWARE_SERVICE ,'X-Auth-Token' : TOKEN}
HEADERS_SHOW = {'content-type': 'application/json', 'accept': 'application/json' , 'Fiware-Service': CB_FIWARE_SERVICE , 'X-Auth-Token' : TOKEN_SHOW}
URL = CB_URL + '/v1/subscribeContext'
print "* Asking to "+URL
print "* Headers: "+str(HEADERS_SHOW)
print "* Sending PAYLOAD: "
print json.dumps(json.loads(PAYLOAD), indent=4)
print
print "..."
r = requests.post(URL, data=PAYLOAD, headers=HEADERS)
print
print "* Status Code: "+str(r.status_code)
print
print r.text
Now every time I make a change in the values, the value is changed and it is persisted in the database.
If you need any more information, feel free to ask!
EDIT:
The output showing "Asking to", "Headers" and "Sending PAYLOAD":
* Asking to http://188.???.??.???:1026/v1/subscribeContext
* Headers: {'Fiware-Service': 'fiwarefinal', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
    "reference": "http://localhost:5050/notify",
    "throttling": "PT5S",
    "entities": [
        {
            "type": "",
            "id": "bus1",
            "isPattern": "false"
        }
    ],
    "attributes": [
        "temperature"
    ],
    "duration": "P1M",
    "notifyConditions": [
        {
            "condValues": [
                "temperature"
            ],
            "type": "ONCHANGE"
        }
    ]
}
...
* Status Code: 200
{
  "subscribeResponse" : {
    "subscriptionId" : "56781c2d071f621a6546e740",
    "duration" : "P1M",
    "throttling" : "PT5S"
  }
}
Note that you are running Orion in multiservice mode. Thus, entities/attributes/subscriptions are separated by tenant/service, completely isolated in different databases at the MongoDB layer. In particular, note that the subscribeContext request is using Fiware-Service: fiwarefinal, so the subscription is created in the "fiwarefinal" service (associated with the orion-fiwarefinal DB).
Thus curl http://localhost:1026/v2/subscription will not show any subscription, as the subscription query is resolved in the default tenant (associated with the orion database). The same applies to echo 'db.csubs.count()' | mongo orion --quiet. In order to get the subscription list (or the count in the DB) for the "fiwarefinal" service you should use:
curl --header 'Fiware-Service: fiwarefinal' http://localhost:1026/v2/subscription
echo 'db.csubs.count()' | mongo orion-fiwarefinal --quiet
Note that you also need to use Fiware-Service: fiwarefinal in the unsubscribe context operation if you want to remove the subscription in the "fiwarefinal" tenant/service, as in the sketch below.
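For illustration, here is a minimal unsubscribe sketch in the same style as SetSubscription.py; the host is a placeholder and the subscription id is the one from your subscribeResponse above, so adapt both to your deployment:
import requests, json

CB_URL = "http://localhost:1026"
SUBSCRIPTION_ID = "56781c2d071f621a6546e740"  # placeholder: use the id returned by subscribeContext

HEADERS = {'content-type': 'application/json',
           'accept': 'application/json',
           'Fiware-Service': 'fiwarefinal'}

# NGSIv1 unsubscribe: POST /v1/unsubscribeContext with the subscriptionId in the body
PAYLOAD = json.dumps({"subscriptionId": SUBSCRIPTION_ID})

r = requests.post(CB_URL + '/v1/unsubscribeContext', data=PAYLOAD, headers=HEADERS)
print "* Status Code: " + str(r.status_code)
print r.text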
Please find more information about multiservice mode and how it relates to the DB level in the Orion documentation on multitenancy.
Related
I tried to make some HTTP requests through GCP's Cloud Functions with the Python 3.10 runtime; some went well, and some went wrong.
To find out the reason, I pinged the IP of each URL, and the IP I need has no response:
ping ip(173.194.217.106) of url(https://www.google.com): True
ping ip(69.147.92.11) of url(https://www.yahoo.com): True
ping ip(13.107.42.14) of url(https://www.linkedin.com): True
ping ip(117.56.7.114) of url(https://data.moi.gov.tw): False
Is there any way to make a successful request to https://data.moi.gov.tw through Cloud Function?
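(A side note on the diagnosis: ICMP and HTTP reachability are not the same thing, so an HTTP-level probe along the lines of the sketch below, using requests with a timeout, can separate "ping is blocked" from "HTTP is unreachable". The http_probe helper is purely illustrative and not part of the deployed function.)
import requests

def http_probe(url, timeout=5):
    """Return True if an HTTP GET to url completes within the timeout."""
    try:
        return requests.get(url, timeout=timeout).status_code < 500
    except requests.RequestException:
        return False

for url in ['https://www.google.com', 'https://data.moi.gov.tw']:
    print(f'http probe of {url}:', http_probe(url))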
Here are the materials to reproduce the results with Cloud Function (Gen1):
main.py:
import platform # For getting the operating system name
import subprocess # For executing a shell command
import requests
def ping(host):
    """
    Returns True if host (str) responds to a ping request.
    Remember that a host may not respond to a ping (ICMP) request even if the host name is valid.
    """
    # Option for the number of packets as a function of the operating system
    param = '-n' if platform.system().lower()=='windows' else '-c'
    # Building the command. Ex: "ping -c 1 google.com"
    command = ['ping', param, '1', host]
    return subprocess.call(command) == 0

def main(event):
    d_ip_url = {
        '173.194.217.106' : 'https://www.google.com',
        '69.147.92.11' : 'https://www.yahoo.com',
        '13.107.42.14' : 'https://www.linkedin.com',
        '117.56.7.114' : 'https://data.moi.gov.tw',
    }
    for ip, url in d_ip_url.items():
        print(f'ping ip({ip}) of url({url}):', ping(ip))
requirements.txt:
# Function dependencies, for example:
# package>=version
requests
The Cloud Function settings:
{
  "name": "projects/corgis-361708/locations/asia-east1/functions/ping-test",
  "httpsTrigger": {
    "url": "https://asia-east1-corgis-361708.cloudfunctions.net/ping-test",
    "securityLevel": "SECURE_ALWAYS"
  },
  "status": "ACTIVE",
  "entryPoint": "main",
  "timeout": "60s",
  "availableMemoryMb": 256,
  "serviceAccountEmail": "corgis-361708@appspot.gserviceaccount.com",
  "updateTime": "2022-09-21T06:04:31.746Z",
  "versionId": "2",
  "labels": {
    "deployment-tool": "console-cloud"
  },
  "sourceUploadUrl": "https://storage.googleapis.com/uploads-918581105162.asia-east1.cloudfunctions.appspot.com/78ad8f77-d16c-412c-843f-51238703fbbf.zip",
  "runtime": "python310",
  "maxInstances": 1,
  "ingressSettings": "ALLOW_ALL",
  "buildId": "0c8bf5d0-3467-4516-8fea-c39d0e093c2e",
  "buildName": "projects/647355426154/locations/asia-east1/builds/0c8bf5d0-3467-4516-8fea-c39d0e093c2e",
  "dockerRegistry": "CONTAINER_REGISTRY"
}
I have a question regarding the PEP Proxy config file.
My keystone service is running on 192.168.4.33:5000.
My horizon service is running on 192.168.4.33:443.
My WebHDFS service is running on 192.168.4.180:50070
and I intend to run PEP Proxy on 192.168.4.180:80.
But what I don't get is what I should put in place of config.account_host.
Inside the MySQL database for the KeyRock manager there is an "idm" user with the "idm" password, and every request I make via curl against the Identity Manager works.
But with this config:
config.account_host = 'https://192.168.4.33:443';
config.keystone_host = '192.168.4.33';
config.keystone_port = 5000;
config.app_host = '192.168.4.180';
config.app_port = '50070';
config.username = 'idm';
config.password = 'idm';
when I start pep-proxy with:
sudo node server.js
I get the following error:
Starting PEP proxy in port 80. Keystone authentication ...
Error in keystone communication {"error": {"message": "The request you
have made requires authentication.", "code": 401, "title":
"Unauthorized"}}
First, I wouldn't include the port in your config.account_host, as it is not required there, but this doesn't interfere with the operation.
My guess is that you are using your own KeyRock FIWARE Identity Manager with the default provision of roles.
If you check the code, PEP Proxy sends a Domain Scoped request against KeyRock, as described in the Keystone v3 API.
So the thing is that the idm user you are using to authenticate PEP probably doesn't have any domain roles. The way to check and fix it would be:
Try the Domain Scoped request:
curl -i \
-H "Content-Type: application/json" \
-d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "idm",
"domain": { "id": "default" },
"password": "idm"
}
}
},
"scope": {
"domain": {
"id": "default"
}
}
}
}' \
http://192.168.4.33:5000/v3/auth/tokens ; echo
If you get a 401 code, you are not authorized to make Domain Scoped requests.
Check if the user has any role in this domain. For this you will need to get an Auth token using the Default Scope request:
curl -i -H "Content-Type: application/json" -d '
{ "auth": {
"identity": {
"methods": ["password"],
"password": {
"user": {
"name": "idm",
"domain": { "id": "default" },
"password": "idm"
}
}
}
}
}' http://192.168.4.33:5000/v3/auth/tokens ; echo
This will return an X-Subject-Token header that you will need for the rest of the workaround.
With that token, we will send a request to the default domain using the user we selected before, idm, to check whether it has any roles assigned there:
curl -i \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/domains/default/users/idm/roles
And probably, this request will give you a response like:
{"links": {"self": "http://192.168.4.33:5000/v3/domains/default/users/idm/roles", "previous": null, "next": null}, "roles": []}
In that case, you will need to assign a role to the user idm in the default domain. For that, you will need to retrieve the id of the role you want to assign. You can do this by sending the following request:
curl -i \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/roles
It will return a JSON with all the available roles and their ids, which you can parse as sketched below.
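For example, a minimal sketch (assuming the standard Keystone v3 /v3/roles response shape) that prints the role names and ids so you can pick the admin one; the token placeholder is the X-Subject-Token retrieved above:
import requests

KEYSTONE = 'http://192.168.4.33:5000'
TOKEN = '<retrieved_token>'  # the X-Subject-Token obtained in the previous step

r = requests.get(KEYSTONE + '/v3/roles',
                 headers={'X-Auth-Token': TOKEN, 'Content-type': 'application/json'})

# Keystone v3 returns {"roles": [{"id": "...", "name": "..."}, ...], "links": {...}}
for role in r.json()['roles']:
    print(role['name'] + ' ' + role['id'])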
Assign a role to the user idm in the default domain. There are 6 roles available: member, owner, trial, basic, community and admin. As idm is the main administrator, I would choose the admin id. So finally, with the admin id, we assign the role by doing:
curl -s -X PUT \
-H "X-Auth-Token:<retrieved_token>" \
-H "Content-type: application/json" \
http://192.168.4.33:5000/v3/domains/default/users/idm/roles/<role_id>
Now you can try Step 1 again, and if everything works, you should be able to start the PEP Proxy:
sudo node server.js
Let me know how it goes!
I want to integrate cosmos-auth with the IdM GE.
The config for the Node.js application is:
{
  "host": "192.168.4.180",
  "port": 13000,
  "private_key_file": "key.pem",
  "certificate_file": "cert.pem",
  "idm": {
    "host": "192.168.4.33",
    "port": "443",
    "path": "/oauth2/token"
  },
  "cosmos_app": {
    "client_id": "0434fdf60897479588c3c31cfc957b6d",
    "client_secret": "a7c3540aa5de4de3a0b1c52a606b82df"
  },
  "log": {
    "file_name": "/var/log/cosmos/cosmos-auth/cosmos-auth.log",
    "date_pattern": ".dd-MM-yyyy"
  }
}
When I send an HTTP POST request directly to the IdM GE at the URL
https://192.168.4.33:443/oauth2/token
with the required parameters, I get OK results:
{
access_token: "LyZT5DRGSn0F8IKqYU8EmRFTLo1iPJ"
token_type: "Bearer"
expires_in: 3600
refresh_token: "XiyfKCHrIVyludabjaCyGqVsTkx8Sf"
}
But when I curl the cosmos-auth Node.js application:
curl -X POST "https://192.168.4.180:13000/cosmos-auth/v1/token" -H
"Content-Type: application/x-www-form-urlencoded" -d
"grant_type=password&username=idm&password=idm" -k
I get the following result:
{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"}
Has anyone encountered something similar?
What could be the problem?
The error I made was using an unsigned certificate. How clumsy of me.
So either sign the certificate or insert an additional element in the options object (rejectUnauthorized: false):
var options = {
    host : host,
    port : port,
    path : path,
    method : method,
    headers: headers,
    rejectUnauthorized: false
};
or at the beginning of the file insert:
process.env.NODE_TLS_REJECT_UNAUTHORIZED = '0';
Of course this is only a temporary solution until we use a properly signed cert.
Anyway, the error handling and logs in the cosmos-auth Node.js app should show a bit more detail.
I cloned the GitHub project of FIGWAY in order to query the Orion Context Broker for entity attributes, but I'm getting an error in all the Python scripts:
File "GetEntity.py", line 37, in <module>
config = ConfigParser.RawConfigParser(allow_no_value=True)
TypeError: __init__() got an unexpected keyword argument 'allow_no_value'
I called it like this: python GetEntity.py Room
Some tips to investigate what is going on:
You should be using Python 2.7 to run these scripts; ConfigParser's allow_no_value argument is not available in older versions (see the quick check after these tips). Can you please let me know which version and OS you are using?
We updated FIGWAY last week. Can you please clone it again if you did it before?
You should be using the new scripts in the folder /python-IDAS4/ContextBroker.
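As a quick sanity check (just a sketch, not part of FIGWAY), you can confirm which interpreter you are on and whether its ConfigParser accepts allow_no_value:
import sys
import ConfigParser

print(sys.version)  # should report 2.7.x

# allow_no_value was added to ConfigParser in Python 2.7, so on 2.6 the
# constructor below raises exactly the TypeError shown in the question.
try:
    ConfigParser.RawConfigParser(allow_no_value=True)
    print('allow_no_value supported')
except TypeError:
    print('allow_no_value NOT supported, upgrade to Python 2.7')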
With the previous assumptions you should get something like this, as long as that entity does not exist on that ContextBroker yet (a creation sketch follows the example output):
i6@raspberrypi ~/github/fiware-figway/python-IDAS4/ContextBroker $ python GetEntity.py Room
* Asking to http://130.206.80.40:1026/ngsi10/queryContext
* Headers: {'Fiware-Service': 'OpenIoT', 'content-type': 'application/json', 'accept': 'application/json', 'X-Auth-Token': 'NULL'}
* Sending PAYLOAD:
{
    "entities": [
        {
            "type": "",
            "id": "Room",
            "isPattern": "false"
        }
    ],
    "attributes": []
}
...
* Status Code: 200
* Response:
{
  "errorCode" : {
    "code" : "404",
    "reasonPhrase" : "No context element found"
  }
}
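If you do want the entity to exist, here is a minimal creation sketch in the same style as the FIGWAY scripts (NGSI10 updateContext with APPEND creates the entity if it is missing); the host, service and attribute values are taken from the example above and are only placeholders, so adapt them to your config.ini:
import requests, json

CB_URL = "http://130.206.80.40:1026"
HEADERS = {'content-type': 'application/json', 'accept': 'application/json',
           'Fiware-Service': 'OpenIoT', 'X-Auth-Token': 'NULL'}

# updateContext with APPEND creates the entity if it does not exist yet
PAYLOAD = json.dumps({
    "contextElements": [
        {
            "type": "Room",
            "isPattern": "false",
            "id": "Room",
            "attributes": [
                {"name": "temperature", "type": "float", "value": "23"}
            ]
        }
    ],
    "updateAction": "APPEND"
})

r = requests.post(CB_URL + '/v1/updateContext', data=PAYLOAD, headers=HEADERS)
print "* Status Code: " + str(r.status_code)
print r.text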
I want to send a JSON request and embed a variable in the POST data.
I did a little research and came up with putting single quotes around the variable.
#!/bin/bash
FILENAME="/media/file.avi"
curl -i -X POST -H "Content-Type: application/json" —d '{"jsonrpc": "2.0", "method": "Player.Open", "params":{"item":{"file":"'$FILENAME'"}}}' http://192.167.0.13/jsonrpc
Unfortunately I get some errors:
curl: (6) Couldn't resolve host '—d'
curl: (3) [globbing] nested braces not supported at pos 54
HTTP/1.1 200 OK
Content-Length: 76
Content-Type: application/json
Date: Wed, 29 Jan 2014 19:16:56 GMT
{"error":{"code":-32700,"message":"Parse error."},"id":null,"jsonrpc":"2.0"}
Apparently there are some problems with the braces, and the HTTP response states that the command could not be executed. What's wrong with my code here?
Thanks!
This is my curl version:
curl 7.30.0 (mips-unknown-linux-gnu) libcurl/7.30.0 OpenSSL/0.9.8y
Protocols: file ftp ftps http https imap imaps pop3 pop3s rtsp smtp smtps tftp
Features: IPv6 Largefile NTLM NTLM_WB SSL
Update: use the simpler
request_body=$(cat <<EOF
{
  "jsonrpc": "2.0",
  "method": "Player.Open",
  "params": {
    "item": {
      "file": "$FILENAME"
    }
  }
}
EOF
)
rather than what I explain below. However, if it is an option, use jq to generate the JSON instead. This ensures that the value of $FILENAME is properly quoted.
request_body=$(jq -n --arg fname "$FILENAME" '
{
  jsonrpc: "2.0",
  method: "Player.Open",
  params: {item: {file: $fname}}
}')
It would be simpler to define a variable with the contents of the request body first:
#!/bin/bash
header="Content-Type: application/json"
FILENAME="/media/file.avi"
request_body=$(< <(cat <<EOF
{
  "jsonrpc": "2.0",
  "method": "Player.Open",
  "params": {
    "item": {
      "file": "$FILENAME"
    }
  }
}
EOF
))
curl -i -X POST -H "$header" -d "$request_body" http://192.167.0.13/jsonrpc
This definition might require an explanation to understand, but note two big benefits:
You eliminate a level of quoting
You can easily format the text for readability.
First, you have a simple command substitution that reads from a file:
$( < ... ) # bash improvement over $( cat ... )
Instead of a file name, though, you specify a process substitution, in which the output of a command is used as if it were the body of a file.
The command in the process substitution is simply cat, which reads from a here document. It is the here document that contains your request body.
My suggestion:
#!/bin/bash
FILENAME="/media/file 2.avi"
curl -i -X POST -H "Content-Type: application/json" -d '{"jsonrpc": "2.0", "method": "Player.Open", "params":{"item":{"file":"'"$FILENAME"'"}}}' http://192.167.0.13/jsonrpc
The differences are a regular ASCII hyphen in -d (your original command used a typographic dash, which is why curl tried to resolve it as a host) and double quotes around $FILENAME, so that a filename containing spaces is not split into separate words.
Here is another way to insert data from a file into a JSON property.
This solution is based on a really cool command called jq.
Below is an example which prepares request JSON data, used to create a CoreOS droplet on Digital Ocean:
# Load the cloud config to variable
user_data=$(cat config/cloud-config)
# Prepare the request data
request_data='{
  "name": "server name",
  "region": "fra1",
  "size": "512mb",
  "image": "coreos-stable",
  "backups": false,
  "ipv6": true,
  "user_data": "---this content will be replaced---",
  "ssh_keys": [1234, 2345]
}'
# Insert data from file into the user_data property
request_data=$(echo $request_data | jq ". + {user_data: \"$user_data\"}")