I have been reading old blocks from the Solana JSON RPC API (using Python), but now I am trying to subscribe to block production on the Solana network (to get live updates).
I tried to pull updates through the RPC API using
{"jsonrpc": "2.0", "id": "1", "method": "blockSubscribe", "params": ["all"]}
This doesn't work; the response is: 'code': -32601, 'message': 'Method not found'
Looking at the info on docs.solana.com, it states:
This subscription is unstable and only available if the validator was
started with the --rpc-pubsub-enable-block-subscription flag. The
format of this subscription may change in the future
I assume this means I need to run solana-test-validator --rpc-pubsub-enable-block-subscription, but this just returns:
error: Found argument '--rpc-pubsub-enable-block-subscription' which wasn't expected, or isn't valid in this context
Did you mean --rpc-port?
USAGE:
solana-test-validator --ledger <DIR> --rpc-port <PORT>
I can't seem to find any more information on how to subscribe to blocks using the RPC.
Any ideas or help with what I'm doing wrong?
Thanks in advance
You are correct that the validator has to be run with --rpc-pubsub-enable-block-subscription. For mainnet-beta usage, it is recommended to either find a private RPC provider with this flag enabled or run your own node. Please note, though, that the method is currently marked as unstable.
It looks like --rpc-pubsub-enable-block-subscription is not available on the test validator. You can find the full command list here.
This subscription is unstable and only available if the validator was started with the --rpc-pubsub-enable-block-subscription flag. The format of this subscription may change in the future
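Note that blockSubscribe is a PubSub method, so once you have access to an RPC node started with that flag, the request has to go over the node's WebSocket endpoint rather than plain HTTP. Below is a minimal sketch in Python using the third-party websockets package; the endpoint URL is a placeholder for your provider's WebSocket address:
import asyncio
import json
import websockets

async def watch_blocks():
    # Placeholder URL: point this at an RPC node started with
    # --rpc-pubsub-enable-block-subscription (8900 is the usual PubSub port).
    async with websockets.connect("wss://your-rpc-node.example:8900") as ws:
        await ws.send(json.dumps({
            "jsonrpc": "2.0",
            "id": 1,
            "method": "blockSubscribe",
            "params": ["all"],
        }))
        # The first reply carries the subscription id; subsequent
        # messages are block notifications.
        while True:
            print(json.loads(await ws.recv()))

asyncio.run(watch_blocks())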
The title sounds quite comprehensive, but my baseline question is quite simple, I guess.
Context
In Azure, I have an IoT hub, which I am sending messages to. I use a modified version of one of the samples from the Azure IoT SDK for Python.
Sending works fine. However, instead of a string, I send a JSON structure.
When I watch the events flowing into the IoT hub, using the Cloud shell, it looks like this:
PS /home/marcel> az iot hub monitor-events --hub-name weathertestiothub
This extension 'azure-cli-iot-ext' is deprecated and scheduled for removal. Please remove and add 'azure-iot' instead.
Starting event monitor, use ctrl-c to stop...
{
"event": {
"origin": "raspberrypi-zero-wh",
"payload": "{ \"timestamp\": \"1608643863720\", \"locationDescription\": \"Attic\", \"temperature\": \"21.941\", \"relhumidity\": \"71.602\" }"
}
}
Issue
The data seems fine, except the payload looks strange here: it arrives as an escaped string rather than as a nested JSON object. BUT, the payload is literally what I send from the device, using the SDK sample.
Is this the correct way to do it? In the end, I have a very hard time actually getting the data into the Time Series Insights model, so I guess my structure is to blame.
Question
What is a recommended JSON data structure to send to the IoT hub for later use?
You should add the following 2 lines to your message in your Python SDK sample:
msg.content_encoding = "utf-8"
msg.content_type = "application/json"
This should resolve your formatting concern.
We've also updated our samples to reflect this: https://github.com/Azure/azure-iot-sdk-python/blob/master/azure-iot-device/samples/sync-samples/send_message.py
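For reference, here is a minimal sketch of a full send using the azure-iot-device package with those two lines included; the connection string is a placeholder and the payload mirrors the one from the question:
import json
from azure.iot.device import IoTHubDeviceClient, Message

client = IoTHubDeviceClient.create_from_connection_string("<device connection string>")
payload = {"timestamp": "1608643863720", "locationDescription": "Attic",
           "temperature": "21.941", "relhumidity": "71.602"}
msg = Message(json.dumps(payload))
msg.content_encoding = "utf-8"          # tells IoT Hub how the body is encoded
msg.content_type = "application/json"   # lets consumers treat the body as JSON
client.send_message(msg)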
I ended up using the tip by @elhorton, but it was not the key change. Nonetheless, the formatting in the Azure Shell monitor now looks much better:
"event": {
"origin": "raspberrypi-zero-wh",
"payload": {
"temperature": 21.543947753906245,
"humidity": 69.22964477539062,
"locationDescription": "Attic"
}
}
The key was:
1. Including the message source time in ISO format as a custom property:
from datetime import datetime
# attach the source timestamp to the message as a custom property
timestampIso = datetime.now().isoformat()
message.custom_properties["iothub-creation-time-utc"] = timestampIso
2. Using the locationDescription as the Time Series ID Property. See https://learn.microsoft.com/en-us/azure/time-series-insights/how-to-select-tsid (Maybe I could also have taken the iothub-connection-device-id, but I did not test that alone specifically.)
I guess using "iothub-connection-device-id" would make "raspberrypi-zero-wh" the name of the time series instance. I agree with your choice of using "locationDescription" as the TSID; so "Attic" becomes the time series instance name, and temperature and humidity will be your variables.
While running the Vorto dashboard I'm getting the following error:
JWT expired, getting new Token Wed Aug 26 2020 07:38:56 GMT+0100 (BST)... StatusCodeError: 401 -
{"status":401,"error":"gateway:authentication.failed","message":"Multiple authentication
mechanisms were applicable but none succeeded.","description":"For a successful authentication
see the following suggestions: { The JSON Web Token is not valid. },
{ Please provide a valid JWT in the authorization header prefixed with 'Bearer ' }."
The contents of config.json are as follows:
{
"client_id": "xxxxxxxxxxx",
"client_secret": "xxxxxxxxxxxx",
"scope": "xxxxxxxxxx",
"intervalMS": 10000
}
I tried setting the contents of config.json as environment variables, but then I also get the same error. A screenshot of the web front end when accessing localhost:8080 is attached.
I also tried the suggestions from the following link: Error running Vorto Dashboard for Bosch iot suite. But it's still not working. Please help me solve this issue.
I have discussed the matter internally to Bosch (disclaimer: I am an employee).
After discussing with the Bosch Suite Auth team, here is a summary of what happened.
The Suite Auth team recently transitioned from Keycloak to Hydra for their authentication technology.
The relevant bit here is that previously, the scopes passed to the token request were ignored.
The Vorto Dashboard app had been passing the wrong key for the scope parameter all along when requesting a token, but it was ignored.
Now that this parameter is relevant, the incorrect notation still produced a token, but one that was not suitable to authorize with Bosch IoT Things, because it did not contain the appropriate scope.
In turn, fixing this key produces a token that successfully authorizes with Bosch IoT Things (see the sketch below).
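As a sketch of what that means in practice, here is a generic OAuth2 client-credentials token request in Python; the token endpoint URL is an assumption (placeholder), and the point is only that the scope must be sent under the key "scope":
import requests

resp = requests.post(
    "https://access.bosch-iot-suite.com/token",  # assumed/placeholder endpoint
    data={
        "grant_type": "client_credentials",
        "client_id": "xxxxxxxxxxx",
        "client_secret": "xxxxxxxxxxxx",
        "scope": "xxxxxxxxxx",  # the fix: pass the scope under the key "scope"
    },
)
resp.raise_for_status()
access_token = resp.json()["access_token"]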
If you're in a hurry, you can check out this branch with the fix (it's literally an 8-character change set).
Otherwise, you can monitor this GitHub ticket for closure - I will close it when the fix is merged to the master branch of the Vorto Examples project.
I'm setting up an automated system to convert and visualise 3D models through the Forge APIs. The actual conversion and visualisation is pretty straightforward, but keeping track of the process is not as simple.
Autodesk recommends using webhooks, but documentation of this is quite sparse.
My main problem is that I'm unable to debug the webhooks. I get no indication as to whether a hook has fired or not.
I've read all of the similar questions here on Stack Overflow, in the FAQ and in the documentation (among others: Why is webhook workflow not taken into consideration when creating modelderivative job?).
I'm processing a conversion for a model with 'modelId', and I want to listen to the 'extraction.updated' event.
I'm registering a hook with a POST like this:
{
"callbackUrl":"https://my-service.com/callbacks/modelId",
"scope":{
"workflow":"modelId"
}
}
My job is registered like this:
{
"input":{
"urn":"{theUrnForTheModel}"
},
"output":{
"formats":[
{
"type":"svf",
"views":[
"3d",
"2d"
]
}
]
},
"misc":{
"workflow":"modelId"
}
}
From what I can see, the hooks never fire. I don't get any errors or indications that something failed on my server.
Am I required to post hookAttribute when creating the hook? It is documented as not mandatory. Am I required to have a fixed endpoint on my end, or is it OK to include the specific model id in the URL?
A few points to check:
What's the response on the POST hook request? It should return 201 (see the sketch below).
Which verb does your /callbacks/modelId endpoint accept? It should accept POST.
Have you tried the extraction.finished event?
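For the first point, here is a hedged sketch of registering the hook and inspecting the response status in Python; the URL follows the public Forge Webhooks API for the Model Derivative system, and the access token is a placeholder:
import requests

resp = requests.post(
    "https://developer.api.autodesk.com/webhooks/v1/systems/derivative/events/extraction.updated/hooks",
    headers={
        "Authorization": "Bearer <access_token>",  # placeholder token
        "Content-Type": "application/json",
    },
    json={
        "callbackUrl": "https://my-service.com/callbacks/modelId",
        "scope": {"workflow": "modelId"},
    },
)
print(resp.status_code)  # should be 201 if the hook was created
print(resp.text)         # error details otherwise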
I need to get vulnerabilities by component in JSON format, but all I get from the CVE Details API is individual vulnerabilities with no component information, only a description.
Here is an example link:
http://www.cvedetails.com/json-feed.php?numrows=10&vendor_id=0&product_id=0&version_id=0&hasexp=0&opec=0&opov=0&opcsrf=0&opfileinc=0&opgpriv=0&opsqli=0&opxss=0&opdirt=0&opmemc=0&ophttprs=0&opbyp=0&opginf=0&opdos=0&orderby=3&cvssscoremin=0
Here is an example of the returned JSON:
{
"cve_id": "CVE-2016-4951",
"cwe_id": "0",
"summary": "The tipc_nl_publ_dump function in net/tipc/socket.c in the Linux kernel through 4.6 does not verify socket existence, which allows local users to cause a denial of service (NULL pointer dereference and system crash) or possibly have unspecified other impact via a dumpit operation.",
"cvss_score": "7.2",
"exploit_count": "0",
"publish_date": "2016-05-23",
"update_date": "2016-05-24",
"url": "http://www.cvedetails.com/cve/CVE-2016-4951/"
}
Is there any way to get vulnerabilities by component name (new and old)?
Red Hat maintains a CVE API that can be searched by component, e.g.:
https://access.redhat.com/labs/securitydataapi/cve.json?package=kernel&after=2017-02-17
Documentation for the API can be found here.
Note that the data is probably limited to components in Red Hat products.
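For example, a minimal sketch of querying that endpoint from Python; the field names ("CVE", "severity") are from the Red Hat security data API's response format:
import requests

resp = requests.get(
    "https://access.redhat.com/labs/securitydataapi/cve.json",
    params={"package": "kernel", "after": "2017-02-17"},
)
resp.raise_for_status()
for cve in resp.json():          # the endpoint returns a JSON list of CVEs
    print(cve["CVE"], cve.get("severity"))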
An alternative to vendor-specific CVE APIs is CIRCL's Common Vulnerabilities and Exposures Web Interface and API.
Its web interface can be found at https://cve.circl.lu/ and the API documentation at https://cve.circl.lu/api/
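A short sketch of using it from Python; the endpoint paths are taken from the linked API documentation, and the vendor/product pair is just an example:
import requests

# Look up a single CVE by id...
cve = requests.get("https://cve.circl.lu/api/cve/CVE-2016-4951").json()
print(cve["summary"])

# ...or search by CPE vendor and product name.
results = requests.get("https://cve.circl.lu/api/search/linux/linux_kernel").json()
# 'results' holds the matching CVE entries for that vendor/product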
Bit late for a proper reply, but maybe it'll still be useful. A while back I was a bit frustrated with the options available, so I built https://cveapi.com.
You can call GET https://v1.cveapi.com/<CVE-ID>.json and get the NIST JSON response for that CVE back.
It doesn't require any auth and is free to use.
I was wondering if Wirecloud offers complete support for object storage with the FI-WARE Testbed instead of Fi-lab. I have successfully integrated Wirecloud with Fi-lab and have developed a set of widgets that are able to upload/download files to specific containers there with success. However, the same widgets do not seem to work in the Testbed, as I get an error 500 when trying to retrieve the auth tokens (also with the well-known object-storage-test widget), containing the following response:
SyntaxError: Unexpected token
at Object.parse (native)
at create (/home/fiware/fi-ware-keystone-proxy/controllers/Token.js:343:25)
at callbacks (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:164:37)
at param (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:138:11)
at pass (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:145:5)
at Router._dispatch (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:173:5)
at Object.router (/home/fiware/fi-ware-keystone-proxy/node_modules/express/lib/router/index.js:33:10)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
at Object.handle (/home/fiware/fi-ware-keystone-proxy/server.js:31:5)
at next (/home/fiware/fi-ware-keystone-proxy/node_modules/express/node_modules/connect/lib/proto.js:195:15)
I noticed that the token provided in the beginning (to start the transaction) is
token: Object
id: "%fiware_token%"
Any idea regarding what might have gone wrong?
The WireCloud instance available at FI-WARE's Testbed is always the latest stable version, while the FI-LAB instance is currently outdated; we're working on updating it as soon as possible. One of the things that changed between those versions is the Object Storage API, so sorry for the inconvenience: you will not be able to use widgets/operators that use the Object Storage in both environments.
Anyway, the response you were obtaining seems to indicate that the object storage instance you are accessing is not working properly, so you will need to send an email to one of the available mailing lists for help (fiware-testbed-help or fiware-lab-help) describing what is happening to you (remember to include your account information, as there are several object storage nodes and some can be up while others are down).
Regarding the strange request body:
"token": {
id: "%fiware_token%"
}
This behaviour is normal, as the WireCloud client code has no direct access to the user's IdM token. It's WireCloud's proxy that replaces the %fiware_token% pattern with the correct value.