Is the FIWARE STH-Comet service compatible with the NGSI-LD (linked data) format?

I'm using FIWARE's Orion-LD service (https://github.com/FIWARE/context.Orion-LD) in a Docker environment to store and manage data in the NGSI-LD format. I want to post a subscription to Orion-LD so that FIWARE STH-Comet is notified of new values, but I'm not sure whether the STH-Comet service is also compatible with the NGSI-LD format. Can anyone help me with that?

STH-Comet does not support NGSI-LD at this stage.
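If you fall back to an NGSI-v2 Orion broker, the kind of subscription STH-Comet expects would look roughly like the sketch below. The hostnames, entity type, attribute name and port 8666 are assumptions based on a typical docker-compose setup, and the attrsFormat value may differ depending on your STH-Comet version:
curl -iX POST 'http://orion:1026/v2/subscriptions' \
  -H 'Content-Type: application/json' \
  -d '{
    "description": "Notify STH-Comet of temperature changes",
    "subject": {
      "entities": [{"idPattern": ".*", "type": "Room"}],
      "condition": {"attrs": ["temperature"]}
    },
    "notification": {
      "http": {"url": "http://sth-comet:8666/notify"},
      "attrs": ["temperature"],
      "attrsFormat": "legacy"
    },
    "throttling": 1
  }'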

Related

Postgraphile "Only `POST` requests are allowed." error

I have Postgres running locally. I can access the database locally with psql postgres:///reviewapp and with \dt I can see a few tables.
If I run npx postgraphile -c "postgres:///reviewapp" I don't get any errors in the terminal:
PostGraphile v4.12.4 server listening on port 5000 🚀
‣ GraphQL API: http://localhost:5000/graphql
‣ GraphiQL GUI/IDE: http://localhost:5000/graphiql (RECOMMENDATION: add '--enhance-graphiql')
‣ Postgres connection: postgres:///reviewapp
‣ Postgres schema(s): public
‣ Documentation: https://graphile.org/postgraphile/introduction/
‣ Node.js version: v14.15.5 on darwin x64
‣ Join Mark in supporting PostGraphile development: https://graphile.org/sponsor/
However, when I go to http://localhost:5000/graphql I get an error on the screen:
{"errors":[{"message":"Only POST requests are allowed."}]}
You're visiting the /graphql endpoint, which speaks GraphQL (over POST requests), but you're sending it a web request (over GET). Instead, use the /graphiql endpoint to view the GraphiQL GraphQL IDE; this endpoint speaks web, and will give you a nice interface for communicating with the /graphql endpoint. See this output from the PostGraphile command:
‣ GraphQL API: http://localhost:5000/graphql
‣ GraphiQL GUI/IDE: http://localhost:5000/graphiql (RECOMMENDATION: add '--enhance-graphiql')
I recommend you add the --enhance-graphiql option to the PostGraphile CLI to get an even more powerful IDE in the browser.
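If you do want to talk to /graphql directly (for example from a script), send a POST request with a JSON body. A minimal sketch with curl, using the standard __typename introspection field as a placeholder query:
curl -X POST http://localhost:5000/graphql \
  -H 'Content-Type: application/json' \
  -d '{"query": "{ __typename }"}'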
This is because when you type an address into your browser's address bar, a GET request is sent, while your PostGraphile instance only accepts POST requests on /graphql. So this is the problem: you either avoid sending GET requests, or you ensure that PostGraphile accepts GET requests as well.
A simple workaround would be to create a small page that acts as a proxy and, on load, sends a POST request to http://localhost:5000/graphql.
There is a GitHub issue where a middleware is suggested; read it for further information: https://github.com/graphile/postgraphile/issues/442

Embedding Camunda into an existing Java application

I have pulled the latest Camunda image and am running Camunda in its own Docker container. I have a DMN uploaded to Camunda Cockpit and I am able to make REST calls to get data from the decision table that I have uploaded there.
However, I do not want to depend on Camunda running independently. I have an existing, large application (a microservice running in its own Docker container) and I want to embed Camunda into that microservice (which uses OSGi, Java, Docker, Maven, etc.).
Can someone please guide me with this?
For a Spring Boot microservice you can add the required starter and config files to your deployment and should be good to go. See e.g. https://start.camunda.com/ to get everything you need.
You can then access Camunda via the Java API or via REST (if the REST starter was included).
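As a sketch, the Maven dependency for the REST starter would look something like this (the groupId/artifactId follow the Camunda docs; the version property is a placeholder you would pin to your Camunda release):
<dependency>
  <groupId>org.camunda.bpm.springboot</groupId>
  <artifactId>camunda-bpm-spring-boot-starter-rest</artifactId>
  <version>${camunda.version}</version>
</dependency>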
If you do not run in a Spring Boot environment, the way of bootstrapping Camunda may differ. In plain Java, without any container usage, it would be along these lines:
// configure and build an embedded engine backed by a file-based H2 database
ProcessEngine processEngine = ProcessEngineConfiguration
    .createStandaloneProcessEngineConfiguration()
    .setJdbcUrl("jdbc:h2:./camunda-db/process-engine;DB_CLOSE_DELAY=1000")
    .setDatabaseSchemaUpdate("true")
    .setJobExecutorActivate(true)
    .buildProcessEngine();

// deploy a process definition from the classpath
processEngine.getRepositoryService()
    .createDeployment()
    .addClasspathResource("myProcess.bpmn")
    .deploy();

// start a process instance by its key
ProcessInstance pi = processEngine.getRuntimeService()
    .startProcessInstanceByKey("myProcess");
In a standard Spring environment you would bootstrap the engine by loading the context:
ClassPathXmlApplicationContext applicationContext =
    new ClassPathXmlApplicationContext("/spring-context.xml");

ProcessEngine processEngine = (ProcessEngine) applicationContext.getBean("processEngine");

processEngine.getRepositoryService()
    .createDeployment()
    .addClasspathResource("myProcess.bpmn")
    .deploy();
Also see:
https://docs.camunda.org/manual/latest/user-guide/process-engine/process-engine-bootstrapping/
https://docs.camunda.org/get-started/quick-start/install/
Update based on comment:
The Camunda OSGI support is described here:
https://github.com/camunda/camunda-bpm-platform-osgi
You would need to upgrade the project to a more recent version, which is likely not a huge effort, as the versions have remained compatible.
(I would also encourage you to consider migrating the microservice to Spring Boot instead, for reasons of complexity, available knowledge in the market, support lifetime, etc.)

Keycloak logging to JSON format message field

I have been trying to set up Keycloak logging so that it can be scraped by Fluentd and used in Elasticsearch. So far I have used the provided CLI string in my Helm values file:
cli:
  # Custom CLI script
  custom: |
    /subsystem=logging/json-formatter=json:add(exception-output-type=formatted, pretty-print=true, meta-data={label=value})
    /subsystem=logging/console-handler=CONSOLE:write-attribute(name=named-formatter, value=json)
However, the logs that are generated are completely JSON apart from the core of the log: the message field. Currently the message field is provided as comma-separated key-value pairs. Is there any way to tell Keycloak, JBoss or WildFly that it needs to provide the message in JSON too? This would allow me to search the data efficiently in Elasticsearch.
Check this project on GitHub: keycloak_jsonlog_eventlistener: Outputs Keycloak events as JSON into the server log.
Keycloak JSON Log Eventlistener
Primarily written for the JBoss Keycloak Docker image, it will output Keycloak events as JSON into the Keycloak server log.
The idea is to parse the logs once they get to Logstash via journalbeat.
Tested with Keycloak version 8.0.1

Configuring Orion Context Broker, Wilma PEP Proxy and Keyrock IdM

My name is Joe and I'm in a traineeship on IoT security and Identity Management. In order to develop solutions for a project I've been assigned, I have to configure and integrate Orion, Wilma and Keyrock (and potentially a PDP, but that comes later). I've found some tutorials and official FIWARE guides, but I'm seriously struggling with the configuration.
I've already learned the "theory" behind it: I'm aware of the FIWARE security architecture, but the problem is the practice.
As a first approach to the problem, I thought that trying to get a token with a token request would be a good way to start, as follows:
curl -X POST --data "grant_type=password&username=user&password=pwd" \
  http://192.168.100.241:5000/oauth2/token \
  --header "'Host':'192.168.100.241','Content-Type':'application/x-www-form-urlencoded','Authorization':'Basic base64(client_id+":"+client_secret)'"
where 192.168.100.241 is the IP address of the host where Keystone runs.
The response to this is the following (the Italian message translates to "Unable to find the resource"):
{
    "error": {
        "message": "Impossibile trovare la risorsa.",
        "code": 404,
        "title": "Not Found"
    }
}
Now, how can this problem be solved? Perhaps I'm missing something, or I'm unaware of something.
And later, how can the PEP Proxy enforce policies on Orion requests (or receive them directly and, if allowed, forward them to Orion)?
Could you help me? I'm really stuck.
Thank you :-)
You can see how to integrate Orion Context Broker, Keyrock IdM and Wilma PEP Proxy in the following link:
https://www.slideshare.net/daltoncezane/integrating-fiware-orion-keyrock-and-wilma
I had the same doubts as you. I hope it helps.
Include client_id and client_secret in the request body along with the grant_type:
grant_type=password&username=${_user}&password=${_pass}&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}
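Combined with the endpoint from the question, the request would then look roughly like this (the IP, port and shell variables are placeholders taken from the question and the line above; adjust them to your deployment):
curl -X POST http://192.168.100.241:5000/oauth2/token \
  --header "Content-Type: application/x-www-form-urlencoded" \
  --data "grant_type=password&username=${_user}&password=${_pass}&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}"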

Not able to create accounts for Private Ethereum Blockchain using Geth and Web3 API

I am unable to create accounts for Private Ethereum Blockchain using Geth and Web3 API.
personal.newAccount(passwd) is not working for me. Please explain how to create an account using the above command.
Also, I am unable to install "ethereumjs-accounts".
If you search the internet for why the "geth json rpc personal api" is not working, you will find an excellent answer on Ethereum Stack Exchange, which I'd like to quote in full:
First, a note on safety:
You should not make the personal API available over RPC
If you are on a local, trusted machine, you should use IPC instead of RPC. Otherwise, anyone who can connect to your node via RPC can try to brute-force your passwords and steal your Ether.
All administrative APIs are available by default over IPC, so no need to use any flags with geth
To connect via IPC:
Install my library:
npm install web3_extended
var web3_extended = require('web3_extended');

var options = {
    host: '/home/user/.ethereum/geth.ipc',
    ipc: true,
    personal: true,
    admin: false,
    debug: false
};

var web3 = web3_extended.create(options);

web3.personal.newAccount("password", function (error, result) {
    if (!error) {
        console.log(result);
    }
});
Replace the host variable with the proper path for your system.
Note: All requests via IPC must be asynchronous.
Some Alternatives:
I don't know why you want to create new accounts via web3, but it's likely not the best way to do what you're trying to achieve. It is much safer and more modular to use a hooked web3 provider with a client-side light wallet or to simply use the Mist browser which handles all accounts for you.
Now for the technique (don't do this)
You need to enable the personal API over RPC. Do this by starting geth with
geth --rpc --rpcapi "db,eth,net,web3,personal"
Then you can use the personal_newAccount method via RPC. It's not implemented in web3.js, so you need to manually issue the RPC request. For example with curl:
curl -X POST --data '{"jsonrpc":"2.0","method":"personal_newAccount","params":["password"],"id":1}' localhost:8545
creates a new account with password password and returns the address:
{"id":1,"jsonrpc":"2.0","result":"0x05ca0ddf7e7506672f745b2b567f1d33b7b55f4f"}
There is some basic documentation
Alternatively:
Use the unofficial extended web3.js.
This allows you to use the personal, admin and miner APIs via a standard web3.js interface.
Published on Feb 16 at 8:34 and released under the terms of CC BY-SA 3.0 by Tjaden Hess.
The command must be personal.newAccount()
The console then asks for a passphrase; enter your desired password, and it will ask again for confirmation.
Output in the form Address("0x----------------------------") will appear. This is one account/address for your private network.