com.webMethods.jms.naming.keystore vs com.webMethods.jms.ssl.keystore - webmethods

We are trying to secure and encrypt the communication between our application and webMethods using CA-signed certificates. During analysis, we found the parameters below, which need to be set as system properties in JBoss 6.4.
Could you please explain the difference between these parameters and which one should be used for the configuration?
com.webMethods.jms.naming.keystore
com.webMethods.jms.ssl.keystore

JNDI (which is used to look up arbitrary objects like JMS connection factories, JDBC connection factories, EJBs, etc.) is 100% independent from JMS (which is a messaging API). The JNDI specification and the JMS specification are completely different and are implemented in different ways. Therefore, JNDI and JMS each need their own way to secure their communication. The com.webMethods.jms.naming.keystore property is used for securing JNDI communication, and com.webMethods.jms.ssl.keystore is used for securing JMS communication.
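A minimal sketch of how the two properties might be set, assuming both the JNDI lookups and the JMS traffic to the Broker are to run over SSL. In JBoss EAP 6.4 the same values would typically be passed as -D arguments in standalone.conf; the keystore path and class name below are placeholders, and any companion properties (truststore, passwords, keystore type) should be taken from the webMethods Broker documentation for your version.

public class WebMethodsSslProperties {
    public static void main(String[] args) {
        // Secures the JNDI side: lookups of connection factories, destinations, etc.
        System.setProperty("com.webMethods.jms.naming.keystore", "/opt/certs/client-keystore.jks");

        // Secures the JMS side: the actual message traffic between the client and the Broker.
        System.setProperty("com.webMethods.jms.ssl.keystore", "/opt/certs/client-keystore.jks");

        // ...create the InitialContext, look up the ConnectionFactory and connect as usual.
    }
}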

Related

Fiware: How to restrict user access to specific entity for Orion Context Broker API using keystone & keypass

First of all, I'm using the Telefonica implementations of the Identity Manager, Authorization PDP and PEP Proxy, instead of the FIWARE reference implementations, which are Keyrock, AuthZForce and Wilma PEP Proxy. The source code and reference documentation of each component can be found in the following GitHub repos:
Telefonica keystone-spassword:
GitHub /telefonicaid/fiware-keystone-spassword
Telefonica keypass:
GitHub /telefonicaid/fiware-keypass
Telefonica PEP-Proxy:
GitHub /telefonicaid/fiware-pep-steelskin
Besides, I'm working with my own in-house installation of the components, not FIWARE Lab. In addition to the security components, I have an IoT Agent-UL instance and an Orion Context Broker instance.
Starting from that configuration, I've created a domain in Keystone (Fiware-Service) and a project inside the domain (Fiware-ServicePath). I have one device connected to the platform, sending data to the IoT Agent behind the PEP Proxy. The whole device message is represented as a single entity in the Orion Context Broker.
So, the question is:
How can I restrict a specific Keystone user to access only the entity associated with this device, at the level of the Orion Context Broker API?
I know that I can allow or deny user access to specific APIs via Keystone roles and XACML policies, but that implies I would have to create one policy per user-device pair.
I could use some help with this, to know if I'm on the right track.
I do not think access control can be enforced for Orion without the Security GEs. Each GE has a specific purpose, and access control is not one of Orion's purposes.
As stated in the Security Considerations from Orion documentation:
Orion doesn't provide "native" authentication nor any authorization mechanisms to enforce access control. However, authentication/authorization can be achieved using the access control framework provided by FIWARE GEs.
Also, there is something related in another link:
Orion itself has no security. It's designed to be run behind a proxy server which provides security and access control. Used within the FIWARE Lab, they run another service built on Node.js, "PEP Proxy Wilma", in front of it. Wilma checks that you have obtained a token from the FIWARE Lab and put it in the headers.
Besides, the link below can endorse my opinion about Orion and access control:
Fiware-Orion: Access control on a per subscription basis
My opinion is that you are on the right track using the other security components.
About "create one Policy per User-Device pair" as you mention, it might be better to think in terms of "group policies" instead.

MySQL JDBC SSL Client Certificates without keystore (SSLSocketFactory)

Google Cloud SQL (MySQL) supports SSL Client Certificates for securing connections. I have gotten these working with the mysql CLI client and with MySQL-python without any drama, but Java's JDBC driver model seems to be significantly less flexible. I was able to get it working by importing the necessary keys and certificates into my keystore, but it does not appear that I can easily provide a specific certificate to use for a particular connection at runtime.
For my use case, storing all the certificates in a single keystore per JVM won't work; we have a multi-tenant environment with dozens of isolated client certificates. The PostgreSQL JDBC documentation offhandedly mentions it should be possible by implementing your own SSLSocketFactory (source):
Information on how to actually implement such a class is beyond the scope of this documentation. Places to look for help are the JSSE Reference Guide and the source to the NonValidatingFactory provided by the JDBC driver.
The Java SSL API is not very well known to the JDBC driver developers and we would be interested in any interesting and generally useful extensions that you have implemented using this mechanism. Specifically it would be nice to be able to provide client certificates to be validated by the server.
The only implementation I have seen is GoogleCloudPlatform/cloud-sql-mysql-socket-factory, which queries the Google Cloud APIs on the fly to retrieve ephemeral SSL client certificates. This is what I'm starting with, but I'm disheartened by the fact that some basic socket properties (notably connectTimeout and socketTimeout) are not currently implemented.
Are there other SSLSocketFactory implementations I should be aware of? It seems like a generic implementation would be useful for multiple JDBC drivers (MySQL Connector/J, PostgreSQL pgJDBC and Oracle offer some client cert support). That way JDBC connection strings could support client certificates as standardized parameters (base64 encoded?) just as usernames and passwords are currently included.
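For reference, here is a minimal sketch of the kind of delegating SSLSocketFactory the PostgreSQL documentation hints at, assuming each tenant's client key and certificate are packaged in their own PKCS#12 file (the class name, path and password handling are placeholders). How the driver is told to use it is driver-specific: pgJDBC exposes an sslfactory connection parameter, for example, while other drivers have their own hooks, so a small per-driver adapter would still be needed.

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import java.io.FileInputStream;
import java.io.IOException;
import java.net.InetAddress;
import java.net.Socket;
import java.security.KeyStore;

public class TenantSslSocketFactory extends SSLSocketFactory {

    private final SSLSocketFactory delegate;

    public TenantSslSocketFactory(String pkcs12Path, char[] password) throws Exception {
        // Load only this tenant's client certificate and private key,
        // rather than a single JVM-wide keystore.
        KeyStore clientStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(pkcs12Path)) {
            clientStore.load(in, password);
        }

        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(clientStore, password);

        // Use the JVM's default trust store to validate the server certificate.
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), null, null);
        this.delegate = ctx.getSocketFactory();
    }

    @Override public String[] getDefaultCipherSuites() { return delegate.getDefaultCipherSuites(); }
    @Override public String[] getSupportedCipherSuites() { return delegate.getSupportedCipherSuites(); }

    @Override public Socket createSocket(Socket s, String host, int port, boolean autoClose) throws IOException {
        return delegate.createSocket(s, host, port, autoClose);
    }
    @Override public Socket createSocket(String host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }
    @Override public Socket createSocket(String host, int port, InetAddress localHost, int localPort) throws IOException {
        return delegate.createSocket(host, port, localHost, localPort);
    }
    @Override public Socket createSocket(InetAddress host, int port) throws IOException {
        return delegate.createSocket(host, port);
    }
    @Override public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort) throws IOException {
        return delegate.createSocket(address, port, localAddress, localPort);
    }
}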

How to secure different fiware GE in the same virtual machine?

I'm deploying some Generic Enablers (Orion, Cygnus, Proton-Cep, Wirecloud) in the same VM using Docker.
The FIWARE documentation has an example of a Wilma proxy securing an instance of Orion and obtaining authorization through the IdM.
Wilma configurations do not seem to support different redirections.
I need to secure all of these services, which must be accessible from outside the server. My question is: is it possible to use a single Wilma instance to secure all the Generic Enablers, or should I deploy one instance of Wilma for each service provided?

Cassandra configuration - is native transport necessary on all cluster nodes?

Does Cassandra require both of the following options to be on?
start_native_transport: true
start_rpc: true
Are these required on all Cassandra nodes?
As far as I can tell, the purpose of each is thus:
* native transport - for servicing CQL clients
* rpc - for cluster inter node communication
Are these correct?
If they are, I guess I should enable rpc on all nodes, and perhaps native transport on only one node? Is this correct?
The native transport is the CQL native protocol (as opposed to the Thrift protocol) and is the way all modern Cassandra drivers communicate with the server. This covers all reads, writes, schema changes, etc.
Hence you cannot set start_native_transport to false.
Note that start_rpc controls the legacy Thrift client interface, not inter-node communication; nodes talk to each other over the storage port regardless of either setting. So unless you still have Thrift-based clients, you can leave start_rpc disabled and enable the native transport on every node that should serve CQL clients.
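As a small illustration of why the native transport matters, here is a minimal sketch of a client connection, assuming the DataStax Java driver 3.x and a node with start_native_transport: true listening on the default CQL port 9042 (the contact point is a placeholder).

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Session;

public class NativeTransportExample {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")   // a node with the native transport enabled
                .withPort(9042)                // native_transport_port
                .build();
             Session session = cluster.connect()) {

            // All CQL reads, writes and schema changes go over the native transport.
            ResultSet rs = session.execute("SELECT release_version FROM system.local");
            System.out.println("Cassandra version: " + rs.one().getString("release_version"));
        }
    }
}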

Difference between RPC (remote procedure call) and web services

I want to know the basic difference between RPC and web services, and which should be preferred.
I want to choose between JSON-RPC and JAX-WS.
Web service:
Web services are typically application programming interfaces (API) or Web APIs that are accessed via Hypertext Transfer Protocol (HTTP) and executed on a remote system hosting the requested services. Web services tend to fall into one of two camps: big Web services and RESTful Web services.
RPC:
Remote Procedure Calls. RPC enables a system to make calls to programs such as NFS across the network transparently, enabling each system to interpret the calls as if they were local. In this case, it would make exported filesystems appear as though they were local.
Which one is preferable:
RPC would be used only for internal/in-house servers where you have influence over both the client and server code. The most frequent case is to forward services which only exist on a few machines. For example, to minimize the number of licenses or the support overhead needed by forwarding requests to a central machine, or to provide access to software that is specific to another operating system (e.g., Linux programs that need to use an old program only available on SGIs). The other case is to reduce startup costs.
We can identify two major classes of Web services, REST-compliant Web services, in which the primary purpose of the service is to manipulate XML representations of Web resources using a uniform set of "stateless" operations; and arbitrary Web services, in which the service may expose an arbitrary set of operations
I hope this will be helpful to you.
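Since the question mentions JAX-WS specifically, here is a minimal sketch of a JAX-WS ("big" web service) endpoint, using only the javax.jws / javax.xml.ws APIs bundled with Java SE 8 (in newer JDKs JAX-WS is a separate dependency); the service class, method and URL are placeholders.

import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

@WebService
public class HelloService {

    @WebMethod
    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) {
        // Publishes the SOAP endpoint and exposes the generated WSDL at .../hello?wsdl
        Endpoint.publish("http://localhost:8080/hello", new HelloService());
    }
}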