Pub/Sub communication between TwinCAT 3 and Azure

I am new to this field. My situation is: I have a Beckhoff PLC running TwinCAT 3. I am using OPC UA to upload data to an OPC UA server and then send the data to the cloud (an Azure SQL database) through Azure IoT Hub. I want to use pub/sub communication. As a next step, I will analyze the data with Power BI and display it on several Power BI mobile clients with different types of information. The problem is that I am a bit confused about how pub/sub communication is applied in this setup. I have read about MQTT and AMQP, but do I need to write code to be able to apply pub/sub communication? Thanks!

Azure IoT Hub is a Pub/Sub service. You can subscribe multiple stream processors to the data that hits the hub, and each one will see the entire stream. These stream processors can be implemented in custom code, perhaps with an Azure Function, but also with Logic Apps or Azure Stream Analytics.
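To illustrate the subscriber side, here is a hedged Python sketch of one stream processor reading the device-to-cloud stream through the IoT Hub's built-in Event Hub-compatible endpoint using the azure-eventhub SDK. The connection string, hub name and consumer group are placeholders; a real processor would also checkpoint its progress.

# Rough sketch: one stream processor subscribed to the IoT Hub telemetry stream
# via the Event Hub-compatible endpoint. Connection string and names are placeholders.
from azure.eventhub import EventHubConsumerClient

CONNECTION_STR = "<Event Hub-compatible connection string from the IoT Hub portal>"
EVENTHUB_NAME = "<Event Hub-compatible name>"

def on_event(partition_context, event):
    # Each consumer group sees the full stream, so several processors can run in parallel.
    print(partition_context.partition_id, event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    CONNECTION_STR,
    consumer_group="$Default",   # give each stream processor its own consumer group
    eventhub_name=EVENTHUB_NAME,
)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # "-1" = read from the start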

You can set up OPC UA servers at both the PLC and the cloud. Each can subscribe to objects on the other for two-way exchange. Otherwise, make the OPC UA objects available on the PLC, then subscribe to them from your cloud service.
Of course you will need to enable all the necessary ports and handle certificate exchange.
If you are using the Beckhoff OPC UA server, you annotate the required variables / structs with attributes. See the documentation.
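Once the variables are exposed by the PLC's OPC UA server, a cloud-side service can subscribe to them with any generic OPC UA client. As a rough, non-Beckhoff-specific illustration, here is a sketch using the python-opcua library; the endpoint URL and node ID are placeholders and will differ in your project (certificates are omitted).

# Rough sketch: subscribe to a PLC variable exposed over OPC UA (python-opcua library).
# Endpoint URL and node ID below are placeholders.
import time
from opcua import Client

class ChangeHandler:
    def datachange_notification(self, node, val, data):
        print("value changed:", node, val)

client = Client("opc.tcp://192.168.1.10:4840")  # placeholder PLC endpoint
client.connect()
try:
    node = client.get_node("ns=4;s=MAIN.rTemperature")  # placeholder node id
    sub = client.create_subscription(500, ChangeHandler())  # 500 ms publishing interval
    sub.subscribe_data_change(node)
    time.sleep(60)  # keep the subscription alive for the demo
finally:
    client.disconnect()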
If you want to use MQTT instead, you will need to write some code using the MQTT library for TwinCAT. You will also need to get your broker set up and, again, handle security. There are decent examples for the main providers in the Beckhoff documentation for the MQTT library.
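The PLC-side code would use Beckhoff's TwinCAT MQTT function blocks, which are not reproduced here. Purely to illustrate the publish/subscribe pattern against a broker, here is a hedged Python sketch using paho-mqtt; the broker host, port, topic and credentials are placeholders.

# Generic MQTT publish/subscribe sketch with paho-mqtt (placeholder broker/topic/credentials).
# Uses the paho-mqtt 1.x style constructor; 2.x additionally expects a CallbackAPIVersion argument.
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"    # placeholder
TOPIC = "plant/line1/telemetry"  # placeholder

def on_message(client, userdata, msg):
    print("received", msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.tls_set()                        # enable TLS; certificate handling depends on your broker
client.username_pw_set("user", "pass")  # placeholder credentials
client.connect(BROKER, 8883)
client.subscribe(TOPIC)
client.publish(TOPIC, json.dumps({"temperature": 21.5}))
client.loop_forever()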

Related

MQTT measurements export to JSON without connection to a broker

In my setup an IoT device is connected to an MQTT broker and publishes measurements. We duplicate this traffic to another PC where we want to perform analytics on the MQTT data. We cannot create a new client to this broker and subscribe to the topics; we just want to implement a sort of sniffer for these messages and extract the measurements as JSON.
I have experimented with scapy and various Python scripts but haven't succeeded. For example, it seems that the paho-mqtt library for Python requires a connection to the actual broker, but as I said, this is not an option. Any idea how to approach the problem?
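One direction that matches the "sniffer" idea is scapy's contrib MQTT layer, which can dissect PUBLISH packets from the mirrored traffic without ever connecting to the broker. A hedged sketch follows; the interface name, port and the topic/value field names are assumptions that may vary by scapy version, dissection is per-packet (no TCP reassembly), and TLS-encrypted traffic cannot be read this way.

# Hedged sketch: passively dissect MQTT PUBLISH packets with scapy's contrib MQTT layer.
# Assumes the mirrored traffic arrives on eth0 and the broker listens on 1883 (plain MQTT, not TLS).
import json
from scapy.all import sniff
from scapy.contrib.mqtt import MQTTPublish

def handle(pkt):
    if pkt.haslayer(MQTTPublish):
        pub = pkt[MQTTPublish]
        topic = pub.topic.decode(errors="replace")
        try:
            measurement = json.loads(pub.value)  # assumes the payload is already JSON
        except (ValueError, TypeError):
            measurement = pub.value
        print(topic, measurement)

sniff(iface="eth0", filter="tcp port 1883", prn=handle, store=False)  # needs root privileges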

Sending a message from Azure Service bus to AWS SQS

We have a requirement to implement Azure Service Bus as the integration point for various applications (including apps hosted in AWS). Each application will have its own SQS queue. So the idea is to have Azure Service Bus with topics and subscription filters to route messages to each SQS queue accordingly. However, I am not sure how we can pick messages from a subscription and push them to AWS SQS. I am not able to see any solution for this.
These are two inherently different messaging services, and you will either need to find a third-party connector/bridge between the two or create your own. This would be a process that retrieves messages from one broker and forwards them to the other.
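As a hedged sketch of such a bridge process, using the azure-servicebus and boto3 Python SDKs; the connection string, topic/subscription names and queue URL are placeholders, and a production bridge would add error handling, dead-lettering and batching.

# Hedged sketch of a bridge: pull from an Azure Service Bus subscription, forward to AWS SQS.
import boto3
from azure.servicebus import ServiceBusClient

SB_CONN = "<service-bus-connection-string>"                                # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"    # placeholder

sqs = boto3.client("sqs")
with ServiceBusClient.from_connection_string(SB_CONN) as sb:
    receiver = sb.get_subscription_receiver(topic_name="orders", subscription_name="aws-app")
    with receiver:
        for msg in receiver:
            # Forward the body, then settle the Service Bus message only after SQS accepted it.
            sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=str(msg))
            receiver.complete_message(msg)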
When it comes to a third party, there's an example you could have a look at: NServiceBus has a community extension called Router. The Router allows you to achieve exactly what you're looking for.
Disclaimer: I contribute and work on NServiceBus

Where to store key-value pairs for runtime retrieval from within Cloud Function?

In a Cloud Function I need to retrieve a bunch of key-value pairs to process. Right now I'm storing them as a JSON file in Cloud Storage.
Is there any better way?
Env variables don't suit because (a) there are too many KV pairs, (b) the same Cloud Function may need different sets of KV pairs depending on the incoming params, and (c) those KV pairs can change over time.
BigQuery seems to be overkill, also given that some KV pairs have a few levels of nesting.
Thanks!
You can use Memorystore, but it's not persistent; see the FAQ.
Cloud Memorystore for Redis provides a fully managed in-memory data store service built on scalable, secure, and highly available infrastructure managed by Google. Use Cloud Memorystore to build application caches that provide sub-millisecond data access. Cloud Memorystore is compatible with the Redis protocol, allowing easy migration with zero code changes.
Serverless VPC Access enables you to connect from the Cloud Functions environment directly to your Memorystore instances.
Note: Some resources, such as Memorystore instances, require connections to come from the same region as the resource.
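With a Serverless VPC Access connector in place, reading the key-value pairs from a Python Cloud Function could look roughly like this; the instance IP, key layout and use of the redis-py client are assumptions for the sketch, not part of the original answer.

# Rough sketch: read key-value pairs from Memorystore (Redis) inside an HTTP Cloud Function.
# Requires a Serverless VPC Access connector; the instance IP and key names are placeholders.
import json
import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # Memorystore instance's internal IP (placeholder)

def handler(request):
    # One Redis hash per set of KV pairs, selected by an incoming parameter.
    profile = request.args.get("profile", "default")
    kv = r.hgetall(f"config:{profile}")
    return json.dumps({k.decode(): v.decode() for k, v in kv.items()})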
Update
For persistent storage you could use Firestore.
See the tutorial "Use Cloud Firestore with Cloud Functions".
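A similarly hedged sketch of the Firestore option inside a Python Cloud Function; the collection and document names are placeholders, and nested KV pairs map naturally onto Firestore documents.

# Rough sketch: fetch a set of key-value pairs from Firestore inside an HTTP Cloud Function.
import json
from google.cloud import firestore

db = firestore.Client()  # created once per instance, reused across invocations

def handler(request):
    profile = request.args.get("profile", "default")
    snapshot = db.collection("config").document(profile).get()
    kv = snapshot.to_dict() or {}
    return json.dumps(kv)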

Data at rest encryption

I've been looking for info on how/if Forge encrypts data at rest. We have some customers with sensitive models who are asking the question.
Is data at rest encrypted?
If so, what method of encryption is used, and is it on by default?
If not, is this a planned feature in the future?
The Forge REST API uses HTTPS, which means the SSL/TLS protocol is used to transfer data between the client and the server (both ways). SSL encrypts the data for you automatically using the 'trusted' certificate. Here is a complete article on the protocol if you are interested in reading more about it.
Edited based on comments below: if we are talking about storage, all the data stored on the Forge servers is encrypted with your developer keys. Forge encrypts your data at the object level as it writes it to disk and decrypts it for you when you access it.

designing an agnostic configuration service

Just for fun, I'm designing a few web applications using a microservices architecture. I'm trying to determine the best way to do configuration management, and I'm worried that my approach for configuration may have some enormous pitfalls and/or something better exists.
To frame the problem, let's say I have an authentication service written in C++, an identity service written in Rust, an analytics service written in Haskell, some middle tier written in Scala, and a frontend written in JavaScript. There would also be the corresponding identity DB, auth DB, analytics DB (maybe a Redis cache for sessions), etc. I'm deploying all of these apps using Docker Swarm.
Whenever one of these apps is deployed, it necessarily has to discover all the other applications. Since I use Docker Swarm, discovery isn't an issue as long as all the nodes share the requisite overlay network.
However, each application still needs the upstream services' host_addr, maybe a port, the credentials for some DB or sealed service, etc.
I know Docker has secrets, which enable apps to read configuration from the container, but I would then need to write a configuration parser in each language for each service. This seems messy.
What I would rather do is have a configuration service which maintains knowledge about how to configure all the other services. So, each application would start with an RPC call designed to get its configuration at runtime. Something like:
int main() {
    AppConfig cfg = configClient.getConfiguration("APP_NAME");
    // do application things... and pass around cfg
    return 0;
}
The AppConfig would be defined in an IDL, so the class would be instantly available and language agnostic.
This seems like a good solution, but maybe I'm really missing the point here. Even at scale, tens of thousands of nodes can be served easily by a few configuration services, so I don't foresee any scaling issues. Again, it's just a hobby project, but I like thinking about the "what-if" scenarios :)
How are configuration schemes handled in microservices architecture? Does this seem like a reasonable approach? What do the major players like Facebook, Google, LinkedIn, AWS, etc... do?
Instead of building a custom configuration management solution, I would use one of these existing ones:
Spring Cloud Config
Spring Cloud Config is a config server written in Java offering an HTTP API to retrieve the configuration parameters of applications. Obviously, it ships with a Java client and a nice Spring integration, but since the server is just an HTTP API, you may use it with any language you like. The config server also features symmetric/asymmetric encryption of configuration values.
Configuration source: the externalized configuration is stored in a Git repository, which must be made accessible to the Spring Cloud Config server. The properties in that repository are then accessible through the HTTP API, so you can even consider implementing an update process for configuration properties.
Server location: Ideally, you make your config server accessible through a domain (e.g. config.myapp.io), so you can implement load-balancing and fail-over scenarios as needed. Also, all you need to provide to all your services then is just that exact location (and some authentication / decryption info).
Getting started: You may have a look at this getting started guide for centralized configuration on the Spring docs or read through this Quick Intro to Spring Cloud Config.
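Because the config server is just an HTTP API, a non-Java service can pull its properties with a plain HTTP call. Here is a rough Python sketch; the server URL, application name and profile are placeholders, while /{application}/{profile} is the standard config-server endpoint.

# Rough sketch: read properties from a Spring Cloud Config server over its HTTP API.
# URL, application name and profile are placeholders.
import requests

resp = requests.get("https://config.myapp.io/auth-service/production", timeout=5)
resp.raise_for_status()
payload = resp.json()

# The response lists property sources in priority order; flatten them into one dict,
# letting higher-priority sources override lower-priority ones.
config = {}
for source in reversed(payload.get("propertySources", [])):
    config.update(source.get("source", {}))

print(config.get("db.url"))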
Netflix Archaius
Netflix Archaius is part of the Netflix OSS stack and "is a Java library that provides APIs to access and utilize properties that can change dynamically at runtime".
While limited to Java (which does not quite match the context you have asked), the library is capable of using a database as source for the configuration properties.
confd
confd keeps local configuration files up-to-date using data stored in external sources (etcd, consul, dynamodb, redis, vault, ...). After configuration changes, confd restarts the application so that it can pick up the updated configuration file.
In the context of your question, this might be worthwhile to try as confd makes no assumption about the application and requires no special client code. Most languages and frameworks support file-based configuration so confd should be fairly easy to add on top of existing microservices that currently use env variables and did not anticipate decentralized configuration management.
I don't have a good solution for you, but I can point out some issues for you to consider.
First, your applications will presumably need some bootstrap configuration that enables them to locate and connect to the configuration service. For example, you mentioned defining the configuration service API with IDL for a middleware system that supports remote procedure calls. I assume you mean something like CORBA IDL. This means your bootstrap configuration will not be just the endpoint to connect to (specified perhaps as a stringified IOR or a path/in/naming/service), but also a configuration file for the CORBA product you are using. You can't download that CORBA product's configuration file from the configuration service, because that would be a chicken-and-egg situation. So, instead, you end up with having to manually maintain a separate copy of the CORBA product's configuration file for each application instance.
Second, your pseudo-code example suggests that you will use a single RPC invocation to retrieve all the configuration for an application in a single go. This coarse level of granularity is good. If, instead, an application used a separate RPC call to retrieve each name=value pair, then you could suffer major scalability problems. To illustrate, let's assume an application has 100 name=value pairs in its configuration, so it needs to make 100 RPC calls to retrieve its configuration data. I can foresee the following scalability problems:
Each RPC might take, say, 1 millisecond round-trip time if the application and the configuration server are on the same local area network, so your application's start-up time is 1 millisecond for each of 100 RPC calls = 100 milliseconds = 0.1 second. That might seem acceptable. But if you now deploy another application instance on another continent with, say, a 50 millisecond round-trip latency, then the start-up time for that new application instance will be 100 RPC calls at 50 milliseconds latency per call = 5 seconds. Ouch!
The need to make only 100 RPC calls to retrieve configuration data assumes that the application will retrieve each name=value pair once and cache that information in, say, an instance variable of an object, and then later on access the name=value pair via that local cache. However, sooner or later somebody will call x = cfg.lookup("variable-name") from inside a for-loop, and this means the application will be making an RPC every time around the loop. Obviously, this will slow down that application instance, but if you end up with dozens or hundreds of application instances doing that, then your configuration service will be swamped with hundreds or thousands of requests per second, and it will become a centralised performance bottleneck.
You might start off writing long-lived applications that do 100 RPCs at start-up to retrieve configuration data, and then run for hours or days before terminating. Let's assume those applications are CORBA servers that other applications can communicate with via RPC. Sooner or later you might decide to write some command-line utilities to do things like: "ping" an application instance to see if it is running; "query" an application instance to get some status details; ask an application instance to gracefully terminate; and so on. Each of those command-line utilities is short-lived; when they start up, they use RPCs to obtain their configuration data, then do the "real" work by making a single RPC to a server process to ping/query/kill it, and then they terminate. Now somebody will write a UNIX shell script that calls those ping and query commands once per second for each of your dozens or hundreds of application instances. This seemingly innocuous shell script will be responsible for creating dozens or hundreds of short-lived processes per second, and each of those short-lived processes will make numerous RPC calls to the centralised configuration server to retrieve name=value pairs one at a time. That sort of shell script can put a massive load on your centralised configuration server.
I am not trying to discourage you from designing a centralised configuration server. The above points are just warnings about scalability issues you need to consider. Your plan for an application to retrieve all its configuration data via one coarse-granularity RPC call will certainly help you to avoid the kinds of scalability problems I mentioned above.
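To make the coarse-granularity-plus-local-cache idea concrete, here is a hedged sketch of such a client. HTTP stands in for whatever RPC mechanism you actually choose, and the server URL and key names are placeholders.

# Hedged sketch: fetch the whole configuration once, then serve lookups from a local cache,
# so loops calling cfg.lookup(...) never hit the configuration service again.
import requests

class AppConfig:
    def __init__(self, app_name, server_url="https://config.internal"):  # placeholder URL
        # One coarse-grained call at start-up retrieves every name=value pair.
        resp = requests.get(f"{server_url}/configuration/{app_name}", timeout=5)
        resp.raise_for_status()
        self._values = resp.json()  # cached for the lifetime of the process

    def lookup(self, name, default=None):
        # Served from memory; safe to call inside hot loops.
        return self._values.get(name, default)

cfg = AppConfig("analytics-service")
print(cfg.lookup("db.host"))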
To provide some food for thought, you might want to consider a different approach. You could store each application's configuration files on a web server. A shell start-script "wrapper" for an application can do the following (a rough Python sketch of such a wrapper appears after this list):
Use wget or curl to download "template" configuration files from the web server and store the files on the local file system. A "template" configuration file is a normal configuration file but with some placeholders for values. A placeholder might look like ${host_name}.
Also use wget or curl to download a file containing search-and-replace pairs, such as ${host_name}=host42.pizza.com.
Perform a global search-and-replace of those search-and-replace terms on all the downloaded template configuration files to produce the configuration files that are ready to use. You might use UNIX shell tools like sed or a scripting language to perform this global search-and-replace. Alternatively, you could use a templating engine like Apache Velocity.
Execute the actual application, using a command-line argument to specify the path/to/downloaded/config/files.
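Here is a hedged Python equivalent of that wrapper; the URLs, file names and application path are placeholders, and string.Template is used because it happens to understand the ${name} placeholder syntax from the example above.

# Hedged sketch of the "wrapper" idea: download a template config file and the substitutions,
# expand the ${placeholders}, write the final file, then start the real application.
import subprocess
from string import Template

import requests

template_text = requests.get("https://config.example.com/myapp/app.conf.template", timeout=5).text
subst_lines = requests.get("https://config.example.com/myapp/substitutions.txt", timeout=5).text

# Parse lines like "${host_name}=host42.pizza.com" into a mapping for Template.
mapping = {}
for line in subst_lines.splitlines():
    if "=" in line:
        key, value = line.split("=", 1)
        mapping[key.strip().lstrip("${").rstrip("}")] = value.strip()

with open("/tmp/app.conf", "w") as f:
    f.write(Template(template_text).substitute(mapping))

# Hand over to the real application, pointing it at the generated config file.
subprocess.run(["/usr/local/bin/myapp", "--config", "/tmp/app.conf"], check=True)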