MySQL Trigger to RabbitMQ Communication

I have an application on SQL Server that sends change data to a target database using SQL Service Broker: I capture the data in a trigger and push it into a Service Broker queue. Now I want to make my application compatible with MySQL. The problem is how to achieve exactly the same implementation in MySQL, because Service Broker is not supported there. If I use an external message broker like RabbitMQ, how can a MySQL table trigger communicate directly with RabbitMQ?
Thanks in advance.

If you can use Kafka as an alternative to RabbitMQ, there are some tutorials on how to accomplish a similar goal:
Using Kafka Connect
Using GCP services
Change data capture (CDC) is the term used for this pattern of reacting to data changes and delivering those changes in real time to a downstream process.
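If you want to stay with RabbitMQ, note that a MySQL trigger cannot open network connections itself, so a common workaround is the transactional outbox pattern: the trigger writes a row into an outbox table in the same transaction as the data change, and a small external relay polls that table and publishes to RabbitMQ. Below is a minimal sketch in Java; the outbox table layout, queue name, and connection details are assumptions for illustration, not anything from the question, and you would need the MySQL JDBC driver and the RabbitMQ amqp-client library on the classpath.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class OutboxRelay {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed RabbitMQ host
        try (Connection mq = factory.newConnection();
             Channel channel = mq.createChannel();
             java.sql.Connection db = DriverManager.getConnection(
                     "jdbc:mysql://localhost/app", "user", "pass")) { // assumed credentials
            channel.queueDeclare("change-events", true, false, false, null);
            while (true) {
                // Pick up rows the trigger wrote but the relay has not yet published.
                try (Statement st = db.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT id, payload FROM outbox WHERE published = 0 ORDER BY id LIMIT 100")) {
                    while (rs.next()) {
                        long id = rs.getLong("id");
                        channel.basicPublish("", "change-events", null,
                                rs.getString("payload").getBytes(StandardCharsets.UTF_8));
                        try (PreparedStatement upd = db.prepareStatement(
                                "UPDATE outbox SET published = 1 WHERE id = ?")) {
                            upd.setLong(1, id);
                            upd.executeUpdate();
                        }
                    }
                }
                Thread.sleep(1000); // poll interval; tune for your latency needs
            }
        }
    }
}

Polling adds latency and load; a log-based CDC tool such as the Kafka Connect approach above avoids both the trigger and the poll loop.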

Related

How to publish AWS SNS data to a MySQL database

I am new to AWS and databases.
Since I am a complete beginner, any suggestions will be appreciated.
The current plan for the project is that data from an AWS database will be pushed to an external MySQL database using SNS HTTP fanout.
NOTE:
1. The data will be pushed by the client using AWS SNS.
2. We have no access to the AWS account, nor are we planning to have one.
3. The external MySQL database is a private database running on a Linux server.
I have gone through the official AWS SNS documentation and some websites. This is all I found:
Use external applications like Zapier to map the data.
Develop some application to map the data.
Is it a matter of using a servlet application on the receiver side to update the table, or is there some other method?
AWS DB -----> SNS -----> _________ -----> External MySQL DB
Thanks
If you cannot have an AWS account, you can have your own web server consume the SNS messages. SNS can deliver messages to an HTTP/HTTPS endpoint in a predefined structure (see the AWS SNS documentation for the exact format). You can expose such an endpoint on your own server and share its URL with the AWS account owner; they can then create a subscription from their SNS topic to your endpoint.
For setting up this endpoint there are many options; ExpressJS is one popular framework for quickly implementing HTTP APIs.
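Since the question mentions a servlet, here is a minimal sketch of such an endpoint as a Java servlet: it acknowledges the one-time SubscriptionConfirmation message and stores Notification payloads in MySQL. The endpoint path, table name, and credentials are assumptions, and a real endpoint should also verify the SNS message signature.

import java.io.BufferedReader;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet("/sns-endpoint") // assumed path; share this URL with the AWS account owner
public class SnsEndpointServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        StringBuilder body = new StringBuilder();
        try (BufferedReader r = req.getReader()) {
            String line;
            while ((line = r.readLine()) != null) body.append(line);
        }
        // SNS labels each delivery with this header.
        String type = req.getHeader("x-amz-sns-message-type");
        if ("SubscriptionConfirmation".equals(type)) {
            // The JSON body contains a SubscribeURL that must be visited once
            // to confirm the subscription; log it so an operator can do so.
            System.out.println("SubscriptionConfirmation received, confirm via SubscribeURL in: " + body);
        } else if ("Notification".equals(type)) {
            // The "Message" field of the JSON body carries the publisher's payload;
            // here the raw JSON is stored and can be parsed later.
            try (Connection db = DriverManager.getConnection(
                    "jdbc:mysql://localhost/app", "user", "pass"); // assumed credentials
                 PreparedStatement ps = db.prepareStatement(
                         "INSERT INTO sns_messages(raw_json) VALUES (?)")) { // assumed table
                ps.setString(1, body.toString());
                ps.executeUpdate();
            } catch (SQLException e) {
                throw new IOException(e);
            }
        }
        resp.setStatus(HttpServletResponse.SC_OK); // SNS treats 2xx as successful delivery
    }
}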
Probably option two would be more suitable, or at least the first to consider. For that option you would have to develop a Lambda function which receives data from SNS, re-formats it if needed, and uploads it to MySQL. Your architecture would look like:
Data ---> SNS ---> Lambda function ---> MySQL
Depending on the amount of data coming into SNS, you may add SQS queues to the mix as well, to buffer the records and enable a fan-out architecture. For example:

          /---> SQS queue 1 ---> Lambda function 1 ---> MySQL
Data ---> SNS
          \---> SQS queue 2 ---> Lambda function 2, EC2 instance, or container ---> other destination

Other solutions are possible, but I would consider the above first before looking into other ways.
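A minimal sketch of such a Lambda function in Java, using the SNS event type from the aws-lambda-java-events library; the environment variable names and target table are assumptions, and the MySQL JDBC driver must be packaged with the function.

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SNSEvent;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SnsToMySqlHandler implements RequestHandler<SNSEvent, Void> {
    @Override
    public Void handleRequest(SNSEvent event, Context context) {
        try (Connection db = DriverManager.getConnection(
                System.getenv("JDBC_URL"),   // e.g. jdbc:mysql://host/app (assumed env vars)
                System.getenv("DB_USER"),
                System.getenv("DB_PASS"));
             PreparedStatement ps = db.prepareStatement(
                     "INSERT INTO sns_messages(raw_json) VALUES (?)")) { // assumed table
            // One invocation may batch several SNS records; insert each message.
            for (SNSEvent.SNSRecord record : event.getRecords()) {
                ps.setString(1, record.getSNS().getMessage());
                ps.addBatch();
            }
            ps.executeBatch();
        } catch (Exception e) {
            throw new RuntimeException(e); // let Lambda retry/report the failure
        }
        return null;
    }
}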

Sync RDBMS with Apache Directory LDAP

Currently I have a requirement to sync data from Apache Directory LDAP to one of the RDBMS databases (MySQL, PostgreSQL). The directory holds approximately a few million records for now and may grow in the future. The LDAP directory is the primary data source for now, but the goal is to have real-time data in both LDAP and the RDBMS, since we plan to use the RDBMS for real-time analytics.
Option 1:
Thinking of using Spring Cloud Data Flow: a source Spring Boot app reads the LDAP data that changed after the last sync run and pushes it to a queue (RabbitMQ for now); the sink would be another Spring Boot app that consumes from the queue and persists the data into the RDBMS (a sketch of such a sink appears below). We would be able to better track and manage the sync jobs using the Spring Cloud Data Flow dashboard.
Option 2:
Spring's LdapTemplate lets us talk to the LDAP directory from our application. One approach would be to intercept the LdapTemplate calls wherever applicable, push the data to a queue, and then have an intermediate app read from the queue (RabbitMQ) and convert the LDAP response to the format required by the RDBMS.
I am new to LDAP and Spring Cloud Data Flow. So far I have only these two approaches, given my project's existing technology and system landscape. Any other suggestions or approaches are really appreciated. Thanks in advance.
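For reference, a minimal sketch of the sink side of Option 1 as a plain Spring Boot RabbitMQ listener (spring-boot-starter-amqp plus spring-boot-starter-jdbc with a configured datasource). The queue name, message format, and table are assumptions for illustration, not from the question.

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class LdapSyncSinkApplication {
    public static void main(String[] args) {
        SpringApplication.run(LdapSyncSinkApplication.class, args);
    }
}

@Component
class LdapChangeSink {
    private final JdbcTemplate jdbc;

    LdapChangeSink(JdbcTemplate jdbc) {
        this.jdbc = jdbc;
    }

    // Assumes the source app serializes each changed LDAP entry as "dn|cn|mail"
    // before publishing; upserting on the DN keeps redeliveries idempotent.
    @RabbitListener(queues = "ldap-sync") // assumed queue name
    public void onLdapChange(String entry) {
        String[] parts = entry.split("\\|", 3);
        jdbc.update("INSERT INTO directory_entries(dn, cn, mail) VALUES (?, ?, ?) "
                        + "ON DUPLICATE KEY UPDATE cn = VALUES(cn), mail = VALUES(mail)",
                parts[0], parts[1], parts[2]);
    }
}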
Another approach, if the LDAP server is Microsoft Active Directory: create a Windows service in C# which connects to your LDAP server, fetches the data every day, and sends it to your RDBMS over a socket connection, which is reliable and consistent.

How can I create a history graph in Wirecloud using history measurements?

I am developing a FIWARE application and I am using many FIWARE GEs (Wirecloud, IoT Agent, Orion, Cygnus, MongoDB, MySQL) that are integrated locally on my Linux PC using Docker.
I managed to make Orion receive measurements from a temperature sensor and store them in a MySQL database through Cygnus.
Now I would like to create a history graph in Wirecloud using those measurements. I tried to use a History Module to Linear Graph operator that sits between an NGSI source operator and a Linear Graph widget, but I don't know what URL I should use for the HistoryMod Server URL.
I tried to open the user manual for the History Module operator, but the link is broken, so I cannot read it.
I am posting some images of the wiring, the HistoryModule settings, the NGSI source settings, and the Linear Graph error I am receiving, for better understanding.
My questions are the following:
What URL should I use for the HistoryMod Server URL?
Am I wiring the components correctly or am I missing something?
I don't know the History Module operator, but I think your wiring is correct: a data source, an operator that requests the data, and a widget to show the results.
In your HistoryModule settings, try changing the URL to: http://130.206.82.141:8666/STH/v1/contextEntities/
Did you try replacing the Linear Graph widget with a table? (To test whether the problem is in the operator rather than in your widget.)
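A quick way to check whether that STH endpoint actually serves historical data is to query its raw-data API directly and inspect the response. A hedged Java sketch; the entity type/id, attribute name, and FIWARE service headers are assumptions based on a typical STH-Comet setup, so adjust them to match what Cygnus persisted.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SthProbe {
    public static void main(String[] args) throws Exception {
        // STH raw-data API: .../type/{entityType}/id/{entityId}/attributes/{attrName}
        URL url = new URL("http://130.206.82.141:8666/STH/v1/contextEntities/"
                + "type/Room/id/Room1/attributes/temperature?lastN=10"); // assumed entity/attribute
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestProperty("Fiware-Service", "myservice"); // assumed service
        con.setRequestProperty("Fiware-ServicePath", "/");
        try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) System.out.println(line);
        }
    }
}

If this returns an empty contextResponses list, the problem is on the persistence side (Cygnus/STH), not in the Wirecloud wiring.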
How did you manage to connect to the database (MySQL), please? I have almost the same configuration, using Postgres instead, on a remote server managed via docker-compose.
I can configure everything from the docker-compose file, but I don't know how to get Postgres running so as to create the database to use.

Using zabbix_sender for host discovery

I'm writing an application which delivers data from remote devices over an HTTP API. These devices are on a mobile data connection and have limited resources.
I wish to receive custom monitoring data over the HTTP API, relying on the security model designed in the application, and push that data to Zabbix directly (or indirectly) from node.js. I do not wish to use Zabbix Agent on the remote devices.
I see that I can use zabbix_sender to send data to a Zabbix server containing a pre-configured host. This works great. I intend to deliver monitoring data over my custom API, and when received give this data to zabbix_sender inside the server network.
The problem is there are many devices in the field and more are being added all the time.
TL;DR:
When zabbix_sender provides a custom hostname which doesn't exist in Zabbix already, it fails.
I would like to auto-add discovered hosts, based upon new hostnames from zabbix_sender. How would I do this?
Also, extra respect if anyone can give examples of how to avoid zabbix_sender and send data directly from node.js to the Zabbix server. I mean: suggest an NPM package that you have experience using. (Update: Found working node.js package here: https://www.npmjs.com/package/node-zabbix-sender)
Zabbix configuration: I'm learning from Zabbix 2.4 installed in Docker, no custom configuration from this Dockerhub: https://hub.docker.com/r/zabbix/zabbix-2.4/
Probably the best approach would be to use the Zabbix API to create hosts directly; a sketch follows below.
Alternatively, you could set up an action and emulate an active agent connection, which would make Zabbix create the host via active agent auto-registration.
You could also use low-level discovery (LLD) to send in JSON, which would result in hosts/items being created based on prototypes.
In all of these cases you have to wait up to one minute (by default) for the hosts to appear in the Zabbix cache; then you can send the data.
Also note that Zabbix 2.4 is not supported anymore and will receive no fixes; it is not a long-term support release.
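A minimal sketch of the first option, creating a host through the Zabbix JSON-RPC API from Java with no extra libraries. The API URL, credentials, hostname, and host group id are assumptions; the interface block is a placeholder, since trapper items don't poll the host, and the "user" login parameter matches the 2.x API.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ZabbixHostCreator {
    private static final String API = "http://zabbix.example.com/api_jsonrpc.php"; // assumed URL

    static String call(String jsonRpcBody) throws Exception {
        HttpURLConnection con = (HttpURLConnection) new URL(API).openConnection();
        con.setRequestMethod("POST");
        con.setRequestProperty("Content-Type", "application/json-rpc");
        con.setDoOutput(true);
        try (OutputStream os = con.getOutputStream()) {
            os.write(jsonRpcBody.getBytes(StandardCharsets.UTF_8));
        }
        try (Scanner s = new Scanner(con.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return s.hasNext() ? s.next() : "";
        }
    }

    public static void main(String[] args) throws Exception {
        // 1. user.login returns an auth token in the "result" field.
        String login = call("{\"jsonrpc\":\"2.0\",\"method\":\"user.login\","
                + "\"params\":{\"user\":\"Admin\",\"password\":\"zabbix\"},\"id\":1}"); // assumed credentials
        String auth = login.replaceAll(".*\"result\":\"([^\"]+)\".*", "$1"); // crude token extraction

        // 2. host.create with the hostname reported by the device.
        String create = call("{\"jsonrpc\":\"2.0\",\"method\":\"host.create\",\"params\":{"
                + "\"host\":\"device-0001\"," // hostname to auto-add (assumed)
                + "\"interfaces\":[{\"type\":1,\"main\":1,\"useip\":1,"
                + "\"ip\":\"127.0.0.1\",\"dns\":\"\",\"port\":\"10050\"}],"
                + "\"groups\":[{\"groupid\":\"2\"}]}," // assumed host group id
                + "\"auth\":\"" + auth + "\",\"id\":2}");
        System.out.println(create);
    }
}

Your node.js API server could make these same two JSON-RPC calls whenever it sees an unknown hostname, then hand the data to zabbix_sender (or the node-zabbix-sender package you found).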

Implement two-phase commit protocol between an EJB application (running on GlassFish) and Swing applications

I have an EJB application running on a GlassFish server which stores data in a MySQL DB that I call the global DB.
I have two identical remote Swing applications; they are standalone applications accessing the EJBs using RMI, and each has its own local DB in case the connection is lost.
My aim is to implement the two-phase commit protocol, i.e. to make one participant the coordinator and the others participants.
One method I could think of is to implement this using JMS, i.e. send a message across a queue and make the remote clients listen to these messages and take the appropriate action.
I do this by sending a message on a button click in one of the Swing applications.
The problem is that even though I have implemented MessageListener, the onMessage() method never receives a message on the other client.
Each remote client has the following properties set:
props.setProperty("java.naming.factory.initial", "com.sun.enterprise.naming.SerialInitContextFactory");
props.setProperty("java.naming.factory.url.pkgs", "com.sun.enterprise.naming");
props.setProperty("java.naming.factory.state", "com.sun.corba.ee.impl.presentation.rmi.JNDIStateFactoryImpl");
props.setProperty("org.omg.CORBA.ORBInitialHost", "localhost");
props.setProperty("org.omg.CORBA.ORBInitialPort", "3700");
This is to connect to the GlassFish server and access the connection factory and queue which I have already configured.
Is it that only applications running on the server are allowed to receive messages, and not remote applications?
Any suggestions for a 2PC topology are welcome.
For this, we used JMS to exchange the messages between these systems: one system acts as the coordinator and initiates the process by sending a message on the queue, and the others respond accordingly by sending a message back on the queue.
Since you are using EJB, you can use JTA to manage the transaction; it is a standard implementation of the two-phase commit protocol, and JMS supports JTA too.
Here are my steps:
1. Configure the trans-attribute to Required/Mandatory/Supports, depending on your needs.
2. In your client, get a UserTransaction by a JNDI lookup on the EJB server.
3. Start the transaction from the client.
4. Commit/roll back the transaction on the client side.
This is the so-called "client owner transaction" design pattern; a minimal client-side sketch follows. I suggest you read the book javatransactionsbook.
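A minimal sketch of those steps for a standalone Swing client, reusing the JNDI properties shown in the question. The JNDI names are assumptions (GlassFish typically binds the transaction object under "UserTransaction" for standalone clients), and note that for the JMS send to actually enlist in the JTA transaction the connection factory must be XA-capable.

import java.util.Properties;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;
import javax.transaction.UserTransaction;

public class CoordinatorClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // ... the java.naming / ORB properties from the question go here ...
        InitialContext ctx = new InitialContext(props);

        // Assumed binding for standalone GlassFish clients; inside a container
        // it would be "java:comp/UserTransaction".
        UserTransaction utx = (UserTransaction) ctx.lookup("UserTransaction");
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyXAConnectionFactory"); // assumed name
        Queue queue = (Queue) ctx.lookup("jms/TwoPhaseQueue"); // assumed name

        utx.begin();
        javax.jms.Connection conn = null;
        try {
            conn = cf.createConnection();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            // The coordinator's vote request; participants reply on the same queue.
            producer.send(session.createTextMessage("PREPARE"));
            utx.commit(); // the JMS send and any EJB work commit together
        } catch (Exception e) {
            utx.rollback();
            throw e;
        } finally {
            if (conn != null) conn.close();
        }
    }
}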