In Node.js, how is a database check performed for real-time notifications?
Is there a framework hook like an "ondatabaseupdate" function?
Or do you just use setInterval in Node.js to check the database every second?
I'm quite new to Node.js, and I'm not sure whether I'm going to use
Redis or MySQL to store the data.
That pattern is publish/subscribe, a form of message queuing. Redis has built-in Pub/Sub support that can be used for this.
The basic idea is that clients can publish (emit) events on a defined channel, and other clients can subscribe (listen) to whatever comes in. The listener could route incoming data to a pool of workers that, for example, write your data to a database.
The Redis site has an article about pub/sub which is quite detailed.
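For illustration, here is a minimal sketch of that idea using the node-redis client. The channel name "notifications" and the payload are made up, and it assumes a Redis server on localhost:

```js
// Minimal Redis Pub/Sub sketch using the node-redis (v4) client.
// Assumes a local Redis server; the channel name is arbitrary.
const { createClient } = require('redis');

async function main() {
  const subscriber = createClient();
  await subscriber.connect();

  // A subscribed connection is dedicated to listening, so use a
  // second connection for publishing.
  const publisher = subscriber.duplicate();
  await publisher.connect();

  await subscriber.subscribe('notifications', (message) => {
    // e.g. hand off to a worker pool that writes to the database
    console.log('received:', message);
  });

  await publisher.publish(
    'notifications',
    JSON.stringify({ userId: 42, event: 'item_created' }) // hypothetical payload
  );
}

main().catch(console.error);
```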
Related
I have a full stack app that uses React, Node.js, Express, and MySQL. I want the react app to respond to database updates similar to Firebase: When data changes, I want a real-time notification sent to my app.
I want to use stock MySQL (no plugins), so that I can use AWS RDB or whatever.
I will use socket.io to push the real-time notifications to the web app.
To avoid off-target responses, I'll summarize various approaches that are not what I am looking for:
The server could poll, or each client could poll. (Not real-time, but included for completeness. When I search, polling is the only solution I find.)
Write a wrapper that handles all MySQL updates, handles subscriptions, and sends the notifications. That wrapper is itself a complicated component to build and maintain. Firebase is popular because it both increases performance and reduces complexity. I like Firebase a lot but want to do the same thing with MySQL.
Use Firebase to handle the real-time notifications. The MySQL wrapper could use Firebase to handle the subscriptions and notifications, but there is still the problem of triggering the notifications in the first place. Also, I don't want to use Firebase. (For example, my application needs to run in an air-gapped environment.)
The question: Using a stock MySQL database, when a table changes, can a notification server discover the change in real-time (no polling), so that it can send notifications?
The approach that works is to listen to the binary logs. This way, any change to the database will be communicated in real-time. The consumer of the binary logs can then publish this information in a number of ways. A common choice is to feed a stream of events to Apache Kafka.
Debezium, Maxwell, and NiFi work this way.
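As a sketch of what consuming the binlog can look like from Node.js, the community zongji package tails the binary log directly. This assumes the server has binary logging enabled with binlog_format=ROW and a MySQL user with replication privileges; the credentials and event handling below are illustrative:

```js
// Sketch: tail the MySQL binary log with the zongji package and
// forward row changes. Requires log_bin enabled, binlog_format=ROW,
// and a user with REPLICATION SLAVE / REPLICATION CLIENT privileges.
const ZongJi = require('zongji');

const zongji = new ZongJi({
  host: 'localhost',
  user: 'replicator',   // hypothetical credentials
  password: 'secret',
});

zongji.on('binlog', (evt) => {
  // Row events expose evt.rows; a real notifier would map the table
  // to a name and publish over socket.io, Kafka, etc.
  console.log(evt.constructor.name, evt.rows);
});

zongji.start({
  includeEvents: ['tablemap', 'writerows', 'updaterows', 'deleterows'],
});
```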
Currently I am using MySQL as the main database, which is not a real-time database. To synchronize client-side data with the server and keep data when offline, I am considering adding a real-time database as a slave database in my architecture. Synchronizing data is not easy, so I want to use Cloud Firestore.
From what I have found so far, there seems to be no practical way to synchronize between a non-real-time RDBMS (in my case MySQL) and Cloud Firestore. I cannot migrate the current data to Cloud Firestore because other services depend on it.
If there is no practical solution for this, please suggest the best way. Thanks.
I'm not sure, but it seems there is no ready-made solution to synchronize these. For a case like this, the synchronization has to be implemented manually on the client side.
I have been thinking about this as well. What I have thought up so far is this.
Use Firestore to create a document.
Write a cloud function to listen to create events on the collection that stores that document.
Send the created document over HTTP to a REST endpoint that stores the data in a relational DB (MySQL, Postgres). If the server is down or the status code is anything other than 200, set a boolean flag on the document called syncFailed to true.
Write a worker that periodically fetches documents where syncFailed is true and sends each document to the endpoint again until the sync succeeds.
THINGS TO CONSIDER
Be careful in adopting this approach with update events. You may accidentally create an infinite loop that will cost you all your wealth. If you do add an update listener on a document, make sure you devise a way to avoid creating an infinite loop: you may update a document from an update-listening cloud function that in turn re-triggers the function, which in turn updates the document, which re-triggers it again, and so on. Make sure you use a boolean flag on the document or some other field as a sentinel value to break out of such a loop.
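To make steps 2 and 3 concrete, here is a hedged sketch of such a cloud function. The collection name items, the endpoint URL, and the field names are hypothetical, and it assumes a Node 18+ runtime so global fetch is available:

```js
// Sketch: on document creation, POST it to a relational-DB-backed REST
// endpoint; on failure, flag the document so a worker can retry later.
// The collection name "items" and the endpoint URL are hypothetical.
const functions = require('firebase-functions');

exports.syncToSql = functions.firestore
  .document('items/{itemId}')
  .onCreate(async (snap, context) => {
    try {
      // Assumes a Node 18+ runtime, where fetch is a global.
      const res = await fetch('https://example.com/api/items', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ id: context.params.itemId, ...snap.data() }),
      });
      if (res.status !== 200) throw new Error(`HTTP ${res.status}`);
    } catch (err) {
      // Sentinel flag: the retry worker looks for syncFailed === true.
      // Because this is an onCreate trigger, this update does not
      // re-trigger the function (that risk applies to onUpdate).
      await snap.ref.update({ syncFailed: true });
    }
  });
```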
I have a requirement to make endpoint calls to multiple web APIs designed by other companies. These calls will be made on a periodic basis, like once an hour or once a day, to post and retrieve some data (business-to-business transactions). I'm working with the .NET Framework and ServiceStack.
I'm not sure what the best approach to achieve this type of functionality would be.
Maybe I could have a Windows Service application that scans through the relevant config tables in SQL Server, generates curl commands, and executes them? I'm not sure whether that would be the correct approach or whether there is something better you would propose.
I have never worked with curl before; these are just initial thoughts.
To achieve this, your backend needs a data structure to hold all the necessary data for the requests (which can be a database table, as you suggest) and a scheduling mechanism. This could be as simple as a timer that, when triggered, picks up the pending requests and executes them (using the built-in HttpClient, for instance). IMO you should keep this logic within the application itself; no need to make things complicated by introducing a system-dependent service that then issues curl commands at the OS level.
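The pattern itself is language-agnostic; here is a sketch in Node.js for consistency with the rest of this thread (in .NET the equivalent pieces would be a timer or hosted background service plus the built-in HttpClient). The table and column names are made up:

```js
// Sketch of the timer + HTTP client pattern: periodically read pending
// request definitions from a config table and execute them in-process.
// Table/column names are illustrative.
const mysql = require('mysql2/promise');

const ONE_HOUR = 60 * 60 * 1000;

async function runDueRequests() {
  const db = await mysql.createConnection({
    host: 'localhost', user: 'app', password: 'secret', database: 'scheduler',
  });
  const [jobs] = await db.execute(
    'SELECT id, url, method, payload FROM api_jobs WHERE next_run_at <= NOW()'
  );
  for (const job of jobs) {
    // Node 18+ global fetch stands in for .NET's HttpClient here.
    await fetch(job.url, {
      method: job.method,
      headers: { 'Content-Type': 'application/json' },
      body: job.method === 'GET' ? undefined : job.payload,
    });
    await db.execute(
      'UPDATE api_jobs SET next_run_at = DATE_ADD(NOW(), INTERVAL 1 HOUR) WHERE id = ?',
      [job.id]
    );
  }
  await db.end();
}

setInterval(() => runDueRequests().catch(console.error), ONE_HOUR);
```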
I am trying to implement a real-time notification system. The approach I am going to use is to open a separate socket using Node's socket.io module, record each user event, then send the data to MySQL and use it for notifications. Is this a good approach? Any suggestions?
Personally I would go for Redis + Node. The main reason is to lower disk IO. The downside is possible data loss with Redis (server reboot, service restart, etc.), but that can also be addressed with Redis configuration.
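As a sketch of the Redis + Node idea (the key names and event shape are made up): buffer the write-heavy event data in Redis and push notifications over socket.io, persisting to MySQL later in a batch if needed.

```js
// Sketch: record user events in Redis (low disk IO) and push them to
// connected clients via socket.io. Key names are illustrative.
const { createServer } = require('http');
const { Server } = require('socket.io');
const { createClient } = require('redis');

async function main() {
  const redis = createClient();
  await redis.connect();

  const httpServer = createServer();
  const io = new Server(httpServer);
  httpServer.listen(3000);

  io.on('connection', (socket) => {
    socket.on('user-event', async (event) => {
      // Append to a per-user event list in Redis instead of hitting
      // MySQL on every event; a batch job can persist these later.
      await redis.lPush(`events:${event.userId}`, JSON.stringify(event));
      // Fan the notification out to connected clients in real time.
      io.emit('notification', event);
    });
  });
}

main().catch(console.error);
```

If the possible data loss is a concern, enabling Redis persistence (e.g. appendonly yes) narrows the window considerably.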
Each day hundreds of thousands of items are inserted, updated and deleted on our service (backend using .Net and a MySql database).
Now we are integrating our service with another service using their RESTful API. Each time an item is inserted, updated or deleted on our service we also need to connect to their web service and use POST, PUT, DELETE.
What is a good implementation of this case?
Connecting to their API each time a user inserts an item on our service does not seem like a good idea, as it would make the experience quite slow for the user.
Another idea was to update our database as usual, then set up another server that constantly polls our database and fetches the data that needs to be posted to the RESTful API. Is this the way to go?
How would you solve it? Any guides of implementing stuff like this would be great! Thanks!
It depends on whether a delay in updating the other service is acceptable. If not, create an event and put it in the queue of an event processor that can send it to the second service.
If a delay is acceptable, a background batch job can run periodically and send the data.
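For the no-delay path, one possible sketch in Node.js uses a Redis-backed job queue such as BullMQ: the insert/update/delete handler enqueues an event and returns immediately, and a separate worker forwards it to the partner API with retries. The queue name, endpoint, and payload shape are made up:

```js
// Sketch: decouple the user-facing write from the partner API call with
// a Redis-backed queue (BullMQ). Queue name and endpoint are illustrative.
const { Queue, Worker } = require('bullmq');

const connection = { host: 'localhost', port: 6379 };
const syncQueue = new Queue('partner-sync', { connection });

// Called from the normal insert/update/delete code path; returns fast.
async function enqueueChange(action, item) {
  await syncQueue.add(action, item, {
    attempts: 5,
    backoff: { type: 'exponential', delay: 1000 },
  });
}

// Runs in a separate process; forwards each change to the partner API.
// Assumes Node 18+ so fetch is a global.
new Worker('partner-sync', async (job) => {
  const method = { insert: 'POST', update: 'PUT', delete: 'DELETE' }[job.name];
  const res = await fetch(`https://partner.example.com/items/${job.data.id}`, {
    method,
    headers: { 'Content-Type': 'application/json' },
    body: method === 'DELETE' ? undefined : JSON.stringify(job.data),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`); // triggers a retry
}, { connection });
```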