I need to sync data from a MySQL database to a Redis cache every 15 minutes so that the cache always has the latest data.
I am using Ubuntu to host Node.js web services, so every time there is a call to the REST API it needs to fetch data from the cache and serve it.
Do I need to write a background job to sync the MySQL data to the cache?
If so, can I write it in Node.js and run it as a background job on Ubuntu using crontab?
Yes. You can write a Node.js script and run it through crontab to sync data from MySQL to Redis.
In my experience, the following Node.js packages will help implement this.
Node.js ORM for MySQL:
Sequelize: http://docs.sequelizejs.com/en/latest/ (npm install sequelize mysql)
Redis clients for Node.js:
ioredis: https://github.com/luin/ioredis (npm install ioredis)
node_redis: https://github.com/NodeRedis/node_redis (npm install redis)
The sample code ~/sync-mysql-redis.js:
// Create a MySQL client connection
var Sequelize = require('sequelize');
var sequelize = new Sequelize('mysql://user:pass@azure_mysql_host:3306/dbname');
// Create a Redis client using node_redis
var redis = require("redis");
var client = redis.createClient(6379, '<redis_host>');
// Query entity records from the MySQL table
sequelize.query("SELECT * FROM `t_entity`", { type: sequelize.QueryTypes.SELECT })
    .then(function(entities) {
        entities.forEach(function(entity) { // for each entity in the result list
            var hash_key = entity.Id; // for example, use the entity id as the Redis hash key
            for (var prop in entity) { // for each property of the entity
                client.hset([hash_key, prop, entity[prop]], redis.print); // map a MySQL table record to a Redis hash field
            }
        });
        client.quit(); // close the connection once the queued commands finish, so the script can exit
        sequelize.close();
    });
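If you would rather use ioredis (the other client listed above), here is a minimal sketch of the same sync, assuming the same t_entity table with an Id column; a pipeline batches all the writes into one round trip:
// ioredis variant of ~/sync-mysql-redis.js (a sketch, not tested against a live server)
var Sequelize = require('sequelize');
var Redis = require('ioredis');
var sequelize = new Sequelize('mysql://user:pass@azure_mysql_host:3306/dbname');
var redis = new Redis(6379, '<redis_host>');
sequelize.query("SELECT * FROM `t_entity`", { type: sequelize.QueryTypes.SELECT })
    .then(function(entities) {
        var pipeline = redis.pipeline();
        entities.forEach(function(entity) {
            pipeline.hmset(String(entity.Id), entity); // ioredis expands a plain object into hash fields
        });
        return pipeline.exec();
    })
    .then(function() {
        redis.quit();
        sequelize.close();
    });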
For the crontab configuration, edit /etc/crontab as root or a sudo user:
$ sudo vim /etc/crontab
# Add a crontab record to run the Node.js script every 15 minutes
# (note that entries in /etc/crontab require a user field, and cron may need the full path to node)
*/15 * * * * root /usr/bin/node /home/user/sync-mysql-redis.js
I am programmatically starting an IPFS node using the JS ipfs-core npm package, with a custom repository using a different storage backend (similar to S3). Once the node is started on the AWS instance, I want to send requests to it from a remote client written in Java.
java-ipfs-http-client can connect to the API port, but the API and gateway services do not get started when the node starts. The Java server will be running on a different machine.
Is it possible to access an IPFS node started with ipfs-core programmatically from a Java server running on a different instance?
Found the solution.
When we initialize the node programmatically, we need to start the API/Gateway manually, in the following way:
import * as IPFS from 'ipfs-core'
import { HttpApi } from 'ipfs-http-server'
import { HttpGateway } from 'ipfs-http-gateway'
async function startIpfsNode () {
  const ipfs = await IPFS.create()
  const httpApi = new HttpApi(ipfs)
  await httpApi.start()
  const httpGateway = new HttpGateway(ipfs)
  await httpGateway.start()
}
startIpfsNode()
This will start the IPFS node along with the API and Gateway.
The API and Gateway ports can be changed programmatically as follows:
const ipfs = await IPFS.create()
await ipfs.config.set('Addresses.API', '/ip4/127.0.0.1/tcp/5002');
await ipfs.config.set('Addresses.Gateway', '/ip4/127.0.0.1/tcp/9090');
Once the API is started, the IPFS node can be accessed from a Java program using java-ipfs-http-client. Note that an API bound to 127.0.0.1 only accepts local connections; for a client on a different instance, bind it to an address that machine can reach (e.g. /ip4/0.0.0.0/tcp/5002).
This is the error:
reverie-pc@reveriepc-Latitude-3400:~/VasanthkumarV/prisma$ sudo npm install -g prisma
[sudo] password for reverie-pc:
npm WARN deprecated request@2.88.2: request has been deprecated, see https://github.com/request/request/issues/3142
/usr/bin/prisma -> /usr/lib/node_modules/prisma/dist/index.js
+ prisma@1.34.10
updated 1 package in 29.734s
(base) reverie-pc@reveriepc-Latitude-3400:~/VasanthkumarV/prisma$ prisma init test
? Set up a new Prisma server or deploy to an existing server? Use existing database
? What kind of database do you want to deploy to? MySQL
? Does your database contain existing data? Yes
? Enter database host localhost
? Enter database port 3306
? Enter database user root
? Enter database password [hidden]
? Please select the schema you want to introspect database_test
Introspecting database database_test 435ms
Created datamodel definition based on 24 tables.
? Select the programming language for the generated Prisma client Prisma JavaScript Client
Created 3 new files:
prisma.yml Prisma service definition
datamodel.prisma GraphQL SDL-based datamodel (foundation for database)
docker-compose.yml Docker configuration file
Next steps:
1. Open folder: cd test
2. Start your Prisma server: docker-compose up -d
3. Deploy your Prisma service: prisma deploy
4. Read more about introspection:url
▸ Syntax Error: Expected Name, found Int "1"
Get in touch if you need help: https://slack.prisma.io/
To get more detailed output, run $ export DEBUG="*"
(node:14055) [DEP0066] DeprecationWarning: OutgoingMessage.prototype._headers is deprecated
Generating schema... !
How do I resolve this error? What is the procedure for connecting the Prisma server to a local MySQL database, and how does the Prisma deployment work?
How do I connect Prisma to an existing database?
It looks like you are using Prisma 1, which is currently in maintenance mode.
Given that this looks like a new project, I'd suggest you take a look at Prisma 2, which includes many improvements and a simpler mental model.
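For reference, connecting Prisma 2 to an existing MySQL database looks roughly like this (a sketch; the credentials are placeholders and the exact commands depend on your Prisma 2 version):
npx prisma init                # creates prisma/schema.prisma and .env
# set DATABASE_URL="mysql://root:<password>@localhost:3306/database_test" in .env
npx prisma introspect          # reads the existing tables into schema.prisma (newer versions: prisma db pull)
npx prisma generate            # generates the Prisma Client for JavaScript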
I'm trying to figure out a way to make one instance of a module depend on the successful deployment of another instance of the same module. Unfortunately, although resources support it, modules don't seem to support the explicit depends_on argument:
➜ db_terraform git:(master) ✗ terraform plan
Error: module "slave": "depends_on" is not a valid argument
I have these in the root module's main.tf:
module "master" {
source = "./modules/database"
cluster_role = "master"
..
server_count = 1
}
module "slave" {
source = "./modules/database"
cluster_role = "slave"
..
server_count = 3
}
resource "aws_route53_record" "db_master" {
zone_id = "<PRIVZONE>"
name = "master.example.com"
records = ["${module.master.instance_private_ip}"]
type = "A"
ttl = "300"
}
I want master to be deployed first. What I'm trying to do is launch two AWS instances with a database product installed. Once the master comes up, its IP is used to create a DNS record. Once that is done, the slaves get created and use the record to enlist with the master as part of the cluster. How do I prevent the slaves from coming up concurrently with the master? I'm trying to avoid the slaves failing to connect to the master because the DNS record may not have been created by the time a slave is ready.
I've read recommendations for using a null_resource in this context, but it's not clear to me how it should be used to help my problem.
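From what I've gathered, the usual suggestion is to thread a value that depends on the DNS record into the slave module, so Terraform infers the ordering implicitly; a sketch, where master_address is a hypothetical input I would have to add to the database module:
# Sketch of the implicit-dependency workaround (untested); "master_address"
# is a hypothetical variable the database module would need to declare.
module "slave" {
  source       = "./modules/database"
  cluster_role = "slave"
  ..
  server_count = 3

  # Referencing the Route53 record here forces Terraform to create the
  # master instance and its DNS record before any slave instance.
  master_address = "${aws_route53_record.db_master.fqdn}"
}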
Fwiw, here's the content of main.tf in the module.
resource "aws_instance" "database" {
ami = "${data.aws_ami.amazonlinux_legacy.id}"
instance_type = "t2.xlarge"
user_data = "${data.template_file.db_init.rendered}"
count = "${var.server_count}"
}
Thanks in advance for any answers.
I have a SlashDB installation on top of MySQL 5.7. I use it to serve custom REST API calls to allow other people to access the data in the DB. Most of these happen through the 'SQL Pass-thru' feature.
When executing straight SQL queries, changes to the DB are committed immediately. However, this is not true when I execute stored functions (through select [function name]). The function executes perfectly, but any changes to the data are not committed until I issue commit;. The main problem is that this causes stranded locks on tables and other MySQL objects.
Does anybody have any idea what's happening here?
Currently the only workaround is to manually add ?autocommit=true to the connection string in /etc/slashdb/databases.cfg.
For example
myChinook:
  alternate_key: {}
  autoconnect: true
  autoload: true
  autoload_user: {dbpass: chinook, dbuser: chinook}
  connection: 127.0.0.1:3308/Chinook?autocommit=true
  creator: admin
  db_encoding: utf-8
  db_id: myChinook
  db_schema: null
  db_type: mysql
  desc: ''
  excluded_columns: {}
  execute: []
  foreign_keys: {}
  owners: [admin]
  read: []
  write: []
After making manual changes to the files, you need to restart the SlashDB service:
sudo service slashdb stop
sudo service slashdb start
Alternatively, call stored procedures instead of doing a SELECT on a stored function.
I'm trying to monitor the number of restarts, CPU and memory of microservices managed by PM2, and create an alert in AWS CloudWatch if a module is restarting.
pm2 list
The command returns the data formatted as a UI table, which I would like to avoid parsing.
Is there any way to get the number of process restarts in a more machine-readable format than the one returned by the pm2 list command?
I looked at the pm2 get command but can't find documentation about the keys I can use there.
You can get all kinds of details (including restarts) in JSON format with pm2 prettylist (pretty-printed) or pm2 jlist (raw).
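For example, to pull just the restart counters out of the JSON (assuming jq is available; as far as I can tell pm2 keeps the restart count in pm2_env.restart_time and CPU/memory under monit):
pm2 jlist | jq '.[] | {name: .name, restarts: .pm2_env.restart_time, cpu: .monit.cpu, memory: .monit.memory}'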
pm2 also has an API:
var pm2 = require('pm2');
// Connect to or launch PM2
pm2.connect(function(err) {
  if (err) throw err;
  // Start a script in the current folder
  pm2.start('test.js', { name: 'test' }, function(err, proc) {
    if (err) throw err;
    // Get all processes running under PM2
    pm2.list(function(err, process_list) {
      console.log(process_list);
      // Disconnect from PM2
      pm2.disconnect(function() { process.exit(0) });
    });
  });
});
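For the monitoring use case above, here is a sketch of reading just the relevant fields through that API (pm2_env.restart_time is pm2's restart counter; monit holds current CPU and memory):
var pm2 = require('pm2');
pm2.connect(function(err) {
  if (err) throw err;
  pm2.list(function(err, list) {
    if (err) throw err;
    // Print one line per process: name, restart count, cpu %, memory in bytes.
    // These values could be pushed to CloudWatch as custom metrics.
    list.forEach(function(proc) {
      console.log(proc.name, proc.pm2_env.restart_time, proc.monit.cpu, proc.monit.memory);
    });
    pm2.disconnect(function() { process.exit(0) });
  });
});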
Details on the API: pm2-api