I have RestComm Connect configured to use an external MySQL DB (according to 'How to get started with Restcomm-Connect and Mysql'). The setup works fine (able to make Alice<->Bob calls via Olympus).
Now I would like to try out a new RestComm Connect release and configure it to use the same MySQL DB instance - i.e. to use the same 'restcomm' database (I want to share existing clients' accounts between the two RestComm instances).
So the target setup would be e.g.:
Restcomm-JBoss-AS7-8.2.0.1221 ---\
                                  --> MySQL DB ['restcomm' database]
Restcomm-JBoss-AS7-8.2.0.1304 ---/
In this case both RestComm instances share the same 'restcomm' database.
Is the above setup feasible, or is there instance-specific data stored in the DB that can't be shared (i.e. beyond tables such as restcomm_accounts or restcomm_clients)?
Of course, only one of the RestComm instances would be running at a given time.
Any tips, ideas or suggestions would be appreciated.
Thanks,
Dominik
Yes, it's designed to share the same DB, even with two or more instances active.
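For example, a quick way to sanity-check that both builds really see the same data is to query the shared schema directly. A minimal sketch, assuming Connector/J on the classpath; the host and credentials are placeholders, and restcomm_clients is one of the shared tables mentioned above:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class SharedDbCheck {
        public static void main(String[] args) throws Exception {
            // Both RestComm instances would point their datasource at this same URL.
            // Host and credentials are placeholders for your environment.
            String url = "jdbc:mysql://db-host:3306/restcomm";
            try (Connection conn = DriverManager.getConnection(url, "restcomm", "password");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM restcomm_clients")) {
                rs.next();
                System.out.println("Client accounts visible to either instance: " + rs.getInt(1));
            }
        }
    }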
Related
I am trying to connect to a MySQL database from Data Fusion, but I am getting the following error: "Communications link failure. The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server." The database is accessed through its public IP on port 3306; from my machine I can connect perfectly, but from Data Fusion I cannot.
As John Hanley pointed out in his comment, it's probably due to a connectivity issue with your SQL instance.
A possible reason is that you have not enabled your instance to be connected to via its public IP. If that's the case, go to your SQL instance and edit its configuration, adding an authorized network (if you haven't done so previously) and providing an IP range that includes your Data Fusion instance. Keep in mind that if you configure your instance to accept connections on its public IP address, you should also configure it to use SSL to keep your data secure. If that was the issue, you should now be able to connect properly.
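For instance, once the authorized network is in place, a plain JDBC connection over the public IP with SSL enforced could look like this (a minimal sketch; the IP address, database name, and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class CloudSqlPublicIpCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "dbuser");          // placeholder
            props.setProperty("password", "dbpassword");  // placeholder
            props.setProperty("useSSL", "true");          // encrypt traffic over the public IP
            props.setProperty("requireSSL", "true");      // refuse to fall back to plaintext
            // 203.0.113.10 stands in for your Cloud SQL instance's public IP.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://203.0.113.10:3306/mydb", props)) {
                System.out.println("Connected: " + !conn.isClosed());
            }
        }
    }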
Also, be sure to check that the Dataproc cluster that your Data Fusion instance is using under the hood has the proper configuration (you shouldn't need to worry about this if you haven't changed anything about the Dataproc cluster).
This is the best advice I can give without further details. If this doesn't work for you, we're going to need more information.
My two cents:
I use one Windows PC with several Linux boxes at home; my main station is the Windows one. I've been using SQLyog just to probe tables on the remote DB, and I use my class C address to address the server. It works well; by design, I want to be able to access the DB server from anywhere in my home.
Roughly 60% to 75% of problems of that kind are related to the .cnf file configuration (i.e. MySQL's my.cnf).
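For what it's worth, the usual suspect in that file is the bind address. A minimal my.cnf sketch, assuming you want the server reachable from the rest of your home network (and that you lock access down with firewall rules and MySQL grants):

    [mysqld]
    # Listen on all interfaces instead of localhost only.
    bind-address = 0.0.0.0
    port = 3306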
Regards
Steph
I just opened an account on Amazon AWS. In this account, I created a MySQL database instance that I am now trying to connect to from my home computer using MySQL Workbench. I have entered the database endpoint (as listed in my account) and the user name I set up as the master username for the database. When I hit "Test Connection" (using a standard TCP/IP connection), however, I get a "Failed to connect..." message. I have a feeling that the problem may be that I need to use SSL and/or SSH, but I am a neophyte here and don't know how to properly set this up or configure MySQL Workbench for it. I am seeking assistance.
You need to allow access to your MySQL server in its security group rules; you can allow your public IP address.
Please refer to the case below:
Cannot ping AWS EC2 instance
I think my first database instance was misconfigured somehow, though not as JERRY suggests. I created a new MySQL DB instance and was able to connect to it without needing any other special configuration changes, so I am now using the new instance and have deleted the old one. I wish I could provide more insight into what the problem with the first DB was, but the insight I have is, as I said, that after I created the second DB instance, no other configuration was necessary.
I am trying out a small POC (learning experiment) with Docker. I have 3 Docker images, one each for a storefront, a search engine, and a database engine, called storefront, solr, and docmysql respectively. I have tried running them in a Docker swarm (on a single node) on EC2 and it works fine.
In the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-optimized AMI, on which I have installed and started an ecs-agent. I have created 3 services, with one task for each of the 3 images, configured as containers within the task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a Docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created I see two entries in Route 53: one SRV record and one A record. The A record has as its name <task-id>.docmysql.local; if I use this in the database connection string, it works, but that's obviously not the right thing to do with the hardcoded task ID. I have read about AWS Cloud Map (service discovery) but am still not very clear on how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? And why did I not have these issues when I ran it as a Docker swarm?
I know I can use RDS instead of running my own database, and I will try that, but for now I need this working as I have started with this. Thanks for any help.
Well, let me raise some points before giving my own solution:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it with the EC2 launch type? If not, use Fargate; it is simpler to handle.
Now, I've faced that issue on Fargate and discovered that, depending on your container/task definitions, the containers can run inside the same task for testing purposes, in which case 127.0.0.1 should be the answer.
For different tasks you need to work with the awsvpc network mode, so you will have this:
"Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it." (from AWS)
My suggestion is to create a Lambda function to discover your network interface dynamically; a sketch follows the links below.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
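As a sketch of the Lambda discovery idea (AWS SDK for Java v1; the cluster and service names are placeholders for your setup), the function below lists the running docmysql task and pulls the private IP off its awsvpc network interface attachment:

    import com.amazonaws.services.ecs.AmazonECS;
    import com.amazonaws.services.ecs.AmazonECSClientBuilder;
    import com.amazonaws.services.ecs.model.Attachment;
    import com.amazonaws.services.ecs.model.DescribeTasksRequest;
    import com.amazonaws.services.ecs.model.DescribeTasksResult;
    import com.amazonaws.services.ecs.model.DesiredStatus;
    import com.amazonaws.services.ecs.model.KeyValuePair;
    import com.amazonaws.services.ecs.model.ListTasksRequest;
    import com.amazonaws.services.ecs.model.ListTasksResult;
    import com.amazonaws.services.ecs.model.Task;

    public class DbEndpointDiscovery {
        public static String discoverDbAddress() {
            AmazonECS ecs = AmazonECSClientBuilder.defaultClient();
            // "my-cluster" and "docmysql" are placeholders for your names.
            ListTasksResult tasks = ecs.listTasks(new ListTasksRequest()
                    .withCluster("my-cluster")
                    .withServiceName("docmysql")
                    .withDesiredStatus(DesiredStatus.RUNNING));
            DescribeTasksResult described = ecs.describeTasks(new DescribeTasksRequest()
                    .withCluster("my-cluster")
                    .withTasks(tasks.getTaskArns()));
            // With awsvpc networking, each task carries an ElasticNetworkInterface
            // attachment whose details include the private IPv4 address.
            for (Task task : described.getTasks()) {
                for (Attachment att : task.getAttachments()) {
                    if ("ElasticNetworkInterface".equals(att.getType())) {
                        for (KeyValuePair kv : att.getDetails()) {
                            if ("privateIPv4Address".equals(kv.getName())) {
                                return kv.getValue();
                            }
                        }
                    }
                }
            }
            throw new IllegalStateException("No running docmysql task with an ENI found");
        }
    }

The storefront could invoke this at startup (directly or via the Lambda) to build its JDBC URL instead of hardcoding a task ID.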
I have the following setup:
An EC2 instance hosting both an application server and a database (mysql), belonging to a security group: let's call it "AppServerSG", and assigned an elastic Public IP (AWS also assigns it a private IP).
Various EC2 worker instances which need to connect to the application server's database when booting up. These worker instances belong to another security group: let's call it "WorkerSG".
The inbound rules for the Security Groups look as follows.
For AppServerSG:
80 (HTTP) 0.0.0.0/0
3306 (MYSQL) WorkerSG
For WorkerSG:
80 (HTTP) AppServerSG
So essentially only the application server should be reachable from outside, and the workers and application should be able to communicate with each other.
However, connecting to the database from a worker instance only succeeds when the database host is set to the application server's private IP, not the public Elastic IP.
The only way to connect to the database from a worker instance using the application server's public IP seems to require changing the MYSQL rule to allow all connections (0.0.0.0/0) on the AppServerSG, which is something I'm very reluctant to do out of security concerns.
Hard-coding the private IP into the worker instances is also not such a good idea, since every time the app server instance is stopped/restarted, it is assigned a new private IP, which would then require manually changing the database address that each worker instance needs to connect to.
I'm basically wondering if someone has run into similar trouble because this doesn't seem like the way things should work, so either I'm doing something wrong in my setup, or there's a workaround somehow.
I would very much appreciate the help!
Edit:
The motivation behind this setup is that in the event that I want to take the whole thing offline, I can safely bring it back online without having to change the configurations of the application server and the workers.
Had I used RDS, when taking the application offline/online again I would have to take a snapshot of the DB and stop it, then create a new DB based on the snapshot, which would have a different address, which would then bring me back to the problem of changing the configuration.
Honestly if I'm going to have to edit the configuration every time I restart the application anyway, I'd rather have the database on the application server and save myself the costs associated with RDS.
The main issue here is that I don't understand why the security groups don't seem to apply when I'm using the public Elastic IP as the database address. Is this by design on the AWS side, or is there a mistake somewhere in my configuration?
Really, the recommended configuration would have you using an RDS DB instance and setting your DB security group to accept connections from the appropriate EC2 security groups only. In this configuration, you CAN set up your DB user like 'user'@'%' and still restrict access to the DB to the specified EC2 security groups.
In this way, you shift the burden of DB access control to the AWS security model rather than MySQL user configuration. Of course, you would still need to configure DB users to have access only to the appropriate resources within the DB.
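As a sketch of that user setup (the RDS endpoint, credentials, and schema names are placeholders; run it once as the master user from a host the security group allows):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateAppUser {
        public static void main(String[] args) throws Exception {
            // The RDS endpoint and admin credentials below are placeholders.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://mydb.example-id.us-east-1.rds.amazonaws.com:3306/mysql",
                    "admin", "adminpassword");
                 Statement st = conn.createStatement()) {
                // '%' is acceptable here because the security group, not MySQL,
                // decides which hosts can reach port 3306 at all.
                st.executeUpdate("CREATE USER 'appuser'@'%' IDENTIFIED BY 'app-password'");
                st.executeUpdate(
                    "GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'appuser'@'%'");
            }
        }
    }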
I've started looking at MySQL Connector/J's replication paradigm and see that we can separate read and write operations between master and slave databases.
I've checked the page below and got some clues about how it operates, but I still need to know: how does mysql-jdbc understand which server is the master and which servers are the slaves? (Might be a silly one, sorry for this.)
http://www.dragishak.com/?p=307
The ReplicationDriver or NonRegisteringReplicationDriver takes the first URL as the master; the rest are considered slaves.
The point you should take into consideration is: if you are using ReplicationDriver or NonRegisteringReplicationDriver, you need to give at least two hosts containing the same DB instance. Otherwise you will get an SQLException saying: "Must specify at least one slave host to connect to for master/slave replication load-balancing functionality".
One more point: you don't actually need to create an instance of NonRegisteringReplicationDriver yourself, because ReplicationDriver uses it internally. You can check this by letting your application throw an Exception; what you will see is that the DB connection was attempted by the NonRegisteringReplicationDriver.connect(..) method.
Edit(!): You actually don't need to instantiate any specific driver for your system at all. What you need to know is what you are doing and the correct connection URL, because the Driver class itself checks the URL against the replication pattern and the load-balance pattern, and then triggers the required driver instance.
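To make that URL-driven behavior concrete, here is a minimal sketch with Connector/J (host names and the table are placeholders). The jdbc:mysql:replication:// prefix is what triggers the replication machinery, and the driver routes statements to the master or a slave depending on the connection's read-only flag:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class ReplicationUrlDemo {
        public static void main(String[] args) throws Exception {
            // The first host (master-host) is taken as the master,
            // the remaining hosts are taken as slaves.
            String url = "jdbc:mysql:replication://master-host:3306,slave-host:3306/mydb";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                conn.setReadOnly(false); // writes go to the master
                try (Statement st = conn.createStatement()) {
                    st.executeUpdate("UPDATE t SET c = 1 WHERE id = 1"); // 't' is a placeholder table
                }
                conn.setReadOnly(true); // reads are load-balanced across the slaves
                try (Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery("SELECT c FROM t WHERE id = 1")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt(1));
                    }
                }
            }
        }
    }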