I am trying to create CloudSQL Instance with Failover Replica using Deployment Manager.
I am able to create a Read Replica but couldn't create a Failover Replica.
Can you please provide a Deployment Manager script or suggest changes to the code below:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/examples/v2/sqladmin/jinja
Thanks
Here you have a tutorial on how to create a CloudSQL database with high availability (master + failover replica).
For this purpose Deployment Manager doesn't really make a difference. I will go over how to create the database and replica with the gcloud SDK; if you want to use the console, it's explained in the link I provided.
Create the database and replica from the Cloud Shell with this command:
gcloud sql instances create [MASTER INSTANCE NAME] --enable-bin-log --backup-start-time=00:01 --failover-replica-name=[FAILOVER INSTANCE NAME]
Check the rest of the options for gcloud sql instances create here. You need the --enable-bin-log flag for this, and since binary logs require backups, you need to enable backups as well. The --backup-start-time value is in UTC.
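If the master instance already exists, a failover replica can also be created separately. A sketch, assuming your gcloud version still supports the --replica-type flag:

gcloud sql instances create [FAILOVER INSTANCE NAME] --master-instance-name=[MASTER INSTANCE NAME] --replica-type=FAILOVER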
NOW, the main issue you are facing is that you want to modify that template to deploy a master and a failover replica, but the template deploys a FIRST GENERATION instance (see the "replicationType: SYNCHRONOUS" value), while failover replicas are limited to SECOND GENERATION instances.
The API request for what you are trying to accomplish would go something like this:
{
  "name": "master-db",
  "settings": {
    "tier": "db-n1-standard-1",
    "backupConfiguration": {
      "binaryLogEnabled": true,
      "startTime": "00:01",
      "enabled": true
    }
  },
  "failoverReplica": {
    "name": "failover-db"
  }
}
Check the sqladmin API Explorer page to explore the different possible values easily. Afterwards, converting the call to a jinja template should be easy.
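As a starting point, here is a minimal sketch of that request as a Deployment Manager resource. It uses the sqladmin.v1beta4.instance type from the linked samples; the region and databaseVersion values are assumptions you would adjust:

resources:
- name: master-db
  type: sqladmin.v1beta4.instance  # type used by the linked sqladmin samples
  properties:
    region: us-central1            # assumption: pick your region
    databaseVersion: MYSQL_5_7     # a SECOND GENERATION version, required for failover
    settings:
      tier: db-n1-standard-1
      backupConfiguration:
        binaryLogEnabled: true     # binary logs are required for a failover replica
        startTime: "00:01"         # UTC
        enabled: true
    failoverReplica:
      name: failover-db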
I am not able to delete the distributor and distribution database on Azure Managed Instance because it is saying that it is currently in use. I tried transactional replication between an Azure managed instance and an Azure SQL VM. Then I was trying to delete the replication, publisher, subscriber, and distributor. I was successful in dropping the replication, publisher, and subscriber, but my distributor is not getting deleted.
I am trying to do:
exec sp_dropdistributor @no_checks = 1, @ignore_distributor = 1
Then I got the error below:
Msg 21122, Level 16, State 1, Procedure sys.sp_dropdistributiondb,
Line 125 [Batch Start Line 6]
Cannot drop the distribution database 'distribution' because it is
currently in use.
I even tried to disable the distributor using the Disable Publishing and Distribution wizard. The process was unsuccessful.
What steps should I now follow to delete my distributor?
Ankita, can you please file a support request for this issue to be troubleshot? The "New support request" option on the Managed Instance Portal blade will guide you through the process.
I have also encountered this issue. Eventually I was able to drop the database via the Azure Portal.
Go to your SQL managed instance, scroll down in the "Overview" tab, open the distribution database and delete the database via the button on the top.
The process which prevented deletion of the database via sp_dropdistributor will keep on running. It can't be killed via KILL. I haven't gotten any feedback on what to do about that yet.
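If you want to see what is still holding the database before resorting to the Portal, a query along these lines (standard DMVs, nothing Managed Instance specific) lists the sessions using it:

-- find sessions currently using the distribution database
SELECT session_id, login_name, program_name, status
FROM sys.dm_exec_sessions
WHERE database_id = DB_ID('distribution');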
I am using Ubuntu 18.04 on Google compute engine.
I am using the steps as shown in Google cloud documentation. My command is
sudo gcloud logging write "logname" "A simple entry"
The entry gets created, but under the resource type 'global'. However, I want it to be created under the compute engine resource type.
I have tried setting logname as "projects/campuskudos-980/logs/appengine.googleapis.com%2Fvm.syslog" but that didn't work out
I want the logs to be created under the GCE VM Instance resource type, so I can filter them on Stackdriver.
Currently there's no way to specify the resource type when using the gcloud logging write command. As explained in the documentation, for simplicity this command makes several assumptions about the log entry; for instance, it always sets the resource type to global.
Right now, there are two ways to do that:
1- With the gcloud logging write command, use a log name like projects/[PROJECT_ID]/logs/compute.googleapis.com. After that, as explained in the documentation, you can use an advanced filter on Stackdriver Logging to query all entries inside 'compute.googleapis.com'.
For e.g.:
logName: ("projects/[PROJECT_ID]/logs/compute.googleapis.com")
2- Call the API directly, as explained in the documentation, specifying the resource type as gce_instance.
That entry will then appear under the GCE VM Instance resource type on Stackdriver Logging.
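For option 2, here is a minimal sketch of the request body for the Logging API entries.write method; the log ID, instance ID, and zone below are placeholders to replace with your VM's actual values:

{
  "entries": [
    {
      "logName": "projects/[PROJECT_ID]/logs/[LOG_ID]",
      "resource": {
        "type": "gce_instance",
        "labels": {
          "project_id": "[PROJECT_ID]",
          "instance_id": "[INSTANCE_ID]",
          "zone": "[ZONE]"
        }
      },
      "textPayload": "A simple entry"
    }
  ]
}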
MarkLogic setup is as follows
3 hosts
Data configuration
- 1 master forest on each host
- 1 replica of each master forest on a different host
We have a MarkLogic cluster (3 hosts, with failover) deployed on Azure VMs
We are using MarkLogic ContentPump (MLCP) to ingest data into MarkLogic
This is what we have implemented
Installed Java on 1st host
Copied MLCP tool
Ingested data by providing 1st server as host parameter
Now we have a batch of XMLs to update back in MarkLogic.
With failover in place, the 1st host became unavailable for some reason, so when I tried to ingest data through the 2nd host, I started getting an error that the record was ingested on a different host, so the update can't happen from here.
So I would like to know the best practices to follow for the ingestion process.
To enable the system to fail over reliably, you will also need to set up replicas for the Security, App Services, and any other system databases you may be using as part of your architecture.
The reason you are unable to connect to the other hosts is that the Security database is on host 1, so you are unable to authenticate. Once that is configured for failover, you should no longer run into those issues.
The documentation covers that setup here:
https://docs.marklogic.com/guide/cluster/config-both-failover#id_57935
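On the MLCP side, recent MLCP versions accept a comma-separated host list, so the tool can fall back to another node when one is unavailable. A sketch, with placeholder host names and credentials:

# MLCP will try the other hosts if the first one is down
mlcp.sh import -host host1.example.com,host2.example.com,host3.example.com \
    -port 8000 -username [USER] -password [PASSWORD] \
    -input_file_path /data/xml -mode local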
I am trying out a small POC (learning experiment) on docker. I have 3 docker images, one each for a storefront, a search engine and a database engine called, storefront, solr, docmysql respectively. I have tried running them in a docker swarm (on a single node) on ec2 and it works fine.
In the POC, I next needed to move this to AWS ECS using the EC2 launch type on a single non-Amazon-ECS-Optimized AMI. I have installed and started an ecs-agent on this. I have created 3 services, with one task for each of the 3 images configured as containers within the task. The question is about connecting to the database from the storefront.
The storefront has a property file where the database connection is typically defined as
"jdbc:mysql://docmysql/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false".
This worked when I ran it as a docker swarm. Once I moved it to ECS (EC2 launch type), I had to expose the port 3306 from my task/container for the docmysql service. This gave me a service endpoint of docmysql.local, with 'local' being a private namespace. I tried changing the connection string to
"jdbc:mysql://docmysql.local/hybris64?useConfigs=maxPerformance&characterEncoding=utf8&useSSL=false"
in the property file, and it always fails with "Name or service not known". What should my connection string be? When the service is created I see 2 entries in Route 53, one SRV record and an A record. The A record has as its name <task-id>.docmysql.local. If I use this in the database connection string it works, but that is obviously not the right thing to do with the hardcoded task ID. I have read about AWS Cloud Map (servicediscovery) but am still not very clear on how to go about it. I will not be putting any load balancer in front of my DB task in the service; there will always be only one task for the DB.
So what is the best way to generate a connection string that works? And why did I not have issues when I ran it as a docker swarm?
I know I can use RDS instead of running my own database, and I will try that, but for now I need this working as I have started with this. Thanks for any help.
Well, let me raise some points before presenting my own solution to the problem:
Do you need your instance to scale using ECS? If not, migrate it to RDS.
Do you need to deploy it on the EC2 launch type? If not, use Fargate, it is simpler to handle.
Now, I've faced that issue on Fargate, and discovered that depending on your container/task definitions, the containers can run inside the same task for testing purposes, so 127.0.0.1 should be the answer.
For containers in different tasks you need to work with the awsvpc network mode, so you will have this:
Each task that uses the awsvpc network mode receives its own elastic network interface, which is attached to the container instance that hosts it. (FROM AWS)
My suggestion is to create a Lambda Function to discover your network interface dynamically.
Read these for a deeper understanding:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
https://aws.amazon.com/blogs/developer/invoking-aws-lambda-functions-from-java/
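If the service was registered through ECS service discovery (AWS Cloud Map), you can also query the registry directly instead of going through a Lambda. A sketch using the names from the question (the namespace and service names are assumptions):

# returns the registered task(s) with their attributes, e.g. the private IP
aws servicediscovery discover-instances \
    --namespace-name local \
    --service-name docmysql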
How to create CloudFormation template for setting up WordPress on one instance and MySQL on another EC2 instance?
I used this part for MySQL, but it doesn't work; please give a suggestion. Thank you.
"DatabaseServer" : {
"Type": "AWS::EC2::Instance",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"packages" : {
"yum" : {
"mysql" : [],
"mysql-server" : [],
"mysql-devel" : [],
"mysql-libs" : []
}
}
},
"services" : {
"sysvinit" : {
"mysqld" : { "enabled" : "true", "ensureRunning" : "true" }
}
}
}
},
AWS CloudFormation offers a large list of sample AWS CloudFormation templates, which you can start/test easily from the AWS Management Console. There are several MySQL-based solutions available, including a dedicated one for WordPress as well as a generic LAMP stack, in particular:
WordPress is web software you can use to create a beautiful website or blog:
- Single EC2 Instance with local MySQL database
- wordpress-via-cfn-bootstrap.template - one EC2 instance and one RDS instance
- Several others for use with an SCM solution like Chef/Puppet/...
A simple LAMP stack running a PHP "Hello World" application:
- Single EC2 Instance with local MySQL database
- Single EC2 Instance web server with Amazon RDS database instance
- Highly Available Web Server with Multi-AZ Amazon RDS Instance
I recommend exploring the selected template(s) and continuing from there by tailoring them to your needs, e.g. by splitting the single EC2 instance into two (though I'd highly recommend using a solution based on an Amazon RDS for MySQL database instead, which is much more robust and easier to handle for starters, for only slightly increased cost).
To create a MySQL RDS instance with CFN you need to use the AWS::RDS::DBInstance resource with the mysql engine and the desired EngineVersion. There are some other properties you need to initialize that are not specific to a MySQL instance.
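A minimal sketch of such a resource; every value below is a placeholder assumption to tailor, and it presumes a DBPassword parameter defined elsewhere in the template:

"DatabaseInstance" : {
  "Type" : "AWS::RDS::DBInstance",
  "Properties" : {
    "Engine" : "mysql",
    "EngineVersion" : "8.0",
    "DBInstanceClass" : "db.t3.micro",
    "AllocatedStorage" : "20",
    "MasterUsername" : "admin",
    "MasterUserPassword" : { "Ref" : "DBPassword" }
  }
}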
You can use Altostra Designer to create the CFN template very quickly.