MySQL InnoDB Cluster node does not rejoin the cluster automatically - mysql

I have implemented a MySQL InnoDB cluster with 3 nodes. When I stop the MySQL service on one of the slave nodes, the cluster changes that node's status to MISSING.
When I start the MySQL service on the stopped node, the cluster does not rejoin the node automatically. I need to manually rejoin the node to the cluster using
mysql-js> cluster.rejoinInstance('ic@ic-2:3306');
Status of my cluster:
mysql-js> cluster.status();
{
    "clusterName": "MyCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "ic-1:3306",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active",
        "topology": {
            "ic-1:3306": {
                "address": "ic-1:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "ic-2:3306": {
                "address": "ic-2:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "(MISSING)"
            },
            "ic-3:3306": {
                "address": "ic-3:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
Is there any way for the node to rejoin the cluster automatically?
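A sketch of one possible fix, with illustrative user, host, and seed values rather than values taken from the cluster above: persist the Group Replication settings on each node so they survive a restart, which MySQL Shell (5.7 era) can do via dba.configureLocalInstance():

# Run against each node so its Group Replication settings are written to my.cnf
# (the 'ic@ic-2:3306' account/host is an illustrative guess):
mysql-js> dba.configureLocalInstance('ic@ic-2:3306');

# Afterwards the node's my.cnf should contain entries along these lines
# (seed ports are examples only):
group_replication_start_on_boot = ON
group_replication_group_seeds = "ic-1:13306,ic-2:13306,ic-3:13306"

With group_replication_start_on_boot enabled and the seeds persisted, a restarted member should attempt to rejoin the group on its own instead of waiting for cluster.rejoinInstance().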


How to fix failed VMSS deployment with error "unknown network allocation error"

I am trying to deploy a 3-tier architecture to Azure using the Azure PowerShell CLI and a customized ARM template with parameters. I am not having any issues with the PowerShell script or the template's validity.
Within the template, among other things, are two Virtual Machine Scale Sets, one for the front end and one for the back end. The front end is Windows and the back end is Red Hat. The front end is behind an application gateway while the back end is behind a load balancer. What's weird is that the front-end VMSS deploys with no problem and all is well. The back-end VMSS fails every time I try to deploy it, with a vague "Unknown network allocation error" message that I have no idea how to debug (since it provides no specifics, unlike all of my other error messages so far).
I based the ARM template on a template exported from a working model of this architecture in another resource group, modified the parameters, and have spent a while cleaning up issues and errors with Azure's exported template. I have tried deleting everything and starting from scratch, but that doesn't seem to fix this problem. I thought it was possible I had reached the limit of free-subscription processors, so I tried making the front-end VMSS dependent on the back-end VMSS so the back-end VMSS would be created first, but the same issue still happened.
Here is the back-end VMSS part of the template:
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "apiVersion": "2018-10-01",
  "name": "[parameters('virtualMachineScaleSets_JakeAppBESS_name')]",
  "location": "westus2",
  "dependsOn": [
    "[parameters('loadBalancers_JakeAppBESSlb_name')]"
  ],
  "sku": {
    "name": "Standard_B1ls",
    "tier": "Standard",
    "capacity": 1
  },
  "properties": {
    "singlePlacementGroup": true,
    "upgradePolicy": {
      "mode": "Manual"
    },
    "virtualMachineProfile": {
      "osProfile": {
        "computerNamePrefix": "jakeappbe",
        "adminUsername": "Jake",
        "adminPassword": "[parameters('JakeApp_Password')]",
        "linuxConfiguration": {
          "disablePasswordAuthentication": false,
          "provisionVMAgent": true
        },
        "secrets": []
      },
      "storageProfile": {
        "osDisk": {
          "createOption": "FromImage",
          "caching": "ReadWrite",
          "managedDisk": {
            "storageAccountType": "Premium_LRS"
          }
        },
        "imageReference": {
          "publisher": "RedHat",
          "offer": "RHEL",
          "sku": "7.4",
          "version": "latest"
        }
      },
      "networkProfile": {
        "networkInterfaceConfigurations": [
          {
            "name": "[concat(parameters('virtualMachineScaleSets_JakeAppBESS_name'), 'Nic')]",
            "properties": {
              "primary": true,
              "enableAcceleratedNetworking": false,
              "dnsSettings": {
                "dnsServers": []
              },
              "enableIPForwarding": false,
              "ipConfigurations": [
                {
                  "name": "[concat(parameters('virtualMachineScaleSets_JakeAppBESS_name'), 'IpConfig')]",
                  "properties": {
                    "subnet": {
                      "id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworks_JakeAppVnet_name'), '/subnets/BEsubnet')]"
                    },
                    "privateIPAddressVersion": "IPv4",
                    "loadBalancerBackendAddressPools": [
                      {
                        "id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/loadBalancers/', parameters('loadBalancers_JakeAppBESSlb_name'), '/backendAddressPools/bepool')]"
                      }
                    ],
                    "loadBalancerInboundNatPools": [
                      {
                        "id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/loadBalancers/', parameters('loadBalancers_JakeAppBESSlb_name'), '/inboundNatPools/natpool')]"
                      }
                    ]
                  }
                }
              ]
            }
          }
        ]
      },
      "priority": "Regular"
    },
    "overprovision": true
  }
},
For reference, here's the front-end VMSS's part of the template so you can compare and see that there aren't many differences:
{
  "type": "Microsoft.Compute/virtualMachineScaleSets",
  "apiVersion": "2018-10-01",
  "name": "[parameters('virtualMachineScaleSets_JakeAppFESS_name')]",
  "location": "westus2",
  "dependsOn": [
    "[parameters('applicationGateways_JakeAppFE_AG_name')]"
  ],
  "sku": {
    "name": "Standard_B1ls",
    "tier": "Standard",
    "capacity": 1
  },
  "properties": {
    "singlePlacementGroup": true,
    "upgradePolicy": {
      "mode": "Manual"
    },
    "virtualMachineProfile": {
      "osProfile": {
        "computerNamePrefix": "jakeappfe",
        "adminUsername": "Jake",
        "adminPassword": "[parameters('JakeApp_Password')]",
        "windowsConfiguration": {
          "provisionVMAgent": true,
          "enableAutomaticUpdates": true
        },
        "secrets": []
      },
      "storageProfile": {
        "osDisk": {
          "createOption": "FromImage",
          "caching": "ReadWrite",
          "managedDisk": {
            "storageAccountType": "Premium_LRS"
          }
        },
        "imageReference": {
          "publisher": "MicrosoftWindowsServer",
          "offer": "WindowsServer",
          "sku": "2016-Datacenter",
          "version": "latest"
        }
      },
      "networkProfile": {
        "networkInterfaceConfigurations": [
          {
            "name": "[concat(parameters('virtualMachineScaleSets_JakeAppFESS_name'), 'Nic')]",
            "properties": {
              "primary": true,
              "enableAcceleratedNetworking": false,
              "dnsSettings": {
                "dnsServers": []
              },
              "enableIPForwarding": false,
              "ipConfigurations": [
                {
                  "name": "[concat(parameters('virtualMachineScaleSets_JakeAppFESS_name'), 'IpConfig')]",
                  "properties": {
                    "subnet": {
                      "id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/virtualNetworks/', parameters('virtualNetworks_JakeAppVnet_name'), '/subnets/FEsubnet')]"
                    },
                    "privateIPAddressVersion": "IPv4",
                    "applicationGatewayBackendAddressPools": [
                      {
                        "id": "[concat('/subscriptions/', parameters('subscription_id'), '/resourceGroups/', parameters('resource_Group'), '/providers/Microsoft.Network/applicationGateways/', parameters('applicationGateways_JakeAppFE_AG_name'), '/backendAddressPools/appGatewayBackendPool')]"
                      }
                    ]
                  }
                }
              ]
            }
          }
        ]
      },
      "priority": "Regular"
    },
    "overprovision": true
  }
},
I expected them both to behave similarly. Granted, the back end is RHEL while the front end is Windows, and the front end is behind an application gateway while the back end is behind a load balancer, but this setup works perfectly fine in my other resource group, which was deployed through the portal instead of through ARM. Yet every time I try to deploy this I get this error:
New-AzureRmResourceGroupDeployment : 1:30:56 AM - Resource Microsoft.Compute/virtualMachineScaleSets 'ProdBESS' failed with message '{
  "status": "Failed",
  "error": {
    "code": "ResourceDeploymentFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "NetworkingInternalOperationError",
        "message": "Unknown network allocation error."
      }
    ]
  }
}'
Okay, I finally figured out what the issue was, so if anyone searching finds this thread in the future with the same error:
Apparently the part of the template dealing with the load balancer for the VMSS (which was exported from the Azure portal) had two conflicting inbound NAT pools (overlapping port ranges). Once I deleted the part of the template creating the conflicting extra NAT pool, my VMSS deployed properly without issue.
I have no idea why the Azure portal exported a template with an extra NAT pool that had never existed (there was only one on the original load balancer I exported the template from).
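For illustration, a conflicting pair of inbound NAT pools would look something like the sketch below; the names and port ranges are hypothetical, and required properties such as the frontend IP configuration reference are trimmed for brevity. The two pools collide because ports 50100-50119 fall inside both frontend port ranges:

"inboundNatPools": [
  {
    "name": "natpool",
    "properties": {
      "protocol": "Tcp",
      "frontendPortRangeStart": 50000,
      "frontendPortRangeEnd": 50119,
      "backendPort": 22
    }
  },
  {
    "name": "natpool2",
    "properties": {
      "protocol": "Tcp",
      "frontendPortRangeStart": 50100,
      "frontendPortRangeEnd": 50199,
      "backendPort": 22
    }
  }
]

Deleting one pool, or shifting its range so the two no longer overlap, resolves the allocation conflict.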

How can I configure the password and username of a ClearDB MySQL database in an ARM template

Is there a way to set the password and username of a ClearDB MySQL database in an ARM template?
Here's the resource:
{
  "type": "SuccessBricks.ClearDB/databases",
  "name": "[parameters('databases_cmsdbbasic_name')]",
  "apiVersion": "2014-04-01",
  "location": "East US 2",
  "plan": {
    "name": "Jupiter"
  },
  "tags": {},
  "scale": null,
  "properties": {
    "hostname": "us-cdbr-azure-east2-d.cloudapp.net",
    "name": "[parameters('databases_cmsdbbasic_name')]",
    "id": "8C7C711AC0C0A028CE3E9D45814B868A",
    "size_mb": "0",
    "max_size_mb": "10240",
    "status": {
      "name": "Healthy",
      "message": "Database is healthy and ready for use.",
      "level": "Info"
    }
  },
  "dependsOn": []
},
Based on my knowledge, it is not possible.
The publisher, ClearDB, does not allow users to set the username and password in the template. Currently, the username is randomly generated by ClearDB, and changing it is not supported. You need to reset the password in the ClearDB portal.

Is it possible to create a cluster in EMR by giving all the configurations from a JSON file

I want to automate the cluster creation task in EMR. I have a JSON file which contains the configuration that needs to be applied to the new cluster, and I want to write a shell script that automates this task for me.
Is it possible to create a cluster in EMR by giving all the configurations from a JSON file?
For example, I have this file:
{
  "Cluster": {
    "Ec2InstanceAttributes": {
      "EmrManagedMasterSecurityGroup": "sg-00b10b71",
      "RequestedEc2AvailabilityZones": [],
      "AdditionalSlaveSecurityGroups": [],
      "AdditionalMasterSecurityGroups": [],
      "RequestedEc2SubnetIds": [
        "subnet-02291b3e"
      ],
      "Ec2SubnetId": "subnet-02291b3e",
      "IamInstanceProfile": "EMR_EC2_DefaultRole",
      "Ec2KeyName": "perf_key_pair",
      "Ec2AvailabilityZone": "us-east-1e",
      "EmrManagedSlaveSecurityGroup": "sg-f2b30983"
    },
    "Name": "NitinJ-Perf",
    "ServiceRole": "EMR_DefaultRole",
    "Tags": [
      {
        "Value": "Perf-Nitink",
        "Key": "Qubole"
      }
    ],
    "Applications": [
      {
        "Version": "3.7.2",
        "Name": "Ganglia"
      },
      {
        "Version": "2.7.3",
        "Name": "Hadoop"
      },
      {
        "Version": "2.1.1",
        "Name": "Hive"
      },
      {
        "Version": "0.16.0",
        "Name": "Pig"
      },
      {
        "Version": "0.8.4",
        "Name": "Tez"
      }
    ],
    "MasterPublicDnsName": "ec2-34-229-254-217.compute-1.amazonaws.com",
    "ScaleDownBehavior": "TERMINATE_AT_INSTANCE_HOUR",
    "InstanceGroups": [
      {
        "RequestedInstanceCount": 4,
        "Status": {
          "Timeline": {
            "ReadyDateTime": 1499150835.979,
            "CreationDateTime": 1499150533.99
          },
          "State": "RUNNING",
          "StateChangeReason": {
            "Message": ""
          }
        },
        "Name": "Core Instance Group",
        "InstanceGroupType": "CORE",
        "EbsBlockDevices": [],
        "ShrinkPolicy": {},
        "Id": "ig-34P3CVF8ZL5CW",
        "Configurations": [],
        "InstanceType": "r3.4xlarge",
        "Market": "ON_DEMAND",
        "RunningInstanceCount": 4
      },
      {
        "RequestedInstanceCount": 1,
        "Status": {
          "Timeline": {
            "ReadyDateTime": 1499150804.591,
            "CreationDateTime": 1499150533.99
          },
          "State": "RUNNING",
          "StateChangeReason": {
            "Message": ""
          }
        },
        "Name": "Master Instance Group",
        "InstanceGroupType": "MASTER",
        "EbsBlockDevices": [],
        "ShrinkPolicy": {},
        "Id": "ig-3V7EHQ36187PY",
        "Configurations": [],
        "InstanceType": "r3.4xlarge",
        "Market": "ON_DEMAND",
        "RunningInstanceCount": 1
      }
    ],
    "Configurations": [
      {
        "Properties": {
          "hive.vectorized.execution.enabled": "true"
        },
        "Classification": "hive-site"
      }
    ]
  }
}
Can I create a cluster on EMR by using some command like:
aws emr create-cluster --cli-input-json file://$(pwd)/emr_cluster_up.json
There is no such option in the AWS CLI, as per the AWS CLI documentation. But if you want to automate EMR cluster creation using a JSON file, you can use CloudFormation to automate the cluster creation.
Getting Started with AWS CloudFormation
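For illustration, a minimal CloudFormation template along these lines could stand in for the JSON above. This is a sketch only, not a tested template; the release label emr-5.7.0 and the trimmed-down set of properties are assumptions rather than values from the question:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "PerfCluster": {
      "Type": "AWS::EMR::Cluster",
      "Properties": {
        "Name": "NitinJ-Perf",
        "ReleaseLabel": "emr-5.7.0",
        "JobFlowRole": "EMR_EC2_DefaultRole",
        "ServiceRole": "EMR_DefaultRole",
        "Applications": [
          { "Name": "Ganglia" },
          { "Name": "Hadoop" },
          { "Name": "Hive" },
          { "Name": "Pig" },
          { "Name": "Tez" }
        ],
        "Instances": {
          "Ec2SubnetId": "subnet-02291b3e",
          "Ec2KeyName": "perf_key_pair",
          "MasterInstanceGroup": {
            "InstanceCount": 1,
            "InstanceType": "r3.4xlarge",
            "Market": "ON_DEMAND"
          },
          "CoreInstanceGroup": {
            "InstanceCount": 4,
            "InstanceType": "r3.4xlarge",
            "Market": "ON_DEMAND"
          }
        },
        "Configurations": [
          {
            "Classification": "hive-site",
            "ConfigurationProperties": {
              "hive.vectorized.execution.enabled": "true"
            }
          }
        ],
        "Tags": [
          { "Key": "Qubole", "Value": "Perf-Nitink" }
        ]
      }
    }
  }
}

A shell script can then bring the cluster up with something like:
aws cloudformation create-stack --stack-name emr-perf --template-body file://emr_cluster.json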

Is it possible to execute system.run[] with Zabbix 3.0 JSON-RPC API?

I'm trying to remotely stop/start services using systemctl inside Zabbix's system.run[] request/item, but it doesn't seem to work.
I'm using Zabbix 3.0 JSON-RPC API and my JSON looks like this:
{
    "jsonrpc": "2.0",
    "method": "item.get",
    "params": {
        "filter": {
            "host": "host-name",
            "key_": "system.run[sudo systemctl stop nginx.service]"
        }
    },
    "id": 1,
    "auth": "my-token"
}
Result:
{"jsonrpc":"2.0","result":[],"id":1}
But I'm not too sure about the validity of this request, because all the information I've seen on system.run[] so far was related to zabbix_get. Is it even possible to execute system.run[] this way? What am I doing wrong?
This is obviously just filtering items, but I have no idea how to replicate what zabbix_get does using the Zabbix JSON-RPC API. I could not find any information about this.
This works well for gathering data, though:
{
    "jsonrpc": "2.0",
    "method": "item.get",
    "params": {
        "filter": {
            "host": "host-name",
            "key_": "vm.memory.size[used]"
        }
    },
    "id": 1,
    "auth": "my-token"
}
Result:
{
    "jsonrpc": "2.0",
    "result": [
        {
            "itemid": "455",
            "type": "0",
            "snmp_community": "",
            "snmp_oid": "",
            "hostid": "12241",
            "name": "Used memory",
            "key_": "vm.memory.size[used]",
            "delay": "60",
            "history": "90",
            "trends": "365",
            "status": "0",
            "value_type": "3",
            "trapper_hosts": "",
            "units": "B",
            "multiplier": "0",
            "delta": "0",
            "snmpv3_securityname": "",
            "snmpv3_securitylevel": "0",
            "snmpv3_authpassphrase": "",
            "snmpv3_privpassphrase": "",
            "formula": "1",
            "error": "",
            "lastlogsize": "0",
            "logtimefmt": "",
            "templateid": "106",
            "valuemapid": "0",
            "delay_flex": "",
            "params": "",
            "ipmi_sensor": "",
            "data_type": "0",
            "authtype": "0",
            "username": "",
            "password": "",
            "publickey": "",
            "privatekey": "",
            "mtime": "0",
            "flags": "0",
            "interfaceid": "2",
            "port": "",
            "description": "",
            "inventory_link": "0",
            "lifetime": "30",
            "snmpv3_authprotocol": "0",
            "snmpv3_privprotocol": "0",
            "state": "0",
            "snmpv3_contextname": "",
            "evaltype": "0",
            "lastclock": "1466142275",
            "lastns": "142277413",
            "lastvalue": "3971121455",
            "prevvalue": "3971001230"
        }
    ],
    "id": 1
}
If someone has managed to execute system.run[] using the JSON-RPC API, please share your solution.
Thank you.
No, there seem to be a few things wrong. First, the Zabbix API is JSON-RPC (not REST). Second, the item.get method is primarily used to get item configuration from the server.
To request item values from an agent (and this is how remote commands are implemented with the system.run item key), you can use the already mentioned zabbix_get:
$ zabbix_get -s host-name -k "system.run[sudo systemctl stop nginx.service]"
Note that when you say "This works well for gathering data", you are not telling Zabbix to collect data at that point - it just returns you some data that is already in the database. In the case of remote commands, the best you could get would be a "1" indicating that the last time this remote command was sent to the agent, it was sent successfully.
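As an aside, if the goal is to trigger a command through the JSON-RPC API itself rather than through zabbix_get, the API does support global scripts: define the command as a script in the frontend, then run it with the script.execute method. A sketch, where the scriptid and hostid values are placeholders:

{
    "jsonrpc": "2.0",
    "method": "script.execute",
    "params": {
        "scriptid": "1",
        "hostid": "12241"
    },
    "auth": "my-token",
    "id": 1
}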

How to convert a docker run command into a JSON file?

I was wondering if anyone knows how to create a JSON file that would be the same as running:
docker run -p 80:80 -p 443:443 starblade/pydio-v4
I'm trying something very ambitious: I want to start my Docker container in a Kubernetes-Mesos cluster, but I can't seem to get the ports correct in the JSON file; alas, I am still very new to this.
Thanks,
TT
Here are my JSON files:
{
  "id": "frontend-controller",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {"name": "frontend"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "frontend-controller",
          "containers": [{
            "name": "pydio-v4",
            "image": "starblade/pydio-v4",
            "ports": [{"containerPort": 10001, "protocol": "TCP"}]
          }]
        }
      },
      "labels": {"name": "frontend"}
    }
  },
  "labels": {"name": "frontend"}
}
{
  "id": "frontend",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 80,
  "port": 443,
  "targetPort": 10001,
  "selector": {
    "name": "frontend"
  },
  "publicIPs": [
    "${servicehost}"
  ]
}
Docker container Env info pulled from docker inspect command:
"Env": [
"FRONTEND_SERVICE_HOST=10.10.10.14",
"FRONTEND_SERVICE_PORT=443",
"FRONTEND_PORT=tcp://10.10.10.14:443",
"FRONTEND_PORT_443_TCP=tcp://10.10.10.14:443",
"FRONTEND_PORT_443_TCP_PROTO=tcp",
"FRONTEND_PORT_443_TCP_PORT=443",
"FRONTEND_PORT_443_TCP_ADDR=10.10.10.14",
"KUBERNETES_SERVICE_HOST=10.10.10.2",
"KUBERNETES_SERVICE_PORT=443",
"KUBERNETES_PORT=tcp://10.10.10.2:443",
"KUBERNETES_PORT_443_TCP=tcp://10.10.10.2:443",
"KUBERNETES_PORT_443_TCP_PROTO=tcp",
"KUBERNETES_PORT_443_TCP_PORT=443",
"KUBERNETES_PORT_443_TCP_ADDR=10.10.10.2",
"KUBERNETES_RO_SERVICE_HOST=10.10.10.1",
"KUBERNETES_RO_SERVICE_PORT=80",
"KUBERNETES_RO_PORT=tcp://10.10.10.1:80",
"KUBERNETES_RO_PORT_80_TCP=tcp://10.10.10.1:80",
"KUBERNETES_RO_PORT_80_TCP_PROTO=tcp",
"KUBERNETES_RO_PORT_80_TCP_PORT=80",
"KUBERNETES_RO_PORT_80_TCP_ADDR=10.10.10.1",
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"PYDIO_VERSION=6.0.5"
],
"ExposedPorts": {
"443/tcp": {},
"80/tcp": {}
},
`
The pod and service both start and run OK.
However, I am unable to access the running Pydio site on any of the master, minion, or frontend IPs.
Note:
I am running a modified version of this Docker container:
https://registry.hub.docker.com/u/kdelfour/pydio-docker/
My container has been tested and it runs as expected.
You should see the login screen once it is running.
Please let me know if I can provide any other information.
Thanks again.
So, I finally got this to work using the following .json files:
frontend-service.json
{
  "id": "frontend",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 443,
  "selector": {
    "name": "frontend"
  },
  "publicIPs": [
    "${servicehost}"
  ]
}
frontend-controller.json
{
  "id": "frontend-controller",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "frontend"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "frontend-controller",
          "containers": [{
            "name": "pydio-v4",
            "image": "starblade/pydio-v4",
            "ports": [{"containerPort": 443, "hostPort": 31000}]
          }]
        }
      },
      "labels": {"name": "frontend"}
    }
  },
  "labels": {"name": "frontend"}
}
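With both files saved, the service and controller can be created along these lines (the exact client invocation is an assumption; older kubecfg-based setups differ):

$ kubectl create -f frontend-service.json
$ kubectl create -f frontend-controller.json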
I now have Pydio with SSL running in a Mesos-Kubernetes environment on GCE.
Going to run some tests using more hostPorts to see if I can get more than one replica running on one host. At this point I can resize up to 3.
Hope this helps someone.
Thanks,
TT