Deploy dockerized mysql container to edge device - mysql

I have built a Docker application with docker-compose that includes MySQL. I pushed the containers to Azure and want to deploy them to an edge device using Azure IoT Edge. I deployed the application container and the MySQL container as edge modules; after deployment the application runs on the edge device, but MySQL does not.
Could it be because I am using root as the user, which then refuses connections from a different client?
Here are the container create options I have set for the MySQL module:
{
  "Env": [
    "ACCEPT_EULA=Y",
    "MSSQL_ROOT_PASSWORD=root"
  ],
  "HostConfig": {
    "PortBindings": {
      "13306/tcp": [
        {
          "HostPort": "13306"
        }
      ],
      "32000/tcp": [
        {
          "HostPort": "32000"
        }
      ]
    },
    "Mounts": [
      {
        "Type": "volume",
        "Source": "sqlVolume",
        "Target": "/var/lib/mysql"
      }
    ]
  }
}
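For comparison, ACCEPT_EULA and MSSQL_ROOT_PASSWORD are environment variables used by the Microsoft SQL Server image; the official mysql image documents MYSQL_* variables and listens on port 3306 inside the container. A minimal create-options sketch along those lines (the password and database name below are placeholders, not values from the question) might look like:
{
  "Env": [
    "MYSQL_ROOT_PASSWORD=<root-password>",
    "MYSQL_DATABASE=<database-name>"
  ],
  "HostConfig": {
    "PortBindings": {
      "3306/tcp": [
        {
          "HostPort": "13306"
        }
      ]
    },
    "Mounts": [
      {
        "Type": "volume",
        "Source": "sqlVolume",
        "Target": "/var/lib/mysql"
      }
    ]
  }
}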

Related

Failed to start minikube: Error while starting minikube. Error: X Exiting due to MK_USAGE: Container runtime must be set to "containerd" for rootless

I'm getting this error, and I believe the way to solve it is to run minikube start --container-runtime=containerd, but the extension appears to run a plain minikube start. So how am I supposed to add the flag?
Here's the launch.json file:
{
  "configurations": [
    {
      "name": "Cloud Run: Run/Debug Locally",
      "type": "cloudcode.cloudrun",
      "request": "launch",
      "build": {
        "docker": {
          "path": "Dockerfile"
        }
      },
      "image": "dai",
      "service": {
        "name": "dai",
        "containerPort": 8080,
        "resources": {
          "limits": {
            "memory": "256Mi"
          }
        }
      },
      "target": {
        "minikube": {}
      },
      "watch": true
    }
  ]
}
Cloud Code for VS Code doesn't support such settings at the moment. But you can configure minikube to apply these settings with minikube config set.
The Cloud Run emulation creates a separate minikube profile called cloud-run-dev-internal. So you should be able to run the following:
minikube config set --profile cloud-run-dev-internal container-runtime containerd
You then have to delete that minikube profile so the setting takes effect on your next launch:
minikube delete --profile cloud-run-dev-internal
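If you want to confirm the setting before relaunching, minikube's global --profile flag should also work with config view (this is general minikube behaviour, not a Cloud Code feature):
minikube config view --profile cloud-run-dev-internal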

Connection to RDS MySql from ECS Fargate wordpress container times out

I have a container running (a WordPress container, to be more specific) that tries to connect to a MySQL RDS instance.
Parameters for the Fargate ECS service container:
{
  "executionRoleArn": "ignore-this",
  "containerDefinitions": [
    {
      "name": "MyCoolContainer",
      "image": "wordpress:latest",
      "essential": true,
      "environment": [
        {"name": "WORDPRESS_DB_HOST", "value": "host:3306"},
        {"name": "WORDPRESS_DB_USER", "value": "user"},
        {"name": "WORDPRESS_DB_PASSWORD", "value": "password"},
        {"name": "WORDPRESS_DB_NAME", "value": "name"}
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/aws/ecs/fargate/prefix",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "prefix"
        }
      }
    }
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "family": "wordpress"
}
Also, in the security groups I have opened ports 22, 80, 443, and 3306 to any IP address.
But the container in ECS still fails to start, with the reason:
[17-Sep-2019 08:42:24 UTC] PHP Warning: mysqli::__construct():
(HY000/2002): Connection timed out in Standard input code on line 22
MySQL Connection Error: (2002) Connection timed out
MySQL Connection Error: (2002) Connection timed out
However, I can confirm that the RDS instance is accessible when connecting from my local machine with:
mysql -uuser -ppassword -hhost -P3306
I can also confirm that the (WordPress) container runs successfully on my local machine and connects to the remote RDS database with no timeouts.
EDIT
This is how my environment looks in the ECS UI panel:
(I have tried copy-pasting these values into my local mysql command and it connected successfully.)
I suspect there is something wrong with the AWS service configuration. Any ideas?
Thanks to Adiii and some other articles found on the internet, I have a complete solution to this problem.
You simply need to attach a NAT Gateway to the subnet in which you are launching your ECS Fargate task.
For some reason, simply launching in a public subnet with an Internet Gateway does not solve the problem (even though logically it seems it should).
TL;DR:
NAT Gateway is needed. AWS is f****d up.
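If you are wiring this up from the CLI, a rough sketch of the NAT Gateway setup looks like the following (the subnet, Elastic IP allocation, and route table IDs are placeholders for your own VPC resources):
# Create a NAT gateway in a *public* subnet, backed by an Elastic IP allocation
aws ec2 create-nat-gateway --subnet-id subnet-PUBLIC --allocation-id eipalloc-EXAMPLE
# Route the private subnet (where the Fargate task runs) out through the NAT gateway
aws ec2 create-route --route-table-id rtb-PRIVATE --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-EXAMPLE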

Elastic Beanstalk does not update image from ECR automatically

I have
"Update": "true" in Dockerrun.aws.json,
which should automatically update the image and container on the EC2 instance when I update the image in ECR.
But when I SSH into the instance after pushing a new image, I still see that the container and image have not been updated.
[root@ip-10-20-60-125 ~]# docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED      STATUS      PORTS      NAMES
c8e3bab8da13   258e7bc272bd   "./graphhopper.sh we…"   8 days ago   Up 8 days   8989/tcp   tender_mayer
[root@ip-10-20-60-125 ~]# docker images
REPOSITORY                  TAG      IMAGE ID       CREATED       SIZE
aws_beanstalk/current-app   latest   258e7bc272bd   8 days ago    813MB
openjdk                     8-jdk    b8d3f94869bb   6 weeks ago   625MB
Dockerrun.aws.json has this
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "xxxxx",
    "Key": "xxxxx"
  },
  "Image": {
    "Name": "213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8989"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/data",
      "ContainerDirectory": "/data"
    }
  ],
  "Logging": "/var/log/eb",
  "Command": "xxxxx"
}
Is there a setting somewhere other than "Update": "true"?
If I do an eb deploy, it will pull and update. But "Update": "true" should pull and update automatically when I update the image, which is not happening.
From the AWS documentation and the thread "AWS Beanstalk docker image automatic update doesn't work", it seems that "Update": "true" only does a docker pull before docker run; it does not update a running container when a new image is pushed.
From my research so far, there seems to be no way to automate this process at the moment.
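Given that, the practical workaround the question already uses is to trigger the deploy yourself after pushing the new image, for example from a CI step (sketch below assumes the EB CLI is configured for this environment):
# Push the rebuilt image to ECR, then force Elastic Beanstalk to redeploy it
docker push 213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest
eb deploy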

Cluster communication and firewalls in Google Container Engine

I'm trying to set up the following environment on Google Cloud and have 3 major problems with it:
Database Cluster
- 3 nodes
- one port open to the world, a few ports open to the compute cluster
Compute Cluster
- 5 nodes
- communicates with the database cluster
- two ports open to the world
- runs Docker containers
a) The database cluster runs fine and I have the configuration port open to the world, but how do I limit the other ports to only the compute cluster?
I managed to get the first Pod and Replication Controller running on the compute cluster and created a service to expose the container to the world:
controller:
{
  "id": "api-controller",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 2,
    "replicaSelector": {
      "name": "api"
    },
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "apiController",
          "containers": [{
            "name": "api",
            "image": "gcr.io/my/api",
            "ports": [{
              "name": "api",
              "containerPort": 3000
            }]
          }]
        }
      },
      "labels": {
        "name": "api"
      }
    }
  }
}
service:
{
  "id": "api-service",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "selector": {
    "name": "api"
  },
  "containerPort": "api",
  "protocol": "TCP",
  "port": 80,
  "selector": { "name": "api" },
  "createExternalLoadBalancer": true
}
b) The container exposes port 3000 and the service exposes port 80. Where is the connection between the two?
The firewall works with labels. I want 4-5 different pods running in my compute cluster, with 2 of them having ports open to the world. There can be 2 or more containers running on the same instance. The labels, however, are specific to the nodes, not the containers.
c) Do I expose all nodes with the same firewall configuration? I can't assign labels to containers, so I'm not sure how to expose the api service, for example.
I'll try to answer all of your questions as best I can.
First off, you will want to upgrade to v1 of the Kubernetes API, because v1beta1 and v1beta3 will no longer be available after Aug. 5th:
https://cloud.google.com/container-engine/docs/v1-upgrade
Also, use YAML. It's so much less verbose ;)
--
Now on to the questions you asked:
a) I'm not sure I completely understand what you are asking here, but it sounds like running the services in the same cluster (with resource limits) would be much easier than dealing with cross-cluster networking.
b) You need to specify a targetPort so that the service knows which port to use on the container. This should match port 3000 from your replication controller. See the docs for more info.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "api-service",
    "labels": {
      "name": "api-service"
    }
  },
  "spec": {
    "selector": {
      "name": "api"
    },
    "ports": [{
      "port": 80,
      "targetPort": 3000
    }],
    "type": "LoadBalancer"
  }
}
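To try it out (assuming the manifest is saved as api-service.json and kubectl is pointed at your cluster), creating the service and then listing it should eventually show the external load-balancer address:
kubectl create -f api-service.json
kubectl get services api-service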
c) Yes. In Kubernetes, kube-proxy accepts traffic on any node and routes it to the appropriate node or local pod. You don't need to map the load balancer to, or write firewall rules for, the specific nodes that happen to be running your pods (that set can actually change if you do a rolling update!). kube-proxy will route traffic to the right place even if your service is not running on that node.

How do I mount a persistent disk to a container volume?

I'm playing around with Google's managed VM feature and finding you can fairly easily create some interesting setups. However, I have yet to figure out whether it's possible to use persistent disks to mount a volume on the container, and it seems not having this feature limits the usefulness of managed VMs for stateful containers such as databases.
So the question is: how can I mount the persistent disk that Google creates for my Compute engine instance, to a container volume?
Attaching a persistent disk to a Google Compute Engine instance
Follow the official persistent-disk guide (a command-line sketch of these steps follows the list):
- Create a disk
- Attach it to an instance, either during instance creation or while the instance is running
- Use the tool /usr/share/google/safe_format_and_mount to format and mount the device file /dev/disk/by-id/google-...
- As noted by Faizan, use docker run -v /mnt/persistent_disk:/container/target/path to include the volume in the docker container
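Roughly, those steps map to the following commands (the disk name, zone, instance name, and mount point are placeholders; the safe_format_and_mount arguments follow the old GCE docs, so double-check them against the guide):
# Create the persistent disk and attach it to the instance
gcloud compute disks create my-data-disk --size 200GB --zone us-central1-a
gcloud compute instances attach-disk my-instance --disk my-data-disk --zone us-central1-a
# On the instance: format (if new) and mount the disk
sudo mkdir -p /mnt/persistent_disk
sudo /usr/share/google/safe_format_and_mount -m "mkfs.ext4 -F" /dev/disk/by-id/google-my-data-disk /mnt/persistent_disk
# Run the container with the mounted directory as a data volume
docker run -v /mnt/persistent_disk:/container/target/path <docker_image>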
Referencing a persistent disk in Google Container Engine
In this method, you specify the volume declaratively (after initializing it as described above) in the Replication Controller or Pod declaration. The following is a minimal excerpt of a replication controller JSON declaration. Note that the volume has to be declared read-only here: a persistent disk can only be attached read-write to a single instance at a time, so with several replicas it must be mounted read-only.
{
  "id": "<id>",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 3,
    "replicaSelector": {
      "name": "<id>"
    },
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "<id>",
          "containers": [
            {
              "name": "<id>",
              "image": "<docker_image>",
              "volumeMounts": [
                {
                  "name": "persistent_disk",
                  "mountPath": "/pd",
                  "readOnly": true
                }
              ],
              ...
            }
          ],
          "volumes": [
            {
              "name": "persistent_disk",
              "source": {
                "persistentDisk": {
                  "pdName": "<persistent_disk>",
                  "fsType": "ext4",
                  "readOnly": true
                }
              }
            }
          ]
        }
      },
      "labels": {
        "name": "<id>"
      }
    }
  },
  "labels": {
    "name": "<id>"
  }
}
If your persistent disk is already attached and mounted on the instance, I believe you can use it as a data volume for your docker container. The Docker documentation explains how to manage data in containers this way.