I have
"Update": "true" in Dockerrun.aws.json
which should automatically update the image and container on the EC2 instance when I update the image in ECR.
But when I SSH into the instance after pushing a new image, the container and image are still not updated.
[root@ip-10-20-60-125 ~]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8e3bab8da13 258e7bc272bd "./graphhopper.sh we…" 8 days ago Up 8 days 8989/tcp tender_mayer
[root@ip-10-20-60-125 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
aws_beanstalk/current-app latest 258e7bc272bd 8 days ago 813MB
openjdk 8-jdk b8d3f94869bb 6 weeks ago 625MB
Dockerrun.aws.json has this
{
"AWSEBDockerrunVersion": "1",
"Authentication": {
"Bucket": "xxxxx",
"Key": "xxxxx"
},
"Image": {
"Name": "213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest",
"Update": "true"
},
"Ports": [
{
"ContainerPort": "8989"
}
],
"Volumes": [
{
"HostDirectory": "/data",
"ContainerDirectory": "/data"
}
],
"Logging": "/var/log/eb",
"Command": "xxxxx"
}
Is there a setting somewhere other than "Update": "true"?
If I do an eb deploy, it will pull and update. But "Update": "true" should pull and update automatically when I update the image, which is not happening.
From this AWS documentation and the thread "AWS Beanstalk docker image automatic update doesn't work", it seems that "Update": "true" only makes Elastic Beanstalk do a docker pull before docker run during a deployment; it does not update the running container when a new image is pushed.
From my current research, there seems to be no way to fully automate this at the moment; you still have to trigger a deployment yourself, as sketched below.
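One workaround, given that eb deploy does pull the new image, is to trigger a deployment right after pushing to ECR, for example as the last step of a CI job. A minimal sketch, assuming the EB CLI is configured for the application; the environment name is a placeholder you would replace:
docker push 213074117100.dkr.ecr.us-east-1.amazonaws.com/xxxxx:latest
# then trigger an Elastic Beanstalk deployment so the instance pulls the new image
# (my-eb-environment is a placeholder for your actual environment name)
eb deploy my-eb-environment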
I'm getting an error, and I believe the way to solve it is to run minikube start --container-runtime=containerd,
but the extension seems to run plain minikube start. So how am I supposed to add the flag?
Here's the launch.json file
{
"configurations": [
{
"name": "Cloud Run: Run/Debug Locally",
"type": "cloudcode.cloudrun",
"request": "launch",
"build": {
"docker": {
"path": "Dockerfile"
}
},
"image": "dai",
"service": {
"name": "dai",
"containerPort": 8080,
"resources": {
"limits": {
"memory": "256Mi"
}
}
},
"target": {
"minikube": {}
},
"watch": true
}
]
}
Cloud Code for VS Code doesn't support such settings at the moment. But you can configure minikube to apply these settings with minikube config set.
The Cloud Run emulation creates a separate minikube profile called cloud-run-dev-internal. So you should be able to run the following:
minikube config set --profile cloud-run-dev-internal container-runtime containerd
You have to delete that minikube profile to cause the setting to take effect for your next launch:
minikube delete --profile cloud-run-dev-internal
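As a quick sanity check after the next launch (a sketch; minikube profile list is a standard minikube command), you can confirm the profile gets recreated:
# after deleting the profile and launching "Cloud Run: Run/Debug Locally" again,
# the cloud-run-dev-internal profile should show up here, created with the new runtime setting
minikube profile list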
I am trying to save the results of an Azure CLI (az pipelines list) command to a variable, but the shell/script hangs.
If I run the command on its own, it works:
PS C:\> az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json
This command is in preview. It may be changed/removed in a future release.
[
{
"authoredBy": {
# ...
},
"createdDate": "2019-12-07T00:08:03.620000+00:00",
"draftOf": null,
"drafts": [],
"id": 541,
"latestBuild": null,
"latestCompletedBuild": null,
"metrics": null,
"name": "PIPELINE_NAME",
"path": "\\",
"project": {
"abbreviation": null,
"defaultTeamImageUrl": null,
"description": null,
"id": "99a1b81a-ca3b-418a-86cf-0965eaba6dab",
"lastUpdateTime": "2019-12-13T20:54:20.28Z",
"name": "PROJECT_NAME",
"revision": 462,
"state": "wellFormed",
"url": "https://dev.azure.com/ORGANIZATION_NAME/_apis/projects/99a1b81a-ca3b-418a-86cf-0965eaba6dab",
"visibility": "private"
},
"quality": "definition",
"queue": {
"id": 501,
"name": "Azure Pipelines",
"pool": {
"id": 65,
"isHosted": true,
"name": "Azure Pipelines"
},
"url": "https://dev.azure.com/ORGANIZATION_NAME/_apis/build/Queues/501"
},
"queueStatus": "enabled",
"revision": 30,
"type": "build",
"uri": "vstfs:///Build/Definition/541",
"url": "https://dev.azure.com/ORGANIZATION_NAME/99a1b81a-ca3b-418a-86cf-0965eaba6dab/_apis/build/Definitions/541?revision=30"
}
]
PS C:\>
However, if I try to assign the results to a variable, the shell/script hangs instead:
PS C:\> $pipelines = az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json
This command is in preview. It may be changed/removed in a future release.
The cursor then jumps to character position 61 and just stays there forever.
What might be the cause of this behaviour? I suspect the preview warning is causing trouble, but I'm not sure how to suppress it.
Any insight is greatly appreciated.
Thanks!
Okay, this is going to sound odd, but this is a rendering issue only - the app hasn't hung at all; it has just stopped the console from rendering output correctly, including the prompt after the command finishes.
At the top of your script add the following:
$PSBackgroundColor = $Host.UI.RawUI.BackgroundColor
$PSForegroundColor = $Host.UI.RawUI.ForegroundColor
function Reset-Console {
$Host.UI.RawUI.BackgroundColor = $PSBackgroundColor
$Host.UI.RawUI.ForegroundColor = $PSForegroundColor
}
Then after running the command:
Reset-Console
This fixed the issue for me.
As mentioned in the previous answer, this happens because the color output of the Azure CLI (in this case the warning text) messes up the terminal.
Since PR [Core] Knack adoption #12604 it has been possible to disable colored output for the Azure CLI by setting the environment variable AZURE_CORE_NO_COLOR to True (or alternatively by setting the [core] no_color=True option in ~/.azure/config).
I am using it successfully with version 2.14.2 of the Azure CLI.
From the description of PR [Core] Knack adoption #12604
[Core] PREVIEW: Allow disabling color by setting AZURE_CORE_NO_COLOR
environment variable to True or [core] no_color=True config (#12601)
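For example, in a bash-style shell you could export the variable before the assignment (a sketch using the command from the question; in PowerShell the equivalent is setting $env:AZURE_CORE_NO_COLOR = "True" first):
# disable Azure CLI color output so the warning text cannot corrupt the terminal
export AZURE_CORE_NO_COLOR=True
pipelines=$(az pipelines list --project PROJECT_NAME --name PIPELINE_NAME --output json)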
I have built a Docker application using docker-compose which includes MySQL. I have pushed those containers to Azure and want to deploy them to an edge device using Azure IoT Edge. For this I deployed the application container and the MySQL container to the edge device; the application is running, but MySQL is not running on the edge device after deployment.
Here are the container create options that I have given for the MySQL module.
Is it because I am using root as the user, which is refusing connections from a different client?
{
"Env": [
"ACCEPT_EULA=Y",
"MSSQL_ROOT_PASSWORD=root"
],
"HostConfig": {
"PortBindings": {
"13306/tcp": [
{
"HostPort": "13306"
}
],
"32000/tcp": [
{
"HostPort": "32000"
}
]
},
"Mounts": [
{
"Type": "volume",
"Source": "sqlVolume",
"Target": "/var/lib/mysql"
}
]
}
}
I'm trying to set up the following environment on Google Cloud and have 3 major problems with it:
Database Cluster
- 3 nodes
- one port open to the world, a few ports open to the compute cluster
Compute Cluster
- 5 nodes
- communicates with the database cluster
- two ports open to the world
- runs Docker containers
a) The database cluster runs fine and I have the configuration port open to the world, but how do I limit the other ports to only the compute cluster?
I managed to get the first Pod and Replication-Controller running on the compute cluster and created a service to open the container to the world:
controller:
{
"id": "api-controller",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 2,
"replicaSelector": {
"name": "api"
},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "apiController",
"containers": [{
"name": "api",
"image": "gcr.io/my/api",
"ports": [{
"name": "api",
"containerPort": 3000
}]
}]
}
},
"labels": {
"name": "api"
}
}
}
}
service:
{
"id": "api-service",
"kind": "Service",
"apiVersion": "v1beta1",
"selector": {
"name": "api"
},
"containerPort": "api",
"protocol": "TCP",
"port": 80,
"selector": { "name": "api" },
"createExternalLoadBalancer": true
}
b) The container exposes port 3000, the service port 80. Where's the connection between the two?
The firewall works with labels. I want 4-5 different pods running in my compute cluster with 2 of them having open ports to the world. There can be 2 or more containers running on the same instance. The labels however are specific to the nodes, not the containers.
c) Do I expose all nodes with the same firewall configuration? I can't assign labels to containers, so I'm not sure how to expose the api service, for example.
I'll try my best to answer all of your questions as best I can.
First off, you will want to upgrade to using v1 of the Kubernetes API because v1beta1 and v1beta3 will no longer be available after Aug. 5th:
https://cloud.google.com/container-engine/docs/v1-upgrade
Also, use YAML. It's so much less verbose ;)
--
Now on to the questions you asked:
a) I'm not sure I completely understand what you are asking here, but it sounds like running the services in the same cluster (with resource limits) would be much easier than dealing with cross-cluster networking.
b) You need to specify a targetPort so that the service knows which port to use on the container. This should match port 3000 from your replication controller. See the docs for more info.
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "api-service",
    "labels": {
      "name": "api-service"
    }
  },
  "spec": {
    "selector": {
      "name": "api"
    },
    "ports": [{
      "port": 80,
      "targetPort": 3000
    }],
    "type": "LoadBalancer"
  }
}
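As a rough usage sketch, assuming the manifest above is saved as api-service.json (a filename chosen here for illustration) and kubectl is configured for your cluster, you would create the service and then look up the load balancer address it gets:
kubectl create -f api-service.json
# the external/ingress IP shows up on the service once the load balancer is provisioned
kubectl describe services api-service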
c) Yes. In Kubernetes, kube-proxy accepts traffic on any node and routes it to the appropriate node or local pod. You don't need to worry about mapping the load balancer to, or writing firewall rules for, the specific nodes that happen to be running your pods (that set can actually change if you do a rolling update!). kube-proxy will route traffic to the right place even if your service is not running on that node.
I'm playing around with Google's managed VM feature and finding you can fairly easily create some interesting setups. However, I have yet to figure out whether it's possible to use persistent disks to mount a volume on the container, and it seems not having this feature limits the usefulness of managed VMs for stateful containers such as databases.
So the question is: how can I mount the persistent disk that Google creates for my Compute engine instance, to a container volume?
Attaching a persistent disk to a Google Compute Engine instance
Follow the official persistent-disk guide:
Create a disk
Attach to an instance during instance creation, or to a running instance
Use the tool /usr/share/google/safe_format_and_mount to mount the device file /dev/disk/by-id/google-...
As noted by Faizan, use docker run -v /mnt/persistent_disk:/container/target/path to include the volume in the docker container
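Putting those steps together on the instance, the commands would look roughly like this (a sketch; <disk-name> is a placeholder for the disk you created, and the mount path and container path are the ones used above):
# format (if needed) and mount the attached persistent disk
sudo mkdir -p /mnt/persistent_disk
sudo /usr/share/google/safe_format_and_mount /dev/disk/by-id/google-<disk-name> /mnt/persistent_disk
# run the container with the mounted directory as a data volume
docker run -v /mnt/persistent_disk:/container/target/path <docker_image>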
Referencing a persistent disk in Google Container Engine
In this method, you specify the volume declaratively (after initializing it as mentioned above) in the Replication Controller or Pod declaration. The following is a minimal excerpt of a replication controller JSON declaration. Note that the volume has to be declared read-only here because only one instance may attach a persistent disk in read-write mode at a time; multiple replicas can only mount it read-only.
{
"id": "<id>",
"kind": "ReplicationController",
"apiVersion": "v1beta1",
"desiredState": {
"replicas": 3,
"replicaSelector": {
"name": "<id>"
},
"podTemplate": {
"desiredState": {
"manifest": {
"version": "v1beta1",
"id": "<id>",
"containers": [
{
"name": "<id>",
"image": "<docker_image>",
"volumeMounts": [
{
"name": "persistent_disk",
"mountPath": "/pd",
"readOnly": true
}
],
...
}
],
"volumes": [
{
"name": "persistent_disk",
"source": {
"persistentDisk": {
"pdName": "<persistend_disk>",
"fsType": "ext4",
"readOnly": true
}
}
}
]
}
},
"labels": {
"name": "<id>"
}
}
},
"labels": {
"name": "<id>"
}
}
If your persistent disk is already attached and mounted on the instance, I believe you can use it as a data volume with your Docker container. I was able to find Docker documentation which explains how to manage data in containers.