Building a ConsoleMe image using Packer

I am trying to build a ConsoleMe image using Packer and I am getting the following error message:
Error: Failed to prepare build: "amazon-ebs" 1 error(s) occurred: * Destination must be specified.
How can I solve it? Below is the template that I'm using for the build:
"builders": [
{
"ami_name": "consoleme {{timestamp}}",
"instance_type": "t2.micro",
"region": "us-east-1",
"source_ami_filter": {
"filters": {
"name": "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*",
"root-device-type": "ebs",
"virtualization-type": "hvm"
},
"most_recent": true,
"owners": ["099720109477"]
},
"ssh_username": "ubuntu",
"type": "amazon-ebs"
}
],
"provisioners": [
{
"inline": ["sleep 30"],
"type": "shell"
},
{
"inline": [
"export CONFIG_LOCATION={{user `CONFIG_LOCATION`}}",
"export CONSOLEME_CONFIG_ENTRYPOINT={{user `CONSOLEME_CONFIG_ENTRYPOINT`}}",
"sudo mkdir -p /apps/consoleme",
"sudo chown -R ubuntu /apps/"
],
"type": "shell"
},
{
"destination": "{{user `app_archive`}}",
"source": "{{user `app_archive`}}",
"type": "file"
},
{
"inline": [
"tar -xzf {{user `app_archive`}} -C /apps/consoleme/",
"sudo cp -R /apps/consoleme/packer/root/* /",
"rm {{user `app_archive`}}"
],
"variables": {
"CONFIG_LOCATION": "{{env `CONFIG_LOCATION`}}",
"CONSOLEME_CONFIG_ENTRYPOINT": "{{env `CONSOLEME_CONFIG_ENTRYPOINT`}}",
"app_archive": ""
}
}```
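The error likely comes from the file provisioner: `app_archive` defaults to `""`, so its `"destination"` resolves to an empty string unless the variable is supplied at build time. A minimal sketch of guarding the invocation, using placeholder names (the template is assumed saved as `consoleme.json` and the tarball as `consoleme.tar.gz`):

```shell
# Sketch with placeholder names: consoleme.json is the template above and
# consoleme.tar.gz is the application tarball. "Destination must be
# specified" appears when app_archive is empty, so fail fast before
# invoking packer with -var.
app_archive="consoleme.tar.gz"
: "${app_archive:?app_archive must not be empty}"
cmd="packer build -var \"app_archive=$app_archive\" consoleme.json"
echo "$cmd"
```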

Related

The mysql configuration file '/opt/bitnami/mysql/conf/my.cnf' is not writable

Now I am using this command to install MySQL on my Kubernetes 1.15 cluster:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm install my-release bitnami/mysql
When the MySQL StatefulSet started, it showed this warning:
mysql 11:12:23.57 WARN ==> The mysql configuration file '/opt/bitnami/mysql/conf/my.cnf' is not writable. Configurations based on environment variables will not be applied for this file.
I have changed the MySQL root password in the Kubernetes Secret, and now I am not able to use the new password to log in to MySQL. What should I do to fix it? I am using these commands to log in:
mycli -h 172.30.184.8 -u mysql -p 9Ve5TEHgcY // way 1
mysql -h localhost -uroot -p 6a4vUXlTyeRvzizZ // way 2
shows error:
Access denied for user 'root'@'localhost' (using password: YES)
What should I do to fix the warning and log in to MySQL successfully? This is the StatefulSet definition:
{
"kind": "StatefulSet",
"apiVersion": "apps/v1beta2",
"metadata": {
"name": "ajidou-mysql",
"namespace": "dabai-fat",
"selfLink": "/apis/apps/v1beta2/namespaces/dabai-fat/statefulsets/ajidou-mysql",
"uid": "894b8cfe-c000-408e-9eee-f0b18eff9aed",
"resourceVersion": "81999050",
"generation": 9,
"creationTimestamp": "2021-04-26T10:11:23Z",
"labels": {
"app.kubernetes.io/component": "primary",
"app.kubernetes.io/instance": "ajidou-mysql",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "mysql",
"helm.sh/chart": "mysql-8.5.5"
},
"annotations": {
"meta.helm.sh/release-name": "ajidou-mysql",
"meta.helm.sh/release-namespace": "dabai-fat"
}
},
"spec": {
"replicas": 1,
"selector": {
"matchLabels": {
"app.kubernetes.io/component": "primary",
"app.kubernetes.io/instance": "ajidou-mysql",
"app.kubernetes.io/name": "mysql"
}
},
"template": {
"metadata": {
"creationTimestamp": null,
"labels": {
"app.kubernetes.io/component": "primary",
"app.kubernetes.io/instance": "ajidou-mysql",
"app.kubernetes.io/managed-by": "Helm",
"app.kubernetes.io/name": "mysql",
"helm.sh/chart": "mysql-8.5.5"
},
"annotations": {
"checksum/configuration": "41e7174a05fda333d37a84fdb7a7730102c7036e38fcbacd7567e3d58b3e675e"
}
},
"spec": {
"volumes": [
{
"name": "config",
"configMap": {
"name": "ajidou-mysql",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "mysql",
"image": "docker.io/bitnami/mysql:8.0.24-debian-10-r0",
"ports": [
{
"name": "mysql",
"containerPort": 3306,
"protocol": "TCP"
}
],
"env": [
{
"name": "BITNAMI_DEBUG",
"value": "false"
},
{
"name": "MYSQL_ROOT_PASSWORD",
"valueFrom": {
"secretKeyRef": {
"name": "ajidou-mysql",
"key": "mysql-root-password"
}
}
},
{
"name": "MYSQL_DATABASE",
"value": "my_database"
}
],
"resources": {},
"volumeMounts": [
{
"name": "data",
"mountPath": "/bitnami/mysql"
},
{
"name": "config",
"mountPath": "/opt/bitnami/mysql/conf/my.cnf",
"subPath": "my.cnf"
}
],
"livenessProbe": {
"exec": {
"command": [
"/bin/bash",
"-ec",
"password_aux=\"${MYSQL_ROOT_PASSWORD:-}\"\nif [[ -f \"${MYSQL_ROOT_PASSWORD_FILE:-}\" ]]; then\n password_aux=$(cat \"$MYSQL_ROOT_PASSWORD_FILE\")\nfi\nmysqladmin status -uroot -p\"${password_aux}\"\n"
]
},
"initialDelaySeconds": 120,
"timeoutSeconds": 1,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"readinessProbe": {
"exec": {
"command": [
"/bin/bash",
"-ec",
"password_aux=\"${MYSQL_ROOT_PASSWORD:-}\"\nif [[ -f \"${MYSQL_ROOT_PASSWORD_FILE:-}\" ]]; then\n password_aux=$(cat \"$MYSQL_ROOT_PASSWORD_FILE\")\nfi\nmysqladmin status -uroot -p\"${password_aux}\"\n"
]
},
"initialDelaySeconds": 30,
"timeoutSeconds": 1,
"periodSeconds": 10,
"successThreshold": 1,
"failureThreshold": 3
},
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"securityContext": {
"runAsUser": 1001
}
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "ajidou-mysql",
"serviceAccount": "ajidou-mysql",
"securityContext": {
"fsGroup": 1001
},
"affinity": {
"podAntiAffinity": {
"preferredDuringSchedulingIgnoredDuringExecution": [
{
"weight": 1,
"podAffinityTerm": {
"labelSelector": {
"matchLabels": {
"app.kubernetes.io/component": "primary",
"app.kubernetes.io/instance": "ajidou-mysql",
"app.kubernetes.io/name": "mysql"
}
},
"namespaces": [
"dabai-fat"
],
"topologyKey": "kubernetes.io/hostname"
}
}
]
}
},
"schedulerName": "default-scheduler"
}
},
"volumeClaimTemplates": [
{
"metadata": {
"name": "data",
"creationTimestamp": null,
"labels": {
"app.kubernetes.io/component": "primary",
"app.kubernetes.io/instance": "ajidou-mysql",
"app.kubernetes.io/name": "mysql"
}
},
"spec": {
"accessModes": [
"ReadWriteOnce"
],
"resources": {
"requests": {
"storage": "8Gi"
}
},
"volumeMode": "Filesystem"
},
"status": {
"phase": "Pending"
}
}
],
"serviceName": "ajidou-mysql",
"podManagementPolicy": "OrderedReady",
"updateStrategy": {
"type": "RollingUpdate"
},
"revisionHistoryLimit": 10
},
"status": {
"observedGeneration": 9,
"replicas": 1,
"currentReplicas": 1,
"updatedReplicas": 1,
"currentRevision": "ajidou-mysql-f69895f8c",
"updateRevision": "ajidou-mysql-f69895f8c",
"collisionCount": 0
}
}
Now I try to reinstall MySQL in Kubernetes and use the initial password to log in to MySQL:
$ kubectl exec -i -t -n dabai-fat ajidou-mysql-0 -c mysql -- sh -c "clear; (bash || ash || sh)"
sh: 1: clear: not found
I have no name!@ajidou-mysql-0:/$ mysql -h 172.30.184.8 -uroot -p F7cVhpiRsC
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'172.30.184.8' (using password: YES)
I have no name!@ajidou-mysql-0:/$ mysql -h 172.30.184.8 -uroot -p F7cVhpiRsC
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'172.30.184.8' (using password: YES)
I have no name!@ajidou-mysql-0:/$ mysql -h 172.30.184.8 -uroot -p F7cVhpiRsC
Enter password:
ERROR 1045 (28000): Access denied for user 'root'@'172.30.184.8' (using password: YES)
I have no name!@ajidou-mysql-0:/$
The UI shows this error:
Readiness probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure. mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mysql/tmp/mysql.sock' (2)' Check that mysqld is running and that the socket: '/opt/bitnami/mysql/tmp/mysql.sock' exists!
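One likely cause (an assumption, not confirmed by the question) is that the data PVC still holds the password from the first install, so the value now stored in the Secret no longer matches what mysqld has on disk. In any case, it is worth first verifying the login with the password currently in the Secret, passed with no space after -p; a sketch using the resource names from the StatefulSet above:

```shell
# Read the root password from the "ajidou-mysql" Secret in namespace
# "dabai-fat" (names taken from the StatefulSet above), then log in from
# inside the pod. Note: there must be no space between -p and the
# password; "mysql -uroot -p F7cVhpiRsC" tells mysql to open a database
# named "F7cVhpiRsC" and prompt for a password instead.
ROOT_PW=$(kubectl get secret ajidou-mysql -n dabai-fat \
  -o jsonpath='{.data.mysql-root-password}' | base64 -d)
kubectl exec -it -n dabai-fat ajidou-mysql-0 -c mysql -- \
  mysql -uroot -p"$ROOT_PW"
```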

Storing JSON output in bash from CloudFormation

I am using an aws ecs describe-tasks query to get the list of properties used by the currently running task.
Command:
cft="aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b"
I am storing this in an output variable:
output=$(eval $cft)
Output:
"tasks": [
{
"attachments": [
{
"id": "da8a1312-8278-46d5-8e3b-6b6a1d96f820",
"type": "ElasticNetworkInterface",
"status": "ATTACHED",
"details": [
{
"name": "subnetId",
"value": "subnet-0a151f2eb959ad4"
},
{
"name": "networkInterfaceId",
"value": "eni-081948e3666253f"
},
{
"name": "macAddress",
"value": "02:2a:9i:5c:4a:77"
},
{
"name": "privateDnsName",
"value": "ip-172-56-17-177.us-west-2.compute.internal"
},
{
"name": "privateIPv4Address",
"value": "172.56.17.177"
}
]
}
],
"availabilityZone": "us-west-2a",
"clusterArn": "arn:aws:ecs:us-west-2:4984314772:cluster/secrets",
"containers": [
{
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"name": "nginx",
"image": "nginx",
"lastStatus": "PENDING",
"networkInterfaces": [
{
"attachmentId": "da8a1312-8278-46d5-6b6a1d96f820",
"privateIpv4Address": "172.31.17.176"
}
],
"healthStatus": "UNKNOWN",
"cpu": "0"
}
],
"cpu": "256",
"createdAt": "2020-12-10T18:00:16.320000+05:30",
"desiredStatus": "RUNNING",
"group": "family:nginx",
"healthStatus": "UNKNOWN",
"lastStatus": "PENDING",
"launchType": "FARGATE",
"memory": "512",
"overrides": {
"containerOverrides": [
{
"name": "nginx"
}
],
"inferenceAcceleratorOverrides": []
},
"platformVersion": "1.4.0",
"tags": [],
"taskArn": "arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b",
"taskDefinitionArn": "arn:aws:ecs:us-west-2:4984314772:task-definition/nginx:17",
"version": 2
}
],
"failures": []
}
Now if I do an echo of $output.tasks[0].containers[0], nothing happens; it just prints the entire thing again. I want to store the result in the output variable and refer to individual parameters the way we do with JSON.
You will need to use a JSON parser such as jq, like so:
eval $cft | jq '.tasks[].containers[]'
To avoid using eval you could simply pipe the aws command into jq, like so:
aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b | jq '.tasks[].containers[]'
or:
cft=$(aws ecs describe-tasks --cluster arn:aws:ecs:us-west-2:4984314772:cluster/secrets --tasks arn:aws:ecs:us-west-2:4984314772:task/secrets/86855757eec4487f9d4475a1f7c4cb0b)
echo "$cft" | jq '.tasks[].containers[]'
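To pull a single field into a shell variable, jq's -r flag strips the surrounding quotes. A self-contained sketch using a trimmed copy of the describe-tasks output above, embedded so it runs without AWS access:

```shell
# Extract one field from describe-tasks style output. The JSON here is a
# trimmed copy of the output shown above, inlined so the example is
# runnable without AWS credentials.
output='{"tasks":[{"containers":[{"name":"nginx","lastStatus":"PENDING"}],"launchType":"FARGATE"}],"failures":[]}'
# -r emits the raw string (no quotes), suitable for shell variables.
name=$(printf '%s' "$output" | jq -r '.tasks[0].containers[0].name')
echo "$name"   # prints: nginx
```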

Configure Visual Studio Code task to execute multiple shell commands in sequence

Is there a possibility to add multiple shell commands in sequence to a Visual Studio Code task, with separate arguments and options? I managed to execute multiple commands by using && to chain them together into a single command, as you could in any Linux shell. But I guess there has to be a better way to do this.
I use Visual Studio Code on Ubuntu 18.04 LTS.
Here is an example of how I currently chain the commands for a build task in a tasks.json file to build a C++ project using CMake:
{
"tasks": [
{
"type": "shell",
"label": "build",
"command": "cd ${workspaceFolder}/build && cmake .. && make clean && make",
"group": {
"kind": "build",
"isDefault": true
}
}
],
"version": "2.0.0"
}
Update: -------------------------------------------------------------------
I tried to use the dependsOn property to start the 4 separately defined tasks. However, this resulted in all 4 commands being executed at the same time in different shell instances instead of in sequence as needed:
{
"tasks": [
{
"type": "shell",
"label": "build project",
"dependsOn": [
"Go to build folder",
"Cmake",
"make clean",
"make",
],
"group": {
"kind": "build",
"isDefault": true
}
},
{
"label": "Go to build folder",
"type": "shell",
"command": "cd ${workspaceFolder}/build",
"presentation": {
"group": "cmake-complile"
}
},
{
"label": "Cmake",
"type": "shell",
"command": "cmake ..",
"presentation": {
"group": "cmake-complile"
}
},
{
"label": "make clean",
"type": "shell",
"command": "make clean",
"presentation": {
"group": "cmake-complile"
}
},
{
"label": "make",
"type": "shell",
"command": "make",
"presentation": {
"group": "cmake-complile"
}
}
],
"version": "2.0.0"
}
Thanks to the many comments, I found a solution which works well: setting "dependsOrder" to "sequence":
{
"tasks": [
{
"type": "shell",
"label": "build project",
"dependsOrder": "sequence",
"dependsOn": [
"Cmake",
"make clean",
"make",
],
"group": {
"kind": "build",
"isDefault": true
}
},
{
"label": "Cmake",
"type": "shell",
"command": "cmake ..",
"options": {
"cwd": "${workspaceFolder}/build",
},
},
{
"label": "make clean",
"type": "shell",
"command": "make clean",
"options": {
"cwd": "${workspaceFolder}/build",
},
},
{
"label": "make",
"type": "shell",
"command": "make",
"options": {
"cwd": "${workspaceFolder}/build",
},
}
],
"version": "2.0.0"
}

stream2es: command not found

I'd like to use a script to convert the Enron email dataset first into mbox format, then into a JSON doc. After that the script should automatically import this JSON into Elasticsearch using the stream2es utility. Here I hit a problem: when I launch the script, everything goes well except the stream2es step. In fact, stream2es: command not found appears.
I have a folder containing the script, the Enron email folder and stream2es. I granted execute permission to stream2es, so I think I have everything needed to make the script work.
I'm going to post the script here:
#!/bin/sh
#
# Loading enron data into elasticsearch
#
# Prerequisites:
# make sure that stream2es utility is present in the path
# install beautifulsoup4 and lxml:
# sudo easy_install beautifulsoup4
# sudo easy_install lxml
#
# The mailboxes__jsonify_mbox.py and mailboxes__convert_enron_inbox_to_mbox.py are modified
# versions of https://github.com/ptwobrussell/Mining-the-Social-Web/tree/master/python_code
#
#if [ ! -d enron_mail_20110402 ]; then
# echo "Downloading enron file"
# curl -O -L http://www.cs.cmu.edu/~enron/enron_mail_20110402.tgz
# tar -xzf enron_mail_20110402.tgz
#fi
if [ ! -f enron.mbox.json ]; then
echo "Converting enron emails to mbox format"
python mailboxes__convert_enron_inbox_to_mbox.py allen-p > enron.mbox # allen-p is one of the folders within Enron dataset
echo "Converting enron emails to json format"
python mailboxes__jsonify_mbox.py enron.mbox > enron.mbox.json
rm enron.mbox
fi
echo "Indexing enron emails"
es_host="http://localhost:9200"
curl -XDELETE "$es_host/enron"
curl -XPUT "$es_host/enron" -d '{
"settings": {
"index.number_of_replicas": 0,
"index.number_of_shards": 5,
"index.refresh_interval": -1
},
"mappings": {
"email": {
"properties": {
"Bcc": {
"type": "string",
"index": "not_analyzed"
},
"Cc": {
"type": "string",
"index": "not_analyzed"
},
"Content-Transfer-Encoding": {
"type": "string",
"index": "not_analyzed"
},
"Content-Type": {
"type": "string",
"index": "not_analyzed"
},
"Date": {
"type" : "date",
"format" : "EEE, dd MMM YYYY HH:mm:ss Z"
},
"From": {
"type": "string",
"index": "not_analyzed"
},
"Message-ID": {
"type": "string",
"index": "not_analyzed"
},
"Mime-Version": {
"type": "string",
"index": "not_analyzed"
},
"Subject": {
"type": "string"
},
"To": {
"type": "string",
"index": "not_analyzed"
},
"X-FileName": {
"type": "string",
"index": "not_analyzed"
},
"X-Folder": {
"type": "string",
"index": "not_analyzed"
},
"X-From": {
"type": "string",
"index": "not_analyzed"
},
"X-Origin": {
"type": "string",
"index": "not_analyzed"
},
"X-To": {
"type": "string",
"index": "not_analyzed"
},
"X-bcc": {
"type": "string",
"index": "not_analyzed"
},
"X-cc": {
"type": "string",
"index": "not_analyzed"
},
"bytes": {
"type": "long"
},
"offset": {
"type": "long"
},
"parts": {
"dynamic": "true",
"properties": {
"content": {
"type": "string"
},
"contentType": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
}
}'
stream2es stdin --target $es_host/enron/email < enron.mbox.json
Can anyone help me to solve the stream2es command not found problem? Thank you guys.
command not found means that the shell cannot find the stream2es command. You have two options:
Your script either needs to call ./stream2es (i.e. call the stream2es script located in the same folder) or
you need to move stream2es in a folder that is located on your $PATH
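The two options can be demonstrated with a stub in place of the real utility (a sketch; the stub file and the /tmp/s2e-demo directory are invented for illustration, stream2es itself is not run here):

```shell
# Simulate stream2es with a stub script so the PATH mechanics can be
# shown without the real utility being installed.
mkdir -p /tmp/s2e-demo && cd /tmp/s2e-demo
printf '#!/bin/sh\necho stream2es-ok\n' > stream2es
chmod +x stream2es

# Option 1: call it via an explicit relative path.
out1=$(./stream2es)

# Option 2: add the folder holding it to PATH, then call it by name.
export PATH="$PATH:/tmp/s2e-demo"
out2=$(stream2es)

echo "$out1 $out2"   # prints: stream2es-ok stream2es-ok
```

In the original script, that means either invoking `./stream2es stdin --target $es_host/enron/email < enron.mbox.json` or exporting PATH before the call.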

JSON parsing with Linux shell script

Here is the JSON file I want to parse. I especially need the JSON objects inside the JSON array. Shell script is the only tool I can use right now.
{
"entries": [
{
"author": {
"value": "plugin-demo Administrator",
"origin": "http://localhost:8080/webservice/person/18"
},
"creator": {
"value": "plugin-demo Administrator",
"origin": "http://localhost:8080/webservice/person/18"
},
"creationDate": "2015-11-04T15:14:18.000+0600",
"lastModifiedDate": "2015-11-04T15:14:18.000+0600",
"model": "http://localhost:8080/plugin-editorial/model/281/football",
"payload": [
{
"name": "basic",
"value": "Real Madrid are through"
}
],
"publishDate": "2015-11-04T15:14:18.000+0600"
},
{
"author": {
"value": "plugin-demo Administrator",
"origin": "http://localhost:8080/webservice/person/18"
},
"creator": {
"value": "plugin-demo Administrator",
"origin": "http://localhost:8080/webservice/person/18"
},
"creationDate": "2015-11-04T15:14:18.000+0600",
"lastModifiedDate": "2015-11-04T15:14:18.000+0600",
"model": "http://localhost:8080/plugin-editorial/model/281/football",
"payload": [
{
"name": "basic",
"value": "Real Madrid are through"
}
],
"publishDate": "2015-11-04T15:14:18.000+0600"
}
]
}
How can I do it in shell script?
Use something, anything other than shell.
Since writing the original answer, I've found jq:
jq '.entries[0].author.value' /tmp/data.json
"plugin-demo Administrator"
Install node.js
node -e 'console.log(require("./data.json").entries[0].author.value)'
Install jsawk
cat data.json | jsawk 'return this.entries[0].author.value'
Install Ruby
ruby -e 'require "json"; puts JSON.parse(File.read("data.json"))["entries"][0]["author"]["value"]'
Just don't try and parse it in shell script.
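If jq is the tool chosen, getting at the objects inside the array is a couple of one-liners. A self-contained sketch using a trimmed copy of the JSON above (written to /tmp/data.json, a placeholder path):

```shell
# Write a trimmed copy of the document above so the example is runnable,
# then query the "entries" array with jq.
cat > /tmp/data.json <<'EOF'
{"entries":[{"author":{"value":"plugin-demo Administrator"},"model":"http://localhost:8080/plugin-editorial/model/281/football"},{"author":{"value":"plugin-demo Administrator"},"model":"http://localhost:8080/plugin-editorial/model/281/football"}]}
EOF

# How many objects are in the array, and the first author's value:
count=$(jq '.entries | length' /tmp/data.json)
first_author=$(jq -r '.entries[0].author.value' /tmp/data.json)
echo "$count $first_author"   # prints: 2 plugin-demo Administrator
```

`jq -c '.entries[]' /tmp/data.json` would instead emit each object on its own line, which is convenient for while-read loops.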