Packer Autoinstall with Ubuntu 21.04/21.10 Desktop

I was trying to automate a Packer build of Ubuntu Desktop 21.04 in vSphere with the HCL below. I have since found that this will only start to work from 21.10 for desktop images.
See:
https://discourse.ubuntu.com/t/refreshing-the-ubuntu-desktop-installer/20659/76?u=nathanto
The original question is below to help others.
The key piece is where the boot command defines the seedfrom. That seems not to work in the sense that the user-data is never loaded. The VM boots, and the net.ifnames=0 argument from the boot command is applied (interfaces are named eth0).
The logic of the boot command is to press c to get to the grub> prompt, and then the commands are entered as shown in the boot_command below.
I can see in /proc/cmdline that the boot command is applied properly.
I can see no indication that the user-data is loaded though. If I look at the web server shown in the boot command, using Firefox from the booted VM, the user-data and meta-data files are there and accessible.
Does anyone have any ideas of how to debug this please?
source "vsphere-iso" "dev_vm" {
username = var.vcenter_username
password = var.vcenter_password
vcenter_server = var.vcenter_server
cluster = var.vcenter_cluster
datacenter = var.vcenter_datacenter
datastore = var.vcenter_vm_datastore
guest_os_type = "ubuntu64Guest"
insecure_connection = "true"
iso_checksum = "sha256:fa95fb748b34d470a7cfa5e3c1c8fa1163e2dc340cd5a60f7ece9dc963ecdf88"
iso_urls = ["https://releases.ubuntu.com/21.04/ubuntu-21.04-desktop-amd64.iso"]
http_directory = "./http"
vm_name = "dev_vm"
CPUs = 2
RAM = 2048
RAM_reserve_all = true
boot_wait = "3s"
convert_to_template = false
boot_command = [
"c",
"linux /casper/vmlinuz --- autoinstall ds='nocloud-net;seedfrom=http://{{.HTTPIP}}:{{.HTTPPort}}/' net.ifnames=0 ",
"<enter><wait>",
"initrd /casper/initrd<enter><wait>",
"boot<enter>"
]
network_adapters {
network = "xxx"
network_card = "e1000"
}
storage {
disk_size = 40960
disk_thin_provisioned = true
}
ssh_username = "xx"
ssh_password = "xx"
ssh_timeout = "60m"
}
build {
sources = [
"source.vsphere-iso.dev_vm"
]
...
}
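For anyone reproducing this, the ./http directory has to serve a user-data and a meta-data file for the NoCloud seed. A minimal pair looks roughly like the sketch below; the hostname, username and password hash are placeholders, and the autoinstall schema accepted by the desktop installer may differ from the 20.04/21.x server one.

#cloud-config
# http/user-data - minimal autoinstall seed (all identity values are placeholders)
autoinstall:
  version: 1
  identity:
    hostname: dev-vm
    username: xx
    # crypted hash, e.g. produced with: mkpasswd -m sha-512
    password: "$6$replace.with.a.real.hash"
  ssh:
    install-server: true

The meta-data file can be empty or just carry an instance-id:

# http/meta-data
instance-id: dev-vm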

I was working with Packer and VirtualBox installing Ubuntu Server 20.04 with autoinstall, and I watched the installation process happen. What I see is cloud-init running, and then, after it reaches the point where the 20.04 server would reboot itself, the graphical installer launches.
I have tried 8 or 9 versions of the boot_command that people swear work, and so far none of them have. I am looking for how people create the boot_command. I am new to Ubuntu autoinstall and Packer, so I am still learning.
Also, they are revamping Ubiquity:
https://discourse.ubuntu.com/t/refreshing-the-ubuntu-desktop-installer/20659/73
So this may impact you as well.
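One thing that can help narrow it down on either release is to ask cloud-init itself what it saw, rather than only checking /proc/cmdline. A rough sketch, using the standard cloud-init commands and log locations inside the booted VM:

# Did cloud-init finish, and which datasource did it settle on?
cloud-init status --long
# Look for any mention of the NoCloud seed in the log
grep -iE 'nocloud|seedfrom|datasource' /var/log/cloud-init.log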

Related

Problems executing an aws command via SSH from Jenkins

Good morning, how are you?
I have a problem with a command executed via SSH in my Jenkins.
Those characters that appear before did not appear previously, and we have not changed anything on the node.
The code we use is:
withCredentials([usernamePassword(credentialsId: 'id', passwordVariable: 'pass', usernameVariable: 'user')]) {
    def remote = [:]
    remote.name = 'id_nme'
    remote.host = 'ip_node'
    remote.user = user
    remote.password = pass
    remote.allowAnyHosts = true
    remote.timeoutSec = 300
    sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg]"
}
When the command is launched, the job is stuck and does not progress.
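Not an answer, but one common cause of a hang like this is the AWS CLI v2 output pager waiting for input in a non-interactive session. If that is what is happening here, disabling the pager for the call is a quick test (same placeholder ASG name as above; --no-cli-pager requires AWS CLI v2):

sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg] --no-cli-pager --output json"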

Why can I not use a 5.7 or 5.6 mysql Docker image instead of mysql 8 with Terraform on Windows?

I am testing a mysql_database inside a docker_container.mysql using Terraform on Windows, but every time I try to use an image different from mysql:8 inside the docker_image.mysql used by docker_container.mysql, Terraform takes 5 minutes to create the mysql_database resource and throws the following error:
Error: Could not connect to server: dial tcp 127.0.0.1:3306: connectex: No connection could be made because the target machine actively refused it.
on main.tf line 33, in resource "mysql_database" "test":
33: resource "mysql_database" "test" {
And here is main.tf:
provider "docker" {
host = "npipe:////.//pipe//docker_engine"
}
resource "docker_image" "mysql" {
name = "mysql:8"
//keep_locally = true
}
resource "docker_container" "mysql" {
name = "mysql"
image = docker_image.mysql.latest
restart = "always"
env = [
"MYSQL_ROOT_PASSWORD=root"
]
volumes {
volume_name = "mysql-vol"
container_path = "/var/lib/mysql"
}
ports {
internal = 3306
external = 3306
}
}
provider "mysql" {
endpoint = "127.0.0.1:3306"
username = "root"
password = "root"
}
resource "mysql_database" "test" {
name = "test"
depends_on = [docker_container.mysql]
}
I am testing the mysql image tags shown at https://hub.docker.com/_/mysql, specifically 5.6, 5.7 and 8, but only mysql:8 seems to work. Is there another way in which I should reference those mysql image tags?
I tried to reproduce the issue, and I observed the same error as yours, but only for mysql 5.7 and 5.6 when keeping the same volumes.
After removing the following section from the Terraform script

volumes {
  volume_name    = "mysql-vol"
  container_path = "/var/lib/mysql"
}

and removing the existing mysql Docker images, mysql 5.6, mysql 5.7 and 8 all worked as expected.
By the way, the error leading to the failed connection was:
ERROR 2013 (HY000): Lost connection to MySQL server at 'handshake: reading initial communication packet', system error: 11
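The likely underlying cause is that the named volume still contains a data directory initialised by mysql:8, which the 5.6/5.7 servers cannot start from. If you do want to keep a named volume, one option is to manage it from Terraform as well, so it can be destroyed and recreated alongside the container when switching image tags. A sketch using the same Docker provider (the resource name is illustrative):

resource "docker_volume" "mysql_vol" {
  name = "mysql-vol"
}

resource "docker_container" "mysql" {
  # name, image, env and ports as in main.tf above
  volumes {
    volume_name    = docker_volume.mysql_vol.name
    container_path = "/var/lib/mysql"
  }
}

Recreating the volume (for example with terraform destroy -target=docker_volume.mysql_vol, or terraform taint docker_volume.mysql_vol) then gives each MySQL version a clean data directory.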

MySql Setup in Linux Docker Container Via Terraform

Requirement: I need to automate MySQL installation and database creation on a Linux (Ubuntu) Docker container via Terraform.
I am doing all of this on my local machine, and below is the Terraform configuration.
Terraform file:
resource "docker_container" "db-server1" {
name = "db-server"
image = docker_image.ubuntu.latest
ports {
internal = 80
external = 9093
}
provisioner "local-exec" {
command = "docker container start dbs-my"
}
provisioner "local-exec" {
command = "docker exec dbs-my apt-get update"
}
provisioner "local-exec" {
command = "docker exec dbs-my apt-get -y install mysql-server"
}
}
But there is no MySQL service present in the container; when I try to launch the mysql command, I get the error below:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Using Terraform for this at all is a little unusual; you might look at more Docker-native tools like Docker Compose to set this up. There are also several anti-patterns in this example: you should generally avoid installing software in running containers, avoid running long sequences of imperative commands via Terraform, and it's usually not useful to run the bare ubuntu Docker image as-is.
You can run the Docker Hub mysql image instead:
resource "docker_image" "mysql" {
name = "mysql:8"
}
resource "random_password" "mysql_root_password" {
length = 16
}
resource "docker_container" "mysql" {
name = "mysql"
image = "${docker_image.mysql.latest}"
env {
MYSQL_ROOT_PASSWORD = "${random_password.mysql_root_password.result}"
}
mounts {
source = "/some/host/mysql/data/path"
target = "/var/lib/mysql/data"
type = "bind"
}
ports {
internal = 3306
external = 3306
}
}
If you wanted to do further setup on the created database, you could use the MySQL provider:
provider "mysql" {
endpoint = "127.0.0.1:3306" # the "external" port
username = "root"
password = "${random_password.mysql_root_password.result}"
}
resource "mysql_database" "db" {
name = "db"
}

How do I enable Docker's experimental features so GitLab can build with them?

In a small swarm that has experimental features enabled and that hosts the gitlab-runners, I seem to be unable to build with experimental features enabled. (I want to build with the --squash option.)
docker version on the host shows experimental: true, but the same command in the gitlab-ci-runner shows experimental: false.
I can't seem to find any additional configuration options...
my runner config:
cat /srv/data/gitlab-runner/etc/config.toml
concurrent = 4
check_interval = 0
[[runners]]
  name = "bushbaby-general-ci"
  url = "xxx"
  token = "xxx"
  environment = ["COMPOSER_CACHE_DIR=/cache/composer", "COMPOSER_ALLOW_SUPERUSER=1", "YARN_CACHE_FOLDER=/cache/yarn"]
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_cache = false
    volumes = ["/var/cache:/cache:rw"]
    shm_size = 0
  [runners.cache]
I figured it out. Ensure this is placed inside the .gitlab-ci.yml:
some_build:
  stage: build
  image: docker:git
  services:
    - name: docker:dind
      command: ["--experimental"]
I presume the "privileged = true" in the runner config is also required, but I have not tested that.
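For completeness, a job that actually exercises the experimental daemon for the --squash build might look roughly like this (the script lines and image tag are illustrative; CI_REGISTRY_IMAGE and CI_COMMIT_SHORT_SHA are standard GitLab CI variables):

some_build:
  stage: build
  image: docker:git
  services:
    - name: docker:dind
      command: ["--experimental"]
  script:
    - docker info | grep -i experimental   # sanity check, should report true
    - docker build --squash -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .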

admin-username error with mysql-proxy

I tried to install mysql-proxy on a development machine and I got the following error:
/etc/init.d/mysql-proxyd start
Starting mysql-proxy: 2011-02-26 15:51:45: (critical) admin-plugin.c:569: --admin-username needs to be set
2011-02-26 15:51:45: (critical) mainloop.c:267: applying config of plugin admin failed
2011-02-26 15:51:45: (critical) mysql-proxy-cli.c:596: Failure from chassis_mainloop. Shutting down.
[ OK ]
Since this is only a test machine, I do not want the security features of the proxy. How do I avoid the above error?
Either upgrade your version of mysql-proxy to 0.8.2 or greater, or explicitly specify that you do not need the admin plugin by starting it with mysql-proxy --plugins=proxy. Alternatively, keep the admin plugin but give it the settings it requires in the configuration file, for example:
[mysql-proxy]
daemon = true
user = mysql
proxy-skip-profiling = true
keepalive = true
max-open-files = 2048
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
admin-address = :4401
admin-username = 1
admin-password = 1
admin-lua-script = /usr/local/lib/mysql-proxy/lua/admin.lua
proxy-address = 0.0.0.0:3307
proxy-backend-addresses = 192.168.2.1:3306
proxy-read-only-backend-addresses = 192.168.6.2:3306, 192.168.6.1:3306
proxy-lua-script = /usr/lib/mysql-proxy/lua/proxy/balance.lua