Packer build of a Debian arm64 image with QEMU fails with "No root file system is defined"

Problem
I need to create a qcow2 image of Debian (arm64) with Packer from the Debian arm64 netinst ISO. The community Packer examples are all amd64. I modified one of the community amd64 HCL examples so that it starts a QEMU arm64 virtual machine, loads the preseed file, and begins the automated installation.
However, at the "Partition disks" step the installer reports "No root file system is defined". I checked the QEMU disk (qcow2), and partman-auto does not work. Here is my relevant configuration; please point out the problem and a solution.
Related code
packer.hcl
The .hcl file is also written for x86, but after switching to the arm64 QEMU binary and modifying boot_command and qemuargs, the automated installation starts. The modifications are as follows:
boot_command = [
"<wait10>c<wait5><wait10>",
"linux /install.a64/vmlinuz --quiet",
" auto=true ",
" url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/${var.preseed_file} ",
" hostname=${var.vm_name} ", " domain=${var.domain} ", " interface=auto ",
" ---",
"<enter><wait>",
"initrd /install.a64/initrd.gz",
"<enter><wait>",
"boot<enter><wait>"
]
qemuargs = [
[ "-m", "8192" ],
[ "-cpu", "max" ],
[ "-smp", "4" ],
[ "-M", "virt" ],
# iso
[ "-device", "nec-usb-xhci" ],
[ "-device", "usb-storage,drive=install" ],
[ "-drive", "file=/root/packer-build-master/source/debian/debian-11.2.0-arm64-netinst.iso,if=none,id=install,media=cdrom,readonly=on" ],
# hd
[ "-drive", "if=none,file=build/test/arm64/base-bullseye,id=hd,cache=writeback,discard=ignore,format=qcow2" ],
["-device", "virtio-blk-device,drive=hd"],
# [ "-bios", "edk2-aarch64-code.fd" ],
[ "-bios", "/usr/share/qemu-efi-aarch64/QEMU_EFI.fd" ],
[ "-boot", "strict=off" ],
[ "-monitor", "none" ]
The preseed file is taken directly from the x86 example, because the community examples have no arm64 version. Another pitfall: the community example uses Packer variables inside the preseed file, but that feature has not been implemented yet, so in my local preseed I replaced the variables with their actual values.
Preseed code
Operation result and error output
Connecting to the QEMU virtual machine through VNC to check whether the disk-partitioning step is reached, I see "No root file system is defined".
My solution ideas and attempted methods
I looked for a Debian preseed partitioning example for arm64, but could not find one.
Official links:
https://www.debian.org/releases/stable/arm64/apbs04.en.html#preseed-partman (the preseed section on partitioning)
https://salsa.debian.org/installer-team/partman-auto (the upstream repository contains partitioning recipe examples for various architectures, but none for arm64)
What I want to achieve
Find an arm64 Debian preseed partitioning recipe so that the automated Packer installation can continue.
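Something along these lines is what I have in mind (an untested sketch, not an official arm64 example: I am assuming the virtio disk shows up as /dev/vda rather than /dev/sda, the EFI part of the recipe is adapted from the amd64 EFI example in the installation guide, and the sizes are placeholders):

d-i partman-auto/method string regular
# Assumption: with -device virtio-blk-device the installer sees the disk as /dev/vda
d-i partman-auto/disk string /dev/vda
# Recipe adapted from the amd64 EFI example in the installation guide; untested on arm64
d-i partman-auto/expert_recipe string \
      boot-root :: \
              538 538 1075 free \
                      $iflabel{ gpt } \
                      $reusemethod{ } \
                      method{ efi } format{ } \
              . \
              1024 10000 -1 ext4 \
                      method{ format } format{ } \
                      use_filesystem{ } filesystem{ ext4 } \
                      mountpoint{ / } \
              .
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true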

Related

How do I use the HTML/CSV reporters in pa11y with GitHub Actions?

I'm trying to get pa11y to output HTML and CSV reports.
Here are the errors:
Unable to load reporter "csv"
Unable to load reporter "html"
I have pa11y configured to generate cli, csv, and html reports, but only the cli report is output correctly.
My pa11yconfig.json looks like this:
{
"standard": "WCAG2AAA",
"level": "notice",
"defaults": {
"chromeLaunchConfig": {
"args": [
"--no-sandbox"
]
},
"reporters": [
"cli",
"csv",
"html"
],
"runners": [
"axe",
"htmlcs"
],
"timeout": 1000000,
"wait": 2000
}
}
And I'm running pa11y like this:
pa11y-ci --sitemap "$SITEMAP_URL" > "$OUTPUT_DIR/success-pa11y-report.txt" 2> "$OUTPUT_DIR/failures-pa11y-report.txt"
This command is being executed as part of GitHub Actions, which looks like this:
- name: Install pa11y.
run: npm install -g pa11y-ci
- name: 'TEST: Run pa11y tests.'
run: my-pa11y-script.sh
My understanding is that the reporters are now bundled with pa11y, so how can I get pa11y to recognize them?
As noted by @José Luis, pa11y and pa11y-ci reporters are different.
There is no csv reporter for pa11y-ci, but there is a bundled json reporter.
As for html reporters, there is an html reporter included with pa11y, but for pa11y-ci, you need to download the pa11y-ci-reporter-html npm module.
Reference:
The pa11y-ci docs currently refer to the deprecated pa11y-html-reporter module, which will not work; I've opened a PR to update the docs.
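A minimal sketch of that workflow follows; the pa11y-ci-reporter-html option names are an assumption on my part, so double-check them against the module's README:

# Install pa11y-ci and the separate HTML reporter module
npm install -g pa11y-ci pa11y-ci-reporter-html
# Use the bundled JSON reporter and save the results to a file
pa11y-ci --sitemap "$SITEMAP_URL" --json > "$OUTPUT_DIR/pa11y-ci-results.json"
# Turn the JSON results into HTML pages
# (the --source/--destination options are assumptions; check the module's README)
pa11y-ci-reporter-html --source "$OUTPUT_DIR/pa11y-ci-results.json" --destination "$OUTPUT_DIR/html-report"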

Crontab to reload PM2 cluster processes

I've been going around on this issue for a day with no effective results. Imagine I want to reload (zero downtime, thus not restart) a PM2 cluster process every hour with crontab.
The PM2 app is this one, which I start with sudo pm2 start app.json (btw, I think it's not relevant, but I'm using pm2 as sudo, I can't recall why)
{
"apps" : [{
"name" : "autocosts.prod",
"script" : "bin/server.js",
"cwd" : "/var/www/autocosts.prod/",
"node_args" : "--use_strict",
"args" : "-r prod --print --pdf --social --googleCaptcha --googleAnalytics --database",
"exec_mode" : "cluster",
"instances" : 4,
"wait_ready" : true,
"listen_timeout" : 50000,
"watch" : false,
"exp_backoff_restart_delay" : 200,
"env": {
"NODE_ENV": "production"
},
"log_date_format": "DD-MM-YYYY"
}]
}
My crontab line is
# reload every hour
0 * * * * /usr/local/bin/node /usr/local/bin/pm2 reload /var/www/autocosts.prod/app.json > /var/log/pm2/app.cron.log 2>&1
But on that log file I get an error
[PM2] Applying action reloadProcessId on app [autocosts.prod](ids: [ 0, 1, 2, 3 ])
[PM2][ERROR] Process 0 not found
Process 0 not found
It seems PM2 can't detect the id numbers of the cluster processes.
How can I get crontab to reload (zero downtime) a PM2 cluster process?
I found the problem: it was indeed due to the fact that I was running pm2 as sudo, and crontab in sudo mode has no access to a number of environment variables.
Change the owner of PM2 to yourself (run which pm2 to find the full path of the executable) so that you can use it without sudo. Do the same for ~/.pm2.
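A minimal sketch of that fix, assuming pm2 lives at /usr/local/bin/pm2 and your login is youruser (both placeholders):

# Take ownership of the pm2 executable and of PM2's home directory
sudo chown youruser: "$(which pm2)"
sudo chown -R youruser: /home/youruser/.pm2
# Then reload from your own crontab (crontab -e), without sudo:
# 0 * * * * /usr/local/bin/pm2 reload /var/www/autocosts.prod/app.json > /var/log/pm2/app.cron.log 2>&1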

MalformedPolicyDocument error while creating an IAM Policy

I'm trying to create a managed policy by AWS CLI:
POLICY='
{
"Version":"2012-10-17",
"Statement":
[{
"Effect":"Allow",
"Action":
[
"cloudformation:*"
],
"Resource":"*"
},
{
"Effect":"Deny",
"Action":
[
"cloudformation:UpdateStack",
"cloudformation:DeleteStack"
],
"Resource": "'${arn}'"
}]
}'
# Create policy if not already created
[ $(aws iam list-policies | grep -ce CloudFormation-policy-${StackName}) -eq 0 ] && (aws iam create-policy --policy-name CloudFormation-policy-${StackName} --policy-document "'${POLICY}'")
When I run the script I get this error:
An error occurred (MalformedPolicyDocument) when calling the CreatePolicy operation: Syntax errors in policy.
I can't figure out where the error is.
Any idea?
Each operating system has its own way of treating single-quote vs double-quote escaping, and as per the AWS CLI documentation:
When passing in large blocks of data, you might find it easier to save
the JSON to a file and reference it from the command line. JSON data
in a file is easier to read, edit, and share with others.
The quoting-strings approach might not be the best choice for passing JSON data; use the loading-parameters-from-a-file approach instead.
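A sketch of that approach for this script (policy.json is an arbitrary file name; the unquoted heredoc lets the shell expand ${arn}):

# Write the policy document to a file
cat > policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudformation:*"],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": ["cloudformation:UpdateStack", "cloudformation:DeleteStack"],
      "Resource": "${arn}"
    }
  ]
}
EOF
# Pass the file with the file:// prefix instead of inlining the JSON
aws iam create-policy \
  --policy-name "CloudFormation-policy-${StackName}" \
  --policy-document file://policy.json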

MySQL import hangs on Vagrant CoreOS box on Mac

I have a local development setup using the following:
Mac Yosemite 10.10.3
Vagrant 1.7.3
CoreOS alpha version 681.0.0
2 Docker containers one for apache PHP and another for mysql both based on Ubuntu 12.10
It's set up to sync the local dev directory ~/Sites to the Vagrant box using NFS, since my working directories as well as the MySQL data directories are located there (~/Sites/.coreos-databases/mysql). From what I have read this is not the best type of setup, but it has worked for me for quite some time, as well as for others at work.
Recently I have not been able to import any database dumps into this setup. The import starts and hangs approximately halfway through the process. It happens on the command line as well as with Sequel Pro. It does import some of the tables, but freezes at exactly the same spot every time. It doesn't seem to matter what the size of the dump is - the one I have been attempting is only 104Kb. Someone else is having the same issue with a 100MB+ dump - freezing at the same spot approximately halfway through.
My Vagrantfile:
Vagrant.configure("2") do |config|
# Define the CoreOS box
config.vm.box = "coreos-alpha"
config.vm.box_url = "http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json"
# Define a static IP
config.vm.network "private_network",
ip: "33.33.33.77"
# Share the current folder via NFS
config.vm.synced_folder ".", "/home/core/sites",
id: "core",
:nfs => true,
:mount_options => ['nolock,vers=3,udp,noatime']
# Provision docker with shell
# config.vm.provision
config.vm.provision "shell",
path: ".coreos-devenv/scripts/provision-docker.sh"
end
Dockerfile for mysql:
# Start with Ubuntu base
FROM ubuntu:12.10
# Install some basics
RUN apt-get update
# Install mysql
RUN apt-get install -y mysql-server
# Clean up after install
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Add a grants file to set up remote user
# and disable the root user's remote access.
ADD grants.sql /etc/mysql/
# Add a conf file for correcting "listen"
ADD listen.cnf /etc/mysql/conf.d/
# Run mysqld on standard port
EXPOSE 3306
ENTRYPOINT ["/usr/sbin/mysqld"]
CMD ["--init-file=/etc/mysql/grants.sql"]
I 'vagrant ssh' in and run dmesg and this is what it spits out after it freezes:
[ 465.504357] nfs: server 33.33.33.1 not responding, still trying
[ 600.091356] INFO: task mysqld:1501 blocked for more than 120 seconds.
[ 600.092388] Not tainted 4.0.3 #2
[ 600.093277] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 600.094442] mysqld D ffff880019dbfbc8 0 1501 939 0x00000000
[ 600.095953] ffff880019dbfbc8 ffffffff81a154c0 ffff88001ec61910 ffff880019dbfba8
[ 600.098871] ffff880019dbffd8 0000000000000000 7fffffffffffffff 0000000000000002
[ 600.101594] ffffffff8150b4e0 ffff880019dbfbe8 ffffffff8150ad57 ffff88001ed5eb18
[ 600.103794] Call Trace:
[ 600.104376] [<ffffffff8150b4e0>] ? bit_wait+0x50/0x50
[ 600.105934] [<ffffffff8150ad57>] schedule+0x37/0x90
[ 600.107505] [<ffffffff8150da7c>] schedule_timeout+0x20c/0x280
[ 600.108369] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.109370] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.110353] [<ffffffff8101d799>] ? read_tsc+0x9/0x10
[ 600.111327] [<ffffffff810d731e>] ? ktime_get+0x3e/0xa0
[ 600.112347] [<ffffffff8150a31c>] io_schedule_timeout+0xac/0x130
[ 600.113368] [<ffffffff810a9ee7>] ? prepare_to_wait+0x57/0x90
[ 600.114358] [<ffffffff8150b516>] bit_wait_io+0x36/0x50
[ 600.115332] [<ffffffff8150b145>] __wait_on_bit+0x65/0x90
[ 600.116343] [<ffffffff81146072>] wait_on_page_bit+0xc2/0xd0
[ 600.117453] [<ffffffff810aa360>] ? autoremove_wake_function+0x40/0x40
[ 600.119304] [<ffffffff81146179>] filemap_fdatawait_range+0xf9/0x190
[ 600.120646] [<ffffffff81152ffe>] ? do_writepages+0x1e/0x40
[ 600.121346] [<ffffffff81147f96>] ? __filemap_fdatawrite_range+0x56/0x70
[ 600.122397] [<ffffffff811480bf>] filemap_write_and_wait_range+0x3f/0x70
[ 600.123460] [<ffffffffa0207b1e>] nfs_file_fsync_commit+0x23e/0x3c0 [nfs]
[ 600.124399] [<ffffffff811e7bf0>] vfs_fsync_range+0x40/0xb0
[ 600.126163] [<ffffffff811e7cbd>] do_fsync+0x3d/0x70
[ 600.127092] [<ffffffff811e7f50>] SyS_fsync+0x10/0x20
[ 600.128086] [<ffffffff8150f089>] system_call_fastpath+0x12/0x17
Any ideas as to what's going on here?
I am also using this same setup. Vagrant defaults to UDP, so removing that from your mount options seems to work. I haven't tested that on its own, but I didn't run into the MySQL issues you had.
config.vm.synced_folder ".", "/home/core/sites",
id: "core",
nfs_version: "4",
:nfs => true,
:mount_options => ['nolock,noatime']
This worked for me. YMMV.

RabbitMQ 3.3.4 shovel configuration crashes the start process

I'm trying to configure the shovel plugin via the config file (running in docker) but I get this error:
BOOT FAILED
===========
Error description:
{error,{failed_to_cluster_with,[rabbit@dalmacpmfd57],
"Mnesia could not connect to any nodes."}}
The config is set up this way because the destination for the shovel will be created on demand when a dev environment is spun up; the source is a permanently running RabbitMQ instance that the new dev environment will attach to.
Here is the config file contents:
[
{rabbitmq_shovel,
[{shovels,
[{indexer_replica_static,
[{sources,
[{broker, [ "amqp://guest:guest#rabbitmq/newdev" ]},
{declarations,
[{'queue.declare', [{queue, <<"Indexer_Replica_Static">>}, durable]},
{'queue.bind',[ {exchange, <<"Indexer">>}, {queue, <<"Indexer_Replica_Static">>}]}
]
}
]
},
{destinations,
[{broker, "amqp://"},
{declarations, [ {'exchange.declare', [ {exchange, <<"Indexer_Replica_Static">>}
, {type, <<"fanout">>}, durable]},
{'queue.declare', [
{queue, <<"Indexer_Replica_Static">>},
durable]},
{'queue.bind',
[ {exchange, <<"Indexer_Replica_Static">>}
, {queue, <<"Indexer_Replica_Static">>}
]}
]
}
]
},
{queue, <<"Indexer_Replica_Static">>},
{prefetch_count, 0},
{ack_mode, on_confirm},
{publish_properties, [ {delivery_mode, 2} ]},
{reconnect_delay, 2.5}
]
}
]
},
{reconnect_delay, 2.5}
]
}
].
[UPDATE]
This is being run in Docker, but since I couldn't debug the issue in Docker I tried booting up RabbitMQ locally with the same config file. I noticed in the logs that the config environment variable I set (RABBITMQ_CONFIG_FILE) isn't reflected in the log and that the shovel settings haven't been applied (no surprise, huh). I verified the variable with an echo statement and the correct path is displayed: /dev/rabbitmq_server-3.3.4/rabbitmq
=INFO REPORT==== 3-Sep-2014::15:30:37 ===
node : rabbit@dalmacpmfd57
home dir : /Users/e002678
config file(s) : (none)
cookie hash : n6vhh8tY7Z+uR2DV6gcHUg==
log : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57.log
sasl log : /usr/local/rabbitmq_server-3.3.4/sbin/../var/log/rabbitmq/rabbit@dalmacpmfd57-sasl.log
database dir : /usr/local/rabbitmq_server-3.3.4/sbin/../var/lib/rabbitmq/mnesia/rabbit@dalmacpmfd57
Thanks!
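For what it's worth, a note on that last point: my understanding is that RABBITMQ_CONFIG_FILE has to name the config file without its .config extension (the server appends it) and has to be exported in the environment of the process that actually starts the broker, otherwise the banner shows "config file(s) : (none)". A sketch of what I mean (paths are my local ones; treat the details as assumptions to verify against the rabbitmq-env docs):

# Point at the config file *without* the .config suffix and export it
# before starting the server, in the same shell/environment.
export RABBITMQ_CONFIG_FILE=/dev/rabbitmq_server-3.3.4/rabbitmq
/usr/local/rabbitmq_server-3.3.4/sbin/rabbitmq-server
# The startup banner should then list the config file instead of "(none)".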