Problems executing an aws command via ssh in Jenkins - json

Good morning, how are you?
I have a problem with an execution via ssh in my Jenkins.
The characters that now appear did not appear before, and we have not changed anything on the node.
The code we use is:
withCredentials([usernamePassword(credentialsId: 'id', passwordVariable: 'pass', usernameVariable: 'user')]) {
    def remote = [:]
    remote.name = 'id_nme'
    remote.host = 'ip_node'
    remote.user = user
    remote.password = pass
    remote.allowAnyHosts = true
    remote.timeoutSec = 300
    sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg]"
}
When the command is launched, the job gets stuck and does not progress.
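One note in case it helps with debugging: a frequent cause of exactly this kind of hang with AWS CLI v2 is its default output pager, which can block waiting for input in a non-interactive SSH session. A minimal sketch of the same call with the pager disabled (this assumes AWS CLI v2 on the node; it is a guess, not a confirmed diagnosis):
sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg] --no-cli-pager"
Setting the environment variable AWS_PAGER="" on the node achieves the same thing.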

Related

Packer Autoinstall with Ubuntu 21.04/21.10 Desktop

I was trying to automate a Packer build of Ubuntu Desktop 21.04 in vSphere with the HCL below. I have since found that this will only start to work from 21.10 for desktop images.
See:
https://discourse.ubuntu.com/t/refreshing-the-ubuntu-desktop-installer/20659/76?u=nathanto
The original question is below to help others.
The key piece is where the boot command defines the seedfrom. That seems not to work, in the sense that the user-data is never loaded. The VM boots, and the net.ifnames=0 argument from the boot command is applied (interfaces are named eth0).
The logic of the boot command is to press c to get to the grub> prompt, and then the commands are entered as shown in the boot_command below.
I see in the /proc/cmdline that the boot command is applied properly.
I can see no indication that the user-data is loaded though. If I look at the web server shown in the boot command, using Firefox from the booted VM, the user-data and meta-data files are there and accessible.
Does anyone have any ideas on how to debug this, please? (For reference, a sketch of the seed-file layout follows the HCL below.)
source "vsphere-iso" "dev_vm" {
username = var.vcenter_username
password = var.vcenter_password
vcenter_server = var.vcenter_server
cluster = var.vcenter_cluster
datacenter = var.vcenter_datacenter
datastore = var.vcenter_vm_datastore
guest_os_type = "ubuntu64Guest"
insecure_connection = "true"
iso_checksum = "sha256:fa95fb748b34d470a7cfa5e3c1c8fa1163e2dc340cd5a60f7ece9dc963ecdf88"
iso_urls = ["https://releases.ubuntu.com/21.04/ubuntu-21.04-desktop-amd64.iso"]
http_directory = "./http"
vm_name = "dev_vm"
CPUs = 2
RAM = 2048
RAM_reserve_all = true
boot_wait = "3s"
convert_to_template = false
boot_command = [
"c",
"linux /casper/vmlinuz --- autoinstall ds='nocloud-net;seedfrom=http://{{.HTTPIP}}:{{.HTTPPort}}/' net.ifnames=0 ",
"<enter><wait>",
"initrd /casper/initrd<enter><wait>",
"boot<enter>"
]
network_adapters {
network = "xxx"
network_card = "e1000"
}
storage {
disk_size = 40960
disk_thin_provisioned = true
}
ssh_username = "xx"
ssh_password = "xx"
ssh_timeout = "60m"
}
build {
sources = [
"source.vsphere-iso.dev_vm"
]
...
}
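For reference, the http_directory above is expected to contain the NoCloud seed files that seedfrom points at. A minimal sketch of what http/user-data could look like (the identity values are placeholders, not from the original build; meta-data can simply be an empty file):
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: dev-vm            # placeholder
    username: xx                # matches ssh_username above
    password: "<crypted-hash>"  # placeholder; generate with e.g. mkpasswd -m sha-512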
I was working with Packer and VirtualBox, installing Ubuntu Server 20.04 with autoinstall, and I watched the installation process happen. What I see is cloud-init running, and then, after it reaches the point where on 20.04 the server reboots itself, the graphical installer launches.
I have tried 8 or 9 versions of the boot_command that people swear work, and so far none of them have. I am looking for how people create the boot_command. I am new to Ubuntu autoinstall and Packer, so I am learning.
Also, they are revamping Ubiquity:
https://discourse.ubuntu.com/t/refreshing-the-ubuntu-desktop-installer/20659/73
So this may impact you as well.

MySQL Setup in a Linux Docker Container via Terraform

Requirement: I need to automate MySQL installation and database creation in a Linux (Ubuntu) Docker container via Terraform.
I am doing all of this on my local machine; below is the Terraform configuration.
Terraform file:
resource "docker_container" "db-server1" {
name = "db-server"
image = docker_image.ubuntu.latest
ports {
internal = 80
external = 9093
}
provisioner "local-exec" {
command = "docker container start dbs-my"
}
provisioner "local-exec" {
command = "docker exec dbs-my apt-get update"
}
provisioner "local-exec" {
command = "docker exec dbs-my apt-get -y install mysql-server"
}
}
But there is no MySQL service present in the container; when I try to launch the mysql command, I get the error below:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
Using Terraform for this at all is a little unusual; you might look at more Docker-native tools like Docker Compose to set this up. There are also several anti-patterns in this example: you should generally avoid installing software in running containers, avoid running long sequences of imperative commands via Terraform, and it's usually not useful to run the bare ubuntu Docker image as-is.
You can run the Docker Hub mysql image instead:
resource "docker_image" "mysql" {
name = "mysql:8"
}
resource "random_password" "mysql_root_password" {
length = 16
}
resource "docker_container" "mysql" {
name = "mysql"
image = "${docker_image.mysql.latest}"
env {
MYSQL_ROOT_PASSWORD = "${random_password.mysql_root_password.result}"
}
mounts {
source = "/some/host/mysql/data/path"
target = "/var/lib/mysql/data"
type = "bind"
}
ports {
internal = 3306
external = 3306
}
}
If you wanted to do further setup on the created database, you could use the MySQL provider:
provider "mysql" {
  endpoint = "127.0.0.1:3306" # the "external" port
  username = "root"
  password = "${random_password.mysql_root_password.result}"
}

resource "mysql_database" "db" {
  name = "db"
}
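Going one step further, here is a sketch of how the same provider could create an application user and grant; the user name, host pattern, and privilege list are illustrative assumptions, not part of the original answer:
resource "mysql_user" "app" {
  user               = "app"       # hypothetical user name
  host               = "%"
  plaintext_password = "change-me" # illustration only; prefer a random_password resource
}

resource "mysql_grant" "app" {
  user       = mysql_user.app.user
  host       = mysql_user.app.host
  database   = mysql_database.db.name
  privileges = ["SELECT", "INSERT", "UPDATE", "DELETE"]
}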

How do I enable Docker's experimental features so GitLab can build with them

In a small swarm that has experimental features enabled and that hosts gitlab-runners, I seem to be unable to build with experimental features enabled. (I want to build with the --squash option.)
docker version on the host shows Experimental: true, but the same command in the gitlab-ci-runner shows Experimental: false.
I can't seem to find any additional configuration options...
My runner config:
cat /srv/data/gitlab-runner/etc/config.toml
concurrent = 4
check_interval = 0

[[runners]]
  name = "bushbaby-general-ci"
  url = "xxx"
  token = "xxx"
  environment = ["COMPOSER_CACHE_DIR=/cache/composer", "COMPOSER_ALLOW_SUPERUSER=1", "YARN_CACHE_FOLDER=/cache/yarn"]
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_cache = false
    volumes = ["/var/cache:/cache:rw"]
    shm_size = 0
  [runners.cache]
I figured it out. Ensure this is placed inside the .gitlab-ci.yml:
some_build:
  stage: build
  image: docker:git
  services:
    - name: docker:dind
      command: ["--experimental"]
I presume the "privileged = true" in the runner config is also required, but I have not tested that.
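For completeness, a job that actually uses the flag could then look something like this (the script line and image tag are illustrative, not from the original answer):
some_build:
  stage: build
  image: docker:git
  services:
    - name: docker:dind
      command: ["--experimental"]
  script:
    # --squash only works against a daemon started with --experimental
    - docker build --squash -t "$CI_REGISTRY_IMAGE:latest" .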

Grails H2 Database DbConsole - Database backup

I have Grails 2.0, which comes with the H2 database and dbconsole.
I want to take a database backup from dbconsole:
database url: "jdbc:mysql://localhost/opal"
username: root
password: (none)
In the Tools section of dbconsole there is an option to back up the database.
It asks for three things:
Target file name: ~/backup.zip (by default)
Source directory:
Source database name: opal (the name of my database)
When I press Run, it gives the error:
No database files have been found in directory E:/Workspace/opal for the database opal
Can anybody suggest how to take the database backup?
I've never gotten that to work. If you just want a snapshot of data for development (loaded on startup), I found that using DBUnit to export/import the data worked great for me. I wrote a script to export it that I call from the console:
import java.sql.Connection
import org.codehaus.groovy.grails.web.context.ServletContextHolder as SCH
import org.codehaus.groovy.grails.web.servlet.GrailsApplicationAttributes as GA
import org.dbunit.database.DatabaseConnection
import org.dbunit.database.DatabaseSequenceFilter
import org.dbunit.database.IDatabaseConnection
import org.dbunit.dataset.FilteredDataSet
import org.dbunit.dataset.IDataSet
import org.dbunit.dataset.filter.ITableFilter
import org.dbunit.dataset.xml.FlatXmlDataSet

class DataExport {
    // grab the Spring application context from the servlet context
    def ctx = SCH.servletContext.getAttribute(GA.APPLICATION_CONTEXT)

    def exportData() {
        println "-->export"
        def ds = ctx.dataSourceUnproxied
        println ds.dump()
        Connection jdbcConnection = ctx.dataSourceUnproxied.getConnection()
        IDatabaseConnection connection = new DatabaseConnection(jdbcConnection)
        println connection.dump()
        // order tables so foreign-key dependencies are exported first
        ITableFilter filter = new DatabaseSequenceFilter(connection)
        IDataSet dataset = new FilteredDataSet(filter, connection.createDataSet())
        FlatXmlDataSet.write(dataset, new File("full.xml").newWriter())
        connection.close()
    }
}
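With that in place, the export can be run from the Grails console with something like:
new DataExport().exportData()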
And then in BootStrap you can load it back in:
import java.sql.Connection
import org.dbunit.database.DatabaseConnection
import org.dbunit.database.IDatabaseConnection
import org.dbunit.dataset.xml.FlatXmlDataSet
import org.dbunit.dataset.xml.FlatXmlDataSetBuilder
import org.dbunit.operation.DatabaseOperation
import org.springframework.core.io.ClassPathResource

Connection jdbcConnection
FlatXmlDataSet dataSet = new FlatXmlDataSetBuilder().build(new ClassPathResource('resources/data/full.xml').inputStream)
jdbcConnection = ctx.dataSource.getConnection() // ctx obtained as above
IDatabaseConnection connection = new DatabaseConnection(jdbcConnection)
try {
    // INSERT adds the rows without first deleting existing data
    DatabaseOperation.INSERT.execute(connection, dataSet)
} catch (e) {
    e.printStackTrace()
    throw e
} finally {
    jdbcConnection.close()
}
log.info 'data loaded'
I think this site would be quite helpful. Or, to take a dump of the database and restore it, try the snippet below:
mysqldump -u root -p my_database Table1 Table2 > /home/user/tablesDump.sql;
and to restore the table(s) back:
mysql -u root -p my_database_2
mysql> source /home/user/tablesDump.sql;
Both tables were created in my_database_2.

Enable logging in Mercurial bugzilla extension scripts

In Mercurial, how can I enable logging in the bugzilla extension scripts? E.g., the "self.ui.note" calls inside bugzilla.py.
host = self.ui.config('bugzilla', 'host', 'localhost')
user = self.ui.config('bugzilla', 'user', 'bugs')
passwd = self.ui.config('bugzilla', 'password')
db = self.ui.config('bugzilla', 'db', 'bugs')
timeout = int(self.ui.config('bugzilla', 'timeout', 5))
self.ui.note(_('connecting to %s:%s as %s, password %s\n') %
             (host, db, user, '*' * len(passwd)))
self.conn = bzmysql._MySQLdb.connect(host=host,
                                     user=user, passwd=passwd,
                                     db=db,
                                     connect_timeout=timeout)
self.cursor = self.conn.cursor()
self.longdesc_id = self.get_longdesc_id()
self.user_ids = {}
self.default_notify = "cd %(bzdir)s && ./processmail %(id)s %(user)s"
I'm not sure it will do what you want, since I don't have time to look in the source, but you should try one of the following command-line parameters:
-v, --verbose
enable additional output
--debug
enable debugging output
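For example, running the command that fires the bugzilla hook with one of these flags should surface the extension's ui.note output; a sketch (pull is just an example trigger, use whatever operation invokes the hook in your setup):
hg --verbose pull   # prints ui.note messages, e.g. the 'connecting to ...' line
hg --debug pull     # prints ui.debug output as well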