How do I enable Docker's experimental features so GitLab can build with them? - gitlab-ci-runner

In a small swarm that has experimental features enabled and hosts gitlab-runners, I seem to be unable to build with experimental features enabled (I want to build with the --squash option).
docker version on the host shows experimental: true, but the same command in the gitlab-ci-runner shows experimental: false.
I can't seem to find any additional configuration options...
my runner config:
cat /srv/data/gitlab-runner/etc/config.toml
concurrent = 4
check_interval = 0
[[runners]]
  name = "bushbaby-general-ci"
  url = "xxx"
  token = "xxx"
  environment = ["COMPOSER_CACHE_DIR=/cache/composer", "COMPOSER_ALLOW_SUPERUSER=1", "YARN_CACHE_FOLDER=/cache/yarn"]
  executor = "docker"
  [runners.docker]
    tls_verify = false
    image = "alpine:latest"
    privileged = true
    disable_cache = false
    volumes = ["/var/cache:/cache:rw"]
    shm_size = 0
  [runners.cache]

I figured it out: ensure this is placed inside the .gitlab-ci.yml:
some_build:
  stage: build
  image: docker:git
  services:
    - name: docker:dind
      command: ["--experimental"]
I presume the privileged = true in the runner config is also required, but I have not tested that.
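To verify that the dind service really runs in experimental mode, here is a minimal sketch of the full job (the image name my-image and the trailing docker build step are illustrative; depending on the runner network setup you may also need to set DOCKER_HOST to tcp://docker:2375):

some_build:
  stage: build
  image: docker:git
  services:
    - name: docker:dind
      command: ["--experimental"]
  script:
    - docker version --format '{{.Server.Experimental}}'  # should print true
    - docker build --squash -t my-image .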

Related

Packer Autoinstall with Ubuntu 21.04/21.10 Desktop

I was trying to automate a Packer build of Ubuntu Desktop 21.04 in vSphere with the HCL below. I have since found that this will only start to work from 21.10 for desktop images.
See:
https://discourse.ubuntu.com/t/refreshing-the-ubuntu-desktop-installer/20659/76?u=nathanto
The original question is below to help others.
The key piece is where the boot command defines the seedfrom URL. That seems not to work, in the sense that the user-data is never loaded. The VM boots, and the net.ifnames=0 argument from the boot command is applied (interfaces are named eth0).
The logic of the boot command is to press c to get to the grub> prompt, and then the commands are entered as shown in the boot_command below.
I see in the /proc/cmdline that the boot command is applied properly.
I can see no indication that the user-data is loaded though. If I look at the web server shown in the boot command, using Firefox from the booted VM, the user-data and meta-data files are there and accessible.
Does anyone have any ideas of how to debug this please?
source "vsphere-iso" "dev_vm" {
username = var.vcenter_username
password = var.vcenter_password
vcenter_server = var.vcenter_server
cluster = var.vcenter_cluster
datacenter = var.vcenter_datacenter
datastore = var.vcenter_vm_datastore
guest_os_type = "ubuntu64Guest"
insecure_connection = "true"
iso_checksum = "sha256:fa95fb748b34d470a7cfa5e3c1c8fa1163e2dc340cd5a60f7ece9dc963ecdf88"
iso_urls = ["https://releases.ubuntu.com/21.04/ubuntu-21.04-desktop-amd64.iso"]
http_directory = "./http"
vm_name = "dev_vm"
CPUs = 2
RAM = 2048
RAM_reserve_all = true
boot_wait = "3s"
convert_to_template = false
boot_command = [
"c",
"linux /casper/vmlinuz --- autoinstall ds='nocloud-net;seedfrom=http://{{.HTTPIP}}:{{.HTTPPort}}/' net.ifnames=0 ",
"<enter><wait>",
"initrd /casper/initrd<enter><wait>",
"boot<enter>"
]
network_adapters {
network = "xxx"
network_card = "e1000"
}
storage {
disk_size = 40960
disk_thin_provisioned = true
}
ssh_username = "xx"
ssh_password = "xx"
ssh_timeout = "60m"
}
build {
sources = [
"source.vsphere-iso.dev_vm"
]
...
}
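For anyone trying to narrow this down, a debugging sketch from inside the booted VM (standard cloud-init tooling, not something from the original post):

cat /proc/cmdline                          # confirm autoinstall and the ds=... argument survived
cloud-init status --long                   # overall cloud-init state
grep -i nocloud /var/log/cloud-init.log    # did cloud-init attempt the NoCloud datasource?
curl http://<HTTPIP>:<HTTPPort>/user-data  # substitute the Packer HTTP server address and port

If cloud-init.log never mentions the seedfrom URL, the installer ignored the datasource argument, which matches the note above that desktop autoinstall only works from 21.10.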
I was working with Packer and VirtualBox, installing Ubuntu Server 20.04 with autoinstall, and I watched the installation process happen. What I see is cloud-init running, and then, after it reaches the point where on 20.04 the server reboots itself, the graphical installer launches.
I have tried 8 or 9 versions of the boot_command that people swear work, and so far none of them has. I am looking for how people create the boot_command. I am new to Ubuntu autoinstall and Packer, so I am learning.
Also, they are revamping ubiquity:
https://discourse.ubuntu.com/t/refreshing-the-ubuntu-desktop-installer/20659/73
So this may impact you as well.

Problems with execution aws command via ssh jenkins

Good morning. I have a problem with a command executed via SSH from my Jenkins.
Some garbled characters now appear in the output that did not appear before, and we have not changed anything on the node.
The code we use is:
withCredentials([usernamePassword(credentialsId: 'id', passwordVariable: 'pass', usernameVariable: 'user')]) {
    def remote = [:]
    remote.name = 'id_nme'
    remote.host = 'ip_node'
    remote.user = user
    remote.password = pass
    remote.allowAnyHosts = true
    remote.timeoutSec = 300
    sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg]"
}
When the command is launched, the job is stuck and does not progress.
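One plausible cause (an assumption on my part, not confirmed in the question): AWS CLI v2 pipes its output through a pager by default, which blocks indefinitely in a non-interactive SSH session. Disabling the pager for that single call looks like this:

sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg] --no-cli-pager"

Alternatively, set AWS_PAGER="" in the remote user's environment to disable the pager for every aws invocation.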

airflow: All tasks finished but dag state is running

I'm using Airflow 2.0.0 with CeleryExecutor and mysql-8.0.22.
Every time we execute any DAG, irrespective of the task statuses being failed/success/mixed, the overall DAG status is always running.
Because of this, after some time the scheduler also crashes.
Airflow is installed at /root/
Here is the airflow.cfg:
[core]
dags_folder = /var/airflow/dags
executor = CeleryExecutor
sql_alchemy_conn = mysql://user:password@localhost:3306/airflow
[logging]
base_log_folder = /var/airflow/logs
[webserver]
base_url = http://localhost:8080
default_ui_timezone = UTC
web_server_host = 0.0.0.0
web_server_port = 8080
[celery]
celery_app_name = airflow.executors.celery_executor
worker_concurrency = 8
worker_log_server_port = 8793
broker_url = sqla+mysql://user:password@localhost:3306/celery
result_backend = db+mysql://user:password@localhost:3306/celery
flower_host = 0.0.0.0
flower_port = 5555
operation_timeout = 1.0
[scheduler]
child_process_log_directory = /var/airflow/logs/scheduler
Can somebody help?
Is your database running on the same node as your Airflow worker? The localhost connection strings look wrong for a CeleryExecutor setup, where the scheduler, workers, and database usually run on separate hosts.
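As a sketch of a more conventional layout (the hostnames db-host and redis-host are hypothetical): Celery's SQLAlchemy broker transport is experimental, so a Redis or RabbitMQ broker reachable from every node is the usual choice, and the metadata database should also live at an address all components can resolve, not localhost:

[core]
sql_alchemy_conn = mysql://user:password@db-host:3306/airflow

[celery]
broker_url = redis://redis-host:6379/0
result_backend = db+mysql://user:password@db-host:3306/celery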

JRuby / Warbler / GlassFish - (NameError) uninitialized constant ApplicationController::SessionsHelper

Really Short Story:
I'm incredibly frustrated by this issue.
Short Story:
JRuby 1.7.2, building to a .war using Warbler (1.3.8), deploying to a GlassFish v3 server. I can build on my machine and everything works fine; however, when I try to build with Jenkins, the war gives the following error when trying to load the first page:
org.jruby.exceptions.RaiseException: (NameError) uninitialized constant ApplicationController::SessionsHelper
Long Story:
Build script on our Jenkins server:
#path to rvm
[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm"
# Use the correct ruby
rvm use "jruby-1.7.2@webadmin"
# Set "fail on error" in bash
set -e
# build
bundle update
warble compiled war
Error log from GlassFish, which I hope has enough info:
[#|2013-05-31T17:10:14.634-0400|INFO|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=19;_ThreadName=Thread-2;|PWC1412: WebModule[null] ServletContext.log():INFO: pool was empty - getting new application instance|#]
[#|2013-05-31T17:10:25.181-0400|INFO|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=19;_ThreadName=Thread-2;|PWC1412: WebModule[null] ServletContext.log():An exception happened during JRuby-Rack startup
uninitialized constant ApplicationController::SessionsHelper
--- System
jruby 1.7.4 (1.9.3p392) 2013-05-16 2390d3b on OpenJDK 64-Bit Server VM 1.6.0_27-b27 [linux-amd64]
Time: 2013-05-31 17:10:25 -0400
Server: GlassFish Server Open Source Edition 3.1.2.2
jruby.home: classpath:/META-INF/jruby.home
--- Context Init Parameters:
com.sun.faces.forceLoadConfiguration = true
com.sun.faces.validateXml = true
public.root = /
rails.env = production
--- Backtrace
NameError: uninitialized constant ApplicationController::SessionsHelper
--- RubyGems
Gem.dir: /opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF/gems
Gem.path:
/opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF/gems
Activated gems:
bundler-1.3.5
rake-10.0.4
i18n-0.6.1
multi_json-1.7.4
activesupport-3.2.13
builder-3.0.4
activemodel-3.2.13
erubis-2.7.0
journey-1.0.4
rack-1.4.5
rack-cache-1.2
rack-test-0.6.2
hike-1.2.2
tilt-1.4.1
sprockets-2.2.2
actionpack-3.2.13
mime-types-1.23
polyglot-0.3.3
treetop-1.4.12
mail-2.5.4
actionmailer-3.2.13
arel-3.0.2
tzinfo-0.3.37
activerecord-3.2.13
activeresource-3.2.13
gyoku-1.0.0
nokogiri-1.5.9-java
akami-1.2.0
bcrypt-ruby-3.0.1-java
sass-3.2.9
bootstrap-sass-2.3.1.2
will_paginate-3.0.4
bootstrap-will_paginate-0.0.9
bouncy-castle-java-1.5.0147
coffee-script-source-1.6.2
execjs-1.4.0
coffee-script-2.2.0
rack-ssl-1.3.3
json-1.8.0-java
rdoc-3.12.2
thor-0.18.1
railties-3.2.13
coffee-rails-3.2.2
faker-1.1.2
httpi-2.0.2
jquery-rails-2.2.2
jruby-openssl-0.8.8
nori-2.1.0
rails-3.2.13
sass-rails-3.2.6
wasabi-3.1.0
savon-2.2.0
therubyrhino_jar-1.7.4
therubyrhino-2.0.2
uglifier-1.0.4
uuidtools-2.1.4
--- Bundler
Bundler.bundle_path: /opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF/gems
Bundler.root: /opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF
Gemfile: /opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF/Gemfile
Settings:
gemfile = /opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF/Gemfile
without = development:test:assets
bin_path = /opt/glassfish3/glassfish/domains/myDomain/applications/web-admin/WEB-INF/gems/gems/bundler-1.3.5/bin/bundle
--- JRuby-Rack Config
compat_version =
default_logger = org.jruby.rack.logging.StandardOutLogger@62a49a04
equals =
err = com.sun.common.util.logging.LoggingOutputStream$LoggingPrintStream@7a21bdb8
filter_adds_html = true
filter_verifies_resource = false
ignore_environment = false
initial_memory_buffer_size =
initial_runtimes =
jms_connection_factory =
jms_jndi_properties =
logger = org.jruby.rack.logging.ServletContextLogger@19a2312c
logger_class_name = servlet_context
logger_name = jruby.rack
maximum_memory_buffer_size =
maximum_runtimes =
num_initializer_threads =
out = com.sun.common.util.logging.LoggingOutputStream$LoggingPrintStream@52f8d395
rackup =
rackup_path =
rewindable = true
runtime_arguments =
runtime_environment =
runtime_timeout_seconds =
serial_initialization = false
servlet_context = org.apache.catalina.core.ApplicationContextFacade@16c7e149
throw_init_exception = false
|#]
[#|2013-05-31T17:10:25.182-0400|INFO|glassfish3.1.2|javax.enterprise.system.container.web.com.sun.enterprise.web|_ThreadID=19;_ThreadName=Thread-2;|PWC1412: WebModule[null] ServletContext.log():DEBUG: resetting rack response due exception|#]
Turns out it was an issue with source control: my helpers directory was not added, and therefore Jenkins was not including it in the build. Always check the obvious first; if the error says it isn't there, it probably isn't.
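A minimal check along those lines (assuming the standard Rails layout, where SessionsHelper lives under app/helpers; the paths are illustrative):

# Is the helpers directory actually tracked by source control?
git ls-files app/helpers
# If the output is empty, add it and commit so Jenkins picks it up:
git add app/helpers
git commit -m "Add missing app/helpers directory"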

admin-username error of proxy

I tried to install mysql-proxy on a development machine and got the following error.
/etc/init.d/mysql-proxyd start
Starting mysql-proxy: 2011-02-26 15:51:45: (critical) admin-plugin.c:569: --admin-username needs to be set
2011-02-26 15:51:45: (critical) mainloop.c:267: applying config of plugin admin failed
2011-02-26 15:51:45: (critical) mysql-proxy-cli.c:596: Failure from chassis_mainloop. Shutting down.
[ OK ]
Since this is only a test machine, I do not want the proxy's security features. How do I avoid the above error?
Either upgrade your version of mysql-proxy to 0.8.2 or greater, or explicitly specify that you don't need the admin plugin by running mysql-proxy --plugins=proxy. Alternatively, keep the admin plugin and give it the options it demands, as in this config:
[mysql-proxy]
daemon = true
user = mysql
proxy-skip-profiling = true
keepalive = true
max-open-files = 2048
event-threads = 50
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
admin-address = :4401
admin-username = 1
admin-password = 1
admin-lua-script = /usr/local/lib/mysql-proxy/lua/admin.lua
proxy-address = 0.0.0.0:3307
proxy-backend-addresses = 192.168.2.1:3306
proxy-read-only-backend-addresses = 192.168.6.2:3306, 192.168.6.1:3306
proxy-lua-script = /usr/lib/mysql-proxy/lua/proxy/balance.lua
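If you take the --plugins=proxy route instead, the invocation loads only the proxy plugin against your existing config (the path /etc/mysql-proxy.cnf is hypothetical; use whatever your init script points at):

mysql-proxy --defaults-file=/etc/mysql-proxy.cnf --plugins=proxy --daemon

With only the proxy plugin loaded, the admin-* options above are no longer required.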