Packer: how to avoid providing ssh_private_key_file in a CI/CD pipeline?

I'm running Packer with Ansible and Terraform locally and it works fine.
Now I want to include these in my GitHub Actions CI/CD pipeline.
The Packer HCL file looks like this:
variable "do_token" {
type = string
default = env("DO_PAT")
}
variable "pvt_key" {
type = string
default = env("SSH_PVT_KEY")
}
packer {
required_plugins {
digitalocean = {
version = ">= 1.0.0"
source = "github.com/hashicorp/digitalocean"
}
}
}
source "digitalocean" "example" {
api_token = var.do_token
image = "debian-11-x64"
region = "ams3"
size = "s-1vcpu-1gb"
ssh_username = "root"
monitoring = true
snapshot_name = "packer-{{timestamp}}"
droplet_name = "packer-build"
ssh_key_id = id
ssh_private_key_file = path/to/my/file
}
build {
sources = ["source.digitalocean.example"]
provisioner "file" {
source = "publickeypath"
destination = "/tmp/publickey.pub"
}
provisioner "ansible-local" {
playbook_file = "../ansible/playbook.yml"
extra_arguments= [
"-vvv",
"--extra-vars",
"'ansible_python_interpreter=/usr/bin/python3'"
]
}
}
I would like to provide my SSH private key as an environment variable instead of a file, so I don't need to upload it to GitHub... Is that possible?
Also, for the public key, is it possible to provide an environment variable and copy it to a file (instead of using the file provisioner in the build block)?
Thank you

The answer is simple: add a workflow step that runs echo "$ENV_VARIABLE" > fileyouwant
- name: Make ssh private key from secret
  run: |
    echo "$PVT_KEY" > sshkey
  env:
    PVT_KEY: ${{ secrets.PVT_KEY }}
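On the Packer side nothing else has to change: the pvt_key variable already defaults to env("SSH_PVT_KEY"), so the workflow only needs to export that variable with the path of the file it just wrote (sshkey above). A minimal sketch, assuming SSH_PVT_KEY is set to that path before packer build runs:

variable "pvt_key" {
  type    = string
  # assumption: SSH_PVT_KEY holds the path written by the workflow step, e.g. ./sshkey
  default = env("SSH_PVT_KEY")
}

source "digitalocean" "example" {
  # ... other arguments as in the question ...
  ssh_username         = "root"
  ssh_private_key_file = var.pvt_key
}

The same echo trick covers the public key: write the secret to a file in another step and keep the existing file provisioner pointing at that path.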

Related

Freeradius 3.0.20 mysql radacct table empty, not storing logs

I am developing a small project using FreeRADIUS 3.0.20 on an Ubuntu 20.04 Linux machine.
I installed FreeRADIUS, configured MySQL in default.conf, and loaded the virtual servers; my device can connect fine.
On init I can see the NAS information being loaded from the MySQL table, so that info is stored and everything is fine there.
Client data is stored in radcheck, radgroupcheck, radgroupreply, and radippool (which holds the CGNAT IP table), etc.
A client device logs in with username and password, all is fine and I can browse the internet while the client is logged in, but no client data is stored in the radacct accounting table in MySQL; no history logs are stored at all. The only thing I can see is radpostauth storing the username, the XORed/MD5 password, and authdate, which contains only the login date. Attached below is the full startup log from freeradius -X.
If anyone can comment or knows how to fix this configuration issue, I'd appreciate it.
root@PSI-DEV:~# sudo freeradius -X
FreeRADIUS Version 3.0.20
Copyright (C) 1999-2019 The FreeRADIUS server project and contributors
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE
You may redistribute copies of FreeRADIUS under the terms of the
GNU General Public License
For more information about these matters, see the file named COPYRIGHT
Starting - reading configuration files ...
including dictionary file /usr/share/freeradius/dictionary
including dictionary file /usr/share/freeradius/dictionary.dhcp
including dictionary file /usr/share/freeradius/dictionary.vqp
including dictionary file /etc/freeradius/3.0/dictionary
including configuration file /etc/freeradius/3.0/radiusd.conf
including configuration file /etc/freeradius/3.0/proxy.conf
including configuration file /etc/freeradius/3.0/clients.conf
including files in directory /etc/freeradius/3.0/mods-enabled/
including configuration file /etc/freeradius/3.0/mods-enabled/linelog
including configuration file /etc/freeradius/3.0/mods-enabled/digest
including configuration file /etc/freeradius/3.0/mods-enabled/echo
including configuration file /etc/freeradius/3.0/mods-enabled/radutmp
including configuration file /etc/freeradius/3.0/mods-enabled/passwd
including configuration file /etc/freeradius/3.0/mods-enabled/mschap
including configuration file /etc/freeradius/3.0/mods-enabled/unix
including configuration file /etc/freeradius/3.0/mods-enabled/files
including configuration file /etc/freeradius/3.0/mods-enabled/pap
including configuration file /etc/freeradius/3.0/mods-enabled/eap
including configuration file /etc/freeradius/3.0/mods-enabled/replicate
including configuration file /etc/freeradius/3.0/mods-enabled/soh
including configuration file /etc/freeradius/3.0/mods-enabled/sql
including configuration file /etc/freeradius/3.0/mods-config/sql/main/mysql/queries.conf
including configuration file /etc/freeradius/3.0/mods-enabled/logintime
including configuration file /etc/freeradius/3.0/mods-enabled/exec
including configuration file /etc/freeradius/3.0/mods-enabled/realm
including configuration file /etc/freeradius/3.0/mods-enabled/preprocess
including configuration file /etc/freeradius/3.0/mods-enabled/cache_eap
including configuration file /etc/freeradius/3.0/mods-enabled/sradutmp
including configuration file /etc/freeradius/3.0/mods-enabled/expiration
including configuration file /etc/freeradius/3.0/mods-enabled/detail
including configuration file /etc/freeradius/3.0/mods-enabled/unpack
including configuration file /etc/freeradius/3.0/mods-enabled/detail.log
including configuration file /etc/freeradius/3.0/mods-enabled/expr
including configuration file /etc/freeradius/3.0/mods-enabled/chap
including configuration file /etc/freeradius/3.0/mods-enabled/ntlm_auth
including configuration file /etc/freeradius/3.0/mods-enabled/always
including configuration file /etc/freeradius/3.0/mods-enabled/sqlippool
including configuration file /etc/freeradius/3.0/mods-config/sql/ippool/mysql/queries.conf
including configuration file /etc/freeradius/3.0/mods-enabled/dynamic_clients
including configuration file /etc/freeradius/3.0/mods-enabled/utf8
including configuration file /etc/freeradius/3.0/mods-enabled/attr_filter
including files in directory /etc/freeradius/3.0/policy.d/
including configuration file /etc/freeradius/3.0/policy.d/cui
including configuration file /etc/freeradius/3.0/policy.d/dhcp
including configuration file /etc/freeradius/3.0/policy.d/eap
including configuration file /etc/freeradius/3.0/policy.d/filter
including configuration file /etc/freeradius/3.0/policy.d/abfab-tr
including configuration file /etc/freeradius/3.0/policy.d/debug
including configuration file /etc/freeradius/3.0/policy.d/canonicalization
including configuration file /etc/freeradius/3.0/policy.d/accounting
including configuration file /etc/freeradius/3.0/policy.d/control
including configuration file /etc/freeradius/3.0/policy.d/moonshot-targeted-ids
including configuration file /etc/freeradius/3.0/policy.d/rfc7542
including configuration file /etc/freeradius/3.0/policy.d/operator-name
including files in directory /etc/freeradius/3.0/sites-enabled/
including configuration file /etc/freeradius/3.0/sites-enabled/default
including configuration file /etc/freeradius/3.0/sites-enabled/inner-tunnel
main {
security {
user = "freerad"
group = "freerad"
allow_core_dumps = no
}
name = "freeradius"
prefix = "/usr"
localstatedir = "/var"
logdir = "/var/log/freeradius"
run_dir = "/var/run/freeradius"
}
main {
name = "freeradius"
prefix = "/usr"
localstatedir = "/var"
sbindir = "/usr/sbin"
logdir = "/var/log/freeradius"
run_dir = "/var/run/freeradius"
libdir = "/usr/lib/freeradius"
radacctdir = "/var/log/freeradius/radacct"
hostname_lookups = no
max_request_time = 30
cleanup_delay = 5
max_requests = 16384
pidfile = "/var/run/freeradius/freeradius.pid"
checkrad = "/usr/sbin/checkrad"
debug_level = 0
proxy_requests = yes
log {
stripped_names = no
auth = no
auth_badpass = no
auth_goodpass = no
colourise = yes
msg_denied = "You are already logged in - access denied"
}
resources {
}
security {
max_attributes = 200
reject_delay = 1.000000
status_server = yes
}
}
radiusd: #### Loading Realms and Home Servers ####
proxy server {
retry_delay = 5
retry_count = 3
default_fallback = no
dead_time = 120
wake_all_if_all_dead = no
}
home_server localhost {
ipaddr = 127.0.0.1
port = 1812
type = "auth"
secret = <<< secret >>>
response_window = 20.000000
response_timeouts = 1
max_outstanding = 65536
zombie_period = 40
status_check = "status-server"
ping_interval = 30
check_interval = 30
check_timeout = 4
num_answers_to_alive = 3
revive_interval = 120
limit {
max_connections = 16
max_requests = 0
lifetime = 0
idle_timeout = 0
}
coa {
irt = 2
mrt = 16
mrc = 5
mrd = 30
}
}
home_server_pool my_auth_failover {
type = fail-over
home_server = localhost
}
realm example.com {
auth_pool = my_auth_failover
}
realm LOCAL {
}
radiusd: #### Loading Clients ####
Debugger not attached
systemd watchdog is disabled
# Creating Auth-Type = mschap
# Creating Auth-Type = eap
# Creating Auth-Type = PAP
# Creating Auth-Type = CHAP
# Creating Auth-Type = MS-CHAP
radiusd: #### Instantiating modules ####
modules {
# Loaded module rlm_linelog
# Loading module "linelog" from file /etc/freeradius/3.0/mods-enabled/linelog
linelog {
filename = "/var/log/freeradius/linelog"
escape_filenames = no
syslog_severity = "info"
permissions = 384
format = "This is a log message for %{User-Name}"
reference = "messages.%{%{reply:Packet-Type}:-default}"
}
# Loading module "log_accounting" from file /etc/freeradius/3.0/mods-enabled/linelog
linelog log_accounting {
filename = "/var/log/freeradius/linelog-accounting"
escape_filenames = no
syslog_severity = "info"
permissions = 384
format = ""
reference = "Accounting-Request.%{%{Acct-Status-Type}:-unknown}"
}
# Loaded module rlm_digest
# Loading module "digest" from file /etc/freeradius/3.0/mods-enabled/digest
# Loaded module rlm_exec
# Loading module "echo" from file /etc/freeradius/3.0/mods-enabled/echo
exec echo {
wait = yes
program = "/bin/echo %{User-Name}"
input_pairs = "request"
output_pairs = "reply"
shell_escape = yes
}
# Loaded module rlm_radutmp
# Loading module "radutmp" from file /etc/freeradius/3.0/mods-enabled/radutmp
radutmp {
filename = "/var/log/freeradius/radutmp"
username = "%{User-Name}"
case_sensitive = yes
check_with_nas = yes
permissions = 384
caller_id = yes
}
# Loaded module rlm_passwd
# Loading module "etc_passwd" from file /etc/freeradius/3.0/mods-enabled/passwd
passwd etc_passwd {
filename = "/etc/passwd"
format = "*User-Name:Crypt-Password:"
delimiter = ":"
ignore_nislike = no
ignore_empty = yes
allow_multiple_keys = no
hash_size = 100
}
# Loaded module rlm_mschap
# Loading module "mschap" from file /etc/freeradius/3.0/mods-enabled/mschap
mschap {
use_mppe = yes
require_encryption = no
require_strong = no
with_ntdomain_hack = yes
passchange {
}
allow_retry = yes
winbind_retry_with_normalised_username = no
}
# Loaded module rlm_unix
# Loading module "unix" from file /etc/freeradius/3.0/mods-enabled/unix
unix {
radwtmp = "/var/log/freeradius/radwtmp"
}
Creating attribute Unix-Group
# Loaded module rlm_files
# Loading module "files" from file /etc/freeradius/3.0/mods-enabled/files
files {
filename = "/etc/freeradius/3.0/mods-config/files/authorize"
acctusersfile = "/etc/freeradius/3.0/mods-config/files/accounting"
preproxy_usersfile = "/etc/freeradius/3.0/mods-config/files/pre-proxy"
}
# Loaded module rlm_pap
# Loading module "pap" from file /etc/freeradius/3.0/mods-enabled/pap
pap {
normalise = yes
}
# Loaded module rlm_eap
# Loading module "eap" from file /etc/freeradius/3.0/mods-enabled/eap
eap {
default_eap_type = "md5"
timer_expire = 60
ignore_unknown_eap_types = no
cisco_accounting_username_bug = no
max_sessions = 16384
}
# Loaded module rlm_replicate
# Loading module "replicate" from file /etc/freeradius/3.0/mods-enabled/replicate
# Loaded module rlm_soh
# Loading module "soh" from file /etc/freeradius/3.0/mods-enabled/soh
soh {
dhcp = yes
}
# Loaded module rlm_sql
# Loading module "sql" from file /etc/freeradius/3.0/mods-enabled/sql
sql {
driver = "rlm_sql_mysql"
server = "localhost"
port = 3306
login = "radius"
password = <<< secret >>>
radius_db = "radius"
read_groups = yes
read_profiles = yes
read_clients = yes
delete_stale_sessions = yes
sql_user_name = "%{User-Name}"
logfile = "/var/log/freeradius/radacct/sql.log"
default_user_profile = ""
client_query = "SELECT id, nasname, shortname, type, secret, server FROM nas"
authorize_check_query = "SELECT id, username, attribute, value, op FROM radcheck WHERE username = '%{SQL-User-Name}' ORDER BY id"
authorize_reply_query = "SELECT id, username, attribute, value, op FROM radreply WHERE username = '%{SQL-User-Name}' ORDER BY id"
authorize_group_check_query = "SELECT id, groupname, attribute, Value, op FROM radgroupcheck WHERE groupname = '%{SQL-Group}' ORDER BY id"
authorize_group_reply_query = "SELECT id, groupname, attribute, value, op FROM radgroupreply WHERE groupname = '%{SQL-Group}' ORDER BY id"
group_membership_query = "SELECT groupname FROM radusergroup WHERE username = '%{SQL-User-Name}' ORDER BY priority"
simul_count_query = "SELECT COUNT(*) FROM radacct WHERE username = '%{SQL-User-Name}' AND acctstoptime IS NULL"
simul_verify_query = "SELECT radacctid, acctsessionid, username, nasipaddress, nasportid, framedipaddress, callingstationid, framedprotocol FROM radacct WHERE username = '%{SQL-User-Name}' AND acctstoptime IS NULL"
safe_characters = "#abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_: /"
auto_escape = no
accounting {
reference = "%{tolower:type.%{%{Acct-Status-Type}:-%{Request-Processing-Stage}}.query}"
type {
accounting-on {
query = "UPDATE radacct SET acctstoptime = FROM_UNIXTIME(%{integer:Event-Timestamp}), acctsessiontime = '%{integer:Event-Timestamp}' - UNIX_TIMESTAMP(acctstarttime), acctterminatecause = '%{%{Acct-Terminate-Cause}:-NAS-Reboot}' WHERE acctstoptime IS NULL AND nasipaddress = '%{NAS-IP-Address}' AND acctstarttime <= FROM_UNIXTIME(%{integer:Event-Timestamp})"
}
accounting-off {
query = "UPDATE radacct SET acctstoptime = FROM_UNIXTIME(%{integer:Event-Timestamp}), acctsessiontime = '%{integer:Event-Timestamp}' - UNIX_TIMESTAMP(acctstarttime), acctterminatecause = '%{%{Acct-Terminate-Cause}:-NAS-Reboot}' WHERE acctstoptime IS NULL AND nasipaddress = '%{NAS-IP-Address}' AND acctstarttime <= FROM_UNIXTIME(%{integer:Event-Timestamp})"
}
start {
query = "INSERT INTO radacct (acctsessionid, acctuniqueid, username, realm, nasipaddress, nasportid, nasporttype, acctstarttime, acctupdatetime, acctstoptime,acctsessiontime, acctauthentic, connectinfo_start, connectinfo_stop, acctinputoctets, acctoutputoctets, calledstationid, callingstationid, acctterminatecause, servicetype, framedprotocol, framedipaddress, framedipv6address, framedipv6prefix, framedinterfaceid, delegatedipv6prefix) VALUES ('%{Acct-Session-Id}', '%{Acct-Unique-Session-Id}', '%{SQL-User-Name}', '%{Realm}', '%{NAS-IP-Address}', '%{%{NAS-Port-ID}:-%{NAS-Port}}', '%{NAS-Port-Type}', FROM_UNIXTIME(%{integer:Event-Timestamp}), FROM_UNIXTIME(%{integer:Event-Timestamp}), NULL, '0', '%{Acct-Authentic}', '%{Connect-Info}', '', '0', '0', '%{Called-Station-Id}', '%{Calling-Station-Id}', '', '%{Service-Type}', '%{Framed-Protocol}', '%{Framed-IP-Address}', '%{Framed-IPv6-Address}', '%{Framed-IPv6-Prefix}', '%{Framed-Interface-Id}', '%{Delegated-IPv6-Prefix}')"
}
interim-update {
query = "UPDATE radacct SET acctupdatetime = (#acctupdatetime_old:=acctupdatetime), acctupdatetime = FROM_UNIXTIME(%{integer:Event-Timestamp}), acctinterval = %{integer:Event-Timestamp} - UNIX_TIMESTAMP(#acctupdatetime_old), acctstoptime = NULL, framedipaddress = '%{Framed-IP-Address}', framedipv6address = '%{Framed-IPv6-Address}', framedipv6prefix = '%{Framed-IPv6-Prefix}', framedinterfaceid = '%{Framed-Interface-Id}', delegatedipv6prefix = '%{Delegated-IPv6-Prefix}', acctsessiontime = %{%{Acct-Session-Time}:-NULL}, acctinputoctets = '%{%{Acct-Input-Gigawords}:-0}' << 32 | '%{%{Acct-Input-Octets}:-0}', acctoutputoctets = '%{%{Acct-Output-Gigawords}:-0}' << 32 | '%{%{Acct-Output-Octets}:-0}' WHERE AcctUniqueId = '%{Acct-Unique-Session-Id}'"
}
stop {
query = "UPDATE radacct SET acctstoptime = FROM_UNIXTIME(%{integer:Event-Timestamp}), acctsessiontime = %{%{Acct-Session-Time}:-NULL}, acctinputoctets = '%{%{Acct-Input-Gigawords}:-0}' << 32 | '%{%{Acct-Input-Octets}:-0}', acctoutputoctets = '%{%{Acct-Output-Gigawords}:-0}' << 32 | '%{%{Acct-Output-Octets}:-0}', acctterminatecause = '%{Acct-Terminate-Cause}', connectinfo_stop = '%{Connect-Info}' WHERE AcctUniqueId = '%{Acct-Unique-Session-Id}'"
}
}
}
post-auth {
reference = ".query"
logfile = "/var/log/freeradius/post-auth.sql"
query = "INSERT INTO radpostauth (username, pass, reply, authdate) VALUES ( '%{SQL-User-Name}', '%{%{User-Password}:-%{Chap-Password}}', '%{reply:Packet-Type}', '%S')"
}
}
rlm_sql (sql): Driver rlm_sql_mysql (module rlm_sql_mysql) loaded and linked
Creating attribute SQL-Group
# Loaded module rlm_logintime
# Loading module "logintime" from file /etc/freeradius/3.0/mods-enabled/logintime
logintime {
minimum_timeout = 60
}
# Loading module "exec" from file /etc/freeradius/3.0/mods-enabled/exec
exec {
wait = no
input_pairs = "request"
shell_escape = yes
timeout = 10
}
# Loaded module rlm_realm
# Loading module "IPASS" from file /etc/freeradius/3.0/mods-enabled/realm
realm IPASS {
format = "prefix"
delimiter = "/"
ignore_default = no
ignore_null = no
}
# Loading module "suffix" from file /etc/freeradius/3.0/mods-enabled/realm
realm suffix {
format = "suffix"
delimiter = "#"
ignore_default = no
ignore_null = no
}
# Loading module "bangpath" from file /etc/freeradius/3.0/mods-enabled/realm
realm bangpath {
format = "prefix"
delimiter = "!"
ignore_default = no
ignore_null = no
}
# Loading module "realmpercent" from file /etc/freeradius/3.0/mods-enabled/realm
realm realmpercent {
format = "suffix"
delimiter = "%"
ignore_default = no
ignore_null = no
}
# Loading module "ntdomain" from file /etc/freeradius/3.0/mods-enabled/realm
realm ntdomain {
format = "prefix"
delimiter = "\\"
ignore_default = no
ignore_null = no
}
# Loaded module rlm_preprocess
# Loading module "preprocess" from file /etc/freeradius/3.0/mods-enabled/preprocess
preprocess {
huntgroups = "/etc/freeradius/3.0/mods-config/preprocess/huntgroups"
hints = "/etc/freeradius/3.0/mods-config/preprocess/hints"
with_ascend_hack = no
ascend_channels_per_line = 23
with_ntdomain_hack = no
with_specialix_jetstream_hack = no
with_cisco_vsa_hack = no
with_alvarion_vsa_hack = no
}
# Loaded module rlm_cache
# Loading module "cache_eap" from file /etc/freeradius/3.0/mods-enabled/cache_eap
cache cache_eap {
driver = "rlm_cache_rbtree"
key = "%{%{control:State}:-%{%{reply:State}:-%{State}}}"
ttl = 15
max_entries = 0
epoch = 0
add_stats = no
}
# Loading module "sradutmp" from file /etc/freeradius/3.0/mods-enabled/sradutmp
radutmp sradutmp {
filename = "/var/log/freeradius/sradutmp"
username = "%{User-Name}"
case_sensitive = yes
check_with_nas = yes
permissions = 420
caller_id = no
}
# Loaded module rlm_expiration
# Loading module "expiration" from file /etc/freeradius/3.0/mods-enabled/expiration
# Loaded module rlm_detail
# Loading module "detail" from file /etc/freeradius/3.0/mods-enabled/detail
detail {
filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/detail-%Y%m%d"
header = "%t"
permissions = 384
locking = no
escape_filenames = no
log_packet_header = no
}
# Loaded module rlm_unpack
# Loading module "unpack" from file /etc/freeradius/3.0/mods-enabled/unpack
# Loading module "auth_log" from file /etc/freeradius/3.0/mods-enabled/detail.log
detail auth_log {
filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/auth-detail-%Y%m%d"
header = "%t"
permissions = 384
locking = no
escape_filenames = no
log_packet_header = no
}
# Loading module "reply_log" from file /etc/freeradius/3.0/mods-enabled/detail.log
detail reply_log {
filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/reply-detail-%Y%m%d"
header = "%t"
permissions = 384
locking = no
escape_filenames = no
log_packet_header = no
}
# Loading module "pre_proxy_log" from file /etc/freeradius/3.0/mods-enabled/detail.log
detail pre_proxy_log {
filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/pre-proxy-detail-%Y%m%d"
header = "%t"
permissions = 384
locking = no
escape_filenames = no
log_packet_header = no
}
# Loading module "post_proxy_log" from file /etc/freeradius/3.0/mods-enabled/detail.log
detail post_proxy_log {
filename = "/var/log/freeradius/radacct/%{%{Packet-Src-IP-Address}:-%{Packet-Src-IPv6-Address}}/post-proxy-detail-%Y%m%d"
header = "%t"
permissions = 384
locking = no
escape_filenames = no
log_packet_header = no
}
# Loaded module rlm_expr
# Loading module "expr" from file /etc/freeradius/3.0/mods-enabled/expr
expr {
safe_characters = "#abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789.-_: /äéöüàâæçèéêëîïôœùûüaÿÄÉÖÜßÀÂÆÇÈÉÊËÎÏÔŒÙÛÜŸ"
}
# Loaded module rlm_chap
# Loading module "chap" from file /etc/freeradius/3.0/mods-enabled/chap
# Loading module "ntlm_auth" from file /etc/freeradius/3.0/mods-enabled/ntlm_auth
exec ntlm_auth {
wait = yes
program = "/path/to/ntlm_auth --request-nt-key --domain=MYDOMAIN --username=%{mschap:User-Name} --password=%{User-Password}"
shell_escape = yes
}
# Loaded module rlm_always
# Loading module "reject" from file /etc/freeradius/3.0/mods-enabled/always
always reject {
rcode = "reject"
simulcount = 0
mpp = no
}
# Loading module "fail" from file /etc/freeradius/3.0/mods-enabled/always
always fail {
rcode = "fail"
simulcount = 0
mpp = no
}
# Loading module "ok" from file /etc/freeradius/3.0/mods-enabled/always
always ok {
rcode = "ok"
simulcount = 0
mpp = no
}
# Loading module "handled" from file /etc/freeradius/3.0/mods-enabled/always
always handled {
rcode = "handled"
simulcount = 0
mpp = no
}
# Loading module "invalid" from file /etc/freeradius/3.0/mods-enabled/always
always invalid {
rcode = "invalid"
simulcount = 0
mpp = no
}
# Loading module "userlock" from file /etc/freeradius/3.0/mods-enabled/always
always userlock {
rcode = "userlock"
simulcount = 0
mpp = no
}
# Loading module "notfound" from file /etc/freeradius/3.0/mods-enabled/always
always notfound {
rcode = "notfound"
simulcount = 0
mpp = no
}
# Loading module "noop" from file /etc/freeradius/3.0/mods-enabled/always
always noop {
rcode = "noop"
simulcount = 0
mpp = no
}
# Loading module "updated" from file /etc/freeradius/3.0/mods-enabled/always
always updated {
rcode = "updated"
simulcount = 0
mpp = no
}
# Loaded module rlm_sqlippool
Recreate the radacct table with an auto-increment primary key; that will fix it.
Import the radacct table schema from the FreeRADIUS GitHub repository.

Data block not supported with packer version 1.6.1 in hcl2 templates

I created a Packer JSON template on my local system with Packer 1.7.7 installed.
Then I upgraded it to an HCL2 template. However, when I run the Packer pipeline on the Jenkins node, which has Packer version 1.6.1, it throws this error:
Error: Unsupported block type
Blocks of type "data" are not expected here.
After researching, I realized that Packer 1.6.1 doesn't support data blocks in its templates, even though it does support HCL2 templates.
Can anyone explain how I can replace the data block (see template below) with something supported in Packer 1.6?
data "amazon-ami" "autogenerated_1"{
access_key = "${var.aws_access_key}"
filters = {
root-device-type = "ebs"
virtualization-type = "hvm"
name = "**** Linux *"
}
most_recent = true
region = "${var.aws_region}"
owners = ["${var.owner_id}"]
secret_key = "${var.aws_secret_key}"
}
When I try to consume this AMI id in the source block, it gives me an error.
  ami_name                    = "${var.ami_name}"
  associate_public_ip_address = false
  force_deregister            = true
  iam_instance_profile        = "abc"
  instance_type               = "****"
  region                      = "${var.aws_region}"
  source_ami                  = data.amazon-ami.autogenerated_1.id
  ssh_interface               = "private_ip"
  ssh_username                = "user"
  subnet_id                   = "subnet-********"
  vpc_id                      = "vpc-***********"
}
The Packer pipeline runs on a Jenkins node with Packer version 1.6.1.
Data sources are not supported in such an old version. From the docs:
Note: Data Sources is a feature included in Packer 1.7 and later
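One option on 1.6.x is the builder-level source_ami_filter, which predates data sources and performs the same AMI lookup inside the source block. A rough sketch, assuming the (redacted) source is an amazon-ebs builder and reusing the values from the question's data block:

source "amazon-ebs" "autogenerated_1" {
  # source_ami_filter replaces the amazon-ami data source on older Packer versions
  source_ami_filter {
    filters = {
      root-device-type    = "ebs"
      virtualization-type = "hvm"
      name                = "**** Linux *"
    }
    owners      = ["${var.owner_id}"]
    most_recent = true
  }
  region       = "${var.aws_region}"
  ssh_username = "user"
  # ... remaining arguments as in the question's source block ...
}

The source_ami line that referenced data.amazon-ami.autogenerated_1.id can then be dropped.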

Glue_version and python_version not working in terraform

Hello everyone,
I am using Terraform to create a Glue job. AWS Glue now supports running ETL jobs on Apache Spark 2.4.3 (with Python 3).
I want to use this feature, but whenever I make this change it throws an error.
I am using:
aws-cli/1.16.184
Terraform v0.12.6
AWS provider 2.29
resource "aws_glue_job" "aws_glue_job_foo" {
glue_version = "1"
name = "job-name"
description = "job-desc"
role_arn = data.aws_iam_role.aws_glue_iam_role.arn
max_capacity = 1
max_retries = 1
connections = [aws_glue_connection.connection.name]
timeout = 5
command {
name = "pythonshell"
script_location = "s3://bucket/script.py"
python_version = "3"
}
default_arguments = {
"--job-language" = "python"
"--ENV" = "env"
"--ROLE_ARN" = data.aws_iam_role.aws_glue_iam_role.arn
}
execution_property {
max_concurrent_runs = 1
}
}
But it throws this error:
Error: Unsupported argument
An argument named "glue_version" is not expected here.
This Terraform issue has been resolved.
Terraform aws_glue_job now accepts a glue_version argument.
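With provider version 2.34.0 or later, the resource from the question can declare it directly. A minimal sketch based on the question's job (note that Glue expects version strings like "0.9" and "1.0" rather than "1"):

resource "aws_glue_job" "aws_glue_job_foo" {
  # requires Terraform AWS provider >= 2.34.0
  glue_version = "1.0"
  name         = "job-name"
  role_arn     = data.aws_iam_role.aws_glue_iam_role.arn
  max_capacity = 1

  command {
    name            = "pythonshell"
    script_location = "s3://bucket/script.py"
    python_version  = "3"
  }
}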
Previous Answer
With or without python_version in the Terraform command block, I must go to the AWS console to edit the job and set "Glue version". My job fails without this manual step.
Workaround #1
This issue has been reported and debated and includes a workaround.
resource "aws_glue_job" "etl" {
name = "${var.job_name}"
role_arn = "${var.iam_role_arn}"
command {
script_location = "s3://${var.bucket_name}/${aws_s3_bucket_object.script.key}"
}
default_arguments = {
"--enable-metrics" = ""
"--job-language" = "python"
"--TempDir" = "s3://${var.bucket_name}/TEMP"
}
# Manually set python 3 and glue 1.0
provisioner "local-exec" {
command = "aws glue update-job --job-name ${var.job_name} --job-update 'Command={ScriptLocation=s3://${var.bucket_name}/${aws_s3_bucket_object.script.key},PythonVersion=3,Name=glueetl},GlueVersion=1.0,Role=${var.iam_role_arn},DefaultArguments={--enable-metrics=\"\",--job-language=python,--TempDir=\"s3://${var.bucket_name}/TEMP\"}'"
}
}
Workaround #2
Here is a different workaround.
resource "aws_cloudformation_stack" "network" {
name = "${local.name}-glue-job"
template_body = <<STACK
{
"Resources" : {
"MyJob": {
"Type": "AWS::Glue::Job",
"Properties": {
"Command": {
"Name": "glueetl",
"ScriptLocation": "s3://${local.bucket_name}/jobs/${var.job}"
},
"ExecutionProperty": {
"MaxConcurrentRuns": 2
},
"MaxRetries": 0,
"Name": "${local.name}",
"Role": "${var.role}"
}
}
}
}
STACK
}
This has been released in version 2.34.0 of the Terraform AWS provider.
It looks like Terraform uses python_version instead of glue_version.
By using python_version = "3", you should get Glue version 1.0. Glue version 0.9 doesn't support Python 3.

Terraform Azure : deploy mysql network rule on another subscription

I'm trying to deploy a MySQL database on Azure using Terraform (v0.11.11). I need to define several parts in my main.tf file:
provider
resource group
mysql server
mysql database
mysql virtual network rule 1
mysql virtual network rule 2
mysql virtual network rule 3
At the moment, all of those requirements work except the last one, mysql virtual network rule 3. Everything is created in subscription A, but mysql virtual network rule 3 uses a subnet_id that lives in subscription B.
And here is the problem: how can I write my .tf file to create a virtual network rule using a subnet_id from a subscription different from the one used so far?
I tried to do it manually in Azure and it works. In the Azure Portal, I can choose the subnet even if it is based in another subscription.
#provider azurerm.A is Subscription A in my text. Everything is created in this sub.
#provider azurerm.B is Subscription B in my text. The subnet used to create virtual_network_rule_3 is in this subscription.
provider "azurerm" {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
subscription_id = "${var.subscription}"
alias = "A"
}
provider "azurerm" {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
tenant_id = "${var.tenant_id}"
subscription_id = "${var.subscription_B}"
alias = "B"
}
#Creating RG in Sub A.
resource "azurerm_resource_group" "rg" {
# attributes to create RG in Sub A. works well.
# ....
}
#Creating mysql server in Sub A.
resource "azurerm_mysql_server" "mysql_server" {
# attributes to create mysql server. works well.
# ....
}
#Creating mysql database in Sub A.
resource "azurerm_mysql_database" "mysql_db" {
# attributes to create mysql database. works well.
# ....
}
#Creating vnet rule using a subnet in Sub A. WORKING
resource "azurerm_mysql_virtual_network_rule" "mysql_vnet_1" {
count = "${var.vnet_one != "" ? 1 : 0}"
name = "subscription-peering-1"
resource_group_name = "${azurerm_resource_group.rg.name}"
server_name = "${azurerm_mysql_server.mysql_server.name}"
subnet_id = "${var.vnet_one}"
provider = "azurerm.A"
}
#Creating vnet rule using a subnet in Sub A. WORKING
resource "azurerm_mysql_virtual_network_rule" "mysql_vnet_2" {
count = "${var.vnet_two != "" ? 1 : 0}"
name = "subscription-peering-2"
resource_group_name = "${azurerm_resource_group.rg.name}"
server_name = "${azurerm_mysql_server.mysql_server.name}"
subnet_id = "${var.vnet_two}"
provider = "azurerm.A"
}
#Getting data to get the subnet in Subscription B in order to use it in "mysql_vnet_three".
#Uses the second provider, the one that contains Subcription B
data "azurerm_subnet" "subnet_data" {
name = "my-subB-subnet-name"
virtual_network_name = "my-subB-vnet-name"
resource_group_name = "my-subB-rg_name"
provider = "azurerm.B"
}
#Creating vnet rule using a subnet in Sub B. NOT WORKING
resource "azurerm_mysql_virtual_network_rule" "mysql_vnet_3" {
count = "${var.vnet_exploit != "" ? 1 : 0}"
name = "subscription-peering-3"
resource_group_name = "${azurerm_resource_group.rg.name}"
server_name = "${azurerm_mysql_server.mysql_server.name}"
subnet_id = "${data.azurerm_subnet.subnet_data.id}"
provider = "azurerm.A"
}
Thank you so much !
Shouldn't the provider be azurerm.B?
#Creating vnet rule using a subnet in Sub B. NOT WORKING
resource "azurerm_mysql_virtual_network_rule" "mysql_vnet_3" {
  count               = "${var.vnet_exploit != "" ? 1 : 0}"
  name                = "subscription-peering-3"
  resource_group_name = "${azurerm_resource_group.rg.name}"
  server_name         = "${azurerm_mysql_server.mysql_server.name}"
  subnet_id           = "${data.azurerm_subnet.subnet_data.id}"
  provider            = "azurerm.B"
}
As I couldn't find a solution using Terraform resources, I used local-exec to run an az CLI command to create the vnet rule.
resource "null_resource" "create_vnet_rule_exploit_from_cli" {
count = "${var.vnet_exploit != "" ? 1 : 0}"
provisioner "local-exec" {
command = "az mysql server vnet-rule create --name subscription-peering-exploit
--server-name ${azurerm_mysql_server.mysql_server.name} --resource-group
${azurerm_resource_group.rg.name} --subnet ${var.vnet_exploit} --
subscription ${var.subscription}"
}
depends_on = ["azurerm_mysql_server.mysql_server"]
}

List of all instances created by a module

I have a number of module invocations that look similar to this
1 module "gcpue4a1" {
2 source = "../../../modules/pods"
3
4 }
where the module is creating instances, DNS records, etc.
locals {
  gateway_name = "gateway-${var.network_zone}-${var.environment}-1"
}

resource "google_compute_instance" "gateway" {
  name                      = "${local.gateway_name}"
  machine_type              = "n1-standard-8"
  zone                      = "${var.zone}"
  allow_stopping_for_update = true
}
How can I iterate over a list of all the instances that have been created through this module? Can I do it with instance tags or labels?
In the end, what I want is to be able to iterate over a list to export to an Ansible inventory file. But I'm just not sure how to do this when my resources are encapsulated in modules.
With terraform show I can clearly see the structure of the variables.
➜ gcp-us-east4 git:(integration) ✗ terraform show | grep google_compute_instance.gateway -n1
640- zone = us-east4-a
641:module.screencast-gcp-pod-gcpue4a1-food.google_compute_instance.gateway:
642- id = gateway-gcpue4a1-food-1
--
--
991- zone = us-east4-a
992:module.screencast-gcp-pod-gcpue4a2-food.google_compute_instance.gateway:
993- id = gateway-gcpue4a2-food-1
--
--
1342- zone = us-east4-a
1343:module.screencast-gcp-pod-gcpue4a3-food.google_compute_instance.gateway:
1344- id = gateway-gcpue4a3-food-1
--
--
1693- zone = us-east4-a
1694:module.screencast-gcp-pod-gcpue4a4-food.google_compute_instance.gateway:
1695- id = gateway-gcpue4a4-food-1
The etcd inventory piece works just fine when I explicitly say which node I want. The overall inventory piece below it does not and I'm not sure how to fix it.
##Create ETCD Inventory
provisioner "local-exec" {
  command = "echo \"\n[etcd]\n${google_compute_instance.k8s-master.name} ansible_ssh_host=${google_compute_instance.k8s-master.network_interface.0.address}\" >> kubespray-inventory"
}

##Create Nodes Inventory
provisioner "local-exec" {
  command = "echo \"\n[kube-node]\" >> kubespray-inventory"
}
# provisioner "local-exec" {
#   command = "echo \"${join("\n", formatlist("%s ansible_ssh_host=%s", google_compute_instance.gateway.*.name, google_compute_instance.gateway.*.network_interface.0.address))}\" >> kubespray-inventory"
# }
➜ gcp-us-east4 git:(integration) ✗ terraform apply
Error: resource 'null_resource.ansible-provision' provisioner local-exec (#4): unknown resource 'google_compute_instance.gateway' referenced in variable google_compute_instance.gateway.*.id
You can make sure each module adds a label that matches the module, and you can then use gcloud compute instances list with a filter to only show the instances with that specific label.
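A sketch of that labelling approach; the label key and the gcloud filter value below are illustrative, not taken from the original module:

resource "google_compute_instance" "gateway" {
  name         = "${local.gateway_name}"
  machine_type = "n1-standard-8"
  zone         = "${var.zone}"

  # label every instance with the pod/module it belongs to
  labels = {
    pod = "${var.network_zone}-${var.environment}"
  }

  allow_stopping_for_update = true
}

# Then, outside Terraform, something like:
#   gcloud compute instances list --filter="labels.pod=gcpue4a1-food"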