I have MySQL and Apache Superset set up in Docker containers connected by a bridge network; what should the SQLAlchemy URI be?

I cloned the official Superset repository:
git clone https://github.com/apache/incubator-superset.git
then added the MySQL client driver to ./docker/requirements-local.txt:
cd incubator-superset
touch ./docker/requirements-local.txt
echo "mysqlclient==1.4.6" >> ./docker/requirements-local.txt
docker-compose build --force-rm
docker-compose up -d
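To sanity-check the rebuild, you can confirm the driver is importable inside the running Superset app container (a quick sketch; superset_app is the container name that appears in the network listing below):

# Confirm the MySQLdb driver from requirements-local.txt is importable
# inside the rebuilt Superset container.
docker exec -it superset_app python -c "import MySQLdb; print(MySQLdb.version_info)"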
After that, I created the MySQL container:
docker run --detach --network="incubator-superset_default" --name=vedasupersetmysql --env="MYSQL_ROOT_PASSWORD=vedashri" --publish 6603:3306 mysql
Then I connected MySQL to the Superset bridge network.
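Because the --network flag was passed at docker run time, the container should already be attached; one way to double-check membership (a sketch using the network name from above):

# List the containers attached to the compose network; vedasupersetmysql
# should appear alongside the superset_* containers.
docker network inspect incubator-superset_default --format '{{range .Containers}}{{.Name}} {{end}}'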
The bridge network is as follows:
docker inspect incubator-superset_default
[
    {
        "Name": "incubator-superset_default",
        "Id": "56db7b47ecf0867a2461dddb1219c64c1def8cd603fc9668d80338a477d77fdb",
        "Created": "2020-12-08T07:38:47.94934583Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "07a6e0d5d87ea3ccb353fa20a3562d8f59b00d2b7ce827f791ae3c8eca1621cc": {
                "Name": "superset_db",
                "EndpointID": "0dd4781290c67e3e202912cad576830eddb0139cb71fd348019298b245bc4756",
                "MacAddress": "02:42:ac:13:00:02",
                "IPv4Address": "172.19.0.2/16",
                "IPv6Address": ""
            },
            "096a98f22688107a689aa156fcaf003e8aaae30bdc3c7bc6fc08824209592a44": {
                "Name": "superset_worker",
                "EndpointID": "54614854caebcd9afd111fb67778c7c6fd7dd29fdc9c51c19acde641a9552e66",
                "MacAddress": "02:42:ac:13:00:05",
                "IPv4Address": "172.19.0.5/16",
                "IPv6Address": ""
            },
            "34e7fe6417b109fb9af458559e20ce1eaed1dc3b7d195efc2150019025393341": {
                "Name": "superset_init",
                "EndpointID": "49c580b22298237e51607ffa9fec56a7cf155065766b2d75fecdd8d91d024da7",
                "MacAddress": "02:42:ac:13:00:06",
                "IPv4Address": "172.19.0.6/16",
                "IPv6Address": ""
            },
            "5716e0e644230beef6b6cdf7945f3e8be908d7e9295eea5b1e5379495817c4d8": {
                "Name": "superset_app",
                "EndpointID": "bf22dab0714501cc003b1fa69334c871db6bade8816724779fca8eb81ad7089d",
                "MacAddress": "02:42:ac:13:00:04",
                "IPv4Address": "172.19.0.4/16",
                "IPv6Address": ""
            },
            "b09d2808853c54f66145ac43bfc38d4968d28d9870e2ce320982dd60968462d5": {
                "Name": "superset_node",
                "EndpointID": "70f00c6e0ebf54b7d3dfad1bb8e989bc9425c920593082362d8b282bcd913c5d",
                "MacAddress": "02:42:ac:13:00:07",
                "IPv4Address": "172.19.0.7/16",
                "IPv6Address": ""
            },
            "d08f8a2b090425904ea2bdc7a23b050a1327ccfe0e0b50360b2945ea39a07172": {
                "Name": "superset_cache",
                "EndpointID": "350fd18662e5c7c2a2d8a563c41513a62995dbe790dcbf4f08097f6395c720b1",
                "MacAddress": "02:42:ac:13:00:03",
                "IPv4Address": "172.19.0.3/16",
                "IPv6Address": ""
            },
            "e21469db533ad7a92b50c787a7aa026e939e4cf6d616e3e6bc895a64407c1eb7": {
                "Name": "vedasupersetmysql",
                "EndpointID": "d658c0224d070664f918644584460f93db573435c426c8d4246dcf03f993a434",
                "MacAddress": "02:42:ac:13:00:08",
                "IPv4Address": "172.19.0.8/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "incubator-superset",
            "com.docker.compose.version": "1.26.0"
        }
    }
]
How should I form the SQLAlchemy URI?
I have tried
mysql://user:password@8088:6603/database-name
But it shows a connection error when I enter this URI.
If there is any related documentation, that would also help.

The issue was not related to Superset or the network. You configured the network correctly, but the default-authentication-plugin was not set on the MySQL container. Because of this, the error shown on the console was:
Plugin caching_sha2_password could not be loaded:
To reproduce:
from sqlalchemy import create_engine
engine = create_engine('mysql://root:sample@172.19.0.5/mysql')
engine.connect()
error logs:
sqlalchemy.exc.OperationalError: (MySQLdb._exceptions.OperationalError) (1045, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')
(Background on this error at: http://sqlalche.me/e/13/e3q8)
To resolve the issue:
Start the MySQL container with --default-authentication-plugin=mysql_native_password:
docker run --detach --network="incubator-superset_default" --name=mysql --env="MYSQL_ROOT_PASSWORD=sample" --publish 3306:3306 mysql --default-authentication-plugin=mysql_native_password
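If you would rather keep an already-running MySQL container than recreate it, switching the root account to the native plugin should also work (a sketch; adjust the container name, user, and password to your setup):

# Switch root to mysql_native_password so older MySQLdb/mariadb client
# libraries can authenticate without the caching_sha2_password plugin.
docker exec -it mysql mysql -uroot -psample \
  -e "ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'sample';"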
Superset already has a user-defined bridge network, so you can use either of these formats:
mysql://root:sample@mysql/mysql
mysql://root:sample@172.19.0.5/mysql
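To verify either form end to end, you can run the same create_engine check from inside the Superset app container (a sketch; container names as in the question's network listing, credentials as above):

# Should print "connected" once the auth plugin issue is fixed.
docker exec -it superset_app python -c \
  "from sqlalchemy import create_engine; create_engine('mysql://root:sample@mysql:3306/mysql').connect(); print('connected')"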

I don't have a lot of experience with Docker, but you shouldn't use 8088 as the host for your MySQL database; 8088 is the port Superset itself is published on.
Try mysql://user:password@172.19.0.8:3306/database-name as the URI. From inside the bridge network you connect to the container port 3306; the published port 6603 only applies when connecting from the host.
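To make the two access paths concrete (a sketch based on the question's setup; it assumes a mysql client is available where each command runs):

# From the host: use the published side of --publish 6603:3306.
mysql -h 127.0.0.1 -P 6603 -u root -p

# From another container on incubator-superset_default: use the container
# name (or 172.19.0.8) and the container port 3306.
docker exec -it superset_app mysql -h vedasupersetmysql -P 3306 -u root -p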

Related

Communication problems between two containers in the same network, mysql and spring boot

I'm having issues with the connection between two containers that are on the same network: one is a MySQL container and the other contains a Spring Boot jar. I will paste the relevant information here.
These are the relevant network properties of the MySQL container:
"Networks": {
"my_network": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"f205465d5a7e",
"mysqldb"
]
}
"Ports": {
"3306/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "3307"
},
{
"HostIp": "::",
"HostPort": "3307"
}
],
"33060/tcp": null
}
This is the docker-compose file of my Spring Boot app:
version: '3.5'
services:
  my_springboot_service:
    container_name: my-springboot-container
    hostname: my-springboot-container
    image: my_springboot_image
    restart: always
    build: .
    networks:
      - my_network
    environment:
      MYSQL_HOST: mysqldb
      MYSQL_PORT: 3306
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_DATABASE: spring_database
networks:
  my_network:
    external: true
    name: my_network
Here is also the network configuration:
{
    "Name": "my_network",
    "Id": "4024f3611b1cf1526e44fa5663c32fcd86fba563983fd5a2d7a6298af5400d12",
    "Created": "2023-01-18T01:57:27.921023955+01:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": {},
        "Config": [
            {
                "Subnet": "172.20.0.0/16",
                "Gateway": "172.20.0.1"
            }
        ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
        "a481e00d2903735b61020eb33cf8d41c6c61dada4843b89e99d1c131099a701e": {
            "Name": "my-springboot-container",
            "EndpointID": "4e4ffd82d0c96e22b0a1b1fc7be2391d9f29324049e6403727175a001acba385",
            "MacAddress": "02:42:ac:14:00:03",
            "IPv4Address": "172.20.0.3/16",
            "IPv6Address": ""
        },
        "f205465d5a7e3bb4cae02d691c0058efc9e53efe93849270245462bc74f29ef3": {
            "Name": "mysqldb",
            "EndpointID": "272937b9776be7369915f50023c73e1c8a702a39863b57b49705019673f52868",
            "MacAddress": "02:42:ac:14:00:02",
            "IPv4Address": "172.20.0.2/16",
            "IPv6Address": ""
        }
    }
}
As you can see, the MySQL container is on "my_network" and its alias is "mysqldb", so I'm trying to connect from the Spring Boot container with the URL:
jdbc:mysql://mysqldb:3306/spring_database
Both the user and password for the database are correct, since I am able to connect to the database from outside the container and network.
But when I run docker-compose up --build for my Spring app, I get an exception:
Unable to obtain connection from database: Communications link failure
Can anyone explain what could be the problem? Thanks in advance.
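One way to narrow this down is to test the network path independently of the app, for example with a throwaway client container on the same network (a sketch; credentials taken from the compose environment above):

# If this succeeds, name resolution and the in-network port 3306 are fine,
# and the failure is more likely timing (the app starting before MySQL is ready).
docker run --rm --network my_network mysql:8.0 \
  mysql -h mysqldb -P 3306 -u root -proot -e "SELECT 1"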

Using Packer with the qemu builder in a JSON template to create a guest KVM VM, but getting an SSH timeout error

I have RHEL 8.5 as the KVM host. I want to create a guest VM through the Packer qemu builder, and I have a JSON file where all the configuration is defined.
{
    "builders": [
        {
            "type": "qemu",
            "iso_url": "/var/lib/libvirt/images/test.iso",
            "iso_checksum": "md5:3959597d89e8c20d58c4514a7cf3bc7f",
            "output_directory": "/var/lib/libvirt/images/iso-dir/test",
            "disk_size": "55G",
            "headless": "true",
            "qemuargs": [
                ["-m", "4096"],
                ["-smp", "2"]
            ],
            "format": "qcow2",
            "shutdown_command": "echo 'siedgerexuser' | sudo -S shutdown -P now",
            "accelerator": "kvm",
            "ssh_username": "nonrootuser",
            "ssh_password": "********",
            "ssh_timeout": "20m",
            "vm_name": "test",
            "net_device": "virtio-net",
            "disk_interface": "virtio",
            "http_directory": "/home/azureuser/http",
            "boot_wait": "10s",
            "boot_command": [
                "e inst.ks=http://{{ .HTTPIP }}:{{ .HTTPPort }}/anaconda-ks.cfg"
            ]
        }
    ],
    "provisioners": [
        {
            "type": "file",
            "source": "/home/azureuser/service_status_check.sh",
            "destination": "/tmp/service_status_check.sh"
        },
        {
            "type": "file",
            "source": "/home/azureuser/service_check.sh",
            "destination": "/tmp/service_check.sh"
        },
        {
            "type": "file",
            "source": "/home/azureuser/azure.sh",
            "destination": "/tmp/azure.sh"
        },
        {
            "type": "file",
            "source": "/home/azureuser/params.cfg",
            "destination": "/tmp/params.cfg"
        },
        {
            "type": "shell",
            "execute_command": "echo 'siedgerexuser' | {{.Vars}} sudo -E -S bash '{{.Path}}'",
            "inline": [
                "echo copying",
                "cp /tmp/params.cfg /root/",
                "sudo ls -lrt /root/params.cfg",
                "sudo ls -lrt /opt/scripts/"
            ],
            "inline_shebang": "/bin/sh -x"
        },
        {
            "type": "shell",
            "pause_before": "5s",
            "expect_disconnect": true,
            "inline": [
                "echo runningconfigurescript",
                "sudo sh /opt/scripts/configure-env.sh"
            ]
        },
        {
            "type": "shell",
            "pause_before": "200s",
            "inline": [
                "sudo sh /tmp/service_check.sh",
                "sudo sh /tmp/azure.sh"
            ]
        }
    ]
}
It works fine on RHEL 7.9, but the same template gives an SSH timeout error on RHEL 8.4.
When I create a guest VM with virt-install, the VM is created and I can see it in the Cockpit web UI. But when I initiate a packer build, the guest VM is not visible in the Cockpit UI while Packer waits for SSH, so I am not able to see where it gets stuck.
Can anyone please help me fix this issue?
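To see where the guest gets stuck, one option is to enable Packer's debug log and watch the VM console over VNC, which the qemu builder exposes during the build (a debugging sketch; the template filename is a placeholder):

# Verbose logging; the qemu builder reports the VNC address it allocates
# in the output, so you can attach a VNC viewer and watch the boot.
PACKER_LOG=1 PACKER_LOG_PATH=packer-debug.log packer build template.json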

Why is my Docker network's gateway IP inaccessible from my host OS?

I am trying to get the following setup working:
My local machine OS = Linux
I am building a Docker MySQL container on this local machine
I plan to seed the database within the container, and then run tests locally (on my local Linux machine) against this container (which I will spin up on my Linux machine too)
Unfortunately, when running my tests and trying to connect to the container, the default bridge network's gateway IP is inaccessible.
My docker-compose.yaml file is as follows
version: "3.4"
services:
integration-test-mysql:
image: mysql:8.0
container_name: ${MY_SQL_CONTAINER_NAME}
environment:
- MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
ports:
- "3306:3306"
volumes:
# - ./src/db:/usr/src/db #Mount db folder so we can run seed files etc.
- ./seed.sql:/docker-entrypoint-initdb.d/seed.sql
network_mode: bridge
healthcheck:
test: "mysqladmin -u root -p$MYSQL_ROOT_PASSWORD -h 127.0.0.1 ping --silent 2> /dev/null || exit 1"
interval: 5s
timeout: 30s
retries: 5
start_period: 10s
entrypoint: sh -c "
echo 'CREATE SCHEMA IF NOT EXISTS gigs;' > /docker-entrypoint-initdb.d/init.sql;
/usr/local/bin/docker-entrypoint.sh --default-authentication-plugin=mysql_native_password
"
When running docker network ls I see the following:
docker network ls
NETWORK ID     NAME                  DRIVER    SCOPE
42a11ef835dd   bridge                bridge    local
c7453acfbc98   host                  host      local
48572c69755a   integration_default   bridge    local
bd470f8620fd   none                  null      local
So the integration_default network was created. Then if I inspect this network:
docker network inspect integration_default
[
    {
        "Name": "integration_default",
        "Id": "48572c69755ae1bbc1448ab203a01d81be4300da12c97a9c4f1142872b878387",
        "Created": "2022-09-28T00:48:20.504251612Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.27.0.0/16",
                    "Gateway": "172.27.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "79e897decb4f0ae5836c018d82e78997e8ac2f615b399362a307cc7f585c0875": {
                "Name": "integration-test-mysql-host",
                "EndpointID": "1f7798554029cc2d07f7ba44d057c489b678eac918f7916029798b42585eda41",
                "MacAddress": "02:42:ac:1b:00:02",
                "IPv4Address": "172.27.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "integration",
            "com.docker.compose.version": "2.7.0"
        }
    }
]
Comparing this to the default bridge
docker inspect bridge
[
    {
        "Name": "bridge",
        "Id": "42a11ef835dd1b2aec3ecea57211bb2753e0ebd4a2a115ace8b7df3075e97d5a",
        "Created": "2022-09-27T21:54:44.239215269Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]
Interestingly, running ping 172.17.0.1 on my Linux machine works fine, but ping 172.27.0.1 fails to return anything.
UPDATE
I have got it working now. By specifying network_mode: bridge in my docker-compose file, I was able to use the default bridge network, which was accessible from my local machine as mentioned.
However, I would like to know why creating my own network didn't work here. Does anyone know why this was the case?
Docker networks are meant to be hidden, and you should let Docker do its job unless there is a good reason not to.
The correct way to interact with a service is through its open ports. Those ports are mapped on the host, so that talking to host:port is like talking to the app inside the container.
So when you say that you can't ping your container from the host, it is because Docker is doing its job well. "Fixing" this breaks the isolation of the container and makes it available to other services that shouldn't have access to it.
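Concretely, with the compose file above the supported path from the host is the published port, not the container or gateway IP (a sketch; assumes a local mysql client and the environment variable from the compose file):

# Goes through the "3306:3306" port mapping on localhost; the container's
# own IP and the network gateway stay internal to Docker.
mysql -h 127.0.0.1 -P 3306 -u root -p"$MYSQL_ROOT_PASSWORD" -e "SELECT 1"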

Apache Mesos, Mesos-DNS, Marathon and Docker

My environment runs mesos-slave, mesos-master, Marathon, and mesos-dns in standalone mode.
I deployed a MySQL app to Marathon to run as a Docker container.
The MySQL app configuration is as follows:
{
    "id": "mysql",
    "cpus": 0.5,
    "mem": 512,
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "mysql:5.6.27",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 3306,
                    "hostPort": 32000,
                    "protocol": "tcp"
                }
            ]
        }
    },
    "constraints": [
        ["hostname", "UNIQUE"]
    ],
    "env": {
        "MYSQL_ROOT_PASSWORD": "password"
    },
    "minimumHealthCapacity": 0,
    "maximumOverCapacity": 0.0
}
Then I deployed an app called mysqlclient, which needs to connect to the mysql app.
The mysqlclient app configuration is as follows:
{
    "id": "mysqlclient",
    "cpus": 0.3,
    "mem": 512.0,
    "cmd": "/scripts/create_mysql_dbs.sh",
    "instances": 1,
    "container": {
        "type": "DOCKER",
        "docker": {
            "image": "mysqlclient:latest",
            "network": "BRIDGE",
            "portMappings": [
                {
                    "containerPort": 3306,
                    "hostPort": 0,
                    "protocol": "tcp"
                }
            ]
        }
    },
    "env": {
        "MYSQL_ENV_MYSQL_ROOT_PASSWORD": "password",
        "MYSQL_PORT_3306_TCP_ADDR": "mysql.marathon.slave.mesos.",
        "MYSQL_PORT_3306_TCP_PORT": "32000"
    },
    "minimumHealthCapacity": 0,
    "maximumOverCapacity": 0.0
}
My mesos-dns config.json is as follows:
{
    "zk": "zk://127.0.0.1:2181/mesos",
    "masters": ["127.0.0.1:5050"],
    "refreshSeconds": 60,
    "ttl": 60,
    "domain": "mesos",
    "port": 53,
    "resolvers": ["127.0.0.1"],
    "timeout": 5,
    "httpon": true,
    "dnson": true,
    "httpport": 8123,
    "externalon": true,
    "listener": "127.0.0.1",
    "SOAMname": "ns1.mesos",
    "SOARname": "root.ns1.mesos",
    "SOARefresh": 60,
    "SOARetry": 600,
    "SOAExpire": 86400,
    "SOAMinttl": 60,
    "IPSources": ["mesos", "host"]
}
I can ping the service name mysql.marathon.slave.mesos. from the host machine, but when I try to ping it from the mysql Docker container, I get host unreachable. Why can't the Docker container resolve the host name?
I tried setting the dns parameter on the apps, but it did not work.
EDIT:
I can ping mysql.marathon.slave.mesos. from the master/slave hosts, but I cannot ping it from the mysqlclient Docker container; it says unreachable. How can I fix this?
I'm not sure what your actual question is; my guess is that you want to know how to resolve a Mesos-DNS service name to an actual endpoint for the MySQL client.
If so, you can use my mesosdns-resolver bash script to get the endpoint from Mesos DNS:
mesosdns-resolver.sh -sn mysql.marathon.mesos -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
You can use this in your create_mysql_dbs.sh script (whatever it does) to get the actual IP address and port where your mysql app is running.
You can pass in an environment variable like
"MYSQL_ENV_SERVICE_NAME": "mysql.marathon.mesos"
and then use it like this in the image/script
mesosdns-resolver.sh -sn $MYSQL_ENV_SERVICE_NAME -s <IP_ADDRESS_OF_MESOS_DNS_SERVER>
Also, please note that Marathon is not necessarily the right tool for running one-off operations (I assume you initialize your DBs with the second app). Chronos would be a better choice for this.
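As an additional check, it may help to confirm which resolver the container actually uses; note that the config above sets "listener": "127.0.0.1", and a container's loopback is not the host's, so Mesos-DNS may simply be unreachable from inside the container (a hedged sketch; the placeholders are your container name and a host address the container can route to):

# Which nameservers does the container see? Docker derives this from the
# host's /etc/resolv.conf but skips loopback entries.
docker exec <mysqlclient-container> cat /etc/resolv.conf

# Query Mesos-DNS explicitly at a reachable address instead of 127.0.0.1.
docker exec <mysqlclient-container> nslookup mysql.marathon.slave.mesos <host-ip>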

Error creating artifact: resource not found

I'm new to using Packer and I'd like some help with a problem I'm having. I can't seem to find any information about this error, which occurs both on my local computer and in Atlas after a push.
I'm running Packer v0.8.7.dev.
Please give me a helping hand!
Error:
==> virtualbox-iso: Running post-processor: atlas
virtualbox-iso (atlas): Creating artifact: /
Build 'virtualbox-iso' errored: 1 error(s) occurred:
* Post-processor failed: Error creating artifact: resource not found
==> Some builds didn't complete successfully and had errors:
--> virtualbox-iso: 1 error(s) occurred:
* Post-processor failed: Error creating artifact: resource not found
==> Builds finished but no artifacts were created.
Configuration:
{
    "push": {
        "name": "",
        "vcs": true
    },
    "variables": {
        "atlas_username": "{{env `ATLAS_USERNAME`}}",
        "atlas_name": "{{env `ATLAS_NAME`}}"
    },
    "provisioners": [
        {
            "type": "shell",
            "scripts": [
                "scripts/base.sh",
                "scripts/virtualbox.sh",
                "scripts/vmware.sh",
                "scripts/vagrant.sh",
                "scripts/dep.sh",
                "scripts/cleanup.sh",
                "scripts/zerodisk.sh",
                "scripts/custom.sh"
            ],
            "override": {
                "virtualbox-iso": {
                    "execute_command": "echo 'vagrant'|sudo -S bash '{{.Path}}'"
                },
                "vmware-iso": {
                    "execute_command": "echo 'vagrant'|sudo -S bash '{{.Path}}'"
                }
            }
        }
    ],
    "builders": [
        {
            "type": "virtualbox-iso",
            "boot_command": [
                "<esc><esc><enter><wait>",
                "/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
                "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
                "hostname={{ .Name }} ",
                "fb=false debconf/frontend=noninteractive ",
                "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
                "initrd=/install/initrd.gz -- <enter>"
            ],
            "headless": false,
            "boot_wait": "10s",
            "disk_size": 20480,
            "guest_os_type": "Ubuntu_64",
            "http_directory": "http",
            "iso_checksum": "c2571c4c2fc17bef1fad9e5db5e7afdb4bd29cd8ab51e42f9c036238c4e54caa",
            "iso_checksum_type": "sha256",
            "iso_url": "http://ftp.acc.umu.se/mirror/cdimage.ubuntu.com/releases/14.04/release/ubuntu-14.04.3-server-amd64+mac.iso",
            "ssh_username": "vagrant",
            "ssh_password": "vagrant",
            "ssh_port": 22,
            "ssh_wait_timeout": "10000s",
            "shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo 'vagrant'|sudo -S bash 'shutdown.sh'",
            "guest_additions_path": "VBoxGuestAdditions_{{.Version}}.iso",
            "virtualbox_version_file": ".vbox_version"
        },
        {
            "type": "vmware-iso",
            "boot_command": [
                "<esc><esc><enter><wait>",
                "/install/vmlinuz noapic preseed/url=http://{{ .HTTPIP }}:{{ .HTTPPort }}/preseed.cfg ",
                "debian-installer=en_US auto locale=en_US kbd-chooser/method=us ",
                "hostname={{ .Name }} ",
                "fb=false debconf/frontend=noninteractive ",
                "keyboard-configuration/modelcode=SKIP keyboard-configuration/layout=USA keyboard-configuration/variant=USA console-setup/ask_detect=false ",
                "initrd=/install/initrd.gz -- <enter>"
            ],
            "boot_wait": "10s",
            "disk_size": 20480,
            "guest_os_type": "Ubuntu-64",
            "headless": true,
            "http_directory": "http",
            "iso_checksum": "af224223de99e2a730b67d7785b657f549be0d63221188e105445f75fb8305c9",
            "iso_checksum_type": "sha256",
            "iso_url": "http://releases.ubuntu.com/precise/ubuntu-12.04.5-server-amd64.iso",
            "skip_compaction": true,
            "ssh_username": "vagrant",
            "ssh_password": "vagrant",
            "ssh_port": 22,
            "ssh_wait_timeout": "10000s",
            "shutdown_command": "echo '/sbin/halt -h -p' > shutdown.sh; echo 'vagrant'|sudo -S bash 'shutdown.sh'",
            "tools_upload_flavor": "linux"
        }
    ],
    "post-processors": [
        [
            {
                "type": "vagrant",
                "keep_input_artifact": true
            },
            {
                "type": "atlas",
                "only": ["vmware-iso"],
                "artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
                "artifact_type": "vagrant.box",
                "metadata": {
                    "provider": "vmware_desktop",
                    "version": "0.0.1"
                }
            },
            {
                "type": "atlas",
                "only": ["virtualbox-iso"],
                "artifact": "{{user `atlas_username`}}/{{user `atlas_name`}}",
                "artifact_type": "vagrant.box",
                "metadata": {
                    "provider": "virtualbox",
                    "version": "0.0.1"
                }
            }
        ]
    ]
}
You get
virtualbox-iso (atlas): Creating artifact: /
so it seems the variables from
"variables": {
"atlas_username": "{{env `ATLAS_USERNAME`}}",
"atlas_name": "{{env `ATLAS_NAME`}}"
},
are not set correctly. Make sure they are set, or try hardcoding them at first.
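For example (a sketch; the values and template filename are placeholders):

# The template reads these with {{env `...`}}; if either is empty, the
# artifact name collapses to "/", exactly as in the error output.
export ATLAS_USERNAME=your-atlas-username
export ATLAS_NAME=your-box-name
packer build template.json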
Also, as a side note, the latest docs recommend using an atlas_token to access Atlas:
"post-processors": [
{
"type": "atlas",
"only": ["virtualbox-iso"],
"token": "{{user `atlas_token`}}",
"artifact": "hashicorp/foobar",
"artifact_type": "vagrant.box",
"metadata": {
"provider": "virtualbox",
"version": "0.0.1"
"created_at": "{{timestamp}}"
}
}
]