Invalid Dockerrun.aws.json version, abort deployment

Why would I be getting an "invalid version" error for this? Version 1 works fine, but for some reason I can't get this to load.
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "islandsound_vowpal_wabbit_test",
      "image": "islandsound/vowpal_wabbit_test",
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 26542,
          "containerPort": 26542
        }
      ]
    }
  ]
}

AWSEBDockerrunVersion version 2 is not supported by single container platforms, create an environment with a multi container platform and deploy to it.
To create a multi container platform from the CLI, you can run: eb create --elb-type application -p "64bit Amazon Linux 2018.03 v2.15.2 running Multi-container Docker 18.06.1-ce (Generic)"

The answer is here:
multicontainer vs single container Dockerrun version
... the issue is because the environment created is using a "Single Container" platform ...

If you want to run a multi-container Docker instance, you must select it when creating your environment.
Through the AWS portal:
Platform: Docker
Platform branch: Multi-container Docker running on 64bit Amazon Linux
Platform version: (recommended)
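If you only need a single container, the alternative is to keep the single-container platform and use the version 1 format instead. A minimal sketch of an equivalent Dockerrun.aws.json v1 for the same image (field names follow AWS's single-container format; adjust ports and update policy as needed):
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "islandsound/vowpal_wabbit_test",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "26542"
    }
  ]
}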

Related

Cloud SQL minor patch upgrade not working for MySQL

As per the release notes, I should be able to upgrade the minor patch version of the database, which defaults to 8.0.26 in freshly created instances:
May 26, 2022
Cloud SQL for MySQL now supports minor version 8.0.29. To upgrade your existing instance to the new version, see Upgrade the database minor version.
I triggered a minor patch upgrade using the GCP CLI for MySQL. The command output reports success, but the database patch version didn't change. I used the following command:
$ gcloud sql instances patch mysqldb
The following message will be used for the patch API method.
{"name": "mysqldb", "project": "myproject", "settings": {}}
Patching Cloud SQL instance...done.
Updated [https://sqladmin.googleapis.com/sql/v1beta4/projects/myproject/instances/mysqldb].
It didn't upgrade, so I restarted the MySQL instance, and even after that the version remained 8.0.26.
This is expected: executing gcloud sql instances patch mysqldb does nothing about an upgrade, because you're not actually specifying what to patch.
If you look at the body sent to the API, it is almost "empty":
{"name": "mysqldb", "project": "myproject", "settings": {}}
Moreover, the docs you linked mention that to upgrade the instance you should specify the new version:
gcloud sql instances patch mysqldb --database-version=MYSQL_8_0_29
which results in
{"databaseVersion": "MYSQL_8_0_29", "name": "mysqldb", "project": "myproject", "settings": {}}

How to properly run a container with containerd's ctr using --uidmap/gidmap and --net-host option

I'm running a container with ctr. In addition to using user namespaces to map the user within the container (root) to another user on the host, I want to make the host's networking available to the container. For this, I'm using the --net-host option. Based on a very simple test container
$ cat Dockerfile
FROM alpine
ENTRYPOINT ["/bin/sh"]
I try it with
sudo ctr run -rm --uidmap "0:1000:999" --gidmap "0:1000:999" --net-host docker.io/library/test:latest test
which gives me the following error
ctr: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/run/containerd/io.containerd.runtime.v2.task/default/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\"": unknown
Everything works fine if I either
remove the --net-host flag or
remove the --uidmap/--gidmap arguments
I tried adding the user with host uid=1000 to the netdev group, but I still get the same error.
Do I maybe need to use network namespaces?
EDIT:
Meanwhile I found out that it's an issue within runc. If I use user namespaces by adding the following to the config.json
"linux": {
"uidMappings": [
{
"containerID": 0,
"hostID": 1000,
"size": 999
}
],
"gidMappings": [
{
"containerID": 0,
"hostID": 1000,
"size": 999
}
],
and additionally do not use a network namespace, which means leaving out the entry
{
  "type": "network"
},
within the "namespaces" section, I got the following error from runc:
$ sudo runc run test
WARN[0000] exit status 1
ERRO[0000] container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/vagrant/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\""
container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"sysfs\\\" to rootfs \\\"/vagrant/test/rootfs\\\" at \\\"/sys\\\" caused \\\"operation not permitted\\\"\""
I finally found the answer in this issue in runc. It's basically a kernel restriction: a user that does not own the network namespace does not have the CAP_SYS_ADMIN capability over it, and without that capability it can't mount sysfs. Since the host user that the container's root user is mapped to did not create the host network namespace, it does not have CAP_SYS_ADMIN there.
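You can reproduce this kernel restriction outside of runc with unshare (a minimal sketch; /mnt is just an arbitrary mount point inside the new mount namespace):
# Fails with "Operation not permitted": the new user namespace does not
# own the host network namespace, so it lacks CAP_SYS_ADMIN over it.
$ unshare -U -r -m sh -c 'mount -t sysfs sysfs /mnt'
# Succeeds: -n creates a fresh network namespace owned by the new user
# namespace, which makes the sysfs mount permitted.
$ unshare -U -r -m -n sh -c 'mount -t sysfs sysfs /mnt && echo mounted'
mounted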
From the discussion in the runc issue, I do see the following options for now:
remove mounting of sysfs.
Within the config.json that runc uses, remove the following section within "mounts":
{
  "destination": "/sys",
  "type": "sysfs",
  "source": "sysfs",
  "options": [
    "nosuid",
    "noexec",
    "nodev",
    "ro"
  ]
},
In my case, I also couldn't mount /etc/resolv.conf. After removing these two mounts, the container ran fine and had host network access. This does not work with ctr, though.
set up a bridge from the host network namespace to the network namespace of the container (see here and slirp4netns).
use docker or podman if possible; they seem to use slirp4netns for this purpose. There is an old moby issue that might also be interesting.
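For reference, attaching slirp4netns to a running container's network namespace looks roughly like this (a sketch; this provides user-mode connectivity rather than true host networking, and <container-pid> is a placeholder for the PID of the container's init process):
$ slirp4netns --configure --mtu=65520 --disable-host-loopback <container-pid> tap0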

How can I make Jelastic start PM2 to launch an 'npm' command instead of a file?

I'm using a Jelastic Node.js PM2 environment and I want my app to be started with something like the following:
pm2 start npm --name "app name" -- start
(my server is not a JS file).
The command runs fine if I use a Jelastic 'npm' environment, but I'd rather have the benefits of PM2.
I tried setting various APP_FILE values (start, npm start, a pm2 config file path), Entry Points, and PROCESS_MANAGER_FILE, without success. I usually get this error:
Node ID : 53209
-----------------------
result 1 Failed to start
Stopping nodejs server[ OK ] Starting nodejs server [FAILED]
The comment from @Jelastic worked! Indeed, using a PM2 'ecosystem file' works in Jelastic.
Set APP_FILE (or possibly PROCESS_MANAGER_FILE) to ecosystem.config.js (this is relative to ROOT_DIR).
The content of this file should look something like this:
module.exports = {
  apps: [
    {
      script: "yarn",
      args: "--cwd myserver1 start",
      name: "myserver1",
    },
    // You can use this setup to start multiple processes too.
    {
      script: "yarn",
      args: "--cwd myserver2 start",
      name: "myserver2",
    },
  ],
};
--cwd tells yarn to switch the Current Working Directory. If you use npm, you can use --prefix instead.
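If you want to sanity-check the ecosystem file locally before deploying (assuming pm2 is installed globally), you can run:
$ pm2 start ecosystem.config.js
$ pm2 ls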
Read more about PM2 ecosystem files: https://pm2.keymetrics.io/docs/usage/application-declaration/

Configuring Apache Drill for Cassandra

I am trying to configure Cassandra with Drill. I followed the approach given at this link: https://drill.apache.org/docs/starting-the-web-ui/.
I used the following code for New Storage Plugin:
{
  "type": "cassandra",
  "hosts": [
    "127.0.0.1"
  ],
  "port": 9042,
  "username": "<username>",
  "password": "<password>",
  "enabled": false
}
But I'm getting the following error:
Please retry: Error (invalid JSON mapping)
How can I resolve this?
All the code:
Git: https://github.com/yssharma/drill/tree/cassandra-storage
Patch: https://gist.github.com/yssharma/2581ae8a97c559b2677f
1. Get Drill: let's get the Drill source
$ git clone https://github.com/apache/drill.git
2. Get the Cassandra Storage patch: download the patch file from
https://reviews.apache.org/r/29816/diff/raw/
3. Apply the patch on top of Drill
$ cd drill
$ git apply --check ~/Downloads/DRILL-92-CassandraStorage.patch
$ git apply ~/Downloads/DRILL-92-CassandraStorage.patch
4. Build Drill with Cassandra Storage & export distribution to /opt/drill
$ mvn clean install -DskipTests
$ mkdir /opt/drill
$ tar xvzf distribution/target/*.tar.gz --strip=1 -C /opt/drill
5. Start sqlline.
That's it, we have finished the Drill build and installation, and it's time to start using Drill.
$ cd /opt/drill
$ bin/sqlline -u jdbc:drill:zk=local -n admin -p admin
Hit 'show schemas' to view existing schemas.
6. Drill Web interface
You should be able to see the Drill web interface on localhost:8047, or whatever your host/port is.
Use this as your config:
{
  "type": "cassandra",
  "config": {
    "cassandra.hosts": [
      "127.0.0.1",
      "127.0.0.2"
    ],
    "cassandra.port": 9042
  },
  "enabled": true
}
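Once the plugin is enabled, you should be able to query Cassandra from sqlline. A hypothetical example, assuming a keyspace mykeyspace with a table users (the exact schema path depends on the name you registered the plugin under):
0: jdbc:drill:zk=local> SELECT * FROM cassandra.mykeyspace.users LIMIT 10;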
Also, if this doesn't work, know that they are working on a plugin for it now: https://github.com/apache/drill/pull/1960
I'll give an update here as well. We're doing some serious refactoring of how Drill works with storage plugins. Specifically, we're working to incorporate the Calcite adapter for Cassandra. The reason for this is that the hard part of storage plugins isn't the connection, it's the optimizations. Calcite already does query planning for Drill and has already implemented a bunch of these adapters, which means that the work of figuring out all the optimizations (AKA pushdowns) is largely done.
In the case of Cassandra/Scylla, this is particularly important because some filters should be pushed down to Cassandra, and some absolutely should not be. The adapters also include aggregate pushdowns, something which no Drill plugins currently do. Again, the point is that once we commit this, the connector should work VERY well with Cassandra/Scylla. We have one for ElasticSearch that is very near completion, and once that's done the Cassandra plugin is next. If you have any suggestions, comments, or other feedback, please post on the pull request linked above.
** UPDATE 11 April 2021: Cassandra/Scylla Plugin Now Merged in Drill 1.19.0-SNAPSHOT **

Autodesk Forge RCDB Deployment to Heroku

Can someone give me a step-by-step guide to deploying all the work at https://github.com/Autodesk-Forge/forge-rcdb.nodejs to Heroku or DigitalOcean? I'm fine with either, but I'd like a proper guide here for anyone else who tries to go through this.
Explanation:
Following the guide here (Building Autodesk Forge RCDB on Windows 10 fails with node-gyp errors), I created my own DB on localhost. I had no choice but to change the dynamic clientsecret and clientid in development.config.js to static values, using the ones from my own Forge API to get it working.
Issues:
https://devcenter.heroku.com/articles/nodejs-support#customizing-the-build-process
Log In: I get the following error if I click on login to my forge account from the website (LINK)
I've moved all of my files to heroku, hosted my database (though have not even gotten to the point of testing that far). When I try to build on heroku I get the following error.
-----> Node.js app detected
-----> Build failed
Two different lockfiles found: package-lock.json and yarn.lock
Both npm and yarn have created lockfiles for this application,
but only one can be used to install dependencies. Installing
dependencies using the wrong package manager can result in missing
packages or subtle bugs in production.
- To use npm to install your application's dependencies please delete
the yarn.lock file.
$ git rm yarn.lock
- To use yarn to install your application's dependences please delete
the package-lock.json file.
$ git rm package-lock.json
https://kb.heroku.com/why-is-my-node-js-build-failing-because-of-conflicting-lock-files
Push rejected, failed to compile Node.js app.
Push failed
Log In: I get the following error if I click on login to my forge account from the website (LINK)
You need to configure the following environment variables on Heroku (the lack of FORGE_CLIENT_ID as an environment variable was what caused the error):
"NODE_ENV": {
"description": "Environment, defaulted to production",
"value": "production"
},
"NPM_CONFIG_PRODUCTION": {
"description": "This forces Heroku to install devDependencies, needed to build the App. Must be false!",
"value": "false"
},
"FORGE_CLIENT_ID": {
"description": "Your Forge Client ID API Key"
},
"FORGE_CLIENT_SECRET": {
"description": "Your Forge Client Secret API Key"
},
"RCDB_DBHOST": {
"description": "Database host url"
},
"RCDB_PORT": {
"description": "Database port"
},
"RCDB_DBNAME": {
"description": "Database name"
},
"RCDB_USER": {
"description": "Database username"
},
"RCDB_PASS": {
"description": "Database user password"
}
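For example, you can set these with the Heroku CLI (the values shown are placeholders):
$ heroku config:set NODE_ENV=production NPM_CONFIG_PRODUCTION=false
$ heroku config:set FORGE_CLIENT_ID=<your-client-id> FORGE_CLIENT_SECRET=<your-client-secret>
$ heroku config:set RCDB_DBHOST=<host> RCDB_PORT=<port> RCDB_DBNAME=<name> RCDB_USER=<user> RCDB_PASS=<password>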
This should have been easier with the Deploy to Heroku button in the project README, but unfortunately it's not set up correctly.
I've moved all of my files to heroku, hosted my database (though have not even gotten to the point of testing that far). When I try to build on heroku I get the following error.
As suggested in the error message, only one package manager's lockfile should be present, so delete either the yarn.lock or the package-lock.json file in the root directory of the project and deploy again.
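For example, to keep the npm lockfile (a sketch; swap the filenames to keep yarn instead, and use main if that is your default branch):
$ git rm yarn.lock
$ git commit -m "Remove yarn.lock so Heroku uses npm"
$ git push heroku master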