k3sup install older kubectl version v1.24.3 - k3s

I'm using crunchydata/postgres-operator for my k3s based setup; however, I have started getting the error below due to the latest Kubernetes version:
time="2022-10-28T20:49:40Z" level=debug msg="debug flag set to true" file="cmd/postgres-operator/main.go:68" func=main.main version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="metrics server is starting to listen" addr=":8080" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/log/deleg.go:130" func="log.(*DelegatingLogger).Info" version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="starting controller runtime manager and will wait for signal to exit" file="cmd/postgres-operator/main.go:89" func=main.main version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="upgrade checking enabled" file="cmd/postgres-operator/main.go:94" func=main.main version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="starting metrics server" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/manager/internal.go:385" func="manager.(*controllerManager).serveMetrics.func2" path=/metrics version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="{\"pgo_versions\":[{\"tag\":\"v5.1.0\"},{\"tag\":\"v5.0.5\"},{\"tag\":\"v5.0.4\"},{\"tag\":\"v5.0.3\"},{\"tag\":\"v5.0.2\"},{\"tag\":\"v5.0.1\"},{\"tag\":\"v5.0.0\"}]}" X-Crunchy-Client-Metadata="{\"deployment_id\":\"4d3c5b1b-a13b-46a9-b07d-59dd0fa0205b\",\"kubernetes_env\":\"v1.25.3+k3s1\",\"pgo_clusters_total\":0,\"pgo_version\":\"5.2.0-0\",\"is_open_shift\":false}" file="internal/upgradecheck/http.go:181" func=upgradecheck.CheckForUpgradesScheduler version=5.2.0-0
time="2022-10-28T20:49:40Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:41Z" level=info msg="Starting EventSource" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/internal/controller/controller.go:165" func="controller.(*Controller).Start.func1" reconciler group=postgres-operator.crunchydata.com reconciler kind=PostgresCluster source="kind source: /, Kind=" version=5.2.0-0
time="2022-10-28T20:49:42Z" level=error msg="if kind is a CRD, it should be installed before calling Start" error="no matches for kind \"CronJob\" in version \"batch/v1beta1\"" file="sigs.k8s.io/controller-runtime#v0.8.3/pkg/log/deleg.go:144" func="log.(*DelegatingLogger).Error" kind=CronJob.batch version=5.2.0-0
panic: no matches for kind "CronJob" in version "batch/v1beta1"
goroutine 1 [running]:
main.assertNoError(...)
github.com/crunchydata/postgres-operator/cmd/postgres-operator/main.go:41
main.main()
github.com/crunchydata/postgres-operator/cmd/postgres-operator/main.go:105 +0x570
What is the correct way to resolve this? I tried the Helm chart provided at https://github.com/CrunchyData/postgres-operator-examples but still had no success.
Thanks

I have managed to resolve this issue by installing an older Kubernetes version. The operator watches the batch/v1beta1 CronJob API, which was removed in Kubernetes 1.25, hence the panic on v1.25.3+k3s1. Since I'm installing the cluster using k3sup install, I just pinned the channel as part of the command, e.g. k3sup install --k3s-channel v1.24
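For reference, a minimal sketch of the full command (the IP and user below are placeholders, not from my setup):

    # Pin the k3s channel so the server installs from the 1.24 line
    k3sup install --ip 192.168.0.10 --user ubuntu --k3s-channel v1.24

    # Verify the server version afterwards
    kubectl version --short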

Related

Headless Chrome error when running in Jenkins for protractor scripts

I am getting the below error when I run my Protractor Jenkins case with the below configuration:
(Session info: headless chrome=79.0.3945.117)
[INFO] (Driver info: chromedriver=79.0.3945.36
Code:
'chromeOptions': {
    // Note: a JS object literal keeps only the last duplicate key, so the two
    // separate 'args' entries meant the first list was silently discarded.
    // Merged into a single array here:
    'args': [
        '--no-sandbox', '--disable-web-security', '--disable-extensions',
        '--headless', '--disable-gpu', '--window-size=1920,1080'
    ]
},
Error: TimeoutError: timeout: Timed out receiving message from renderer: 10.000
Package.json details
"main": "conf.js",
"dependencies": {
    "chromedriver": "^79.0.0",
    "grunt": "^0.4.5",
    "grunt-cli": "^0.1.13",
    "grunt-cli-babel": "0.0.5",
    "grunt-protractor-runner": "^2.1.0",
    "grunt-shell-spawn": "^0.3.8",
    "iedriver": "^3.0.0",
    "jasmine": "^2.4.1",
    "jasmine-allure-reporter": "^1.0.2",
    "jasmine-reporters": "^2.1.1",
    "jasmine-spec-reporter": "^2.4.0",
    "protractor": "^5.3.1",
    "protractor-jasmine2-screenshot-reporter": "^0.5.0",
    "selenium-webdriver": "^3.6.0",
    "webdriver-manager": "^12.1.7"
}
Any help will be highly appreciated.
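One thing worth checking in containerized CI (an assumption about the cause, not a confirmed fix): "Timed out receiving message from renderer" often comes from Docker's small default /dev/shm, which Chrome's --disable-dev-shm-usage flag works around:

'chromeOptions': {
    'args': [
        '--no-sandbox', '--headless', '--disable-gpu',
        '--disable-dev-shm-usage',  // Chrome then uses /tmp instead of the small /dev/shm
        '--window-size=1920,1080'
    ]
},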

Travis is Failing without showing a reason

I am trying to build my project through Travis CI, but an unknown error happens when Travis tries to test the project through Mocha. I know the issue is not with MySQL, since the connection is there and the scripts are working.
I am testing using
- ts-node/register
- Mocha
- Chai
- Chai-http
- mySQL (using pool)
The tests work fine locally, but on Travis they fail. I am thinking that it might be something with ts-node/register on Travis.
1 - I have tried removing the tests completely and the build was fixed, but when I try to test it fails miserably.
package.json
{
    "name": "xxxxxxxxx",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
        "start": "ENV='PRODUCTION' node dist/app.js --verbose",
        "debug": "node --inspect dist/app.js",
        "dev": "ENV='DEVELOPMENT' nodemon src/app.ts --verbose",
        "build": "tsc -p .",
        "test": "ENV='TEST' mocha -r ts-node/register test/{,dbMappers}/**.test.ts --exit; ",
        "test:awesome": "ENV='TEST' mocha -r ts-node/register test/{,dbMappers}/**.test.ts --reporter mochawesome --exit; "
    },
    "keywords": [],
    "author": "",
    "license": "ISC",
    "dependencies": {
        "app-root-path": "^2.2.1",
        "bcrypt": "^3.0.6",
        "body-parser": "^1.19.0",
        "colors": "^1.3.3",
        "cors": "^2.8.5",
        "dotenv": "^8.0.0",
        "express": "^4.17.1",
        "jsonwebtoken": "^8.5.1",
        "mysql": "^2.17.1",
        "uuid": "^3.3.2",
        "winston": "^3.2.1"
    },
    "devDependencies": {
        "@types/app-root-path": "^1.2.4",
        "@types/bcrypt": "^3.0.0",
        "@types/body-parser": "^1.17.0",
        "@types/chai": "^4.2.0",
        "@types/chai-http": "^4.2.0",
        "@types/colors": "^1.2.1",
        "@types/cors": "^2.8.5",
        "@types/dotenv": "^6.1.1",
        "@types/express": "^4.17.0",
        "@types/jsonwebtoken": "^8.3.2",
        "@types/mocha": "^5.2.7",
        "@types/mysql": "^2.15.6",
        "@types/node": "^12.6.8",
        "@types/uuid": "^3.4.5",
        "@types/winston": "^2.4.4",
        "chai": "^4.2.0",
        "chai-http": "^4.3.0",
        "mocha": "^6.2.0",
        "mochawesome": "^4.1.0",
        "nodemon": "^1.19.1",
        "ts-node": "^8.3.0",
        "typescript": "^3.5.3"
    }
}
.travis.yml
language: node_js
node_js:
  - "stable"
directories:
  - "node_modules"
services:
  - mysql
cache:
  - "node_modules"
before_script:
  - mysql -u root -e "CREATE USER 'test'@'localhost' IDENTIFIED BY 'xxxx'"
  - mysql -u root -e 'CREATE DATABASE backend;'
  - mysql -u root -e 'SHOW DATABASES'
  - mysql -u root -e "GRANT ALL ON backend.* TO 'test'@'localhost';"
  - npm install
env:
  - ENV: 'TEST'
The Travis repo
https://travis-ci.com/moh682/hbas-system-api
The error that Travis exposes is only this:
yyy@1.0.0 test /home/travis/build/xxx/yyy
ENV='TEST' mocha -r ts-node/register test/{,dbMappers}/**.test.ts --exit;
xxx = username | yyy = repo name
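A hedged thing to try (an assumption, not a confirmed diagnosis): the {,dbMappers} brace pattern relies on the shell expanding it, and the shell Travis uses may handle it differently from your local one. Quoting the pattern lets Mocha do the globbing itself:

    "test": "ENV='TEST' mocha -r ts-node/register 'test/**/*.test.ts' --exit"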

Angular boot error npm ERR! code ELIFECYCLE

I have this problem when I want to boot an Angular project.
I have tried installing plugins.
{
    "name": "portal-app",
    "version": "0.0.0",
    "license": "MIT",
    "scripts": {
        "ng": "ng",
        "start": "ng serve --proxy-config proxy.config.json",
        "build": "ng build",
        "test": "ng test",
        "lint": "ng lint",
        "e2e": "ng e2e"
    },
    "private": true,
    "dependencies": {
        "@angular/animations": "^5.0.0",
        "@angular/common": "^5.0.0",
        "@angular/compiler": "^5.0.0",
        "@angular/core": "^5.0.0",
        "@angular/forms": "^5.0.0",
        "@angular/http": "^5.0.0",
        "@angular/platform-browser": "^5.0.0",
        "@angular/platform-browser-dynamic": "^5.0.0",
        "@angular/router": "^5.0.0",
        "bootstrap": "^3.3.7",
        "core-js": "^2.4.1",
        "rxjs": "^5.5.2",
        "zone.js": "^0.8.14"
    },
    "devDependencies": {
        "@angular/cli": "^1.6.3",
        "@angular/compiler-cli": "^5.0.0",
        "@angular/language-service": "^5.0.0",
        "@types/jasmine": "~2.5.53",
        "@types/jasminewd2": "~2.0.2",
        "@types/node": "~6.0.60",
        "codelyzer": "^4.0.1",
        "jasmine-core": "~2.6.2",
        "jasmine-spec-reporter": "~4.1.0",
        "karma": "~1.7.0",
        "karma-chrome-launcher": "~2.1.1",
        "karma-cli": "~1.0.1",
        "karma-coverage-istanbul-reporter": "^1.2.1",
        "karma-jasmine": "~1.1.0",
        "karma-jasmine-html-reporter": "^0.2.2",
        "protractor": "~5.1.2",
        "ts-node": "~3.2.0",
        "tslint": "~5.7.0",
        "typescript": "~2.4.2"
    }
}
You have to be inside an angular-cli project in order to use the serve command.
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! portal-app@0.0.0 start: ng serve --proxy-config proxy.config.json
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the portal-app@0.0.0 start script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm WARN Local package.json exists, but node_modules missing, did you mean to install?
npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\User\AppData\Roaming\npm-cache\_logs\2019-04-24T06_32_19_277Z-debug.log
Process finished with exit code 1
Your error is clear and meaningful: you need to be inside your project directory to run your command. You may need to use the change-directory command to place your command prompt at the right location before launching your npm scripts.
So :
cd /path/to/projectDirectory
npm start
Should do the trick.
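And since the npm WARN above says node_modules is missing, installing the dependencies first is likely needed too (a small addition to the steps above):

    cd /path/to/projectDirectory
    npm install   # the warning says node_modules is missing
    npm start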
In my case I got this type of error when running ng test --karma-config=karma.conf.js --code-coverage, and the cause of the problem was console.log statements in the code.

How to npm install on Heroku

I would like to deploy my app on Heroku through a Bitbucket pipeline.
The flow is as follows:
1. composer install
2. npm install
3. Laravel-Mix (gulp tasks, browserify, and so on...)
These are needed because "vendor" and "node_modules" are listed in .gitignore.
Right now, steps 2 and 3 fail to build.
I guess the cause is in package.json or bitbucket-pipelines.yml, but I do not know how to write them. Could you give me a solution?
※ The nodejs buildpack is already installed.
※ npm install works inside the Heroku app when I try "heroku run bash" and then "npm install".
Here is my "package.json" and "bitbucket-pipelines.yml" (Laravel 5.4).
package.json
{
    "private": true,
    "scripts": {
        "dev": "npm run development",
        "development": "cross-env NODE_ENV=development node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
        "watch": "cross-env NODE_ENV=development node_modules/webpack/bin/webpack.js --watch --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js",
        "watch-poll": "npm run watch -- --watch-poll",
        "hot": "cross-env NODE_ENV=development node_modules/webpack-dev-server/bin/webpack-dev-server.js --inline --hot --config=node_modules/laravel-mix/setup/webpack.config.js",
        "prod": "npm run production",
        "production": "cross-env NODE_ENV=production node_modules/webpack/bin/webpack.js --progress --hide-modules --config=node_modules/laravel-mix/setup/webpack.config.js"
    },
    "devDependencies": {
        "axios": "^0.15.3",
        "bootstrap-sass": "^3.3.7",
        "cross-env": "^3.2.3",
        "jquery": "^3.1.1",
        "laravel-mix": "0.*",
        "lodash": "^4.17.4",
        "vue": "^2.1.10"
    }
}
bitbucket-pipelines.yml
# bitbucket-pipelines.yml
image: phpunit/phpunit:5.0.3
clone:
  depth: full
pipelines:
  default:
    - step:
        script: # Modify the commands below to build your repository.
          - composer install
          - npm install
          - npm run production
          - git push https://heroku: test-stg.git HEAD
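One hedged guess (an assumption, not a confirmed diagnosis): the phpunit/phpunit image may not ship with Node.js at all, so npm would be missing inside the pipeline container even though it works on the Heroku dyno. A sketch that installs Node in the step before the npm commands (assuming a Debian-based image):

script:
  - curl -sL https://deb.nodesource.com/setup_8.x | bash -   # NodeSource setup script (era-appropriate Node 8)
  - apt-get install -y nodejs                                # provides node and npm
  - composer install
  - npm install
  - npm run production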

Recommended file format (YAML? JSON? Other?)

Before I start, I should state that I'm new to both YAML and JSON, so the rules of formatting are not that clear to me.
I'm trying to write a Perl script (Perl, because I know it exists on all of our servers) which will update several network-related settings for various hosts. My preference is to have all of the settings in a single file and update the configurations based on which host the script is running on.
I looked at YAML, but I'm a bit put off by the fact that I can't do something like:
host:
  hostname: first
  interface: eth0
  oldip: 1.2.3.4
  newip: 2.3.4.5
  oldgw: 1.2.3.1
  newgw: 2.3.4.1
  interface: eth1
  oldip: 1.2.3.4
  newip: 2.3.4.5
  oldgw: 1.2.3.1
  newgw: 2.3.4.1
host:
  hostname: second
  interface: eth0
  oldip: 1.2.3.4
  newip: 2.3.4.5
  oldgw: 1.2.3.1
  newgw: 2.3.4.1
  interface: eth1
  oldip: 1.2.3.4
  newip: 2.3.4.5
  oldgw: 1.2.3.1
  newgw: 2.3.4.1
That is to say, I've plugged this into YAML validators and it has failed.
I have figured out that, for YAML, I can do the following:
host: "first"
interface1:
  name: eth0
  oldip: 1.2.3.4
  newip: 2.3.4.5
  oldgw: 1.2.3.1
  newgw: 2.3.4.1
interface2:
  name: eth1
  oldip: 1.2.3.4
  newip: 2.3.4.5
  oldgw: 1.2.3.1
  newgw: 2.3.4.1
This is less than desirable, though, as it makes having multiple hosts in one file impossible. I'm basing this on the fact that I keep getting errors from the online validators that I've used when I do attempt this.
I've looked at using JSON, but I don't know all of the rules for that either. I do know that the following does not work:
{
    "host": "first",
    "interface1": {
        "newip": "2.3.4.5",
        "oldip": "1.2.3.4",
        "oldgw": "1.2.3.1",
        "name": "eth0",
        "newgw": "2.3.4.1"
    },
    "interface2": {
        "newip": "2.3.4.5",
        "oldip": "1.2.3.4",
        "oldgw": "1.2.3.1",
        "name": "eth1",
        "newgw": "2.3.4.1"
    }
}
{
    "host": "second",
    "interface1": {
        "newip": "2.3.4.5",
        "oldip": "1.2.3.4",
        "oldgw": "1.2.3.1",
        "name": "eth0",
        "newgw": "2.3.4.1"
    },
    "interface2": {
        "newip": "2.3.4.5",
        "oldip": "1.2.3.4",
        "oldgw": "1.2.3.1",
        "name": "eth1",
        "newgw": "2.3.4.1"
    }
}
Is there a format I can use that will allow me to store all of the host and their information in a single file that can be parsed?
If either YAML or JSON are suitable, what am I doing wrong?
Your YAML problem with host is the same as what it was initially with interface: You're trying to put subkeys at the same level as the keys that contain them.
host:
  name: first
  interface1:
    name: eth0
    oldip: 1.2.3.4
    newip: 2.3.4.5
    oldgw: 1.2.3.1
    newgw: 2.3.4.1
  interface2:
    name: eth1
    oldip: 1.2.3.4
    newip: 2.3.4.5
    oldgw: 1.2.3.1
    newgw: 2.3.4.1
should work, although that still doesn't address your need for multiple hosts. For that (and to better handle multiple interfaces), you should use lists:
host:
  - name: first_host
    interface:
      - name: eth0
        oldip: 1.2.3.4
        newip: 2.3.4.5
        oldgw: 1.2.3.1
        newgw: 2.3.4.1
      - name: eth1
        oldip: 1.2.3.4
        newip: 2.3.4.5
        oldgw: 1.2.3.1
        newgw: 2.3.4.1
  - name: second_host
    interface:
      - ...
When read in by Perl, this will give you the structure:
{
    "host": [
        {
            "interface": [
                {
                    "newip": "2.3.4.5",
                    "oldip": "1.2.3.4",
                    "oldgw": "1.2.3.1",
                    "name": "eth0",
                    "newgw": "2.3.4.1"
                },
                {
                    "newip": "2.3.4.5",
                    "oldip": "1.2.3.4",
                    "oldgw": "1.2.3.1",
                    "name": "eth1",
                    "newgw": "2.3.4.1"
                }
            ],
            "name": "first_host"
        }
    ]
}
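As a minimal sketch of reading that structure from Perl (assuming the YAML::XS module is available and using a hypothetical hosts.yml filename):

#!/usr/bin/perl
use strict;
use warnings;
use YAML::XS qw(LoadFile);    # assumption: YAML::XS is installed (the plain YAML module also provides LoadFile)

my $config = LoadFile('hosts.yml');      # hypothetical filename for the config above
for my $host (@{ $config->{host} }) {    # "host" is a list, so iterate over it
    print "host: $host->{name}\n";
    for my $if (@{ $host->{interface} }) {
        print "  $if->{name}: $if->{oldip} -> $if->{newip}\n";
    }
}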
As for JSON, that's a subset of YAML. Personally, I prefer to have the full YAML spec available to me, but JSON provides more interoperability with non-Perl languages.
I'd prefer JSON over YAML. I recently built a system whose "user interface" (ha) was basically one giant config file; the user needed to edit that config file to control the system; and I used YAML for that file. It turns out that YAML has a few really annoying gotchas that make it unsuitable for humans -- it's very picky about whitespace, for example.
Also, it's less familiar in general: I'd guess that anyone with programming experience has run into JSON, and understands it. But YAML is more niche.
If you're not using the advanced features of YAML -- such as the ability to define variables and then reference them later -- I'd recommend that you go with JSON instead.
Don't worry about the format. Populate a data structure and let JSON or YAML do the dirty work for you. If you are going to produce and parse the files yourself anyway, there is almost no advantage in using JSON or YAML.
The exact format is unimportant: both YAML and JSON are fine. Actually, I would recommend keeping this specific part pluggable.
The issue with your YAML is that the data structure has to make some sense, e.g.:
- host:
    hostname: first
    interfaces:
      eth0:
        oldip: 1.2.3.4
        newip: 2.3.4.5
        oldgw: 1.2.3.1
        newgw: 2.3.4.1
      eth1:
        oldip: 1.2.3.4
        newip: 2.3.4.5
        oldgw: 1.2.3.1
        newgw: 2.3.4.1
- host:
    hostname: second
    interfaces:
      ...
Or if the interfaces have to be ordered:
- host:
    hostname: first
    interfaces:
      -
        name: eth0
        oldip: 1.2.3.4
        newip: 2.3.4.5
        oldgw: 1.2.3.1
        newgw: 2.3.4.1
      -
        name: eth1
        oldip: 1.2.3.4
        newip: 2.3.4.5
        oldgw: 1.2.3.1
        newgw: 2.3.4.1
If writing YAML manually is too tedious for you, just write a small script that generates it for you.
Note that list items have to be introduced by a marker like -; indentation alone is not sufficient for this.
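For example (a minimal illustration of that last point):

# Each "-" introduces one list item, giving a sequence of two mappings
interfaces:
  - name: eth0
  - name: eth1

# Indentation alone (no "-") produces a single nested mapping instead,
# and repeating the "name" key at this level would be a duplicate-key error
interfaces:
  name: eth0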