gitlab ci shell runner reports profile loading problem - gitlab-ci-runner

I'm trying to set up the GitLab CI shell runner. I've used the Docker runner before successfully, but now I'd like to use another Docker container within my testing routine and therefore switched to the shell runner.
After registering I'm running into an exception:
ERROR: Job failed (system failure): prepare environment: exit status 1. Check https://docs.gitlab.com/runner/shells/index.html#shell-profile-loading for more information
So, I went through the linked material but that didn't cure the problem. I verified that the gitlab-runner user exists and that it has access to Docker (needed to run the Docker test container). The gitlab-runner user is also part of the docker group. I can also log in (--login) and fire up /bin/bash without problems.
Still, all I get from the runner side is the enigmatic message above. What other checks do I need to do to track down this issue?

The careful reader will find the answer in the linked documentation:
"A common failure is when you have a .bash_logout that tries to clear
the console."

Related

pod not found error while using the rsync cmd to copy results from an OCP pod to the Jenkins workspace

I am running some automated tests using RBF, but sometimes when the test run is complete and I try to copy the test results from the OCP pod to local using rsync (via a Jenkins job), it says the pod is not found, even though the test cases were run on that same pod. After restarting the pod the error is resolved, but after some time it comes back. Can anyone tell me the root cause of this error and the solution for it?
I tried rolling out the pod again, but this does not fix the problem permanently; after a few runs the error comes back.

ERROR: Preparation failed: Getwd: getwd: no such file or directory

Why does the GitLab runner throw ERROR: Preparation failed: Getwd: getwd: no such file or directory?
gitlab version is: GitLab Community Edition 8.6.4
gitlab-runner version: 1.11.5
My CI throws ERROR: Preparation failed: Getwd: getwd frequently, but some commits work fine, so we don't know the root cause of this problem.
The only thing we know is that the error started showing up after we moved the build directory.
In my case that was because of residual gitlab-runner processes still executing. I resolved it by identifying the guilty PIDs and killing them:
$ ps -ax | grep gitlab-runner
27034 ? Ssl 0:06 /usr/bin/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitlab-runner/config.toml --service gitlab-runner --syslog --user gitlab-runner
$ sudo kill -9 27034
I got the same error and solved it by restarting gitlab-runner:
gitlab-runner restart
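If the restart alone doesn't do it, it can be worth confirming that the service actually came back and that the registered runners can still reach GitLab:

$ sudo gitlab-runner status    # the service should report that it is running
$ sudo gitlab-runner verify    # checks the registered runners against GitLab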
The Gitlab Runner checks out a copy of your repository into CI_PROJECT_DIR. You can check its value by adding the following to your .gitlab-ci.yml:
script:
- echo $CI_PROJECT_DIR
I received the "getwd: no such file or directory" error because:
I had changed my working directory to /var/www/mysite (I am using a docker container with gitlab-runner installed inside it, but I think that's beside the point)
one of my deploy script lines moves /var/www/mysite to /var/www/old-mysite.
I'm used to the Gitlab Runner checking out its build inside /home/gitlab-runner/build. When I changed the docker working directory this caused the runner to check it out at /var/www/mysite/build.
After my script moved /var/www/mysite to /var/www/old-mysite, on second and subsequent runs the runner still expected to find /var/www/mysite, but it no longer existed, hence the error.
Given the above, I can't explain why the runner works the very first time, when that directory doesn't exist yet either, but hopefully my answer might at least prompt something useful for someone! :)
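Based on that, a quick way out is to recreate the directory the runner is trying to cd into and restart the service. This is only a sketch, reusing the paths from this answer; substitute whatever build directory your own config.toml points at:

$ sudo mkdir -p /var/www/mysite/build
$ sudo chown -R gitlab-runner:gitlab-runner /var/www/mysite
$ sudo gitlab-runner restart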

Elastic Beanstalk stops at EbExtensionPostBuild

I am having a problem deploying an EB instance with a custom .ebextensions file. This is the relevant part in that file:
container_commands:
  01_migrate:
    command: 'python db_migrate.py'
  02_npm_build:
    command: 'npm install && npm run prod'
As you can see, these commands are for migrating my PostgreSQL database (via a Flask backend) and building my React .jsx files.
If I leave these commands out, the deployment completes perfectly well. However, once I put them in, the deployment stalls forever (as far as I can tell) at this part of eb-activity.log:
[2017-04-10T02:39:24.106Z] INFO [3023] - [Application deployment app-613e-170409_223418#1/StartupStage0/EbExtensionPostBuild] : Starting activity...
I also get this message on the Health overview in the console (this is after 1 day):
Performing application deployment (running for 1 day).
I have also tried to deploy it without those container_commands, and then add them back after the successful initial deployment. Then I get the same error message as before in eb-activity.log, and I also get this message on the Health overview:
Incorrect application version "app-2a3d-170409_214923" (deployment 1). Expected version "app-2a3d-170409_214923" (deployment 1).
Which is very strange because those two versions referenced are the same versions. I don't know what this means!
I found a solution.
Remove all your container_commands from .ebextensions/.
SSH to the instance and kill the stuck process with:
sudo killall python
Then deploy the new version without container_commands.
Finally, start debugging your container_commands one by one over SSH, as sketched below.
Have fun.
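For the "one by one" debugging, a rough sketch of what that can look like on the instance (the eb ssh helper and the /var/app/current path are assumptions here; the application directory differs between Elastic Beanstalk platform versions):

$ eb ssh
$ cd /var/app/current          # application path; may differ on older platforms
$ python db_migrate.py         # 01_migrate
$ npm install && npm run prod  # 02_npm_build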

Error when running 'embark run'

When I run the command 'embark run', I get this error message:
Running "deploy_contracts:development" (deploy_contracts) task
Warning: ==== can't connect to localhost:8101 check if an ethereum node is running Use --force to continue.
Error: ==== can't connect to localhost:8101 check if an ethereum node is running
Could you please give me some help about it?
Before you can run embark, you have to run an ethereum RPC simulator; simply run:
$ embark simulator
Alternatively, you can run a real ethereum node for development purposes:
$ embark blockchain
By default embark blockchain will mine a minimum amount of ether and will only mine when new transactions come in. This is quite useful for keeping CPU usage low. These options can be configured in config/blockchain.yml.
You will see a geth node starting in the terminal. Then, open another terminal and type:
$ embark run
This will automatically deploy the contracts, update their JS bindings and deploy your DApp to a local server at http://localhost:8000
Note that if you update your code it will automatically be re-deployed, contracts included. There is no need to restart embark; refreshing the page in the browser will do.
See also newest embark tagged questions on Ethereum Stack Exchange for future reference.
In your embark project directory:
Run $ embark blockchain and leave it running in that terminal.
Open a new terminal, cd <yourProject> and run $ embark run.
You will now be up and running locally at http://localhost:8000
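If embark run still reports that it cannot connect, a quick sanity check is to poke the RPC endpoint directly (port 8101 is taken from the error message above; adjust it to whatever your config uses):

$ curl -s -X POST -H 'Content-Type: application/json' \
      --data '{"jsonrpc":"2.0","method":"web3_clientVersion","params":[],"id":1}' \
      http://localhost:8101
# Any JSON response means a node is listening; "connection refused" means it is not.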

Troubleshooting failed packer build

I am just getting started with Packer, and have had several instances where my build is failing and I'd LOVE to log in to the box to investigate the cause. However, there doesn't seem to be a packer login or similar command to give me a shell. Instead, the run just terminates and tears down the box before I have a chance to investigate.
I know I can use the --debug flag to pause execution at each stage, but I'm curious if there is a way to just pause after a failed run (and prior to cleanup) and then run the cleanup after my debugging is complete.
Thanks.
This was my top annoyance with packer. Thankfully, packer build now has an option -on-error that gives you options.
packer build -on-error=ask ... to the rescue.
From the packer build docs:
-on-error=cleanup (default), -on-error=abort, -on-error=ask - Selects what to do when the build fails. cleanup cleans up after the previous steps, deleting temporary files and virtual machines. abort exits without any cleanup, which might require the next build to use -force. ask presents a prompt and waits for you to decide to clean up, abort, or retry the failed step.
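So a typical invocation looks something like this (template.json stands in for your own template file):

$ packer build -on-error=ask template.json
# When a step fails, packer stops and asks whether to clean up, abort or retry;
# pick abort (or just stay at the prompt) to keep the box around for inspection.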
Having used Packer extensively, I find the --debug flag most helpful. Once the process is paused, you SSH to the box with the key (in the current dir) and figure out what is going on.
Yeah, the way I handle this is to put a long sleep in a script inline provisioner after the failing step, then I can ssh onto the box and see what's up. Certainly the debug flag is useful, but if you're running the packer build remotely (I do it on jenkins) you can't really sit there and hit the button.
I do try and run tests on all the stuff I'm packing outside of the build - using the Chef provisioner I've got kitchen tests all over everything before it gets packed. It's a royal pain to try and debug anything besides packer during a packer run.
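For reference, the "long sleep" is nothing more exotic than a line like the one below in the shell provisioner's inline commands, placed right after (or in place of) the step that fails; the duration is arbitrary:

echo 'provisioning paused for debugging, ssh in now' && sleep 3600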
While looking up info for this myself, I ran across numerous bug reports/feature requests for Packer.
Apparently, someone added new features to the virtualbox and vmware builders a year ago (https://github.com/mitchellh/packer/issues/409), but it hasn't gotten merged into main.
In another bug (https://github.com/mitchellh/packer/issues/1687), they were looking at adding additional features to --debug, but that seemed to stall out.
If a Packer build is failing, first check where the build process has gotten stuck, and do the check in this sequence:
Are the boot commands the appropriate ones?
Is the preseed config OK?
If 1. and 2. are OK, then it means the box has booted, and the next thing to check is the login: SSH keys, ports, ...
Finally, check for any issues within the provisioning scripts.