I'm having an odd issue with the awsebcli package when running it in GitLab's CI pipeline system.
When I run eb deploy locally, the command succeeds (or fails) much as expected. When I run it as part of the CD scripts I've written, it runs successfully (i.e. returns an exit code of 0) but doesn't actually trigger the deployment. No errors are returned - in fact there's no text output from the command at all.
Can anyone suggest what could be going wrong here?
Related
I am running some automated tests using RBF. Sometimes, when the test run is complete and I try to copy the test results from OCP to my local machine using rsync (via a Jenkins job), it reports that the pod is not found, even though the test cases ran on that same pod. Restarting the pod resolves the error, but after some time it comes back. Can anyone tell me the root cause of this error and how to solve it?
I tried restarting the rollout for the pod, but this does not fix the problem permanently; after a few runs the error comes back.
I am using Azure DevOps to build and deploy my git repo to a third-party VPS. I do this by logging into the server from Azure DevOps over SSH and executing a shell script that pulls the git repo and builds it with, e.g., vue-cli and Laravel.
When the bash script is executed, I receive a lot of errors on nearly all commands even though everything is succeeding. Can anyone tell me how to get rid of these unless something is really failing? (It would be nice to fail if npm build exits with code 1, for instance.)
See screenshot below.
Screenshots are only really helpful for visual issues. You can use Pastebin or a similar service to share long logs if necessary.
According to this issue, Azure just follows the lead of whatever shell it's running code in. So, in Bash, execution continues past a failing command unless explicitly told to stop.
To easily change this behavior, you can add set -e (or set -o errexit) at the start of your script. The errexit option causes Bash to exit as soon as a command returns a non-zero exit code.
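For example, a minimal sketch (the commands are just illustrative):
set -e
false                  # returns a non-zero exit code, so the script stops here
echo "never reached"   # this line does not run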
Another worthy addition is the set -o pipefail option. Normally a pipeline like command1 | command2 returns the exit code of the last command; with pipefail it returns the exit code of the rightmost command that failed (or zero if every command succeeded), however long the chain is. So, if command1 fails above but command2 succeeds, the pipeline returns command1's failure code instead of overwriting it with success.
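A minimal illustration (the file name is made up):
set -o pipefail
grep "needle" missing.txt | sort   # grep fails because the file doesn't exist
echo "pipeline exit code: $?"      # non-zero with pipefail; 0 without it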
Finally, set -u (or -o nounset) causes an error when unset variables are encountered during parameter expansion. If running in a non-interactive shell, it will also exit.
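For instance, with an illustrative typo in a variable name:
set -u
target_dir="/tmp/build"
rm -rf "$targt_dir"   # unbound variable: Bash reports an error and a non-interactive shell exits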
Many scripts combine these by running set -euo pipefail at the beginning to stop them from running after the first problem is encountered.
If you want to explicitly force a bash script to exit you can use || and && accordingly. The expression command || exit will exit if the command fails and command && exit will exit if the command succeeds.
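For example (the commands are illustrative):
npm run build || exit 1               # stop the script if the build fails
grep -q "ERROR" build.log && exit 1   # stop if the log check finds a problem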
This seems to be a bug starting from npm v3.10.8. You can check this discussion.
As a workaround, you can add this section to package.json and run the install command with the --no-optional switch:
"optionalDependencies": {
"fsevents": "*"
},
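You would then run the install with the switch, for example:
npm install --no-optional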
Also, there's a possibility that your npm version is too old. You can use the Node.js tool installer task with version spec = 12.x to install newer Node.js and npm versions.
I am having a problem deploying an EB instance with a custom .ebextensions file. This is the relevant part in that file:
container_commands:
01_migrate:
command: 'python db_migrate.py'
02_npm_build:
command: 'npm install && npm run prod'
As you can see, these commands are for migrating my PostgreSQL database (via a Flask backend) and building my React .jsx files.
If I leave these commands out, the deployment completes perfectly well. However, once I put them in, the deployment stalls at this part of eb-activity.log forever (as far as I can tell):
[2017-04-10T02:39:24.106Z] INFO [3023] - [Application deployment app-613e-170409_223418#1/StartupStage0/EbExtensionPostBuild] : Starting activity...
I also get this message on the Health overview in the console (this is after 1 day):
Performing application deployment (running for 1 day).
I have also tried to deploy it without those container_commands, and then including it back after the successful initial deployment. Then I get the same error message as before in eb-activity.log, and I also get this message on the Health overview:
Incorrect application version "app-2a3d-170409_214923" (deployment 1). Expected version "app-2a3d-170409_214923" (deployment 1).
Which is very strange, because the two versions referenced are identical. I don't know what this means!
I found a solution.
Remove all your container_commands from .ebextensions/.
SSH into the instance and kill the stuck process with:
sudo killall python
Then deploy a new version without container_commands.
Then start debugging your container_commands, one by one, over SSH.
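For example, a rough sketch of that debugging loop (the application path is an assumption and varies by platform):
eb ssh                        # open a shell on the instance
cd /var/app/current           # assumed app directory; your platform's staging dir may differ
python db_migrate.py          # run each container_command by hand
npm install && npm run prod   # and watch which one hangs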
Have fun.
In my GitLab CI runner I'm setting up the YAML to call a Python script. Currently the script fails to connect to an HTTP server (this is expected behavior). The resulting exception is caught in the Python script and it exits with -1. However, the CI runner hangs indefinitely. What could the problem be?
The issue was a misunderstanding of how GitLab CI works. In the script section I was first booting up a system that the hanging executable ran against. When the executable fails, the runner expects its child processes to be cleaned up before it proceeds. I was expecting the CI to move on to after_script, which is where I was doing the clean-up.
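For anyone hitting something similar, one option is to do the clean-up inside the script step itself rather than relying on after_script. A minimal sketch, with hypothetical script names:
./boot_system.sh &                           # start the dependent system in the background
SYSTEM_PID=$!
trap 'kill "$SYSTEM_PID" 2>/dev/null' EXIT   # clean up the child even if the test fails
python run_test.py                           # may exit non-zero; the trap still fires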
My NAnt builds run fine locally on a developer machine, and locally on the command line of the Hudson server, but they will not run in my configured Hudson project.
The console output when I run a build via the Hudson web UI is similar to the following:
Started by user anonymous
[workspace] $ sh -xe C:\WINDOWS\TEMP\hudson8104357939096562606.sh
C:\WINDOWS\TEMP\hudson8104357939096562606.sh: fork failed: no error [1]
Archiving artifacts
Finished: SUCCESS
I have another project configured properly that runs fine, so I know the NAnt plugin is set up properly in Hudson and that NAnt is on the system path.
Can anyone suggest possible causes as to why this build won't run?
The problematic build may be configured to Execute a Shell script, rather than Execute a Windows Batch file.
Copy the command from the existing build step (the Execute Shell Script step) and remove the step. Then add a new Execute a Windows Batch File step and paste the command in.
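For example, the batch step might contain a plain NAnt invocation (the build file name here is just an assumption):
nant -buildfile:default.build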
Trigger the build and observe the results.
(I asked and answered this since it took me quite a while to figure out how I had misconfigured this particular build. Hopefully it'll save time or give ideas to other people troubleshooting automation.)