I'm trying to run gajira-cli to update/comment on a specific ticket in Jira using GitHub Actions with the following command:
jira edit [TICKET-ID] --noedit --comment="string"
However, it throws an EOF error. What did I miss in the command I'm using?
We are trying to deploy a SAPUI5 application via GitHub Actions. Right now we call the deploy command via npm run deploy in the GitHub Action. The step won't proceed since it is asking the user to confirm the deployment:
Start deployment (Y/n)?
However, the third-party script responsible for the deployment has no option to always default to "Y".
Is there a way to let GitHub Actions enter a "Y" in such cases? Or do you have another idea how to solve this problem?
Just use
yes | npm run deploy
The yes command repeatedly prints "y" to standard output, so the prompt is answered automatically when deploying.
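For example, a workflow step using this approach might look roughly like this (the step name and surrounding job context are illustrative, not taken from the question):
- name: Deploy SAPUI5 app
  run: yes | npm run deploy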
I was doing the same kind of thing with Django. I tried this and it worked.
echo 'yes' | python manage.py collectstatic
The prompt looked like this:
You have requested to collect static files at the destination
location as specified in your settings:
/home/path-to-staticfiles/
This will overwrite existing files!
Are you sure you want to do this?
Type 'yes' to continue or 'no' to cancel:
I was running the command from my GitHub Actions workflow, and the 'yes' is automatically supplied to the prompt.
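In workflow terms, the step might look roughly like this (the step name and any preceding checkout/setup steps are assumptions, not shown in the question):
- name: Collect static files
  run: echo 'yes' | python manage.py collectstatic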
I am setting a couple of env variables at build time when deploying on Vercel using the "amondnet/vercel-action@v19.0.1+3" GitHub action.
Everything works fine when I set just one variable, but when I set multiple variables as described in Vercel's documentation here: https://vercel.com/docs/cli#commands/overview/unique-options/build-env, I get the following error when running the action:
Error! The specified file or directory "PR_NUMBER=423]" does not exist.
The command the action is trying to run is as follows:
/usr/local/bin/npx vercel --build-env [NODE_ENV=pr PR_NUMBER=423] -t *** -m
It should pass --build-env once per variable (the flag is repeated; -b is the short form):
/usr/local/bin/npx vercel --build-env NODE_ENV=pr --build-env PR_NUMBER=423 -b KEY=value
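If you are passing these through the action rather than calling the CLI directly, the repeated flags go into the action's arguments. As a rough sketch, assuming the action exposes a vercel-args input (check its README) and that the token is stored in a secret named VERCEL_TOKEN (both assumptions):
- uses: amondnet/vercel-action@v19.0.1+3
  with:
    vercel-token: ${{ secrets.VERCEL_TOKEN }}
    vercel-args: '--build-env NODE_ENV=pr --build-env PR_NUMBER=423'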
Trying to create a Node.js app in OpenShift from the terminal, like this:
./oc new-app https://j4nos@bitbucket.org/j4nos/nodejs.git
The source code is in BitBucket in a private account; how do I set credentials? Once it asked for a password, but not again. How can I set credentials?
I added an annotated secret from the GUI: repo-at-bitbucket
I have read the Private Git Repositories: Part 2A tutorial. Strangely, for an HTTPD app there is a Source Secret field to select the secret, but not when the Node.js + MongoDB combo is selected. Why?
Ahh... you need to select the pure Node.js app.
You need to authenticate to the private git repository. This can be done a few different ways. I would suggest taking a few minutes and reading this blog series, which outlines the different methods you can take.
https://blog.openshift.com/private-git-repositories-part-1-best-practices/
After reading through the initial few posts explaining the concepts and doing it with GitHub, only then look at the BitBucket example.
https://blog.openshift.com/private-git-repositories-part-5-hosting-repositories-bitbucket/
Those GitHub examples have more explanation, which will make the BitBucket example easier to understand.
The likely reason you were prompted for the password when running oc new-app is that you used:
oc new-app https://j4nos@bitbucket.org/j4nos/nodejs.git
Specifically, you didn't specify an S2I builder to use. As a result, oc new-app will try to check out the repo locally and analyse it to work out what language it uses. This is why it prompted for the password separately.
It is better to specify the builder name in the command:
oc new-app nodejs~https://j4nos@bitbucket.org/j4nos/nodejs.git
This is an abbreviated form of the command and is the same as running:
oc new-app --strategy=source --image-stream nodejs --code https://j4nos@bitbucket.org/j4nos/nodejs.git
If you specify the builder, it already knows what to use and doesn't analyse the code, so it will not prompt for the password; plus, you wouldn't need the user in the URI.
Either way, when building in OpenShift you still need the basic-auth secret, and you should annotate it so OpenShift knows to use the secret for that build.
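As a rough sketch of that last step, assuming basic-auth over HTTPS (the password placeholder is illustrative; the secret name matches the one mentioned in the question):
oc create secret generic repo-at-bitbucket \
    --type=kubernetes.io/basic-auth \
    --from-literal=username=j4nos \
    --from-literal=password=<bitbucket-password>
oc annotate secret repo-at-bitbucket \
    'build.openshift.io/source-secret-match-uri-1=https://bitbucket.org/j4nos/*'
oc new-app nodejs~https://bitbucket.org/j4nos/nodejs.git
With the annotation in place, builds whose source URI matches the pattern pick up the secret automatically, so the user no longer needs to appear in the URI.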
We are trying to automate the build and deployment of containers to projects created in OpenShift v3.3. From the documentation I can see that we will need to leverage service accounts to do this, but the documentation is hard to follow and the examples I have found in blogs don't complete the task. My workflow is as follows, with example oc commands I use:
BUILDER_TOKEN='xxx'
DEPLOYER_TOKEN='xxx'
# building and pushing the image works as expected
docker build -t registry.xyz.com/want/want:latest .
docker login --username=<someuser> --password=${BUILDER_TOKEN} registry.xyz.com
docker push registry.xyz.com/<repo>/<image>:<tag>
# This fails with the errors below
oc login https://api.xyz.com --token=${DEPLOYER_TOKEN}
oc project <someproject>
oc new-app registry.xyz.com/<repo>/<image>:<tag>
Notice I log in to the REST API interface, select the project and create the app, but this fails with the following errors:
error: User "system:serviceaccount:want:deployer" cannot create deploymentconfigs in project "default"
error: User "system:serviceaccount:want:deployer" cannot create services in project "default"
Any ideas?
Service accounts only have permission in their owning project by default. You would need to grant deployer access to deploy in other projects.
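For instance, to let that service account deploy into the project named in the error (the role and target project here are only a sketch; adjust both to your actual setup):
oc policy add-role-to-user edit system:serviceaccount:want:deployer -n default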
OK, so it seems that using a service account to accomplish this is not the best way to go about things. This is not helped by the documentation. The use case above is very common, and the correct approach is to simply invoke new-app with the image name and corresponding tag:
oc new-app ${APP}:${TAG}
There is no need to mess around with service accounts.
We are monitoring our production environments using Zabbix 2.4. New instances are provisioned with Ansible, which sets up a Zabbix agent. What we need is for hosts to be removed from the server when they have been terminated, so that we only receive messages about running instances becoming unavailable.
To do this I wrote a Python script that takes a Zabbix host name as an argument, checks whether that host is on the list of running instances by calling the AWS CLI, and deletes the host if it's not on the "not terminated" list.
I put the script in /usr/bin/delete_host.py and configured an action to call it when an "Agent not available" trigger is activated. This is what the Operations tab looks like: link
And here is the Action Log: link
I've tried a couple of ways to write the command, and also placed the script in the ExternalScripts directory. I turned on debug logs for the server, but nothing in them mentioned an error; in fact, they only showed messages that the command is being executed and everything is OK, yet the host is still there. When I copy the command from the Action Log and execute it manually, everything works fine.
At this point I am really out of options on how to troubleshoot this further. I disabled SELinux and added the zabbix user to the sudoers file with NOPASSWD. I can't find anything in any logs. Is it even possible to execute non-messaging scripts with Zabbix?
Try to write the script so that it prints "OK" or 0 if it ran properly, and the error message or error code if it fails. Run the script using an active Zabbix agent item on the Zabbix server host (use the system.run item key). That way you'll be able to create a trigger that raises an error if the script fails to run.
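As a rough sketch, such an item key might look like this (the interpreter and script paths come from the question, the host-name argument is a placeholder, and the agent on the Zabbix server host would need EnableRemoteCommands=1 for system.run items):
system.run[/usr/bin/python /usr/bin/delete_host.py <zabbix-host-name>]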
You can also just schedule it using a different tool such as Rundeck.
The script does not have to be in the ExternalScripts directory; that is only required for items of type "external check". The operation screenshot you linked to uses a relative path to delete_host.py, and that is almost guaranteed not to work. Your action log screenshot shows a few entries with /usr/bin/ prefixed, which is better.
At least for testing, make sure to specify full path to everything, including the python binary, for example /full/path/to/python /full/path/to/delete_host.py.
You also had a few entries that redirected all output to a file in /tmp/, but you didn't mention what got logged in there. Please use that approach and check the potential error messages as well.
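For example, the remote command could be written with full paths and output capture, something like this (the {HOST.HOST} macro is an assumption about how the host name gets passed; use whichever macro your script expects):
/usr/bin/python /usr/bin/delete_host.py {HOST.HOST} > /tmp/delete_host.log 2>&1
Checking /tmp/delete_host.log after the action fires should show any Python traceback or permission error.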
Remote commands from "Actions" are run using the key system.run[command,nowait]. This "nowait" form returns 1 irrespective of the command result.
Try running system.run with the "wait" parameter and see what the actual error is under "Latest data".
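A quick way to test the same command from the Zabbix server is zabbix_get against the agent, assuming remote commands are enabled on that agent (the host and the host-name argument below are placeholders; the paths come from the question):
zabbix_get -s <agent-host> -k 'system.run[/usr/bin/python /usr/bin/delete_host.py <zabbix-host-name>]'
This runs the command in "wait" mode, so the actual output or error message is returned directly.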
For me the error was "sudo: sorry, you must have a tty to run sudo", even though I had "Defaults:zabbix !requiretty" in the sudoers file. I commented out the "Defaults requiretty" line in /etc/sudoers and it worked.