I am using JIRA 4.4.3 and Mercurial plugin version 4.1.0.
When I try to add a repository, I get the exception:
java.lang.reflect.InvocationTargetException
Steps taken:
Install JIRA and the Mercurial plugin.
The Mercurial plugin was manually installed as a JAR file, along with other plugins, in the following location:
root@jiraeng:/var/atlassian/application-data/jira/plugins/installed-plugins# ls -l
total 17712
-rw-r--r-- 1 jira jira 310943 2011-11-07 17:02 jira-hudson-plugin-1.0.jar
-rw-r--r-- 1 jira jira 2823919 2011-11-07 17:00 jira-javamelody-1.32.1.jar
-rw-r--r-- 1 jira jira 173892 2011-11-07 17:00 mercurial-jira-plugin-4.1.0.jar
-rw------- 1 jira jira 1244915 2011-11-07 12:47 plugin.2065440881919807685.jira-fisheye-plugin-3.2.3.jar
-rw-r--r-- 1 jira jira 3551278 2011-11-07 18:13 plugin.3295151510657051340.jira-greenhopper-plugin-5.8.2.jar
-rw-r--r-- 1 jira jira 1258615 2011-11-07 18:13 plugin.4689593656531536722.jira-fisheye-plugin-3.4.7.jar
-rw------- 1 jira jira 8540534 2011-11-07 12:48 plugin.8779650565160096778.jira-importers-plugin-3.5.2.jar
-rw-r--r-- 1 jira jira 216700 2011-11-07 17:00 trafficlight-1.2.2.jar
JIRA was restarted.
Log in as Administrator.
Go to http://jiraeng.company.com:8080/secure/AdminSummary.jspa
Click on "Mercurial Repositories" (http://jiraeng.company.com:8080/secure/ViewMercurialRepositories.jspa).
Add a Mercurial repository.
The Repository Root is set to http://jiraeng.company.com:8080/secure/ViewMercurialRepositories.jspa
This repository can be cloned without a password from the command line.
Click Add.
Get the exception shown above.
I have Ubuntu 20.04 and Python 3.10.6 on WSL.
I have been trying to install Airflow, and I get 'airflow: command not found' when I try to run 'airflow initdb' or 'airflow info'.
I have done
export AIRFLOW_HOME=~/airflow
and when I run
myname@LAPTOP-28BMMQV7:/root$ ls -l ~/.local/bin
I can see airflow in the list of files.
drwxrwxr-x 2 myname myname 4096 Nov 20 14:17 __pycache__
-rwxrwxr-x 1 myname myname 3472 Nov 20 14:17 activate-global-python-argcomplete
-rwxrwxr-x 1 myname myname 215 Nov 20 14:17 airflow
-rwxrwxr-x 1 myname myname 213 Nov 20 14:17 alembic
When I run this command to see where my Python is, I can see this:
myname@LAPTOP-28BMMQV7:/root$ ls -l /usr/bin/python*
lrwxrwxrwx 1 root root 10 Aug 18 11:39 /usr/bin/python3 -> python3.10
lrwxrwxrwx 1 root root 17 Aug 18 11:39 /usr/bin/python3-config -> python3.10-config
-rwxr-xr-x 1 root root 5912936 Nov 2 18:53 /usr/bin/python3.10
I also get warnings similar to this:
WARNING: The script pygmentize is installed in '/home/myname/.local/bin' which is not on PATH.
So I need to find a way to add this directory to PATH.
I have found the following advice in the Airflow documentation:
If the airflow command is not getting recognized (can happen on Windows when using WSL), then ensure that ~/.local/bin is in your PATH environment variable, and add it in if necessary:
PATH=$PATH:~/.local/bin
I am not quite sure how to do that.
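For reference, a minimal sketch of making this permanent (assuming a bash shell, the Ubuntu default, where ~/.bashrc is read by every new interactive shell) would be:

# append ~/.local/bin to PATH for every new shell
echo 'export PATH="$PATH:$HOME/.local/bin"' >> ~/.bashrc
# reload the configuration in the current shell
source ~/.bashrc
# airflow should now resolve
which airflow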
I also have MySQL Workbench/Server 8.0.31 installed and want to connect it to Airflow instead of SQLite. Can anybody refer me to a good guide on how to set it up correctly?
I have run pip install 'apache-airflow[mysql]'.
You were so close! I think your local Python (and your terminal whenever you tried airflow db init) was not able to see the Airflow you installed because it was not on its PATH.
There is a video series I go to whenever I need to install Airflow for a fellow coworker.
The first video shows how to install Airflow locally, and the second video shows how to write a DAG.
More importantly, the third video shows how to connect to a different database, just like you wanted.
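If you want to attempt the database switch directly, a minimal sketch of the usual approach (assuming a local MySQL server and a database named airflow; airflow_user and airflow_pass are placeholder credentials, not from your setup) is to point sql_alchemy_conn in airflow.cfg at MySQL:

# in airflow.cfg, under the [core] section (newer Airflow versions use a [database] section)
sql_alchemy_conn = mysql+mysqldb://airflow_user:airflow_pass@localhost:3306/airflow

and then re-run the initialization against the new database:

airflow db init

The mysql+mysqldb driver comes from the mysqlclient package, which 'apache-airflow[mysql]' installs.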
I'm using GitHub Actions to deploy to a Google Cloud Function. The steps in my workflow include:
steps:
  - name: "Checkout repository"
    uses: actions/checkout@v3

  # Setup Python so we can install Pipenv and generate requirements.txt.
  - name: "Setup Python"
    uses: actions/setup-python@v4
    with:
      python-version: '3.10'

  - name: "Install Pipenv"
    run: |
      pipenv requirements > requirements.txt
      ls -la
      cat requirements.txt

  - name: "Generate requirements.txt"
    run: pipenv requirements > requirements.txt

  - id: "auth"
    name: "Authenticate to Google Cloud"
    uses: "google-github-actions/auth@v0"
    with:
      workload_identity_provider: "..."
      service_account: "..."

  - id: "deploy"
    uses: "google-github-actions/deploy-cloud-functions@v0"
    with:
      name: "my-function"
      runtime: "python310"
Once I've generated the requirements.txt file, I want it to be deployed along with my application code (checked out in the first step above). The requirements.txt file gets generated during the build, but it never gets deployed (confirmed by looking at the source in Cloud Functions).
How can I ensure this file is deployed along with my application code?
Update 1:
Here is the output of listing the directory contents after generating requirements.txt:
total 56
drwxr-xr-x 6 runner docker 4096 Sep 6 20:38 .
drwxr-xr-x 3 runner docker 4096 Sep 6 20:38 ..
-rw-r--r-- 1 runner docker 977 Sep 6 20:38 .env.example
-rw-r--r-- 1 runner docker 749 Sep 6 20:38 .gcloudignore
drwxr-xr-x 8 runner docker 4096 Sep 6 20:38 .git
drwxr-xr-x 3 runner docker 4096 Sep 6 20:38 .github
-rw-r--r-- 1 runner docker 120 Sep 6 20:38 .gitignore
-rw-r--r-- 1 runner docker 139 Sep 6 20:38 Pipfile
-rw-r--r-- 1 runner docker 454 Sep 6 20:38 Pipfile.lock
-rw-r--r-- 1 runner docker 1276 Sep 6 20:38 README.md
drwxr-xr-x 5 runner docker 4096 Sep 6 20:38 app
drwxr-xr-x 2 runner docker 4096 Sep 6 20:38 data
-rw-r--r-- 1 runner docker 2169 Sep 6 20:38 main.py
-rw-r--r-- 1 runner docker 27 Sep 6 20:38 requirements.txt
Update 2: Showing the contents of requirements.txt reveals that it contains only:
-i https://pypi.org/simple
No dependencies are included. This could well be the problem, but I'm not yet sure why.
Update 3: The error shown in the deploy stage of the workflow is:
ModuleNotFoundError: No module named 'aiohttp'
This is because there is no requirements.txt file to install prior to running the function. aiohttp just happens to be the first dependency listed in my source code.
As explained by @ianyoung, the problem was with the Pipfile. The requirements.txt was effectively empty because no packages had been declared. The requirements file is a list of all of a project's dependencies, including the dependencies needed by those dependencies, together with the specific version of each one, specified with a double equals sign (==).
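For example, a minimal sketch of the fix (assuming aiohttp is the missing dependency, per the error in Update 3) is to declare the packages via pipenv so the lock file actually records them before generating requirements.txt:

# record the dependency in Pipfile and Pipfile.lock
pipenv install aiohttp
# regenerate the requirements file from the lock file
pipenv requirements > requirements.txt
# should now list aiohttp and its transitive dependencies, pinned with ==
cat requirements.txt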
Background: We produce a big Library Management System, with the server parts written in C, compiled on Linux SLES 15, and deployed to ~100 customers. The version in question was compiled on SLES 15 SP2 a year ago, and our internal IT department has meanwhile updated the Dev and QA hosts to SP3.
It turned out that with the update from SP2 to SP3, libcrypt.so moved to a new location, from /lib64 to /usr/lib64, and now contains a new version symbol:
# strings /usr/lib64/libcrypt.so.1.1.0 | grep XCRYPT_2.0
XCRYPT_2.0
# rpm -q -f /usr/lib64/libcrypt.so.1
libcrypt1-4.4.15-150300.4.2.41.x86_64
# zypper info libcrypt1
Information for package libcrypt1:
----------------------------------
Repository : SLE-Module-Basesystem15-SP3-Updates
Name : libcrypt1
Version : 4.4.15-150300.4.2.41
Arch : x86_64
If you now compile a server application on SP3 and ship it (as a fix for an urgent bug) to a customer who is still using SP2, the application is missing this symbol and no longer starts:
/opt/lib/sisis/avserver/batch/bin/prg/BASTVL: /lib64/libcrypt.so.1: version `XCRYPT_2.0' not found (required by /opt/lib/sisis/avserver/batch/bin/prg/BASTVL)
# strings /lib64/libcrypt.so.1 | grep XCR
# strings /usr/lib64/libcrypt.so.1 | grep XCR
strings: '/usr/lib64/libcrypt.so.1': No such file
# rpm -q -f /lib64/libcrypt.so.1
glibc-2.26-13.48.1.x86_64
# rpm -q -f /usr/lib64/libcrypt.so.1
error: file /usr/lib64/libcrypt.so.1: No such file or directory
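For completeness, the version requirements recorded in a binary can be inspected with standard binutils tools (a sketch, using our batch program from above as the example path):

# list the symbol-version dependencies the binary requires
readelf -V /opt/lib/sisis/avserver/batch/bin/prg/BASTVL | grep -B2 XCRYPT_2.0
# or show the versioned dynamic symbols directly
objdump -T /opt/lib/sisis/avserver/batch/bin/prg/BASTVL | grep XCRYPT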
In other words, our internal update from SP2 to SP3 makes it impossible to deliver fixes to customers still running SP2; they would have to update to SP3 as well before installing fixes, at least when libcrypt.so is involved.
Any comments or hints for a workaround?
In the end I compiled libxcrypt from source:
git clone https://github.com/besser82/libxcrypt.git
cd libxcrypt
./autogen.sh
./configure --prefix /usr/local/sisis-pap/libxcrypt
make
sudo make install
ls -l /usr/local/sisis-pap/libxcrypt/lib64
total 1300
-rw-r--r-- 1 root root 635620 26. Jul 14:09 libcrypt.a
-rwxr-xr-x 1 root root 945 26. Jul 14:09 libcrypt.la
lrwxrwxrwx 1 root root 17 26. Jul 14:09 libcrypt.so -> libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 17 26. Jul 14:09 libcrypt.so.1 -> libcrypt.so.1.1.0
-rwxr-xr-x 1 root root 681656 26. Jul 14:09 libcrypt.so.1.1.0
lrwxrwxrwx 1 root root 10 26. Jul 14:09 libowcrypt.a -> libcrypt.a
lrwxrwxrwx 1 root root 11 26. Jul 14:09 libowcrypt.so -> libcrypt.so
lrwxrwxrwx 1 root root 13 26. Jul 14:09 libowcrypt.so.1 -> libcrypt.so.1
lrwxrwxrwx 1 root root 10 26. Jul 14:09 libxcrypt.a -> libcrypt.a
lrwxrwxrwx 1 root root 11 26. Jul 14:09 libxcrypt.so -> libcrypt.so
and pointed our application, via LD_LIBRARY_PATH, to this version of libcrypt.so.1.
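A minimal sketch of that, using the install prefix from above (how the variable gets set in practice depends on the application's start scripts):

# make the dynamic linker pick up the self-built libcrypt.so.1 providing XCRYPT_2.0
export LD_LIBRARY_PATH=/usr/local/sisis-pap/libxcrypt/lib64:${LD_LIBRARY_PATH}
/opt/lib/sisis/avserver/batch/bin/prg/BASTVL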
I set up an environment with Eclipse Che v6.0.0 + OCP v3.6 (v3.6.173.0.96) by the same procedure as in the following issue:
Workspace of MultiUser Eclipse-che can not be started on OCP
I confirmed that the workspace pod can be started on OCP 3.6.
However, the workspace screen cannot be displayed in the browser.
The following message is displayed in the browser:
Could not start workspace wksp-vryu. Reason: Bootstrapping of machine dev-machine reached timeout
From the OpenShift web console, when I executed the following commands in the workspace pod, I noticed that the bootstrapper's size was wrong.
The result of executing the commands is as follows.
$ cd /tmp/bootstrapper
$ ls -al
total 32
drwxr-xr-x. 2 user root 69 Feb 26 05:32 .
drwxrwxrwt. 4 root root 49 Feb 26 05:32 ..
-rwxr-xr-x. 1 user root 250 Feb 26 05:32 bootstrapper
-rw-r--r--. 1 user root 100 Feb 26 05:32 bootstrapper.log
-rw-r--r--. 1 user root 23906 Feb 26 05:32 config.json
The content of the bootstrapper was as follows.
$ cat bootstrapper
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL /agent-binaries/linux_amd64/bootstrapper/bootstrapper was not found on this server.</p>
</body></html>
In the workspace pod, I manually ran wget with the URL from the CHE_INFRA_KUBERNETES_BOOTSTRAPPER_BINARY__URL environment variable of the Che server's pod, and the bootstrapper downloaded successfully:
$ wget http://<My Route name>-<My Project Name>.<My Prefix>/agent-binaries/linux_amd64/bootstrapper/bootstrapper
--2018-02-26 06:19:25-- http://<My Route name>-<My Project Name>.<My Prefix>/agent-binaries/linux_amd64/bootstrapper/bootstrapper
Resolving <My Route name>-<My Project Name>.<My Prefix> (<My Route name>-<My Project Name>.<My Prefix>)... <Che Server's Node IP>
Connecting to <My Route name>-<My Project Name>.<My Prefix> (<My Route name>-<My Project Name>.<My Prefix>)|<Che Server's Node IP>|:80... connected.
HTTP request sent, awaiting response... 200
Length: 6146825 (5.9M)
Saving to: ‘bootstrapper’
bootstrapper 100%[===================>] 5.86M --.-KB/s in 0.08s
2018-02-26 06:19:26 (73.1 MB/s) - ‘bootstrapper’ saved [6146825/6146825]
$ ls -l
total 6032
-rw-r--r--. 1 user root 6146825 Jan 31 15:07 bootstrapper
-rw-r--r--. 1 user root 49 Feb 26 06:15 bootstrapper.log
-rw-r--r--. 1 user root 23906 Feb 26 06:15 config.json
How can I solve this problem?
Please let me know if there is any information that would help.
CHE_INFRA_KUBERNETES_BOOTSTRAPPER_BINARY__URL was introduced only in Eclipse Che 6.1.0; see the related PR https://github.com/eclipse/che/pull/8559.
So the Che 6.0.0 server code expects CHE_INFRA_OPENSHIFT_BOOTSTRAPPER_BINARY__URL instead, and that variable has the wrong value.
I suppose the scripts you used are not compatible with Che 6.0.0. Please try to deploy Che 6.1.0. It is recommended to use deploy scripts of the same version as Che itself, so check out the matching tag.
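For example, a sketch of getting matching deploy scripts (assuming they are taken from the eclipse/che repository and that the release tag is named after the version):

git clone https://github.com/eclipse/che.git
cd che
# assumed tag name for the 6.1.0 release
git checkout 6.1.0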
I've been trying to get gcloud to a usable state on Travis, and I just can't seem to get past the gcloud auth activate-service-account step.
Whenever it runs, I just get the following error:
ERROR: (gcloud.auth.activate-service-account) PyOpenSSL is not available.
See https://developers.google.com/cloud/sdk/crypto for details.
I've tried apt-get and pip installs, both with export CLOUDSDK_PYTHON_SITEPACKAGES=1 set, and nothing seems to work.
Does anyone have any ideas or alternatives?
This is on Travis with Ubuntu 14.04.
Update
If I run the command from the docs on Travis, I get the following error:
usage: gcloud auth activate-service-account ACCOUNT --key-file KEY_FILE [optional flags]
ERROR: (gcloud.auth.activate-service-account) too few arguments
This made me think I had to have an ACCOUNT parameter, but after running the command locally with the unencrypted service account key, I know it's not needed (unless something has changed).
The only other thing I can think of is that the file isn't being decrypted correctly, or the command itself isn't happy on Travis:
- gcloud auth activate-service-account --key-file client-secret.json
Update 2
Just dumped a load of logs to figure out what is going on. (Massive shout out to @Vilas for his help.)
It looks like gcloud is already installed on the VM for Node, but it's a super old version.
$ which gcloud
/usr/bin/gcloud
$ gcloud --version
Google Cloud SDK 0.9.37
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.11.25
dns 2014.11.25
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
sql 2014.11.25
The next question is how I can get the PATH to find the right gcloud.
I've confirmed that the downloaded SDK installs to ${HOME}/google-cloud-sdk/bin by running this command:
$ ls -l ${HOME}/google-cloud-sdk/bin
total 24
drwxr-xr-x 2 travis travis 4096 Apr 27 21:44 bootstrapping
-rwxr-xr-x 1 travis travis 3107 Mar 28 14:53 bq
-rwxr-xr-x 1 travis travis 912 Apr 21 18:56 dev_appserver.py
-rwxr-xr-x 1 travis travis 3097 Mar 28 14:53 gcloud
-rwxr-xr-x 1 travis travis 3144 Mar 28 14:53 git-credential-gcloud.sh
-rwxr-xr-x 1 travis travis 3143 Mar 28 14:53 gsutil
I finally got a solution for it. Essentially Travis has a super old version of the gcloud SDK installed that was taking precedence over the downloaded SDK.
Steps to Help Diagnose
In your .travis.yml file add:
env:
  global:
    # Ensure the downloaded SDK is first on the PATH
    - PATH=${HOME}/google-cloud-sdk/bin:$PATH
    # Ensure the install happens without prompts
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
Then in your install step add the following:
install:
  # Make sure the SDK is downloaded - cache it once it's working
  # NOTE: Not sure how to update the SDK if it's cached
  - curl https://sdk.cloud.google.com | bash;
  # List the SDK contents to ensure it's downloaded
  - ls -l ${HOME}/google-cloud-sdk/bin
  # Ensure the correct gcloud is being used
  - which gcloud
  # Print the gcloud version and make sure it's something
  # reasonably up to date compared with:
  # https://cloud.google.com/sdk/downloads#versioned
  - gcloud --version
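With the downloaded SDK first on the PATH, the auth command from the question should then hit the new gcloud instead of /usr/bin/gcloud, for example (client-secret.json being the decrypted key file from above):

# runs against the freshly downloaded SDK now that PATH is fixed
- gcloud auth activate-service-account --key-file client-secret.json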