I've been trying to get gcloud to a usable state on Travis and I just can't seem to get past the gcloud auth activate-service-account step.
Whenever it runs I just get the following error:
ERROR: (gcloud.auth.activate-service-account) PyOpenSSL is not available.
See https://developers.google.com/cloud/sdk/crypto for details.
I've tried both apt-get and pip installs, with export CLOUDSDK_PYTHON_SITEPACKAGES=1 set, and nothing seems to work.
Does anyone have any ideas or alternatives?
This is on Travis's Ubuntu 14.04 image.
Update
If I run the command from the docs on Travis I get the following error:
usage: gcloud auth activate-service-account ACCOUNT --key-file KEY_FILE [optional flags]
ERROR: (gcloud.auth.activate-service-account) too few arguments
This made me think I had to pass an ACCOUNT parameter, but after running the command locally with the un-encrypted service account key, I know it's not needed (unless something has changed).
The only other thing I can think of is that the file isn't being decrypted correctly or the command itself isn't happy on Travis:
- gcloud auth activate-service-account --key-file client-secret.json
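For reference, the decryption step earlier in the build is the standard one that travis encrypt-file generates, along these lines (the hex ID in the variable names is a placeholder):
- openssl aes-256-cbc -K $encrypted_0a1b2c3d4e5f_key -iv $encrypted_0a1b2c3d4e5f_iv -in client-secret.json.enc -out client-secret.json -d
If that step produced a corrupt file the auth command would fail, though presumably not with a PyOpenSSL error.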
Update 2
Just dumped a load of logs to figure out what is going on. (Massive shout out to @Vilas for his help.)
It looks like gcloud is already installed on the Node VM, but it's a super old version.
$ which gcloud
/usr/bin/gcloud
$ gcloud --version
Google Cloud SDK 0.9.37
bq 2.0.18
bq-nix 2.0.18
compute 2014.11.25
core 2014.11.25
core-nix 2014.11.25
dns 2014.11.25
gcutil 1.16.5
gcutil-nix 1.16.5
gsutil 4.6
gsutil-nix 4.6
sql 2014.11.25
The next question is: how can I get the PATH to find the right gcloud?
I've confirmed that the downloaded SDK installs to ${HOME}/google-cloud-sdk/bin by running this command:
$ ls -l ${HOME}/google-cloud-sdk/bin
total 24
drwxr-xr-x 2 travis travis 4096 Apr 27 21:44 bootstrapping
-rwxr-xr-x 1 travis travis 3107 Mar 28 14:53 bq
-rwxr-xr-x 1 travis travis 912 Apr 21 18:56 dev_appserver.py
-rwxr-xr-x 1 travis travis 3097 Mar 28 14:53 gcloud
-rwxr-xr-x 1 travis travis 3144 Mar 28 14:53 git-credential-gcloud.sh
-rwxr-xr-x 1 travis travis 3143 Mar 28 14:53 gsutil
I finally got a solution for it. Essentially Travis has a super old version of the gcloud SDK installed that was taking precedence over the downloaded SDK.
Steps to Help Diagnose
In your .travis.yml file add:
env:
  global:
    # Ensure the downloaded SDK is first on the PATH
    - PATH=${HOME}/google-cloud-sdk/bin:$PATH
    # Ensure the install happens without prompts
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
Then in your install step add the following:
install:
  # Make sure the SDK is downloaded - cache once it's working
  # NOTE: Not sure how to update the SDK if it's cached
  - curl https://sdk.cloud.google.com | bash;
  # List the SDK contents to ensure it's downloaded
  - ls -l ${HOME}/google-cloud-sdk/bin
  # Ensure the correct gcloud is being used
  - which gcloud
  # Print the gcloud version and make sure it's something
  # reasonably up to date compared with:
  # https://cloud.google.com/sdk/downloads#versioned
  - gcloud --version
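With the downloaded SDK now first on the PATH, the original auth command should work unchanged; a sketch of the remaining step (client-secret.json decrypted earlier in the build, as in the question):
before_script:
  # Activate the service account using the decrypted key
  - gcloud auth activate-service-account --key-file client-secret.json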
Related
I have Ubuntu 20.04 and Python 3.10.6 on WSL.
I have been trying to install Airflow, and I'm getting 'airflow: command not found' when I try 'airflow initdb' or 'airflow info'.
I have done
export AIRFLOW_HOME=~/airflow
and when I run
myname@LAPTOP-28BMMQV7:/root$ ls -l ~/.local/bin
I can see airflow in the list of files.
drwxrwxr-x 2 myname myname 4096 Nov 20 14:17 __pycache__
-rwxrwxr-x 1 myname myname 3472 Nov 20 14:17 activate-global-python-argcomplete
-rwxrwxr-x 1 myname myname 215 Nov 20 14:17 airflow
-rwxrwxr-x 1 myname myname 213 Nov 20 14:17 alembic
When I run this command to see where my Python is, I see this:
myname@LAPTOP-28BMMQV7:/root$ ls -l /usr/bin/python*
lrwxrwxrwx 1 root root 10 Aug 18 11:39 /usr/bin/python3 -> python3.10
lrwxrwxrwx 1 root root 17 Aug 18 11:39 /usr/bin/python3-config -> python3.10-config
-rwxr-xr-x 1 root root 5912936 Nov 2 18:53 /usr/bin/python3.10
I also get warnings similar to this:
WARNING: The script pygmentize is installed in '/home/myname/.local/bin' which is not on PATH.
So I need to find a way to add this directory to PATH.
I have found the following advice in the Airflow documentation:
If the airflow command is not getting recognized (can happen on Windows when using WSL), then ensure that ~/.local/bin is in your PATH environment variable, and add it in if necessary:
PATH=$PATH:~/.local/bin
I am not quite sure how to do that.
I also have MySQL Workbench/Server 8.0.31 installed and want to connect it to Airflow instead of SQLite. Can anybody refer me to a good guide on how to set it up correctly?
I have run pip install 'apache-airflow[mysql]'.
You were so close! I think your local Python (and your terminal, whenever you tried airflow db init) was not able to see the airflow you installed on its PATH.
There is a video series I go to whenever I need to install Airflow for a fellow coworker.
The first video shows how to install Airflow locally, the second shows how to write a DAG, and, most importantly for you, the third shows how to connect to a different database, just as you wanted.
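In the meantime, a minimal sketch of the PATH fix (assuming bash on WSL, and that airflow really is in ~/.local/bin as your ls output shows):
# persist the PATH change across sessions
echo 'export PATH="$PATH:$HOME/.local/bin"' >> ~/.bashrc
source ~/.bashrc
which airflow   # should now print /home/myname/.local/bin/airflow
airflow info    # sanity check
For MySQL, after pip install 'apache-airflow[mysql]', the relevant setting in airflow.cfg is sql_alchemy_conn (under [database] in recent Airflow versions); a hypothetical value, where user, password, and airflow_db are placeholders:
sql_alchemy_conn = mysql+mysqldb://user:password@localhost:3306/airflow_db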
I use a shutdown-script to back up the files on an instance before it is shut down.
In this shutdown-script, the gsutil tool is used to send files to a bucket at google cloud storage.
/snap/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
It worked well for a long time, but recently the error below started occurring.
If I run the command manually, it works well. It seems that there is something wrong with systemd's job management.
Could anyone give me a hint?
INFO shutdown-script: /snap/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
Apr 25 03:00:41 instance-XXXXXXXXXXX systemd[1]: Requested transaction contradicts existing jobs: Transaction for snap.google-cloud-sdk.gsutil.d027e14e-3905-4c96-9e42-c1f5ee9c6b1d.scope/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction).
Apr 25 03:00:41 instance-XXXXXXXXXXX shutdown-script: INFO shutdown-script: internal error, please report: running "google-cloud-sdk.gsutil" failed: cannot create transient scope: DBus error "org.freedesktop.systemd1.TransactionIsDestructive": [Transaction for snap.google-cloud-sdk.gsutil.d027e14e-3905-4c96-9e42-c1f5ee9c6b1d.scope/start is destructive (poweroff.target has 'start' job queued, but 'stop' is included in transaction).]
Update gsutil with the -f option:
gsutil update -f
If the above command doesn’t work, then try the command below:
sudo apt-get update && sudo apt-get --only-upgrade install google-cloud-sdk
Update the guest environment and try to shut down the instance. Use the link below as a reference for updating the guest environment.
https://cloud.google.com/compute/docs/images/install-guest-environment#update-guest
If you are still facing issues, do a forceful shutdown:
sudo poweroff -f
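If the update doesn't fix it, another option is to move the backup out of the shutdown-script and into a systemd unit whose stop action runs during shutdown, avoiding the snap wrapper's transient scope entirely. A minimal sketch, assuming the apt-installed SDK provides /usr/bin/gsutil (the unit name is a placeholder):
# /etc/systemd/system/gcs-backup.service
[Unit]
Description=Sync notebook to GCS before shutdown
Wants=network-online.target
After=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# no-op at boot; the rsync runs when the unit is stopped at shutdown
ExecStart=/bin/true
ExecStop=/usr/bin/gsutil -m rsync -d -r /home/ganjin/notebook gs://ganjin-computing/XXXXXXXXXXX/TEST-202104/notebook
TimeoutStopSec=300

[Install]
WantedBy=multi-user.target
Enable it once with sudo systemctl daemon-reload && sudo systemctl enable --now gcs-backup.service; because the unit is ordered After=network-online.target, its stop action runs before the network goes down.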
I would really appreciate some help. I'm new to Linux and yum, and I recently purchased a VPS that only has MySQL 5.6. I want to update to 5.7 and I'm stuck (pulling my hair out over here). I've gotten to the point where I enter the command createrepo, but there doesn't seem to be any directory created by it, as I read there should be. The server is running CentOS 6.9. Here are the steps I followed:
Downloaded the Red Hat Enterprise Linux 6 / Oracle Linux 6 (Architecture Independent) RPM package from https://dev.mysql.com/downloads/repo/yum/.
Used an FTP client to upload the file to my server in a directory called downloads.
In PuTTY, went to the downloads folder and typed ls; I can see the file is there. It appears in red as "mysql57-community-release-el6-11.noarch.rpm".
Typed createrepo /downloads while in the downloads directory, and I get:
Spawning worker 0 with 1 pkgs
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Sqlite DBs complete
According to the documentation I read, it's supposed to create another directory at this point, but when I type ls there is still just the file there. When I run the command again as createrepo /downloads -v I get:
Spawning worker 0 with 1 pkgs
Worker 0: reading mysql57-community-release-el6-11.noarch.rpm
Workers Finished
Gathering worker results
Saving Primary metadata
Saving file lists metadata
Saving other metadata
Generating sqlite DBs
Starting other db creation: Wed Mar 21 06:30:38 2018
Ending other db creation: Wed Mar 21 06:30:38 2018
Starting filelists db creation: Wed Mar 21 06:30:38 2018
Ending filelists db creation: Wed Mar 21 06:30:38 2018
Starting primary db creation: Wed Mar 21 06:30:38 2018
Ending primary db creation: Wed Mar 21 06:30:38 2018
Sqlite DBs complete
I've also tried the localinstall command on the file in the directory with "yum localinstall mysql57-community-release-el6-11.noarch.rpm" and I get:
Loading "fastestmirror" plugin
Loading "security" plugin
Loading "universal-hooks" plugin
Config time: 0.033
Yum Version: 3.2.29
Setting up Local Package Process
rpmdb time: 0.000
Examining mysql57-community-release-el6-11.noarch.rpm: mysql57-community-release-el6-11.noarch
Excluding mysql57-community-release-el6-11.noarch
Nothing to do
I think the file may have been installed already, kind of inadvertently, as I have been struggling with this for hours. But when I call createrepo, shouldn't it create the repository using the RPM file?
Thanks for any help!
It may be easier to delete the file you created and use the steps below (taken from a production box).
The repo uses the name mysql-community-*; that's why you couldn't find the package mysql-server via yum.
Create a new file /etc/yum.repos.d/mysql-community.repo and add the following:
[mysql-connectors-community]
name=MySQL Connectors Community
baseurl=http://repo.mysql.com/yum/mysql-connectors-community/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[mysql-tools-community]
name=MySQL Tools Community
baseurl=http://repo.mysql.com/yum/mysql-tools-community/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[mysql57-community]
name=MySQL 5.7 Community Server
baseurl=http://repo.mysql.com/yum/mysql-5.7-community/el/6/$basearch/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
Then do:
yum clean all
yum upgrade mysql-community-server
This should install and upgrade you to 5.7. Remember to run mysql_upgrade after MySQL has restarted.
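If the upgrade can't find the packages, a quick sanity check that the repo file took effect (the three mysql*-community repos above should appear):
yum repolist enabled | grep -i mysql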
I've recently started using Google Compute Engine for some of my projects, but my startup script doesn't seem to run. The VM has the startup-script metadata set, and the script works fine when I run it manually with:
sudo google_metadata_script_runner --script-type startup
Here is what I am trying to run on startup:
#!/bin/bash
sudo apt-get update
sudo rm -f Eve.jar
sudo rm -f GameServerStatus.jar
wget <URL>/Eve.jar
wget <URL>/GameServerStatus.jar
sudo chmod 7777 Eve.jar
sudo chmod 7777 GameServerStatus.jar
screen -dmS Eve sh Eve.sh
screen -dmS PWISS sh GameServerStatus.sh
There are no errors in the log either; it just seems to stop at the chmod or screen commands. Any ideas?
Thanks!
To add to kangbu's answer:
Checking the logs in Container-Optimized OS with
sudo journalctl -u google-startup-scripts.service
showed that the script could not find the user. After a long time of debugging, I finally added a delay before the sudo, and now it works. It seems the user is not yet registered when the script runs.
#! /bin/bash
sleep 10 # wait...
cut -d: -f1 /etc/passwd > /home/user/users.txt # make sure the user exists
cd /home/user/project # cd does not work after sudo, do it before
sudo -u user bash -c '\
source /home/user/.bashrc && \
<your-task> && \
date > /home/user/startup.log'
I have the same problem @Brina mentioned. I set up the metadata key startup-script with a value like:
touch a
ls -al > test.txt
When I ran the script above with sudo google_metadata_script_runner --script-type startup, it worked perfectly. However, if I reset my VM instance, the startup script didn't work. So I checked the startup script logs:
...
Jul 3 04:30:37 kbot-6 ntpd[1514]: Listen normally on 5 eth0 fe80::4001:aff:fe8c:7 UDP 123
Jul 3 04:30:37 kbot-6 ntpd[1514]: peers refreshed
Jul 3 04:30:37 kbot-6 ntpd[1514]: Listening on routing socket on fd #22 for interface updates
Jul 3 04:30:38 kbot-6 startup-script: INFO Starting startup scripts.
Jul 3 04:30:38 kbot-6 startup-script: INFO Found startup-script in metadata.
Jul 3 04:30:38 kbot-6 startup-script: INFO startup-script: Return code 0.
Jul 3 04:30:38 kbot-6 startup-script: INFO Finished running startup scripts.
Yes, they found the startup-script and ran it. I guessed it had executed as another user. I changed my script like this:
pwd > /tmp/pwd.txt
whoami > /tmp/whoami.txt
The result is:
myuserid@kbot-6:/tmp$ cat pwd.txt whoami.txt
/
root
Yes, it was executed in the / directory as the root user. Finally, I changed my script to sudo -u myuserid bash -c ..., which runs it as the specified user; a sketch follows.
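A minimal sketch of the changed script (myuserid and the paths are placeholders):
#!/bin/bash
# startup scripts run as root in /, so switch to the intended user explicitly
sudo -u myuserid bash -c 'cd /home/myuserid && pwd > /tmp/pwd.txt && whoami > /tmp/whoami.txt'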
Go to the VM instances page.
Click on the instance for which you want to add a startup script.
Click the Edit button at the top of the page.
Under Custom metadata, click Add item.
Add your startup script using one of the following keys:
startup-script: Supply the startup script contents directly with this key.
startup-script-url: Supply a Google Cloud Storage URL to the startup script file with this key.
This works. The documentation covers both new and existing instances; see GCE Start Up Script.
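The same metadata can also be set from the CLI; a sketch, where INSTANCE_NAME and startup.sh are placeholders:
gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file startup-script=startup.sh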
Startup script output is written to the following log files:
CentOS and RHEL: /var/log/messages
Debian: /var/log/daemon.log
Ubuntu 14.04, 16.04, and 16.10: /var/log/syslog
On Ubuntu 12.04, SLES 11 and 12, and all images older than v20160606:
sudo /usr/share/google/run-startup-scripts
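To pull only the startup-script entries out of those logs, for example on Ubuntu (path from the list above):
grep startup-script /var/log/syslog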
I think that you do not need sudo, and the chmod 7777 should be 777.
Also, a cd (or at least a pwd) at the beginning might be useful.
Finally, log to a text file in order to know where the script may fail.
I'm trying to debug a page in a web app that keeps crashing Chrome (the "Aw, snap!" error). I've enabled/disabled automatic crash reporting and tried logging with google-chrome --enable-logging --v=1 (as well as various levels of verbosity), and all I get is a "crash dump ID" in chrome_debug.log. chrome://crashes shows all of the dump IDs, but no actual dump file.
I see other questions referring to reading the dump files, but I can't find the dump files themselves (just the ID).
Grepping for the crash ID in /tmp and ~/.config/google-chrome/ turns up nothing, but the ~/.config/google-chrome/chrome_debug.log shows that something was sent:
--2015-04-06 11:10:00-- https://clients2.google.com/cr/report
Resolving clients2.google.com (clients2.google.com)... 74.125.228.224, 74.125.228.225, 74.125.228.231, ...
Connecting to clients2.google.com (clients2.google.com)|74.125.228.224|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘/dev/fd/3’
0K
Crash dump id: 7dac9d5d58258264
Any ideas on where to find the actual file/data that's sent?
Details:
Chrome version: 40.0.2214.111 (Official Build)
Linux Mint 16 (Petra)
Edit: Some extra info:
curtis@localhost:~$ tail -n 5 uploads.log && echo $(pwd)
1428584493,ddc357e4600a49e6
1428584497,7ac16455c152381a
1428589439,d00ad6f5e6426f3d
1428934450,66b3f722430511e8
1428939578,7a2efc2b681515d1
/home/curtis/.config/google-chrome/Crash Reports
curtis@localhost:~$ ll -a
total 12
drwx------ 2 curtis curtis 4096 Apr 6 11:32 .
drwx------ 9 curtis curtis 4096 Apr 13 11:43 ..
-rw------- 1 curtis curtis 3291 Apr 13 11:39 uploads.log
Automatic reporting is enabled...
Thanks!
The *.dmp files are stored in /tmp/, and this has nothing to do with the "Automatic crash reporting" checkbox. The file is also not related to the hash stored in ~/.config/google-chrome/
In ~/.config/google-chrome/Crash Reports/uploads.log:
1429189585,5bddea9f7433e3da
Using the solution below, the crash dump file for this particular report was:
chromium-renderer-minidump-2113a256de381bce.dmp
Solution:
root@localhost:~$ mkdir /tmp/misc && chmod 777 /tmp/misc
root@localhost:~$ cd /tmp
root@localhost:~$ watch -n 1 'find . -mmin -1 -exec cp {} /tmp/misc/ \;'
Then, as a regular user (not root):
google-chrome --enable-logging --v=1
Once you see files created by the watch command, run:
root@localhost:~$ ls -l
-rw------- 1 root root 230432 Apr 16 09:06 chromium-renderer-minidump-2113a256de381bce.dmp
-rw------- 1 root root 230264 Apr 16 09:12 chromium-renderer-minidump-95889ebac3d8ac81.dmp
-rw------- 1 root root 231264 Apr 16 09:13 chromium-renderer-minidump-da0752adcba4e7ca.dmp
-rw------- 1 root root 236246 Apr 16 09:12 chromium-upload-56dc27ccc3570a10
-rw------- 1 root root 237247 Apr 16 09:13 chromium-upload-5cebb028232dd944
Now you can use breakpad to work on the *.dmp files.
Google Chrome - Crash Dump Location
To generate the crash dump locally, run:
CHROME_HEADLESS=1 google-chrome
The .dmp files are then stored in ~/.config/google-chrome/Crash Reports
Produce Stack Trace
Check out and add depot_tools to your PATH (used to build breakpad)
git clone https://chromium.googlesource.com/chromium/tools/depot_tools
export PATH=`pwd`/depot_tools:"$PATH"
Check out and build breakpad (using fetch from depot_tools)
mkdir breakpad && cd breakpad
fetch breakpad
cd src
./configure && make
To produce stack trace without symbols:
breakpad/src/processor/minidump_stackwalk -m /path/to/minidump
More here: https://www.chromium.org/developers/decoding-crash-dumps
Personally Preferred Method
Enable crash reporting:
Chrome menu > Settings > Show advanced settings > Tick "Automatically send usage statistics and crash reports to Google"
Go to chrome://crashes > File bug > takes you to crbug.com > complete the report, leaving the auto-added report_id field unchanged.
Someone from the Chrome/Chromium team will follow up. They can provide you with your stack trace and help resolve the issue.