I just built a Compute Engine instance with an Ubuntu 16.04 VM, connected to it in Chrome, and installed python3, the ipython kernel, and some libraries so that I could run Jupyter notebooks. Here are the commands:
sudo apt-get update
sudo apt-get install python3-setuptools python3-dev libzmq-dev
sudo easy_install3 pip
sudo pip3 install ipython pyzmq jinja2 tornado jsonschema
sudo pip3 install jupyter
sudo ipython kernel install
sudo pip3 install numpy scipy scikit-learn pandas matplotlib
The last command gave this message, twice:
The directory '/home/allennugent/.cache/pip' or its parent directory
is not owned by the current user and caching wheels has been disabled.
check the permissions and owner of that directory. If executing pip
with sudo, you may want sudo's -H flag.
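For what it's worth, the invocation the warning is hinting at would look like this (the same install, with sudo's -H flag so pip uses root's home directory for its cache):
sudo -H pip3 install numpy scipy scikit-learn pandas matplotlib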
Hoping this wouldn't be a show-stopper, I went on to set up a firewall rule with Source IP ranges = '0.0.0.0/0' and Protocols and ports = 'tcp:8888'. Then I launched jupyter:
jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser &
This created a hyperlink to the notebook. When I clicked the link (in the Serial Console window), a browser page opened with:
Error: Unauthorized You are currently logged in as [my gmail address]
which does not have access to Cloud Shell 3118611.
According to the Cloud Platform dashboard, I was logged in under the same account name when I set up the Compute Engine instance, so I don't know what is going wrong with the authorization.
Am I missing something?
I had this problem too, and for me it was solved by typing the VM's external IP and port into the address bar without http:// or https:// preceding it,
e.g. xx.xxx.xxx.xxx:8888
Then, for my notebook, I had to copy the access token (the token=[xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx] part, just the characters in square brackets here) from the terminal into the password prompt when Jupyter appeared in the browser.
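If copying the token each time gets tedious, here is a minimal sketch of setting a password instead, assuming a notebook version recent enough to have the password subcommand:
jupyter notebook password
jupyter notebook --ip=0.0.0.0 --port=8888 --no-browser &
The first command prompts for a password and stores a hash under ~/.jupyter/, and the browser then asks for that password instead of the token.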
I'm trying to install the Google App Engine.
I have Cloud SDK v260.0.0 and Python 2.7.9
When I run the following command from cmd:
http://code.google.com/appengine/gcloud components install app-engine-python
it gives me the error:
'http:' is not recognized as an internal or external command,
operable program or batch file.
What is going on?
I have Windows 10 and I'm running from the Directory:
C:\Users\MyName\AppData\Local\Google\Cloud SDK
Here is the guide to installing App Engine for Python 2.
Since you mentioned that Python and the Google Cloud SDK are already installed, you can start from step 3 and run the following command in your terminal to install the gcloud component:
gcloud components install app-engine-python
As Yanan C stated, you install App Engine with:
gcloud components install app-engine-python
However, I had to remove the URL at the beginning.
Change:
http://code.google.com/appengine/gcloud components install app-engine-python
To:
gcloud components install app-engine-python
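You can then confirm the component is installed with gcloud itself:
gcloud components list
app-engine-python should show up with an Installed status.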
I'm getting an error while trying to connect a Raspberry Pi running Ubuntu MATE to my Google Cloud SQL instance.
These are the steps I took to install the proxy:
git clone https://github.com/GoogleCloudPlatform/cloudsql-proxy
cd cloudsql-proxy/
sudo sh download_proxy.sh
My instance is configured this way (I deleted some characters in the image and in the code):
I didn't set up the network because I'll be using the proxy.
Then I downloaded my JSON key into the same folder:
wget https://drive.google.com/file/d/my_key.json
And then started the proxy:
sudo ./cloud_sql_proxy -instances=be - 21:us-central1:be =tcp:3306 \
-credential_file=./my_key.json &
But I'm getting the error:
pi@pi:~/cloudsql-proxy$ ./cloud_sql_proxy: 1: ./cloud_sql_proxy:
Syntax error: ")" unexpected
I've tried removing the .json, and I was getting the same error before, without the credential file, so I think the problem is in the setup.
My directory listing (ls) is:
Any help is appreciated :)
download_proxy.sh downloads the proxy compiled for the amd64 CPU architecture (aka x86_64). Your Raspberry Pi has an ARM CPU, so this binary cannot run on your machine.
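A quick way to confirm the mismatch, assuming the file utility is installed:
uname -m
file ./cloud_sql_proxy
On a Raspberry Pi the first reports armv7l (or similar), while the second reports an x86-64 ELF binary for the amd64 download.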
Google does not provide pre-built ARM versions of the proxy. I don't even know whether it can be built on an ARM CPU. If it is possible, this is how you would do it:
Install Go, e.g. with apt-get install golang
Set up a GOPATH, as per https://github.com/golang/go/wiki/GOPATH
Run go get github.com/GoogleCloudPlatform/cloudsql-proxy/cmd/cloud_sql_proxy
Run the proxy with $GOPATH/bin/cloud_sql_proxy -instances=...
OK, I'm sharing what I did to make it work; as David said, I didn't know what version I was downloading.
I tried to avoid installing Go, but it was the only way to get the proxy installed:
sudo apt-get install golang-go
export GOPATH=$HOME/go
go get github.com/GoogleCloudPlatform/cloudsql-proxy/cmd/cloud_sql_proxy
cd $GOPATH/bin
wget your_key.json
sudo ./cloud_sql_proxy -instances=the_full_name_of_the_instance=tcp:3306 -credential_file=./your_key.json &
But I was getting an error because I already have MySQL running locally on the same port.
So now I'm using a Unix socket:
sudo ./cloud_sql_proxy -instances=the_full_name_of_the_instance -credential_file=./your_key.json &
And then it's ready for connections :)
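For reference, connecting over the socket looks something like this; the instance name and user are placeholders, and depending on the proxy version you may need to pass -dir explicitly so the proxy knows where to create its sockets:
sudo mkdir -p /cloudsql
sudo ./cloud_sql_proxy -dir=/cloudsql -instances=the_full_name_of_the_instance -credential_file=./your_key.json &
mysql -u your_user -p -S /cloudsql/the_full_name_of_the_instance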
Thanks guys
I ran into issues with this when compiling the SQL proxy. I did, however, find that the instructions here worked great on my Raspberry Pi 3. Make sure to remove all prior Go installations first, then reinstall:
wget https://storage.googleapis.com/golang/go1.9.linux-armv6l.tar.gz
sudo tar -C /usr/local -xzf go1.9.linux-armv6l.tar.gz
export PATH=$PATH:/usr/local/go/bin  # put into ~/.profile
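After reloading the profile, you can sanity-check the toolchain before rebuilding the proxy (the go1.9 output below assumes the tarball above):
source ~/.profile
go version
go get github.com/GoogleCloudPlatform/cloudsql-proxy/cmd/cloud_sql_proxy
go version should report go1.9 linux/arm, and go get drops the binary into $HOME/go/bin by default.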
I'm having a problem when running some npm tests. The error I'm receiving is: "NaCl helper process running without a sandbox!", which is true, as I'm running the browser with the "--no-sandbox" option. I have to use this option because the browser runs as root, and I don't have the option to run it as a different user at all (it's a Docker image).
Can anyone please help me to sort it out?
P.S. I'm installing the browser in the following way:
RUN apt-get update
RUN apt-get install -y nodejs npm
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb https://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get install -y apt-transport-https
RUN apt-get update
RUN apt-get install -y google-chrome-stable
Thanks in advance!
This error message...
NaCl helper process running without a sandbox!
...implies that there is no setuid sandbox on your system, hence the program was unable to initiate/spawn a new Browsing Context, i.e. a Chrome Browser session.
Solution
A quick solution: if you want to run Chrome and use only the namespace sandbox, you can set the flag:
--disable-setuid-sandbox
This flag will disable the setuid sandbox (Linux only). However, if you do so on a host without appropriate kernel support for the namespace sandbox, Chrome will not spin up. As an alternative, you can also use the flag:
--no-sandbox
This flag will disable the sandbox for all process types that are normally sandboxed.
Example:
chromeOptions: {
  args: ['--disable-setuid-sandbox', '--no-sandbox']
},
You can find a detailed discussion in Security Considerations - ChromeDriver - Webdriver for Chrome
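As a quick smoke test that Chrome actually starts with these flags inside the container (assuming google-chrome-stable is on the PATH):
google-chrome-stable --headless --disable-gpu --no-sandbox --disable-setuid-sandbox --dump-dom https://www.google.com
If the sandbox flags are accepted, this prints the page's DOM to stdout instead of crashing.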
Deep dive
As per the documentation in Linux SUID Sandbox Development, google-chrome needs a SUID helper binary to turn on the sandbox on Linux. In the majority of cases you can install the proper sandbox using the command:
build/update-linux-sandbox.sh
This program will install the proper sandbox for you in /usr/local/sbin and tell you to update your .bashrc if required.
However, there can be exceptions. For example, if your setuid binary is out of date, you will get messages such as:
Running without the SUID sandbox!
Or
The setuid sandbox provides API version X, but you need Y
You are using a wrong version of the setuid binary!
In these cases, you need to:
Build chrome_sandbox whenever you build chrome (ninja -C xxx chrome chrome_sandbox instead of ninja -C xxx chrome)
After building, execute update-linux-sandbox.sh.
# needed if you build on NFS!
sudo cp out/Debug/chrome_sandbox /usr/local/sbin/chrome-devel-sandbox
sudo chown root:root /usr/local/sbin/chrome-devel-sandbox
sudo chmod 4755 /usr/local/sbin/chrome-devel-sandbox
Finally, you have to include the following line in your ~/.bashrc (or .zshenv):
export CHROME_DEVEL_SANDBOX=/usr/local/sbin/chrome-devel-sandbox
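A quick sanity check that the variable and the binary's permissions are as expected:
echo "$CHROME_DEVEL_SANDBOX"
ls -l /usr/local/sbin/chrome-devel-sandbox
The first should print /usr/local/sbin/chrome-devel-sandbox and the second should show the setuid bit, i.e. -rwsr-xr-x with owner root:root.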
If you are using Karma to run your tests, make sure you are using ChromeHeadless as the browser in karma.conf.js.
I am on a network where Google is blocked, and I need to install Google Chrome. I cannot go to the Chrome download page; it is restricted. Neither can I use
apt-get install google-chrome-stable
It returns an error.
Since I don't have access to Google URLs, I tried the command below:
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
It doesn't work either. Hence I need a URL or third-party repository that has the chrome .deb files, or any other alternative way to download Chrome. No, I do not have a USB port to download and copy; it's on a virtual machine.
Since our network does not allow access to Google, I redirected the requests through a proxy, i.e. by adding proxy paths in /etc/environment and, for apt installs, adding a file called 95proxies in the apt configuration directory.
Then use apt-get install google-chrome-stable.
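As a sketch, with proxy.example.com:3128 standing in for whatever proxy your network actually provides:
# /etc/environment
http_proxy="http://proxy.example.com:3128/"
https_proxy="http://proxy.example.com:3128/"
# /etc/apt/apt.conf.d/95proxies
Acquire::http::Proxy "http://proxy.example.com:3128/";
Acquire::https::Proxy "http://proxy.example.com:3128/";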
I am trying to build an ARM JSON template and am facing a problem with the custom script extension; I'd appreciate your help.
This is the sample script that I am executing as part of the virtual machine extensions (Linux):
#!/usr/bin/bash
export AZURE_STORAGE_ACCOUNT="$1"
export AZURE_STORAGE_ACCESS_KEY="$2"
AZURE_STORAGE_CONTAINER="$3"
yum update -y
reboot
yum install -y epel-release
yum install -y gcc gcc-c++ kernel-devel ksh m4 sshpass nodejs npm
With this script, the VM was able to install updates and reboot. However, the command "yum install -y epel-release" and the following commands were not executed, and during deployment the operation hangs and times out.
Can you suggest how to solve this using the ARM JSON template, the custom script extension, or any Linux workaround?
PowerShell DSC can probably help you achieve that (you can configure certain properties to force the configuration to continue after a reboot). Alternatively, split your configuration into two scripts and deploy them independently of each other, so that the first script reboots the machine and the second starts as soon as the VM becomes available and the waagent talks to Azure.
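A minimal sketch of a marker-file variant of the two-script idea; it assumes the script gets invoked a second time after the reboot (e.g. via a second deployment or an @reboot cron entry you install yourself), and the marker path is arbitrary:
#!/bin/bash
MARKER=/var/tmp/.updates_done
if [ ! -f "$MARKER" ]; then
    yum update -y          # phase 1: patch the system
    touch "$MARKER"        # remember that updates are done
    reboot
    exit 0
fi
# phase 2: runs only on the post-reboot invocation
yum install -y epel-release
yum install -y gcc gcc-c++ kernel-devel ksh m4 sshpass nodejs npm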