How can I get the GCE instance name where my script is running? - google-compute-engine

I'm currently trying to manipulate the metadata of my instance from the startup script. To do that I have to use the following command:
gcutil setinstancemetadata <instance-name> --metadata=<key-1:value-1> --fingerprint=<current-fingerprint-hash>
As you can see, the command asks for the instance name. First I tried to get it from the metadata, but it was not there (see: Default Metadata).
My question is: how do I get this instance name?
Edit: For now my only solution is to add the instance name as metadata when I create the instance:
gcutil addinstance my-cool-instance --metadata=instance-name:my-cool-instance
And then get it with a curl request:
curl 'http://metadata/computeMetadata/v1/instance/attributes/instance-name' -H "X-Google-Metadata-Request: True"

The Google Cloud Platform metadata URL supports getting the instance name via the hostname resource, irrespective of any custom hostname set for the instance. That's why $HOSTNAME is not recommended.
URL1:
INSTANCE_NAME=$(curl http://169.254.169.254/0.1/meta-data/hostname -s | cut -d "." -f1)
URL2:
INSTANCE_NAME=$(curl http://metadata.google.internal/computeMetadata/v1/instance/hostname -H Metadata-Flavor:Google | cut -d . -f1)
GCP follows a common regex pattern for resource names, (?:[a-z](?:[-a-z0-9]{0,61}[a-z0-9])?), so it's safe to split the result on . and use the first part as the instance name.
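To see what that cut does, here is a sketch run against a hypothetical FQDN (the project and domain parts below are made up, not real metadata output):

```shell
# Hypothetical FQDN as returned by the metadata hostname resource
fqdn="my-cool-instance.c.my-project.internal"
# The first dot-separated field is the instance name
INSTANCE_NAME=$(echo "$fqdn" | cut -d . -f1)
echo "$INSTANCE_NAME"
```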

The instance name is the same as its hostname, so you can just use the $HOSTNAME environment variable, e.g.:
gcutil setinstancemetadata $HOSTNAME --metadata=<key-1:value-1> --fingerprint=<current-fingerprint-hash>
This works on my instance which was built from the debian-7-wheezy-v20140318 image.
UPDATE: The above works fine on Debian 7 (Wheezy), but on OSes where the HOSTNAME variable is the fully qualified domain name rather than just the host name, you should use the syntax below:
gcutil setinstancemetadata $(echo "$HOSTNAME" | cut -d . -f1) --metadata=<key-1:value-1> --fingerprint=<current-fingerprint-hash>
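On such systems, bash parameter expansion can strip the domain without spawning cut; a minimal sketch with a hypothetical FQDN value:

```shell
# Hypothetical FQDN value of HOSTNAME on such an OS
HOSTNAME_FQDN="my-instance.c.my-project.internal"
# %%.* strips the longest suffix starting at the first dot
SHORT_NAME="${HOSTNAME_FQDN%%.*}"
echo "$SHORT_NAME"
```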

A better way to get the instance name is to use the hostname command included in the GCE images:
[benoit@my-instance ~]$ hostname
my-instance

Related

Setting envs or globals in bash script [NOT A duplicate]

I'm looking for a way (preferably cross-platform compatible) to set something globally accessible from a bash script.
My company is using a bash script to request access credentials to a MySQL database. It returns a username, a password, and a DB domain that I end up having to copy-paste into my terminal to connect to our MySQL DB.
I thought I'd amend the script to set environment variables and use them in an alias defined in my bashrc, but it turns out a bash script can't set environment variables in the parent shell.
So I tried to set a mysql alias with the username, password, and domain pre-filled in that same script, but same issue: you can't set an alias for the parent shell from a bash script.
I essentially want to be able to run the script that gives me the credentials and then not have to do manual copy-pasting all over the place.
What I tried was (if it gives more context):
#!/bin/bash
# Script gets the credentials
# Script now has username, password, endpoint variables
export MYSQL_USER=$username
export MYSQL_PASSWORD=$password
export MYSQL_ENDPOINT=$endpoint
# Script finishes
and in my bashrc:
alias mysqlenv="mysql -h $MYSQL_ENDPOINT -u $MYSQL_USER -p'$MYSQL_PASSWORD'"
I appreciate this is not working and might not be the best solution, so I'm open to other options.
PS: Forgot to mention the credentials expire every 24 hours, which is why I want to smooth the process.
PS2: I can't source the script that gives me the credentials, because it's not just exporting environment variables; it takes params from the CLI, gets me to log in to my company system in my browser, etc.
PS3: I know putting a MySQL password on the command line is bad practice, but this is a non-issue, as that password is printed there in the first place by the script that gives me the credentials (written by someone else in the company).
Since you can already parse the credentials, I'd use your awk code to output shell commands:
getMysqlCredentials() {
  credential_script.sh | awk '
    {parse the output}
    END {
      printf "local MYSQL_USER=\"%s\"\n", username
      printf "local MYSQL_PASSWORD=\"%s\"\n", password
      printf "local MYSQL_ENDPOINT=\"%s\"\n", endpoint
    }
  '
}
Then I'd have a wrapper function around mysql where you invoke that function and source its output:
mysql() {
  source <(getMysqlCredentials)
  command mysql -h "$MYSQL_ENDPOint" -u "$MYSQL_USER" -p"$MYSQL_PASSWORD" "$@"
}
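A self-contained sketch of this source-a-process-substitution pattern, with a stub standing in for the real credential script (all names and values below are made up for illustration):

```shell
# Stub standing in for credential_script.sh + the awk parsing (hypothetical values)
getMysqlCredentials() {
  printf 'local MYSQL_USER="%s"\n'     "alice"
  printf 'local MYSQL_PASSWORD="%s"\n' "s3cret"
  printf 'local MYSQL_ENDPOINT="%s"\n' "db.internal.example"
}

connect_demo() {
  # Sourcing inside a function keeps the "local" declarations scoped to this call
  source <(getMysqlCredentials)
  echo "mysql -h $MYSQL_ENDPOINT -u $MYSQL_USER"
}

connect_demo
```

Because the sourced output uses local, the credentials never leak into the surrounding shell's environment; they exist only for the duration of the wrapper call.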

GCE - different username if i use SSH or log in from terminal?

I created a new project with nothing in it.
When I created my first micro instance, I did the following:
Connect to it using the browser-window SSH.
I see:
user_name@instance-1:~$
If I connect using the gcloud command:
gcloud compute --project "projectname-165421" ssh --zone "us-east1-b" "instance-1"
I am brought to:
username@instance-1:~$
Why is this happening, and how do I resolve it?
This is creating two separate users which is creating a great deal of confusion for me!
Thank you
By default, the Cloud SDK will try to connect using the name of the user running the command.
If you look at the docs, it says you can specify the username like you would with your default SSH client.
Meaning, on your computer:
gcloud compute --project "projectname-165421" ssh --zone "us-east1-b" "user_name@instance-1"
Alternatively, switch user in the browser window SSH:
sudo su username
I use this command to fix it on macOS:
LOGNAME=ubuntu gcloud compute ssh app1
In answer to some of the comments: yes, you can change the default username. This is stored under the environment variable $USER. You can view the current value of this variable by running in your local terminal:
echo $USER
You can set the value of this variable to the default user you would like by running the following (assuming you are using bash or zsh):
export USER=the_new_default_user
For example, if the new desired default user is "pablo" then:
export USER=pablo
Now, every time you run the following command to ssh in your instance, you'll ssh into pablo:
gcloud compute ssh MY_INSTANCE
I hope this helps!

How to get password for sudo

I'm trying to use the command sudo -i -u postgres for PostgreSQL, and the Google Compute Engine VM is asking me for my password for my account (not root).
As I never issued a password, and I always login to my server via SSH key, I'm not sure what the password is, how I can reset it, or where it can be found.
Where can I find my password?
To become another non-root user on a GCE VM, first become root via password-less sudo (since that's how sudo is configured on GCE VM images):
sudo su -
and then switch to the user you want to become, or run a command as another user; in your case, that's:
sudo -i -u postgres
Per https://cloud.google.com/compute/docs/instances ,
The instance creator and any users that were added using the metadata
sshKeys value are automatically administrators to the account, with
the ability to run sudo without requiring a password.
So you don't need that non-existent password -- you need to be "added using the metadata sshKeys value"! The canonical way to do that, and I quote from that same page:
$ echo user1:$(cat ~/.ssh/key1.pub) > /tmp/a
$ echo user2:$(cat ~/.ssh/key2.pub) >> /tmp/a
$ gcloud compute project-info add-metadata --metadata-from-file sshKeys=/tmp/a
or you can use the Google Developers Console for similar purposes, see https://cloud.google.com/compute/docs/console#sshkeys if you'd prefer that.
Summary
While creating the VM, specify the ssh user in the "Enter the entire key data" box.
Details
Generate the SSH key pair and identify the public key:
if ssh-keygen, a file ending with ".pub"
if PuTTYgen, the text in the box "Public key for pasting ..."
Notice the fields, all on one line, separated by spaces: "protocol key-blob username".
For username, you may find your Windows user name or a string like "rsa-key-20191106". You will replace that string with your choice of Linux user name.
Paste the public key info into the "Enter the entire key data" box.
Change the 3rd field to the actual user that you want to create on the VM. If, for example, "gcpuser", then:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
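That field replacement can be sketched with awk, using a hypothetical (heavily truncated) key; the key blob and comment below are made up:

```shell
# Hypothetical key data; a real key blob is much longer
key='ssh-rsa AAAAB3NzaFAKEKEY rsa-key-20191106'
# Replace the third field (the comment) with the desired Linux user name
fixed=$(echo "$key" | awk '{ $3 = "gcpuser"; print }')
echo "$fixed"
```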
create your VM. (Debian, for example)
Connect to the VM
directly from ssh or PuTTY (not browser window)
use the private key
specify the user
Notice that your public key is present:
gcpuser@instance-1:~/.ssh$ cat authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
Notice that you are in group google-sudoers
gcpuser@instance-1:~/.ssh$ id
uid=1000(gcpuser) gid=1001(gcpuser) groups=1001(gcpuser),4(adm),30(dip),44(video),46(plugdev),1000(google-sudoers)
sudo to root with no password
gcpuser@instance-1:~$ sudo -i -u root
root@instance-1:~#
Notice the sudoers file:
root@instance-1:~# cat /etc/sudoers.d/google_sudoers
%google-sudoers ALL=(ALL:ALL) NOPASSWD:ALL
Conclusion
Specifying the username in "Enter the entire key data" has these results:
creating the user in the virtual machine.
uploading the key to ~/.ssh
membership in a passwordless sudo group

ping external host from zabbix agent

We are running a typical Zabbix server setup: a Zabbix server and a couple of Linux servers that have the Zabbix agent installed and are monitored by the Zabbix server. My problem: is there a way to check ping (icmppingsec, maybe?) from linux_host A to linux_host B and send the result to the Zabbix server from linux_host A?
I have tried the simple check icmppingsec[<target>,<packets>,<interval>,<size>,<timeout>,<mode>], but I found out that the ping is executed by the Zabbix server itself and not by host A.
Thanks for the help!
Found a solution: add this user parameter:
UserParameter=chk.fping[*],sudo /usr/bin/fping -c 3 $1 2>&1 | tail -n 1 | awk '{print $NF}' | cut -d '/' -f2
Add this permission in /etc/sudoers, because fping otherwise fails with a socket-creation error:
zabbix ALL=(ALL) NOPASSWD:/usr/bin/fping
In the template you can add the items you'd like to ping:
chk.fping[8.8.8.8]
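To see what that pipeline extracts, here is a sketch run against a canned summary line (the exact line format is an assumption about typical fping -c output, not captured from a real run):

```shell
# Canned fping -c summary line; the addresses and timings are made up
line='8.8.8.8 : xmt/rcv/%loss = 3/3/0%, min/avg/max = 10.1/10.5/11.0'
# Last whitespace-separated field is min/avg/max; the second /-separated part is the average
avg=$(echo "$line" | tail -n 1 | awk '{print $NF}' | cut -d '/' -f2)
echo "$avg"
```

So the item ends up returning just the average round-trip time as a bare number, which is what Zabbix needs for graphing.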
Currently, the ability to ping a host by Zabbix agent is not supported out of the box, but there is a feature request for that: ZBXNEXT-739. Meanwhile, you should add a user parameter on the agent that would do the pinging.
I have found a way to get the ping latency from the Zabbix agent when pinging an external host. I declared this parameter in zabbix_agentd.conf:
UserParameter=key_name[*],fping -e x.x.x.x | awk '{ print $4 }' | tr -d '('
It outputs the response time, numeric value only. My next problem is how to make this value readable by the Zabbix server so that it is viewable in a graph. On the Zabbix server the output is "no data", but under Hosts > Items it is green and enabled.
Thanks for the help!

Access Hive Metastore server information from Hive Shell

I know it is possible to have a metastore located on a remote server. In order to set this up, I must specify the ConnectionURL, Driver, Username, and Password in the hive-site.xml file. Is it possible to access the information in the hive-site.xml file from the hive shell?
You could use the SET command through the Hive CLI, but it prints all the variables in the namespaces hivevar, hiveconf, system, and env, so you can pipe it through grep to print just the properties you need. For example, if you want to see the value of mapred.reduce.tasks, which you set in hive-site.xml, you could do this:
bin/hive -S -e "set" | grep mapred.reduce.tasks
Or, to get the metastore-related info:
bin/hive -S -e "set" | grep metastore
I don't know if this is what you were expecting, but it does the trick for me. Hope this helps you too.
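As an illustration of the grep filtering, here is a sketch against a simulated excerpt of the SET output (the property values below are hypothetical, not from a real metastore):

```shell
# Simulated excerpt of Hive's "set" output; values are made up
set_output='javax.jdo.option.ConnectionURL=jdbc:mysql://db-host/metastore
javax.jdo.option.ConnectionDriverName=com.mysql.jdbc.Driver
mapred.reduce.tasks=-1'
# grep narrows the dump down to the lines mentioning the metastore
echo "$set_output" | grep metastore
```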