I'm trying to script my VM creation and setup process.
Currently the script asks for my ssh passphrase multiple times.
Is there a way to enter the passphrase once at the beginning of the script and be done?
Here's the first script:
gcloud -q compute instances create $VM_NAME \
--zone=$ZONE \
--machine-type=n1-standard-1 \
--image-project=ml-images \
--image-family=tf-1-14 \
--scopes=cloud-platform \
--boot-disk-size=24GB \
&& \
echo vm created \
&& \
gcloud -q compute scp --recurse \
~/altered-source/ $VM_NAME:~ \
--zone=$ZONE \
&& \
gcloud -q compute scp --recurse \
~/vm-scripts/ $VM_NAME:~ \
--zone=$ZONE \
&& \
echo files transferred \
&& \
gcloud -q compute ssh $VM_NAME \
--zone=$ZONE
Please provide some details of how you're attempting this.
By default, if you're using gcloud, an SSH key pair will be generated automatically for you and stored in the metadata service so that you can SSH seamlessly, e.g.
gcloud compute instances create ${INSTANCE} ...
gcloud compute ssh ${INSTANCE} ... --command=....
Possibly a better method to recreate the instance(s) programmatically is to develop a startup script and then pass it to the instance during creation:
https://cloud.google.com/sdk/gcloud/reference/compute/instances/create#startup-script
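For example, a minimal sketch (setup.sh is a hypothetical local file containing the provisioning steps you currently run over SSH):
# the startup script runs on the instance at first boot, as root
gcloud compute instances create ${INSTANCE} \
--zone=${ZONE} \
--metadata-from-file startup-script=setup.sh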
Yo!
Definitely use a Storage Bucket for those intermediate transfers; you'll have better control and faster responses all the way.
If you really need a "third leg", your local machine could work: just install the SDK and use gcloud commands, and it won't ask you for keys once you've exchanged them between the local machine and the remote VMs. The caveat? You depend on your ISP's up/down speeds. The good thing: you know what is being transferred and how long each file takes to upload.
Once again I suggest, as others here already did, using a Cloud Storage bucket; that way you only need to refer to your file as gs://<bucket>/file and forget about the rest.
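As a rough sketch of the bucket route (the bucket name is a placeholder, and it assumes the VM's service account can access it):
# from your local machine
gsutil -m cp -r ~/altered-source gs://my-bucket/
# then, on the VM
gsutil -m cp -r gs://my-bucket/altered-source .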
In any case here is some info about transferring files to instances.
Have an awesome day!
-JP
I'm sorry if this is a simple question, but I am just starting out with QEMU and can't find an easy way to do this.
I am trying to somewhat automate my KVM deployment. I am currently running into the issue that I can't find a way to automatically set parameters for a filterref.
This is what my network option for virt-install currently looks like, and it is working fine for now:
--network type=bridge,network=default,bridge=bridge0,model=e1000e,mac=$mac,filterref=clean-traffic
However, I can't find any option for setting a parameter that defines the IP address it's supposed to be locked down to. This is the result I want in the XML:
<filterref filter='clean-traffic'>
<parameter name='IP' value='XXX.XXX.XXX.XXX'/>
</filterref>
I am looking for a way to automatically add that parameter, preferably directly with virt-install, or at least to the extent where I can just run a script and enter the few variables I want to set. At that point the VM would already be running and waiting for the setup to be completed, with the filter loaded. Basically, I want the parameter to be in place before the first startup, so that there is no chance of anyone trying to mess with the IP address.
Is this possible?
This is the whole "script" I just copy into the console at the moment.
name=WindowsTest
mac=00:50:56:00:05:C5
size=70
ram=6000
vcpus=6
let cores=vcpus/2
virt-install \
--name=$name \
--ram=$ram \
--cpu=host \
--vcpus=$vcpus,maxvcpus=$vcpus,sockets=1,cores=$cores,threads=2 \
--os-type=windows \
--os-variant=win10 \
--disk path=/var/lib/libvirt/clutchImages/$name.qcow2,size=$size,format=qcow2,bus=virtio \
--cdrom /var/isos/Windows_20H2_English.iso \
--disk /var/isos/virtio-win-0.1.185.iso,device=cdrom \
--network type=bridge,network=default,bridge=bridge0,model=e1000e,mac=$mac,filterref=clean-traffic \
--graphics spice,listen=157.90.2.208 \
--graphics vnc
virsh version output:
virsh version
Compiled against library: libvirt 6.0.0
Using library: libvirt 6.0.0
Using API: QEMU 6.0.0
Running hypervisor: QEMU 4.2.0
I am on CentOS Linux release 8.3.2011.
Make arbitrary edits to virt-install's XML output
According to the man page, you can make direct edits to the XML using XPath syntax.
e.g.
virt-install \
#...
--network network="${net}",mac="${macaddr}",filterref.filter=clean-traffic \
--xml xpath.create=./devices/interface/filterref/parameter \
--xml xpath.set=./devices/interface/filterref/parameter/#name=IP \
--xml xpath.set=./devices/interface/filterref/parameter/#value=10.0.0.20
#...
virt-install man page excerpt:
man virt-install | grep -m1 -A40 '\-\-xml'
--xml
Syntax: --xml ARGS
Make direct edits to the generated XML using XPath syntax. Take an ex‐
ample like
virt-install --xml ./#foo=bar --xml ./newelement/subelement=1
This will alter the generated XML to contain:
<domain foo='bar' ...>
...
<newelement>
<subelement>1</subelement>
</newelement>
</domain>
The --xml option has 4 sub options:
--xml xpath.set=XPATH[=VALUE]
The default behavior if no explicit suboption is set. Takes the
form XPATH=VALUE unless paired with xpath.value . See below for
how value is interpreted.
--xml xpath.value=VALUE
xpath.set will be interpreted only as the XPath string, and
xpath.value will be used as the value to set. May help sidestep
problems if the string you need to set contains a '=' equals
sign.
If value is empty, it's treated as unsetting that particular
node.
--xml xpath.create=XPATH
Create the node as an empty element. Needed for boolean elements
like <readonly/>
--xml xpath.delete=XPATH
Delete the entire node specified by the xpath, and all its chil‐
dren
XML result
<interface type="network">
<!-- ... -->
<filterref filter="clean-traffic">
<parameter name="IP" value="10.0.0.20"/>
</filterref>
</interface>
virsh version output:
Compiled against library: libvirt 7.7.0
Using library: libvirt 7.7.0
Using API: QEMU 7.7.0
Running hypervisor: QEMU 6.2.0
Quick & dirty
name=WindowsTest
mac=00:50:56:00:05:C5
IP=xxx.yyy.zzz.qqq
size=70
ram=6000
vcpus=6
let cores=vcpus/2
virt-install \
--name=$name \
--ram=$ram \
--cpu=host \
--vcpus=$vcpus,maxvcpus=$vcpus,sockets=1,cores=$cores,threads=2 \
--os-type=windows \
--os-variant=win10 \
--disk path=/var/lib/libvirt/clutchImages/$name.qcow2,size=$size,format=qcow2,bus=virtio \
--cdrom /var/isos/Windows_20H2_English.iso \
--disk /var/isos/virtio-win-0.1.185.iso,device=cdrom \
--network type=bridge,network=default,bridge=bridge0,model=e1000e,mac=$mac,filterref=clean-traffic \
--graphics spice,listen=157.90.2.208 \
--graphics vnc \
--print-xml > /tmp/${name}.xml && \
sed -i "s/<filterref.*/<filterref filter='clean-traffic'>\n <parameter name='IP' value='${IP}'\/>\n <\/filterref>/g" /tmp/${name}.xml && \
virsh create /tmp/${name}.xml
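As a quick sanity check afterwards (a sketch), you can confirm the parameter ended up in the live domain definition:
virsh dumpxml $name | grep -A 2 '<filterref'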
Is it possible to run .sh files from the Azure CLI in the Windows command line?
The following is the Cosmos DB collection creation script that I'm trying to run in my local Azure CLI:
#!/bin/bash
# Set variables for the new account, database, and collection
resourceGroupName='testgropu'
location='testlocation'
accountName='testaccount'
databaseName='testDB'
prefix='prefix.'
tenantName='testTenant'
collectionTest='.test'
originalThroughput=400
newThroughput=500
az cosmosdb collection create \
--resource-group $resourceGroupName \
--collection-name "$prefix$tenantName$collectionTest" \
--name $accountName \
--db-name $databaseName \
--partition-key-path "/'\$v'/testId/'\$v'"
Is it possible to run these commands as a script from the Azure CLI, the way . test.sh works in Linux?
Actually, you cannot run the bash script with . test.sh as you would in Linux. But you can run the bash script from the command prompt if you have installed WSL on your Windows machine, like this:
bash test.sh
Additionally, if you do not install WSL on Windows, bash is not recognized as an internal or external command, and you cannot use the bash command in PowerShell either.
Note: take care when you create the bash script on Windows; line endings and character encoding differ between Windows and Linux. Use an editor that handles both, for example Notepad++.
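For example, a minimal sketch run from inside WSL that strips the Windows carriage returns before executing the script:
# remove CR characters added by Windows editors, then run the script
sed -i 's/\r$//' test.sh
bash test.sh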
I have a Docker container that performs a single large computation. This computation requires lots of memory and takes about 12 hours to run.
I can create a Google Compute Engine VM of the appropriate size and use the "Deploy a container image to this VM instance" option to run this job perfectly. However, once the job finishes, the container quits but the VM keeps running (and charging).
How can I make the VM exit/stop/delete when the container exits?
When the VM is in its zombie mode only the stackdriver containers are left running:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
bfa2feb03180 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.2-1.5.33-1-1 "/entrypoint.sh /u..." 17 hours ago Up 17 hours stackdriver-logging-agent
161439a487c2 gcr.io/stackdriver-agents/stackdriver-metadata-agent:0.2-0.0.17-2 "/bin/sh -c /opt/s..." 17 hours ago Up 17 hours 8000/tcp stackdriver-metadata-agent
I create the VM like this:
gcloud beta compute --project=abc instances create-with-container vm-name \
--zone=us-central1-c --machine-type=custom-1-65536-ext \
--network=default --network-tier=PREMIUM --metadata=google-logging-enabled=true \
--maintenance-policy=MIGRATE \
--service-account=xyz \
--scopes=https://www.googleapis.com/auth/cloud-platform \
--image=cos-stable-69-10895-71-0 --image-project=cos-cloud --boot-disk-size=10GB \
--boot-disk-type=pd-standard --boot-disk-device-name=vm-name \
--container-image=gcr.io/abc/my-image --container-restart-policy=on-failure \
--container-command=python3 \
--container-arg="a" --container-arg="b" --container-arg="c" \
--labels=container-vm=cos-stable-69-10895-71-0
When you create the VM, you'll need to give it write access to compute so you can delete the instance from within. You should also set container environment variables like gce_zone and gce_project_id at this time. You'll need them to delete the instance.
gcloud beta compute instances create-with-container {NAME} \
--container-env=gce_zone={ZONE},gce_project_id={PROJECT_ID} \
--service-account={SERVICE_ACCOUNT} \
--scopes=https://www.googleapis.com/auth/compute,...
...
Then within the container, whenever YOU determine your task is finished:
Request an API token (I'm using curl for simplicity and the default GCE service account):
curl "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google"
This will respond with JSON that looks like:
{
"access_token": "foobarbaz...",
"expires_in": 1234,
"token_type": "Bearer"
}
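As a small sketch (assuming jq is available inside the container), you can pull the token out of that response directly:
TOKEN=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" \
-H "Metadata-Flavor: Google" | jq -r '.access_token')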
Take that access token and hit the instances.delete API endpoint (note the environment variables):
curl -XDELETE -H 'Authorization: Bearer {TOKEN}' https://www.googleapis.com/compute/v1/projects/$gce_project_id/zones/$gce_zone/instances/$HOSTNAME
Having grappled with the problem for some time, here's a full solution that works pretty well.
This solution doesn't use the "start machine with a container image" option. Instead it uses a startup script, which is more flexible. You still use a Container-Optimized OS instance.
Create a startup script:
#!/usr/bin/env bash
# get image name and container parameters from the metadata
IMAGE_NAME=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/image_name -H "Metadata-Flavor: Google")
CONTAINER_PARAM=$(curl http://metadata.google.internal/computeMetadata/v1/instance/attributes/container_param -H "Metadata-Flavor: Google")
# This is needed if you are using private images in GCP Container Registry
# (possibly also for the gcp log driver?)
sudo HOME=/home/root /usr/bin/docker-credential-gcr configure-docker
# Run! The logs will go to stack driver
sudo HOME=/home/root docker run --log-driver=gcplogs ${IMAGE_NAME} ${CONTAINER_PARAM}
# Get the zone
zoneMetadata=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor:Google")
# Split on / and get the 4th element to get the actual zone name
IFS=$'/'
zoneMetadataSplit=($zoneMetadata)
ZONE="${zoneMetadataSplit[3]}"
# Run compute delete on the current instance. Need to run in a container
# because COS machines don't come with gcloud installed
docker run --entrypoint "gcloud" google/cloud-sdk:alpine compute instances delete ${HOSTNAME} --delete-disks=all --zone=${ZONE}
Put the script somewhere public. For example put it on Cloud Storage and create a public URL. You can't use a gs:// URI for a COS startup script.
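For example, a sketch of publishing it via Cloud Storage (the bucket name is a placeholder; this makes the object world-readable, so don't put secrets in the script):
gsutil cp startup-script.sh gs://my-bucket/startup-script.sh
gsutil acl ch -u AllUsers:R gs://my-bucket/startup-script.sh
# the public URL is then https://storage.googleapis.com/my-bucket/startup-script.sh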
Start an instance using a startup-script-url, passing the image name and parameters, e.g.:
gcloud compute --project=PROJECT_NAME instances create INSTANCE_NAME \
--zone=ZONE --machine-type=TYPE \
--metadata=image_name=IMAGE_NAME,\
container_param="PARAM1 PARAM2 PARAM3",\
startup-script-url=PUBLIC_SCRIPT_URL \
--maintenance-policy=MIGRATE --service-account=SERVICE_ACCOUNT \
--scopes=https://www.googleapis.com/auth/cloud-platform --image-family=cos-stable \
--image-project=cos-cloud --boot-disk-size=10GB --boot-disk-device-name=DISK_NAME
(You probably want to limit the scopes; the example uses full access for simplicity.)
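For instance, a narrower set that still lets the instance delete itself, pull private images, and write logs might look like this (a sketch; adjust to what your job actually needs):
--scopes=https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write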
I wrote a self-contained Python function based on Vincent's answer.
def kill_vm():
    """
    If we are running inside a GCE VM, kill it.
    """
    # based on https://stackoverflow.com/q/52748332/321772
    import json
    import logging
    import requests
    # get the token
    r = json.loads(
        requests.get("http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token",
                     headers={"Metadata-Flavor": "Google"})
        .text)
    token = r["access_token"]
    # get instance metadata
    # based on https://cloud.google.com/compute/docs/storing-retrieving-metadata
    project_id = requests.get("http://metadata.google.internal/computeMetadata/v1/project/project-id",
                              headers={"Metadata-Flavor": "Google"}).text
    name = requests.get("http://metadata.google.internal/computeMetadata/v1/instance/name",
                        headers={"Metadata-Flavor": "Google"}).text
    zone_long = requests.get("http://metadata.google.internal/computeMetadata/v1/instance/zone",
                             headers={"Metadata-Flavor": "Google"}).text
    zone = zone_long.split("/")[-1]
    # shut ourselves down
    logging.info("Calling API to delete this VM, {zone}/{name}".format(zone=zone, name=name))
    requests.delete("https://www.googleapis.com/compute/v1/projects/{project_id}/zones/{zone}/instances/{name}"
                    .format(project_id=project_id, zone=zone, name=name),
                    headers={"Authorization": "Bearer {token}".format(token=token)})
A simple atexit hook gets me my desired behavior:
import atexit
atexit.register(kill_vm)
Another solution is to not use GCE and instead use AI Platform's custom job service, which automatically shuts down the VM after the Docker container exits.
gcloud ai-platform jobs submit training $JOB_NAME \
--region $REGION \
--master-image-uri $IMAGE_URI
You can specify --master-machine-type.
See the GCP documentation on custom containers.
The simplest way, from within the container, once it's finished:
ZONE=`gcloud compute instances list --filter="name=($HOSTNAME)" --format 'csv[no-heading](zone)'`
gcloud compute instances delete $HOSTNAME --zone=$ZONE -q
-q skips the interactive confirmation
$HOSTNAME is already exported
Just use curl and the local metadata server (no need for Python scripts or gcloud). Add the following to the end of your Docker Entrypoint script, so it's run when the container finishes:
# Note: inside the container the name is exposed as $HOSTNAME
INSTANCE_NAME=$(curl -sq "http://metadata.google.internal/computeMetadata/v1/instance/name" -H "Metadata-Flavor: Google")
INSTANCE_ZONE=$(curl -sq "http://metadata.google.internal/computeMetadata/v1/instance/zone" -H "Metadata-Flavor: Google")
echo "Terminating instance [${INSTANCE_NAME}] in zone [${INSTANCE_ZONE}}"
TOKEN=$(curl -sq "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r '.access_token')
curl -X DELETE -H "Authorization: Bearer ${TOKEN}" https://www.googleapis.com/compute/v1/$INSTANCE_ZONE/instances/$INSTANCE_NAME
For the sake of security and the principle of least privilege, you can run the VM with a custom service account and give that service account a role with this permission (a custom role is best):
compute.instances.delete
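A hedged sketch of wiring that up (the role ID, service account name, and project are placeholders):
# create a custom role that can only delete instances
gcloud iam roles create instanceDeleter --project=PROJECT_ID \
--permissions=compute.instances.delete
# grant it to the service account the VM runs as
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:SA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
--role="projects/PROJECT_ID/roles/instanceDeleter"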
I have tried to set, in a new instance template, the maintenance policy to "MIGRATE" and automatic restart to "On" (as the Web Console does), but the flags are ignored.
This is the command I am using:
gcloud compute instance-templates create \
$TEMPLATE_NAME \
--boot-disk-size 50GB \
--image coreos-beta-681-0-0-v20150527 \
--image-project coreos-cloud \
--machine-type n1-standard-2 \
--metadata-from-file user-data=my-cloud-config.yml \
--scopes compute-rw,storage-full,logging-write \
--tags web-minion \
--maintenance-policy MIGRATE \
--boot-disk-type pd-standard
But the template is created with Automatic restart set to "Off" and On host maintenance set to "Terminate VM instance". Instances created from this template also have the same settings.
When I log HTTP requests and responses this appears in the create request:
{"automaticRestart": true, "onHostMaintenance": "MIGRATE"}
so it does not seem to be a client error.
How can I create templates with the same settings the Web Console uses?
EDIT: Version of gcloud: 0.9.61; Version of compute: 2015.05.19.
EDIT 2: This also occurs now in Developers Console; it's a regression because I had a template with the correct values before.
The issue has been fixed and now I can create templates with the correct settings.
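If you want to verify a template's scheduling settings from the CLI, something like this works (a sketch):
gcloud compute instance-templates describe $TEMPLATE_NAME \
--format="yaml(properties.scheduling)"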
I'm trying to install Tungsten Replicator 3.0.0-524 GA to replicate from MySQL to MongoDB, but when I run cookbook/validate_cluster, the following error keeps showing up:
There is already another Tungsten installation script running
(InstallationScriptCheck)
The configuration I'm using for the cluster is:
./tools/tpm configure mysql2mongodb \
--enable-heterogenous-master=true \
--topology=master-slave \
--master=mysql \
--replication-user=boahub_boahub \
--replication-password=***** \
--slaves=tracking-mongo \
--home-directory=/opt/mysql \
--svc-extractor-filters=replicate \
--property=replicator.filter.replicate.do=boahub_boahub.urls,boahub_boahub.media_campaigns \
--start-and-report
./tools/tpm configure mysql2mongodb \
--hosts=tracking-mongo \
--datasource-type=mongodb \
--replication-port=27017
./tools/tpm -v install --install-directory=/opt/tungsten
I've configured both the "mysql" and "tracking-mongo" hosts in the /etc/hosts file.
So far I've tried to:
1. Reboot the system
2. Clear my /opt/tungsten installation directory
3. Delete the deploy.cfg
The verbose output of tools/tpm -v install shows that SSH between the two machines succeeded, and the command used to check for another Tungsten script is:
ps ax 2>/dev/null | grep configure.rb | grep -v firewall | grep -v grep | awk '{print $1}'
When I execute this command, it comes up with nothing.
What can I do? Is there any way to ignore this check?
Thanks!
You can skip any check using the --skip-validation-check option (an argument is required). You can use this option multiple times without problems.
The option takes as its argument the name of the check, which can be found in the error message.
In your case you can add the following option to your command:
--skip-validation-check InstallationScriptCheck
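Applied to the install command from your question, that would be (a sketch):
./tools/tpm -v install --install-directory=/opt/tungsten \
--skip-validation-check InstallationScriptCheck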
I have a feeling this may help you get through.
Have you tried installing your master and slave separately? Do a
./tools/tpm install
after configuring & installing master, clear the configuration with
./tools/tpm configure defaults --reset
Then apply your slave settings and do the other tpm install.
A few weeks ago I ran into some similar trouble (maybe; I can't recall clearly). The phrase "another script" in your post brought back some memory of that for me; hope it works.
Good Luck!