Find the image path to a GCE image - google-compute-engine

When configuring a GitLab runner, I can tell it what type of instances to spawn on GCP via the google-machine-image keyword.
How do I find out what the path to a GCE image is?

You can find the URIs by running the following command:
$ gcloud compute images list --uri
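For example, you can combine it with a filter to narrow the list to a single image family; the family name below is purely illustrative, and each line of output is a full image URI:
$ gcloud compute images list --uri --filter="family=ubuntu-1604-lts"
The URI printed for the image you want is the path you can hand to the runner configuration.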
Here is more information about it.

Related

Cannot map agent.conf using Cygnus docker installation

I have a problem installing Cygnus using Docker as the source; simply put, I cannot understand where I should map my specific agent.conf.
The image I am using is from here.
When I try to map an agent.conf which has my specific setup into the container, it starts and runs but fails to copy it, and on top of that any change I make to the file inside the container won't stay; it reverts to the previous default state.
Meanwhile, I have no issues with grouping_rules.conf using the same approach.
I used both docker and docker-compose, with the same results.
The path I am trying to copy to is /opt/apache-flume/conf/agent.conf:
docker run -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi
Can someone who managed to run it with their own config tell me whether I misunderstood the location of agent.conf, or something else? This is weird; I have used many Docker images and never had an issue copying a file from my machine into a container.
Thanks in advance.
** EDIT **
Link to agent.conf
Did you copy the agent.conf file to your directory before starting the container?
As you can see here, when you define a volume with the "-v" option, Docker makes the content of the host path visible inside the container at the mount point. Therefore, you must first provide the agent.conf file on your host: if the host path does not exist, Docker creates an empty directory there instead of mounting your file.
The reason is that when using a "bind mounted" directory from the host, you're telling Docker that you want to take a file or directory from your host and use it in your container. Docker should not modify those files/directories, unless you explicitly do so. For example, you don't want -v /home/user/:/var/lib/mysql to result in your home directory being replaced with a MySQL database.
If you do not have access to the agent.conf file, you can download the template from the source code in the official Cygnus GitHub repo here. You can also copy it out of the container once it is running, using the docker cp command:
docker cp <containerId>:/file/path/within/container /host/path/target
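Putting it together for this image, a minimal sketch of the whole workflow; the container name cygnus is illustrative, and the paths are the ones from the question:
docker run -d --name cygnus fiware/cygnus-ngsi
docker cp cygnus:/opt/apache-flume/conf/agent.conf /home/igor/Documents/cygnus/agent.conf
# edit the local copy of agent.conf, then recreate the container with the bind mount
docker rm -f cygnus
docker run -d --name cygnus -v /home/igor/Documents/cygnus/agent.conf:/opt/apache-flume/conf/agent.conf fiware/cygnus-ngsi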
Keep in mind that you will have to edit the agent.conf file to configure it according to the database you are using. You can find in the official doc how to configure Cygnus to use different sinks like MongoDB, MySQL, etc.
I hope I have been helpful.
Best regards!

How to create an imagestream of JBoss Web Server in OpenShift Origin

How can I create and use the imagestream of JBoss Web Server in OpenShift Origin?
The image YAML is available at this link. I see that it is built automatically in the OpenShift Enterprise version (link), but why not in Origin?
Thanks.
I expected it to pull the image itself during the build, but that did not happen.
D:\docker\apps>oc new-build --image-stream=jboss-webserver31-tomcat7-openshift:1.1 --name=newapp --binary=true
warning: Cannot find git. Ensure that it is installed and in your path. Git is required to work with git repositories.
error: unable to locate any images in image streams with name "jboss-webserver31-tomcat7-openshift:1.1"
The 'oc new-build' command will match arguments to the following types:
1. Images tagged into image streams in the current project or the 'openshift' project
- if you don't specify a tag, we'll add ':latest'
2. Images in the Docker Hub, on remote registries, or on the local Docker engine
3. Git repository URLs or local paths that point to Git repositories
--allow-missing-images can be used to force the use of an image that was not matched
See 'oc new-build -h' for examples.
So I tried to create the import YAML in the web console, but got the error below:
Failed to process the resource.
Resource is missing kind field.
Got it. Apparently one has to be logged in to the Red Hat registry:
oc import-image my-jboss-webserver-3/webserver31-tomcat7-openshift --from=registry.access.redhat.com/jboss-webserver-3/webserver31-tomcat7-openshift --confirm
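Once the import succeeds, the imagestream lives in the current project and the original build command should resolve. A hedged sketch of the follow-up; check the actual stream name and tag the import created before substituting them in:
oc get is
oc new-build --image-stream=<stream>:<tag> --name=newapp --binary=true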

How to run base centos image in minishift?

I am trying to learn about OpenShift: how it works, how to run apps, how to build images, etc.
To start with something I thought would be rather simple, I decided to run a pod with a pure CentOS 7 OS, based on this image. I installed minishift v1.11.0+4459917 locally, created a new project, and ran the command
oc new-app openshift/base-centos7 in this project. As a result I received the following message:
--> Found Docker image bb81a09 (11 months old) from Docker Hub for "openshift/base-centos7"
* An image stream will be created as "pon3:latest" that will track this image
* This image will be deployed in deployment config "pon3"
* The image does not expose any ports - if you want to load balance or send traffic to this component
you will need to create a service with 'expose dc/pon3 --port=[port]' later
* WARNING: Image "openshift/base-centos7" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources ...
imagestream "pon3" created
deploymentconfig "pon3" created
--> Success
Run 'oc status' to view your app.
As I can see in the warning, this image runs as root, which is clearly not a good practice, but it can be worked around, as described here and here. I tried both approaches: I created a new service account with the anyuid scc, and I assigned the anyuid scc to the default sa. Unfortunately I'm still not able to run a pod based on this image. The result looks like this:
oc get pods
NAME                READY     STATUS             RESTARTS   AGE
mycentos-1-deploy   1/1       Running            0          32s
mycentos-1-p1vh5    0/1       CrashLoopBackOff   1          30s
I try to troubleshoot this way:
oc logs -p mycentos-1-p1vh5
This image serves as the base image for all OpenShift v3 S2I builder images.
It provides all essential libraries and development tools needed to
successfully build and run an application.
To use this image as a base image, you need to have 's2i/bin' directory in the
same directory as your S2I image Dockerfile. This directory should contain S2I
scripts.
This base image also provides the default user you should use to run your
application. Your Dockerfile should include this instruction after you finish
installing software:
USER default
The default directory for installing your application sources is
'/opt/app-root/src' and the WORKDIR and HOME for the 'default' user is set
to this directory as well. In your S2I scripts, you don't have to use absolute
path, but rather rely on the relative path.
To learn more about S2I visit: https://github.com/openshift/source-to-image
Additionally, I tried to troubleshoot with oc adm diagnostics, but to be honest I didn't see anything relevant to this issue.
I'm clearly missing something here. Can someone give me a hint on how this should be handled, or how I can troubleshoot it further? Is there a different way to run a pure CentOS OS?
Thank you for any help.
The image you deploy with oc new-app needs to have an actual application in it. The openshift/base-centos7 image is only a base image on which other images are built; it doesn't have an application in it.
If you just want to spin up a container and be presented with a shell environment in which you can play, use the oc run command instead.
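For example, a throwaway interactive pod could look like this; the pod name centos is illustrative:
oc run centos -i -t --image=centos:7 --restart=Never -- bash
When you exit the shell the pod completes, and you can remove it with oc delete pod centos.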
OpenShift isn't like a traditional VPS where you just spin up permanent shell environments which you then access to set up your application manually. The idea is that you build your application into an image and deploy the application.
I would suggest you go read:
https://www.openshift.com/promotions/for-developers.html
https://www.openshift.com/promotions/devops-with-openshift.html
and work through the exercises at:
https://learn.openshift.com
to learn more about what OpenShift is and how to use it.

Add a new persistent disk to a GCE instance created with a VirtualBox custom image

I followed the tutorial Google has up on YouTube for creating a custom image for Compute Engine using VirtualBox, at the following link:
https://www.youtube.com/watch?v=YlcR6ZLebTM
I have succeeded in creating a custom image and importing it into Google Compute Engine.
But when I try to follow this document to attach a new persistent disk:
https://cloud.google.com/compute/docs/disks/persistent-disks#attachdiskcreation
The document mentions a command line tool:
/usr/share/google/safe_format_and_mount
but the folder /usr/share/google does not exist in my custom image.
How can I install it? Or is there another way to mount a new persistent disk in a GCE instance?
The /usr/share/google/safe_format_and_mount command comes with the Google Compute Engine image packages. You can see the source code here.
You can either install the packages or run these commands (consolidated in the sketch after the list):
1. Determine the device location of your new persistent disk: ls -l /dev/disk/by-id/google-*. Let's suppose it's /dev/sdb.
2. sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sdb
3. sudo mount -o discard,defaults /dev/sdb <destination_folder>
Run df -h or mount to check if your disk is already mounted in the destination folder.
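Put together as a single hedged sequence, assuming the new disk shows up as /dev/sdb and using /mnt/pd1 as an illustrative destination folder:
ls -l /dev/disk/by-id/google-*   # confirm which device is the new disk
sudo mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 -F /dev/sdb
sudo mkdir -p /mnt/pd1           # /mnt/pd1 is just an example mount point
sudo mount -o discard,defaults /dev/sdb /mnt/pd1
df -h /mnt/pd1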

Simply uploading a file to google compute

I want to upload a file from my local machine to the disk attached to my Google Compute VM.
abhigenie92_gmail_com@instance-1:~$ pwd
/home/abhigenie92_gmail_com
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo ./
abhigenie92_gmail_com@instance-1:~$ gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo /home/abhigenie92_gmail_com
ERROR: (gcloud.compute.copy-files) All sources must be
edit2: I get the following error now:
RE: edit2
Since gcloud's copy-files is a custom implementation of scp, you need to specify the complete path on your VM where you want to copy the files to. The general form is:
LOCAL-FILE-PATH> gcloud compute copy-files [FILENAMES] [VM-NAME]:[FULL-REMOTE-PATH]
In your specific example:
C:\Users\sony\Desktop> gcloud compute copy-files copy.nlogo instance-1:/home/abhigenie92_gmail_com/
This command will then place the file(s) into your user's home directory root. Just make sure the remote path exists and that your user has write rights to the destination.
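If in doubt, you can verify the remote path from your local machine first; a hedged one-liner, using the instance name from the question:
gcloud compute ssh instance-1 --command "ls -ld /home/abhigenie92_gmail_com"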
From the looks of what you posted, you're trying to copy things from your local machine to a cloud instance from inside the instance. I'm afraid you can't do that.
I take it you have already installed the gcloud compute tool? If not, install it on your local machine (follow the link), open up the Windows command line, and type gcloud auth login to authenticate; then you should be able to do what you want with the following command:
gcloud compute copy-files C:\Users\sony\Desktop\Feb\Model\MixedCrowds28\ Runge\ kutta\ 2nd\ order\ try.nlogo <VM Name>:~/
Note that I have escaped the spaces in your filename - it's a good idea to get out of the habit of using spaces in filenames - and made a couple of assumptions:
Your VM is running linux
You are okay with copying up to your home directory on the VM
If any of these assumptions is incorrect, you may have problems. To copy somewhere else, change the path in the <VM Name>:~/ part, as in the example below.
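For example, to copy into the user's home directory explicitly, quoting the filename instead of escaping the spaces (quoting also works from the Windows command line):
gcloud compute copy-files "C:\Users\sony\Desktop\Feb\Model\MixedCrowds28 Runge kutta 2nd order try.nlogo" <VM Name>:/home/abhigenie92_gmail_com/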
Edit: I mangled a file extension in the original, fixed now!