Why do I get a 'compute.images.get' permission error when cloning a Google Compute instance?

I am working on a production web site on a Google Compute instance.
I want to set up a staging site, and read that the quickest way to do that is to clone the production instance.
When attempting to clone it, I get the error:
Required 'compute.images.get' permission for 'projects/wordpress-sites-170807/global/images/SANITISED-template'
I've not been able to find any useful reference to the "Required 'compute.images.get' permission" message in any Google search.
Questions:
1. I only have Editor level permissions on this particular Cloud Platform console. Is this error specific to me as a user? (I am now an "Owner" of the project, so we've eliminated the likelihood of my personal permissions being an issue)
2. If this permissions issue is related to the instance itself, how do I go about changing the permissions so that it has the "compute.images.get" permission?

As discussed in this thread, the "Create Similar" (clone) button copies the configuration to a new instance template. It does not make a new, identical instance with the exact content of your persistent disk. In your case, the configuration included a source image from a different project, so Compute Engine tried to access that project and, since you have no access there, it threw the error.
If the goal is to clone the instance including the persistent disk, you need to create a new snapshot or image from the persistent disk; then, to retain the rest of the configuration, you may use the clone button but change the source image to the snapshot or image you created earlier.
If the goal is to create a new instance from the original image in the other project, you need IAM roles in that project. For further information about the subject, check this document.
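For illustration, a minimal sketch of that image route with the gcloud CLI might look like the following; the disk, image, instance, and zone names are placeholders rather than values from the question:
# Hypothetical sketch: clone via an image of the persistent disk.
# 1. Create an image from the production instance's boot disk
#    (use --force, or stop the instance first, if the disk is attached to a running instance).
gcloud compute images create staging-image \
    --source-disk production-disk \
    --source-disk-zone us-central1-a \
    --force
# 2. Create the staging instance from that image.
gcloud compute instances create staging-instance \
    --image staging-image \
    --zone us-central1-a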
UPDATE:
The Google Cloud Console interface was updated several weeks ago, and the "CLONE" button was replaced by "CREATE SIMILAR".

Related

How to get updated server files from elastic beanstalk?

I am hosting my server for my website on AWS Elastic Beanstalk. On the server I store files uploaded by end users (or myself) in an "Images" folder, so new files are created in that folder every time an image is uploaded to the website.
How can I download the latest files from my server on EB, including these new images? I can download the original zip file I uploaded, but it doesn't contain the new data in those folders.
TY
You should not be storing any files worth anything on the EB node.
If a user uploads content, you should upload it in turn to S3, your database, or some other kind of file storage. That is usually decided during the architecture phase.
So while the actual answer is "this should never have happened in the first place", the main reason is that auto-scaling can kill your nodes without you knowing, which would destroy the uploads, or bring up new nodes, spreading your content across multiple nodes.
I also understand this answer might not help you if the mistake is already made and you have content to be transferred off the node. In that case I would:
disable autoscaling
enable termination protection on the node
transfer the data off the node over SSH, via rsync, scp, or an upload to S3 (a sketch follows these steps)
automate that transfer so it can be repeated later
implement a new upload method for the future of your app
deploy the new method so no new content is uploaded to the previous storage location
re-transfer your data from the old to the new storage location
disable termination protection and re-enable autoscaling
make sure the new nodes are receiving traffic, then feel free to kill the previous node
remember servers are cattle not pets
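As a rough sketch of the transfer step, assuming the environment's EC2 key pair, the instance's public DNS name, the default application path, and an S3 bucket you have created (all placeholder names):
# Hypothetical sketch: copy the uploads off the EB instance, then push them to S3.
rsync -avz -e "ssh -i ~/.ssh/my-eb-key.pem" \
    ec2-user@ec2-203-0-113-10.compute-1.amazonaws.com:/var/app/current/Images/ \
    ./Images/
# Push the recovered files to durable storage.
aws s3 sync ./Images/ s3://my-app-user-uploads/Images/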

Monitor and automatically upload local files to Google Cloud Bucket

My goal is to make a website (hosted on Google's App Engine through a Bucket) that includes an upload button much like
<p>Directory: <input type="file" webkitdirectory mozdirectory /></p>
that prompts users to select a main directory.
Within the main directory, the machine software will first create a subfolder and write discrete files to it every few seconds, up to ~4,000 per subfolder, at which point it will create another subfolder and continue, and so on.
I want Google Bucket to automatically create a Bucket folder based on metadata (e.g. user login ID and time) in the background, and the website should monitor the main directory and subfolders, and automatically upload every file, sequentially from the time they are finished being written locally, into the Cloud Bucket folder. Each 'session' is expected to run for ~2-5 days.
Creating separate Cloud folders is meant to separate user data in case of multiple parallel users.
Does anyone know how this can be achieved? Would be good if there's sample code to adapt into existing HTML.
Thanks in advance!
As per #JohnHanely, this is not really feasible using an application. I also do not understand the use case entirely, but I can provide some insight into monitoring Cloud Storage buckets.
GCP provides Cloud Functions:
Respond to change notifications emerging from Google Cloud Storage. These notifications can be configured to trigger in response to various events inside a bucket—object creation, deletion, archiving and metadata updates.
Cloud Storage Triggers will save you from having to monitor the buckets yourself; you can instead leave that to Cloud Functions.
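As a sketch, deploying such a function with a Cloud Storage trigger could look like this; the function name, entry point, runtime, and bucket are placeholders:
# Hypothetical deployment: run "handle_new_object" every time an object is
# created (finalized) in the bucket, instead of polling the bucket yourself.
gcloud functions deploy handle-new-object \
    --runtime python310 \
    --entry-point handle_new_object \
    --trigger-resource my-upload-bucket \
    --trigger-event google.storage.object.finalize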
Maybe you could expand on what you are trying to achieve with that many folders? Are you trying to create ~4,000 sub-folders per user? There may be a better path forward if we knew more about the intended use of the data. It seems you want to hold data, and perhaps a database is better suited, e.g.:
- Application
|-- Accounts
|---- User1
|------ Metadata
|---- User2
|------ Metadata

How can I share a WireCloud marketplace between users

I built some widgets and uploaded them to my local marketplace. Is there a way to share that?
Better still, is it possible to share a mashup (the widget composition) without giving the user the possibility of changing the wiring?
I mean the user should use an application layout without changing anything.
You can make your workspaces/dashboards public by following the steps documented in the user guide. Only the owner of a workspace will be able to modify it. I think this is what you are searching for.
Another option is to create a packaged mashup using the "Upload to my resources" option in the editor view:
Take a look at the "Advanced" tab, where you can block widgets (make them unremovable), block connections (make wiring connections unremovable) and embed the used widgets/operators (by default, packaged mashups depend on the user having installed all the required widgets/operators; this way you can distribute the widgets and operators used by the mashup in the same package).
However, take into account that this method is meant for sharing mashup templates; the user will always be able to add additional widgets and create new connections in the wiring view.
Once packaged, mashups/dashboards (and widgets and operators) can be uploaded to a WStore server (e.g. to the Store portal provided on FIWARE Lab) to share them with other users. The steps for doing this are also described in WireCloud's user guide.
I have the same problem, revisited.
I have set up a working Marketplace instance (v2.3) but am unable to integrate it with WireCloud. The marketplace is correctly registered, but all the requests I make to this Marketplace throw a 502 error, even though I am actually able to see some results when querying the Marketplace server through a browser.
Indicatively, I can issue a GET request at http://:8080/FiwareMarketplace/v1/registration/stores/ and get an answer, but WireCloud's internal APIs return a 502 (Bad Gateway).
Any idea on what might have gone wrong?
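For what it's worth, one way to compare the direct response with what WireCloud's proxy returns is a plain curl request; the host below is a placeholder for the one omitted above:
# Hypothetical check: query the Marketplace API directly, outside WireCloud.
# <marketplace-host> stands in for the omitted hostname.
curl -v http://<marketplace-host>:8080/FiwareMarketplace/v1/registration/stores/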
PS: This happens for WC v0.6.5. When upgrading to a newer (Beta) version of WC, everything seems to be performing as expected, i.e. the marketplace is correctly inserted and the stores are correctly retrieved and processed.

Can chrome extension modify a file in our hard drive?

I am making a Chrome extension which needs to add/delete/modify files in any location on our hard drive. The location can be a temporary folder. How is it possible to do this? Please give comments and helpful links that can lead me to get this done.
You can not, but adding a local server (nodejs/deno/cs-script/go/python/lua/..) with fixed logic (for security) to do the file operations, and exposing an HTTP server that answers the extension's ajax/jsonp requests, would work.
The extension will not be able to install the software part.
edit: if you want to get started using nodejs, this could help
edit2: With the File and Directory Entries API (this could help) you can get hold of a FILE or a complete FOLDER (getDirectory(), showDirectoryPicker()).
Thankfully, this is impossible.
Google, or any other company, wouldn't have many friends if installing their extensions caused a compromise that included complete control over any files on your hard drive (i.e. control over the machine). An extension can save information to disk in a location that is set aside for storing local information, as mentioned. It will not have execute permission on the root or anywhere else, nor will it have read or write permission outside of that storage location.
However, extensions can still be malicious if they gather information from a user of a web page (I am sure that Google can filter some suspicious extensions).
If you really need to make changes on your hard drive, you can store information on a server and poll for changes with a Windows client application, or perhaps you can find where the extension's storage is kept and access it from there with a Windows app.

How to share google compute engine images across projects?

From documentation I know that images are global resources that "are accessible by any resource in any zone within the same project".
I am looking for a functionality similar to sharing AMIs within AWS. That is, I create an image, make it public and anyone can use it immediately.
How to best achieve something similar in GCE? The problem is that the image I need to create would be large - around 100GB. So I am looking for a way of sharing the image that wouldn't involve slow copying (e.g. from a bucket in google cloud storage).
You could use Google Cloud Storage, but then you will be charged every time the image is downloaded, and the download will take a long time (at 100 GB).
The better solution is to give the user "Can view" access to your project. Then the user can create an instance from your image using the gcloud command line. But be aware that the user will see everything you have in the project; they just won't be able to change it.
This solution is also faster if the owner of the image has already used it in the zone you want to use, because the image is cached there and the instance will start almost instantaneously. Another plus is that the owner doesn't get charged when someone uses the image.
gcloud compute instances create [INSTANCE_NAME] \
--image-family [IMAGE_FAMILY] \
--image-project [IMAGE_PROJECT] \
--machine-type <type-of-machine> \
--network default
If you want to give access to a larger group of people, you can create a Google group and give the group members "Can view" access to the project. Now you can control who can access it using your Google group.
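A minimal sketch of that binding, with a placeholder group address and project ID:
# Hypothetical sketch: grant the primitive Viewer ("Can view") role to a Google
# group so its members can use images from the project. Names are placeholders.
gcloud projects add-iam-policy-binding my-image-project \
    --member group:image-users@googlegroups.com \
    --role roles/viewer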
You can store your image in Google Cloud Storage. Google Cloud Storage gives you even more options to control who has access to the image, besides just making it public.
You can use gcutil to access that image. Something like
gcutil --project=project-name-1 addimage "my-awesome-image" gs://bucket-name/myawesome.images.gz.
Compute Engine has added an IAM role (compute.imageUser) that allows sharing images with other projects.
Documentation: https://cloud.google.com/compute/docs/images/sharing-images-across-projects
Disclaimer - I work for Google Cloud.
One can simply grant roles/compute.imageUser access to the service account of <ID-of-your-project> that is used for creating instances:
gcloud projects add-iam-policy-binding <ID-of-image-owners-project> \
--member serviceAccount:account-to-create-instances@<ID-of-your-project>.iam.gserviceaccount.com \
--role roles/compute.imageUser
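Once the role is granted, the consuming project can boot an instance directly from the shared image; a minimal sketch with placeholder instance and image names:
# Hypothetical follow-up: create an instance in your project from the image
# that lives in the owner's project.
gcloud compute instances create my-new-instance \
    --project <ID-of-your-project> \
    --image my-shared-image \
    --image-project <ID-of-image-owners-project>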