Can't deploy to Firebase Cloud Functions - google-cloud-functions

When I try to run a firebase deploy --only functions I get this error for almost every function:
"WARNING: Failed to delete temporary cache image to stable name; this will not affect current build: DELETE https://*****/gcf-artifacts/application--on_application_write/cache/manifests/4500b6e2-9253-4d70-8c91-5ba800c62978: DENIED: Permission "artifactregistry.repositories.deleteArtifacts" denied on resource "projects/*__/repositories/gcf-artifacts" (or it may not exist)""
Can someone tell me what to do? Which account does not have the permission? Is it my admin-sdk service account?

Note that "artifactregistry.repositories.deleteArtifacts" is a permission, not a role, so it cannot be bound directly; you need to grant the deploying account a role that contains it on the "gcf-artifacts" repository. The account in question is typically the service account that runs your function builds (the Cloud Build service account, <PROJECT_NUMBER>@cloudbuild.gserviceaccount.com), not the Admin SDK service account. You can grant the role using the Cloud Console or the gcloud command-line tool. For example, the following gcloud command grants the Artifact Registry Repository Administrator role, which includes that permission:
gcloud projects add-iam-policy-binding <PROJECT_ID> \
--member serviceAccount:<SERVICE_ACCOUNT> \
--role roles/artifactregistry.repoAdmin
Replace <PROJECT_ID> with the ID of your Firebase project, and <SERVICE_ACCOUNT> with the email address of the service account that you are using to deploy your functions.
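To verify the binding took effect on the repository itself, you can inspect its IAM policy with gcloud (a sketch; the gcf-artifacts repository normally lives in the region your functions deploy to, and us-central1 below is only an example):
gcloud artifacts repositories get-iam-policy gcf-artifacts \
  --project <PROJECT_ID> \
  --location us-central1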

Related

Error while deployment of Gitlab on OpenShift pipeline

I am trying to deploy GitLab source code to OpenShift, but I am facing an issue: although the GitLab pipeline succeeds, the deployment keeps failing with an unauthorized error.
My expected output is a successful deployment on OpenShift. [Error message](https://i.stack.imgur.com/CBBzO.png)
The error indicates that the Deployment Pod is unable to pull the specified image.
It appears your Deployment is in the namespace roks-test-demo-project while the image you are trying to pull is in the oc-custom-dev namespace. For a Deployment in one namespace to pull an image from another, the Deployment's service account must be authorized to do so.
See the OpenShift documentation for how to achieve this.
In your case, assuming your Deployment is running as the default service account:
$ oc policy add-role-to-user \
system:image-puller system:serviceaccount:roks-test-demo-project:default \
--namespace=oc-custom-dev
If your Deployment is running as a non-default service account, replace default with that service account name in the above command.
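If you are not sure which service account your Deployment runs as, you can read it off the pod template (the Deployment name below is a placeholder; an empty result means it runs as default):
$ oc get deployment <deployment-name> -n roks-test-demo-project \
    -o jsonpath='{.spec.template.spec.serviceAccountName}'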

Could not push code to the CodeCommit repository

I am trying to push my code to Elastic Beanstalk, but I get an error when I run the eb create command:
WARNING: You have uncommitted changes.
Starting environment deployment via CodeCommit
Could not push code to the CodeCommit repository:
ERROR: CommandError - An error occurred while handling git command.
Error code: 128 Error: fatal: unable to access 'https://git-codecommit.us-west-2.amazonaws.com/v1/repos/origin/': The requested URL returned error: 403
I have already created an environment using AWS Elastic Beanstalk; how should I push to it?
This issue could be related to your access key and secret key; perhaps one of them has expired or been deactivated.
In my case, with the same error, the problem was an expired key.
After issuing a new key, make sure the git credential store has the new key (if you use one).
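One hedged way to do that, assuming you authenticate through the AWS CLI credential helper rather than stored HTTPS Git credentials, is to re-enter the new keys and point git at the helper:
aws configure   # enter the new access key ID and secret access key
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true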
I fixed this by:
Attaching AWSCodeCommitPowerUser to my user group
Generating a CodeCommit credential for my user.
STEP 1
Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.
In the IAM console, in the navigation pane, choose Users, and then choose the IAM user you want to configure for CodeCommit access.
On the Permissions tab, choose Add Permissions.
In Grant permissions, choose Attach existing policies directly.
From the list of policies, select AWSCodeCommitPowerUser or another managed policy for CodeCommit access. For more information, see AWS managed (predefined) policies for CodeCommit.
STEP 2
On the user details page, choose the Security Credentials tab, and in HTTPS Git credentials for AWS CodeCommit, choose Generate.
Use the generated username and password when prompted for credentials for the git repo.
Take a look at this article for more information: https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html
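If git keeps replaying the old password after you generate new credentials, you may also need to evict the stale entry from your credential store first (illustrative; git credential reject works with any configured credential helper):
printf 'protocol=https\nhost=git-codecommit.us-west-2.amazonaws.com\n' | git credential reject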

Using service accounts to automate deployments is failing

We are trying to automate the build and deployment of containers to projects created in OpenShift v3.3. From the documentation I can see that we will need to leverage service accounts to do this, but the documentation is hard to follow and the examples I have found in blogs don't complete the task. My workflow is as follows, with the example oc commands I use:
BUILDER_TOKEN='xxx'
DEPLOYER_TOKEN='xxx'
# build and push the image works as expected
docker build -t registry.xyz.com/want/want:latest .
docker login --username=<someuser> --password=${BUILDER_TOKEN} registry.xyz.com
docker push registry.xyz.com/<repo>/<image>:<tag>
# This fails with error
oc login https://api.xyz.com --token=${DEPLOYER_TOKEN}
oc project <someproject>
oc new-app registry.xyz.com/<repo>/<image>:<tag>
Notice that I log in to the REST API interface, select the project, and create the app, but this fails with the following errors:
error: User "system:serviceaccount:want:deployer" cannot create deploymentconfigs in project "default"
error: User "system:serviceaccount:want:deployer" cannot create services in project "default"
Any ideas?
Service accounts only have permission in their owning project by default. You would need to grant deployer access to deploy in other projects.
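A sketch of such a grant, assuming the deployer service account from the errors above should be allowed to deploy into the default project (edit is a built-in role that permits creating deploymentconfigs and services):
$ oc policy add-role-to-user edit \
    system:serviceaccount:want:deployer \
    --namespace=default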
OK, so it seems that using a service account to accomplish this is not the best way to go about things, and the documentation does not help. The use case above is very common, and the correct approach is to simply invoke new-app with the image name and corresponding tag:
oc new-app ${APP}:${TAG}
There is no need to mess around with service accounts.

Zabbix external checks cannot be executed due to SELinux

I am trying to implement external checks in Zabbix 2.2. I've created a simple bash script for SSL verification which should be executed by the Zabbix service. The script is located in the /var/lib/zabbixsrv/externalchecks directory. Even with 777 permissions on the .sh script I still receive a message telling me:
unable to execute /var/lib/zabbixsrv/externalscripts/test.sh: Permission denied
I get the same message when I try to run the command even as root. The ls -Z /var/lib/zabbixsrv/externalscripts/test.sh command output says:
-rwxrwxrwx. zabbixsrv zabbixsrv unconfined_u:object_r:default_t:s0 /var/lib/zabbixsrv/externalscripts/test.sh
There is no message related to this in /var/log/messages. Does anybody know how to make SELinux allow the zabbixsrv user to execute the script without disabling SELinux?
Which zabbix service (zabbix-server, zabbix-agent, ...) should execute the external checks script?
Did you try setting AllowRoot=1 in /etc/zabbix/zabbix_agentd.conf?
The main issue was in the /etc/fstab configuration file. Zabbix uses the /var/lib/zabbixsrv/externalscripts directory as its default location for external scripts, and my server had /var mounted with the rw and noexec options.
I've moved the script to a different location and changed the configuration file accordingly. Checks are working fine now.
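For reference, the alternative fix would have been to drop noexec from the /var mount options; an illustrative fstab line (your device and filesystem type will differ), followed by a remount so it takes effect without a reboot:
# /etc/fstab
/dev/mapper/vg-var  /var  ext4  defaults,rw  1 2
mount -o remount,exec /var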
Thanks everybody for any contribution relating this topic.

gsutil not working in GCE

So when I bring up a GCE instance using the standard debian 7 image, and issue a "gsutil config" command, it fails with the following message:
jcortez@master:~$ gsutil config
Failure: No handler was ready to authenticate. 4 handlers were checked. ['ComputeAuth', 'OAuth2Auth', 'OAuth2ServiceAccountAuth', 'HmacAuthV1Handler'] Check your credentials.
I've tried it on the debian 6 and centos instances and had the same results. Issuing "gcutil config" works fine, however. I gather I need to set up my ~/.boto file, but I'm not sure how.
What am I doing wrong?
Using service account scopes, as E. Anderson mentions, is the recommended way to use gsutil on Compute Engine, so the images ship with an /etc/boto.cfg that configures gsutil to get OAuth access tokens from the metadata server:
[GoogleCompute]
service_account = default
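To confirm the metadata server is actually serving a token to your instance, a quick sanity check (not something gsutil itself requires) is to query it directly:
$ curl -H "Metadata-Flavor: Google" \
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"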
If you want to manage gsutil config yourself, rename /etc/boto.cfg, and gsutil config should work:
$ sudo mv /etc/boto.cfg /etc/boto.cfg.orig
$ gsutil config
This script will create a boto config file at
/home/<...snipped...>/.boto
containing your credentials, based on your responses to the following questions.
<...snip...>
Are you trying to use a service account to have access to Cloud Storage without needing to enter credentials?
It sounds like gsutil is searching for an OAuth access token with the appropriate scopes and is not finding one. You can ensure that your VM has access to Google Cloud Storage by requesting the storage-rw or storage-full permission when starting your VM via gcutil, or by selecting the appropriate privileges under "Project Access" on the UI console. For gcutil, something like the following should work:
> gcutil addinstance worker-1 \
> --service_account_scopes=https://www.googleapis.com/auth/devstorage.read_write,https://www.googleapis.com/auth/compute.readonly
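gcutil has since been deprecated in favor of gcloud; on a current SDK the rough equivalent, using gcloud's scope aliases, would be:
$ gcloud compute instances create worker-1 \
    --scopes=storage-rw,compute-ro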
When you configured your GCE instance, did you set it up with a service account? Older versions of gsutil got confused when you ran gsutil config while service account credentials were already configured.
If you already have a service account configured you shouldn't need to run gsutil config - you should be able to simply run gsutil ls, cp, etc. (it will use credentials located elsewhere than your ~/.boto file).
If you really do want to run gsutil config (e.g., to set up credentials associated with your login identity rather than service account credentials), you could try downloading the current gsutil from http://storage.googleapis.com/pub/gsutil.tar.gz, unpacking it, and running that copy of gsutil. Note that if you do this, the personal credentials you create by running gsutil config will essentially "hide" your service account credentials (i.e., you would need to move your .boto file aside if you ever want to use your service account credentials again).
Mike Schwartz, Google Cloud Storage team
FYI I'm working on some changes to gsutil now that will handle the problem you encountered more smoothly. That version should be out within the next week or two.
Mike