google storage bucket upload permissions - acl

I have tried to create a "vendor dropbox", following the instructions in the Google Cloud Storage documentation.
The following set of commands was executed:
Creating the bucket:
gsutil mb gs://customer-10
Adding external user permissions:
gsutil chacl -u user@company.com:FC gs://customer-10
Adding the default ACL:
gsutil chdefacl -u user@company.com:FC gs://customer-10
Verifying the ACL modifications, using the command:
gsutil getacl gs://customer-10 (verified successfully)
<Entry>
  <Scope type="UserByEmail">
    <EmailAddress>user@company.com</EmailAddress>
    <Name>firstname lastname</Name>
  </Scope>
  <Permission>FULL_CONTROL</Permission>
</Entry>
But when the user accesses the bucket using the link
https://storage.cloud.google.com/?arg=customer-10&pli=1#customer-10
it is not possible to upload any file into the bucket.
What is missing in my scenario? Please help.

This issue was solved in gsutil version 3.37. It currently works as documented.
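For reference, newer gsutil releases replaced chacl, chdefacl and getacl with the acl and defacl command groups; a rough sketch of the equivalent steps on a current gsutil (same bucket and email as above) would be:
gsutil mb gs://customer-10
gsutil acl ch -u user@company.com:FC gs://customer-10
gsutil defacl ch -u user@company.com:FC gs://customer-10
gsutil acl get gs://customer-10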

Related

Github Action Run - Security import is showing "One or more parameters passed to a function were not valid" error

I built the input file (decoded a base64 file into a .p12 file) as CERTIFICATE_PATH, P12_PASSWORD is a password stored in a secret, and KEYCHAIN_PATH is defined. When I run the command on the CLI, I get a "1 item imported" success message, but when I run it from a *.yml file in a GitHub Action, I get "security: SecKeychainItemImport: One or more parameters passed to a function were not valid." Any suggestions?
security import $CERTIFICATE_PATH -P $P12_PASSWORD -A -t cert -f pkcs12 -k $KEYCHAIN_PATH
CERTIFICATE_PATH is the file that contains the cert.p12 data;
KEYCHAIN_PATH is TEMP/app-signing.keychain-db.
Another reason in GitHub Actions could be that you are using the wrong environment.
Take a look at this: Difference between Github's "Environment" and "Repository" secrets?.
Set the right environment:
environment: production
Found the issue: I was passing the wrong cert file. Once I added the correct file to the security import command, it worked.
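For reference, the decode-and-import sequence that produces and consumes that certificate file typically looks like the following sketch on a macOS runner; the secret and variable names here are illustrative assumptions, not taken from the question:
CERTIFICATE_PATH=$RUNNER_TEMP/build_certificate.p12
KEYCHAIN_PATH=$RUNNER_TEMP/app-signing.keychain-db
# decode the base64 repository secret back into a .p12 file; a truncated or
# mis-decoded file here is a common cause of the SecKeychainItemImport error
echo -n "$BUILD_CERTIFICATE_BASE64" | base64 --decode > "$CERTIFICATE_PATH"
# create, configure and unlock a temporary keychain
security create-keychain -p "$KEYCHAIN_PASSWORD" "$KEYCHAIN_PATH"
security set-keychain-settings -lut 21600 "$KEYCHAIN_PATH"
security unlock-keychain -p "$KEYCHAIN_PASSWORD" "$KEYCHAIN_PATH"
# import the certificate and make the keychain visible to the build tools
security import "$CERTIFICATE_PATH" -P "$P12_PASSWORD" -A -t cert -f pkcs12 -k "$KEYCHAIN_PATH"
security list-keychains -d user -s "$KEYCHAIN_PATH"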

Why does gsutil restore a file from a bucket encrypted with KMS (using a service account without Decrypt permission)?

I am working with GCP KMS, and it seems that when I send a file to a GCP bucket (using gsutil cp) it is encrypted.
However, I have a question about the permissions needed to restore that file from the same bucket using a different service account. The service account that I am using to restore the file from the bucket doesn't have the Decrypt privilege, and even so gsutil cp works.
My question is whether this is normal behavior, or if I'm missing something.
Let me describe my question:
First of all, I confirm that the default encryption for the bucket is the KEY that I set up previously:
$ gsutil kms encryption gs://my-bucket
Default encryption key for gs://my-bucket:
projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY
Next, with gcloud config, I set a service account, which has "Storage Object Creator" and "Cloud KMS CryptoKey Encrypter" permissions:
$ gcloud config set account my-service-account-with-Encrypter-and-object-creator-permissions
Updated property [core/account].
I send a local file to the bucket:
$ gsutil cp my-file gs://my-bucket
Copying file://my-file [Content-Type=application/vnd.openxmlformats-officedocument.presentationml.presentation]...
| [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
After sending the file to the bucket, I confirm that the file is encrypted using the KMS key I created before:
$ gsutil ls -L gs://my-bucket
gs://my-bucket/my-file:
Creation time: Mon, 25 Mar 2019 06:41:02 GMT
Update time: Mon, 25 Mar 2019 06:41:02 GMT
Storage class: REGIONAL
KMS key: projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY/cryptoKeyVersions/1
Content-Language: en
Content-Length: 616959
Content-Type: application/vnd.openxmlformats-officedocument.presentationml.presentation
Hash (crc32c): 8VXRTU==
Hash (md5): fhfhfhfhfhfhfhf==
ETag: xvxvxvxvxvxvxvxvx=
Generation: 876868686868686
Metageneration: 1
ACL: []
Next, I set another service account, but this time WITHOUT the Decrypt permission and with the Object Viewer permission (so that it is able to read files from the bucket):
$ gcloud config set account my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions
Updated property [core/account].
After setting up the new service account (WITHOUT the Decrypt permission), the gsutil command to restore the file from the bucket works smoothly:
gsutil cp gs://my-bucket/my-file .
Copying gs://my-bucket/my-file...
\ [1 files][602.5 KiB/602.5 KiB]
Operation completed over 1 objects/602.5 KiB.
My question is whether this is normal behavior. Or, since the new service account doesn't have the Decrypt permission, shouldn't the gsutil cp to restore the file fail? I mean, isn't the idea of KMS encryption that the second gsutil cp command should fail with a "403 permission denied" error message or something similar?
If I revoke the "Storage Object Viewer" privilege from the second service account and then try to restore the file from the bucket, gsutil does fail, but that is because it doesn't have permission to read the file:
$ gsutil cp gs://my-bucket/my-file .
AccessDeniedException: 403 my-service-account-WITHOUT-DECRYPT-and-with-object-viewer-permissions does not have storage.objects.list access to my-bucket.
I would appreciate it if someone could give me a hand and clarify the question. Specifically, I'm not sure whether the command gsutil cp gs://my-bucket/my-file . should work or not.
I think it shouldn't work (because the service account doesn't have the Decrypt permission), or should it?
This is working correctly. When you use Cloud KMS with Cloud Storage, the data is encrypted and decrypted under the authority of the Cloud Storage service, not under the authority of the entity requesting access to the object. This is why you have to add the Cloud Storage service account to the ACL for your key in order for CMEK to work.
When an encrypted GCS object is accessed, the KMS decrypt permission of the accessor is never used and its presence isn't relevant.
If you don't want the second service account to be able to access the file, remove its read access.
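For illustration, granting key access to the Cloud Storage service agent (rather than to the end-user service account) looks roughly like the sketch below; my-project and PROJECT_NUMBER are placeholders, while the key and bucket names are the ones from the question:
# look up the Cloud Storage service agent of the project that owns the bucket
gsutil kms serviceaccount -p my-project
# grant that service agent use of the key
gcloud kms keys add-iam-policy-binding MY-KEY \
  --project my-kms-project --location my-location --keyring my-keyring \
  --member serviceAccount:service-PROJECT_NUMBER@gs-project-accounts.iam.gserviceaccount.com \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter
# set the key as the bucket's default CMEK
gsutil kms encryption -k projects/my-kms-project/locations/my-location/keyRings/my-keyring/cryptoKeys/MY-KEY gs://my-bucket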
By default, Cloud Storage encrypts all object data using Google-managed encryption keys. You can instead provide your own keys. There are two types:
CSEK, which you must supply yourself with each request;
CMEK, which you also supply, but which is managed by the Cloud KMS service (this is the one you are using).
When you use gsutil cp, the encryption is already applied behind the scenes. So, as stated in the documentation for Using Encryption Keys:
While decrypting a CSEK-encrypted object requires supplying the CSEK
in one of the decryption_key attributes, this is not necessary for
decrypting CMEK-encrypted objects because the name of the CMEK used to
encrypt the object is stored in the object's metadata.
As you can see, the key is not necessary because it is already included on the metadata of the object which is the one the gsutil is using.
If encryption_key is not supplied, gsutil ensures that all data it
writes or copies instead uses the destination bucket's default
encryption type - if the bucket has a default KMS key set, that CMEK
is used for encryption; if not, Google-managed encryption is used.
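By contrast, a CSEK-protected object does require the caller to supply the key itself; a minimal sketch using gsutil's -o override (the CSEK_BASE64 variable is a placeholder for a base64-encoded AES-256 key) would be:
# upload with a customer-supplied key
gsutil -o "GSUtil:encryption_key=$CSEK_BASE64" cp my-file gs://my-bucket
# downloading it back requires supplying the same key
gsutil -o "GSUtil:decryption_key1=$CSEK_BASE64" cp gs://my-bucket/my-file .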

smbclient --authentication-file "session setup failed: NT_STATUS_INVALID_PARAMETER" and "SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY"

(I have CentOS 7 with samba-client.x86_64 4.6.2-8.el7 against a Windows Server 2008 machine that is in an AD domain controlled by a separate Windows Server 2008 AD domain controller.)
Started with this:
smbclient -W my.domain -U myuser //svr.my.domain/fred mypassword -c list
... which worked great. Then I decided to move the domain, user and password into a file and use -A as described in the smbclient manpage. File windows-credentials, content:
username=myuser
domain=my.domain
password=mypassword
... with command line:
smbclient -A windows-credentials //svr.my.domain/fred -c list
... did not work and gave the error:
SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
session setup failed: NT_STATUS_NO_MEMORY
... an hour on the internet suggested lots of people have had this trouble, and just about every one of them had a different accepted answer, none of which worked for me. I tried various combinations of their answers, in particular https://askubuntu.com/questions/1008992/ubuntu-17-10-to-access-windows-files-shares-within-workplace-it, and ended up with...
Created a separate my.smb.conf with just:
[global]
# seems to get rid of
# SPNEGO(gse_krb5) NEG_TOKEN_INIT failed: NT_STATUS_NO_MEMORY
client use spnego = no
# seems to get rid of
# session setup failed: NT_STATUS_NO_MEMORY
client ntlmv2 auth = no
... and used:
smbclient -s my.smb.conf -A windows-credentials //svr.my.domain/fred -c list
... and it looks like it works, but I'm not really sure, as there seems to be credential caching and a complete lack of information on how this stuff works or is supposed to work.
Can anyone actually explain any of this? Even if not, perhaps yet another answer to this problem will help someone somewhere.
This appears to be specific to Windows 2008. Attaching to Windows Server 2016 works without the modified smb.conf file. I have been unable to locate any real details.
In case of problems with smbclient, you can mount the SMB share and use it like a local folder:
mount -t cifs //<ip>/<share folder>$ /mnt -o user=<user>,pass=<password>,domain=<workdomain>
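As a sketch, the same style of credentials file used with -A above can also be reused for the mount via the credentials= option (the paths and the sec= choice are illustrative):
sudo mount -t cifs //svr.my.domain/fred /mnt/fred -o credentials=/root/windows-credentials,sec=ntlmssp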

Restarting a MySQL server managed by Ambari

I have a scenario where I need to change several parameters of a hadoop cluster managed by Ambari to document performance of a particular application. The change in the configs entails a restart of the affected components.
I am using the Ambari REST API to achieve this. I figured out how to do this for all Hadoop service components, but I'm not sure whether the API provides a way to restart the MySQL server that Hive uses.
I have the following questions:
Is it the case that a mere stop and start of mysqld on the appropriate machine is enough to ensure that the required configuration changes are recognized by Ambari and the application?
I chose the 'New MySQL database' option while installing Hive via Ambari. Does this mean that restarts are reflected in Ambari only when it is carried out from the Ambari UI?
Your inputs would be highly appreciated.
Thanks!
Found a solution to the problem. I used the following commands with the Ambari REST API for changing configurations and restarting services from the backend.
Log in to the host on which the Ambari server is running and use the provided configs.sh script as described below.
Modifying configuration files
#!/bin/bash
CLUSTER_NAME=$1
CONFIG_FILE=$2
PROPERTY_NAME=$3
PROPERTY_VALUE=$4
/var/lib/ambari-server/resources/scripts/configs.sh -port <ambari-server-port> set localhost "$CLUSTER_NAME" "$CONFIG_FILE" "$PROPERTY_NAME" "$PROPERTY_VALUE"
where CONFIG_FILE can take values like tez-site, mapred-site, hadoop-site, hive-site etc. PROPERTY_NAME and PROPERTY_VALUE should be set to values relevant to the specified CONFIG_FILE.
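For example, assuming the wrapper above is saved as modify-config.sh and made executable (both assumptions), an invocation for a hypothetical Hive property could look like:
./modify-config.sh c1 hive-site hive.exec.parallel true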
Restarting host components
curl -uadmin:admin -H 'X-Requested-By: ambari' -X POST -d '
{
  "RequestInfo": {
    "command": "RESTART",
    "context": "Restart MySQL server used by Hive Metastore on node3.cluster.com and HDFS client on node1.cluster.com",
    "operation_level": {
      "level": "HOST",
      "cluster_name": "c1"
    }
  },
  "Requests/resource_filters": [
    {
      "service_name": "HIVE",
      "component_name": "MYSQL_SERVER",
      "hosts": "node3.cluster.com"
    },
    {
      "service_name": "HDFS",
      "component_name": "HDFS_CLIENT",
      "hosts": "node1.cluster.com"
    }
  ]
}' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests
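The POST above returns a request id; as a sketch, its progress can be polled with a GET on the same requests resource (the id 42 is just an example):
curl -uadmin:admin -H 'X-Requested-By: ambari' http://localhost:<ambari-server-port>/api/v1/clusters/c1/requests/42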
Reference Links:
Restarting components
Modifying configurations
Hope this helps!

Enable secure admin by script

Does anyone know how to enable secure admin from a script? The problem is that the command asadmin enable-secure-admin requires authentication on the command line, but I would like to do that from a script. I already tried saving the user/password in a temporary file and then passing it in via asadmin enable-secure-admin < auth.txt, but unfortunately I get an authentication failure for user null.
Has anyone already done this?
I discovered the solution by myself; I only had to read the command help ;-)
Usage: asadmin [-H|--host <host(default:localhost)>]
[-p|--port <port(default:4848)>] [-u|--user <user(default:admin)>]
[-W|--passwordfile <passwordfile>]
[-t|--terse[=<terse(default:false)>]]
[-s|--secure[=<secure(default:false)>]]
[-e|--echo[=<echo(default:false)>]]
[-I|--interactive[=<interactive(default:true)>]]
[-?|--help[=<help(default:false)>]
[--detach(default:false)]
[--notify(default:false)] [subcommand [options] [operands]]
Here's what I did:
asadmin --interactive=false --user admin --passwordfile /path/to/passwordfile enable-secure-admin
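For completeness, a minimal sketch of preparing the password file that --passwordfile expects (asadmin reads keys with the AS_ADMIN_ prefix; the value and path are placeholders):
cat > /path/to/passwordfile <<'EOF'
AS_ADMIN_PASSWORD=changeit
EOF
chmod 600 /path/to/passwordfile
asadmin --interactive=false --user admin --passwordfile /path/to/passwordfile enable-secure-admin
# secure admin only takes effect after a restart
asadmin --interactive=false --user admin --passwordfile /path/to/passwordfile restart-domain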