az login with certificate protected with password - azure-cli

I was told to use an Azure service principal that authenticates with a certificate protected by a password.
I'm trying to log in with the Azure CLI: az login --service-principal --username '$APP_ID' --tenant '$TENANT_ID' --password C:\sda-cript-dvlp.pfx
Obviously there is nowhere to supply the certificate's password, and the login fails.
Where should I supply the certificate's password?

To sign in with a certificate, it must be available locally as a PEM or DER file, in ASCII format. When using a PEM file, the PRIVATE KEY and CERTIFICATE must be appended together within the file.
You could refer to the steps below.
1. Log in with a user that has permission to create service principals, then create a service principal along with a self-signed certificate.
az ad sp create-for-rbac --name 'joyapp234' --create-cert
2. Copy the fileWithCertAndPrivateKey path from the output of step 1 and log in as below.
az login --service-principal --username '<app-id>' --tenant '<tenant-id>' --password 'C:\\Users\\joyw\\tmpbnpcixh8.pem'
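One approach, then, is to convert the password-protected PFX to an unencrypted PEM containing both the private key and the certificate, which is the layout az login accepts. A sketch: the first two openssl commands merely fabricate a demo PFX standing in for C:\sda-cript-dvlp.pfx, with 'secret' standing in for its password.

```shell
# Demo key/cert pair, standing in for the real service principal certificate:
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
    -days 1 -nodes -subj "/CN=demo"
# Package them as a password-protected PFX (as in the question):
openssl pkcs12 -export -inkey key.pem -in cert.pem -passout pass:secret -out sp.pfx
# Convert the protected PFX to an unencrypted PEM holding key + certificate:
openssl pkcs12 -in sp.pfx -passin pass:secret -nodes -out sp-cert.pem
```

The resulting sp-cert.pem (not the PFX) is what goes to --password, e.g. az login --service-principal --username "$APP_ID" --tenant "$TENANT_ID" --password sp-cert.pem.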


Login to OpenShift with kubeadmin fails: Login failed (401 Unauthorized)

As per the official OpenShift documentation, we can get the kubeadmin password as below:
crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443'
However, I can log in successfully with developer/developer, while kubeadmin fails with "Login failed (401 Unauthorized)". I have restarted CRC multiple times and it still does not work. Any idea about this?
$ oc login -u developer -p developer https://api.crc.testing:6443
Login successful.
You have one project on this server: "demo"
Using project "demo"
$ oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
Any input will be appreciated. Thanks in advance.
You said you restarted CRC. Have you tried deleting and recreating the cluster?
One of the first steps in productionizing a cluster is to remove the kubeadmin account - is it possible that you've done that and the "crc console --credentials" is now only displaying what it used to be?
If you have another admin account try:
$ oc get -n kube-system secret kubeadmin
The step to remove that account (see: https://docs.openshift.com/container-platform/4.9/authentication/remove-kubeadmin.html) is to simply delete that secret. If you've done that at some point in this cluster's history you'll either need to use your other admin accounts in place of kubeadmin, or recreate the CRC instance (crc stop; crc delete; crc setup)
In case others are having this issue and it persists even after trying crc stop, crc delete, crc cleanup, crc setup, and crc start: I was able to sign in as kubeadmin by NOT using the following command after crc start got my CodeReady Container up and running.
eval $(crc oc-env)
Instead, I issue the crc oc-env command. In this example, the output refers to /home/john.doe/.crc/bin/oc.
~]$ crc oc-env
export PATH="/home/john.doe/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)
I then list the contents of the /home/john.doe/.crc/bin/oc directory, which shows that its oc entry is symbolically linked to the /home/john.doe/.crc/cache/crc_libvirt__amd64/oc file.
~]$ ll /home/john.doe/.crc/bin/oc
lrwxrwxrwx. 1 john.doe john.doe 61 Jun 8 20:27 oc -> /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc
And I was then able to sign in using the absolute path to the oc command line tool.
~]$ /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc login -u kubeadmin -p 28Fwr-Znmfb-V6ySF-zUu29 https://api.crc.testing:6443
Login successful.
I'm sure I could dig a bit more into this by checking the contents of my user's $PATH, but suffice it to say, this at least is a workaround that lets me sign in as kubeadmin.
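As a quick diagnostic along those lines, the following shows which binary a command name actually resolves to and where any symlink points (sh is used here as a stand-in, since oc is only present on a machine with CRC installed):

```shell
# Resolve a command name through $PATH and follow symlinks.
# Replace 'sh' with 'oc' on a machine where CRC is installed.
cmd=sh
command -v "$cmd"                  # first match on $PATH
readlink -f "$(command -v "$cmd")" # fully resolved target
```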

Command string for JDBC connection to MySQL with SSL

I'm trying to connect a data modeling tool (DbSchema) to a MySQL database running in Google Cloud SQL. The cloud instance requires SSL. I've downloaded the necessary keys to my Mac and can connect through certain tools, like Sequel Pro and MySQL Workbench. However, these tools give me a way to enter the key locations into their connection windows. But, DbSchema does not - all it does is allow me to modify the connection string it uses to connect to the DB via JDBC.
What I have so far is:
jdbc:mysql://<MY IP ADDRESS>:3306?useUnicode=true&characterEncoding=UTF8&zeroDateTimeBehavior=convertToNull&useOldAliasMetadataBehavior=true&useSSL=true&verifyServerCertificate=false
This ends up giving me a password error although the PW I've used is correct. I think the problem is that JDBC isn't using the SSL keys. Is there a way to specify the locations of the SSL keys in this connection string?
The MySQL Connector/J SSL documentation may help you; see the section "Setting up Client Authentication":
Once you have the client private key and certificate files you want to
use, you need to import them into a Java keystore so that they can be
used by the Java SSL library and Connector/J. The following
instructions explain how to create the keystore file:
Convert the client key and certificate files to a PKCS #12 archive:
shell> openssl pkcs12 -export -in client-cert.pem -inkey client-key.pem \
-name "mysqlclient" -passout pass:mypassword -out client-keystore.p12
Import the client key and certificate into a Java keystore:
shell> keytool -importkeystore -srckeystore client-keystore.p12 -srcstoretype pkcs12 \
-srcstorepass mypassword -destkeystore keystore -deststoretype JKS -deststorepass mypassword
Set JDBC connection properties:
clientCertificateKeyStoreUrl=file:path_to_keystore_file
clientCertificateKeyStorePassword=mypassword
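Applied to the connection string in the question, those properties can be appended directly to the JDBC URL (the keystore path and password below are placeholders; a CA truststore can be added the same way via the trustCertificateKeyStoreUrl/trustCertificateKeyStorePassword properties):

```
jdbc:mysql://<MY IP ADDRESS>:3306?useSSL=true&clientCertificateKeyStoreUrl=file:/Users/me/keystore&clientCertificateKeyStorePassword=mypassword
```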

Kettle, JDBC, MySQL, SSL: Could not connect to database

I am trying to connect to a MySQL Database with SSL using a Client Certificate. I have created a truststore with the CA Certificate:
keytool -import -alias mysqlServerCACert -file ca.crt -keystore truststore
Then I created a keystore with my private key and my client certificate:
openssl pkcs12 -export -out bi.pfx -inkey bi.key -in bi.crt -certfile ca.crt
openssl x509 -outform DER -in bi.pem -out bi.der
keytool -import -file bi.der -keystore keystore -alias mysqlClientCertificate
I added useSSL=true and requireSSL=true to the jdbc URL and passed
-Djavax.net.ssl.keyStore=${db.keyStore}
-Djavax.net.ssl.keyStorePassword=${db.keyStore.pwd}
-Djavax.net.ssl.trustStore=${db.trustStore}
-Djavax.net.ssl.trustStorePassword=${db.keyStore.pwd}
to the kettle transformation from the surrounding job. I still get "Could not create connection to database server".
I can connect via SSL using the command line tool:
mysql --protocol=tcp -h myqlhost -P 3309 -u bi -p --ssl=on --ssl-ca=ca.crt --ssl-cert=bi.crt --ssl-key=bi.key db_name
Therefore my current guess is that there is an issue with the SSL certificates.
Is there a way to make the MySQL JDBC driver tell me more details about what went wrong?
Is my assumption wrong that Kettle parameters can be used to set system properties? How do I do that instead?
Establish Secure Connection (SSL) To AWS (RDS) Aurora / MySQL from Pentaho (PDI Kettle)
1. You need to create a new user ID and grant it SSL rights, so that this user can connect to Aurora / MySQL only over a secured connection.
GRANT USAGE ON *.* TO 'admin'@'%' REQUIRE SSL
2. Download the public RDS key (.pem file) from AWS (http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Aurora.html#Aurora.Overview.Security.SSL)
3. Downloaded file contains certificates / keys for each region.
4. Split certificates from .PEM file into different .PEM files
5. Use JDK keytool command utility to import all these PEM files into a single truststore (xyz.jks) file
a. keytool -import -alias abc1 -file abc1.pem -keystore xyz.jks
6. Configure JNDI entry for your Aurora / MySQL instance in Pentaho Properties File "data-integration\simple-jndi\jdbc.properties"
a. Sample JNDI configuration
-------------------------------------------------------------------------
RDSSecured/type=javax.sql.DataSource
RDSSecured/driver=com.mysql.jdbc.Driver
RDSSecured/user=admin
RDSSecured/password=password
RDSSecured/url=jdbc:mysql://REPLACE_WITH_RDS_ENDPOINT_HERE:3306/DATABASE_NAME?verifyServerCertificate=true&useSSL=true&requireSSL=true
-------------------------------------------------------------------------
7. Make sure you have copied the MySQL connector JAR into the "lib" directory of your Pentaho installation. Use connector version 5.1.21 or higher.
8. Create a copy of Spoon.bat / Spoon.sh based on your operating system, e.g. Spoon_With_Secured_SSL_TO_RDS.bat or Spoon_With_Secured_SSL_TO_RDS.sh
9. Now we need to pass the truststore details to Pentaho at startup, so edit the copied script and append the arguments below to the OPT variable
a. -Djavax.net.ssl.trustStore="FULL_PATH\xyz.jks"
b. -Djavax.net.ssl.trustStorePassword="YOUR_TRUSTSTORE_PASSWORD"
10. Use the new script to start Spoon hereafter to establish the secure connection
11. Open/create your Job / Transformation
12. Go to the View tab - Database Connections and create a new connection
a. Connection Type: MySQL
b. Access: JNDI
c. JNDI Name: RDSSecured
i. Same as the name used in the jdbc.properties file
13. Test the connection and you are ready…. :)
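The split-and-import steps above can be sketched as follows; the printf merely fabricates a two-certificate stand-in for the real downloaded AWS bundle.

```shell
# Stand-in for the downloaded AWS bundle (normally a .pem with several certs):
printf -- '-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n' > bundle.pem
# Split into one .pem per certificate; -z drops the empty leading piece:
csplit -z -f rds-cert- -b '%02d.pem' bundle.pem '/-----BEGIN CERTIFICATE-----/' '{*}'
ls rds-cert-*.pem
```

Each resulting file can then be imported into the single truststore with the keytool command shown in the steps above, one alias per file.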
OK, here is the solution that I have found:
The start scripts for the various Kettle tools pass parameters to the JVM by reading an environment variable "OPT". So I have set
export OPT="-Djavax.net.ssl.keyStore=/path/to/keystore -Djavax.net.ssl.keyStorePassword=private -Djavax.net.ssl.trustStore=/path/to/truststore -Djavax.net.ssl.trustStorePassword=private"
Now the MySQL JDBC Driver finds its certificates and private key and can establish the connection.

How to get password for sudo

I'm trying to use the command sudo -i -u postgres for PostgreSQL, and the Google Compute Engine VM is asking me for my password for my account (not root).
As I never issued a password, and I always login to my server via SSH key, I'm not sure what the password is, how I can reset it, or where it can be found.
Please tell me where I can get my password.
To become another non-root user on a GCE VM, first become root via password-less sudo (since that's how sudo is configured on GCE VM images):
sudo su -
and then switch to the user you want to become or run a command as another user, e.g., in your case, that's:
sudo -i -u postgres
Per https://cloud.google.com/compute/docs/instances ,
The instance creator and any users that were added using the metadata
sshKeys value are automatically administrators to the account, with
the ability to run sudo without requiring a password.
So you don't need that non-existent password -- you need to be "added using the metadata sshKeys value"! The canonical way to do that, and I quote from that same page:
$ echo user1:$(cat ~/.ssh/key1.pub) > /tmp/a
$ echo user2:$(cat ~/.ssh/key2.pub) >> /tmp/a
$ gcloud compute project-info add-metadata --metadata-from-file sshKeys=/tmp/a
or you can use the Google Developers Console for similar purposes, see https://cloud.google.com/compute/docs/console#sshkeys if you'd prefer that.
Summary
While creating the VM, specify the ssh user in the "Enter the entire key data" box.
Details
generate the ssh key pair and identify the public key:
if ssh-keygen, a file ending with ".pub"
if PuTTYgen, the text in box "Public key for pasting ..."
Notice the fields, all on one line, separated by spaces: "protocol key-blob username".
For username, you may find your Windows user name or a string like "rsa-key-20191106". You will replace that string with your choice of Linux user name.
Paste the public key info into the "Enter the entire key data" box.
Change the 3rd field to the actual user that you want to create on the VM. If, for example, "gcpuser", then:
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
create your VM. (Debian, for example)
Connect to the VM
directly from ssh or PuTTY (not browser window)
use the private key
specify the user
Notice that your public key is present:
gcpuser@instance-1:~/.ssh$ cat authorized_keys
# Added by Google
ssh-rsa AAAAB3NzaC1yc2EAAAABJQAAAQEAjUIG3Z8wKWf+TZQ7nVQzB4s8U5rKGVE8NAt/LxlUzEjJrhPI5m+8llLWYY2PH4atZzuIYvYR0CVWhZvZQzLQc33vDOjQohxV9Lg26MwSqK+bj6tsr9ZkMs2zqNbS4b2blGnr37+dnwz+FF7Es9gReqyPxL9bn5PU/+mK0zWMHoZSEfUkXBrgqKoMQTsYzbMERluByEpZm9nRJ6ypvr9gufft9MsWC2LPhEx0O9YDahgrCsL/yiQVL+3x00DO9sBOXxi8kI81Mv2Rl4JSyswh1mzGAsT1s4q6fxtlUl5Ooz6La693IjUZO/AjN8sZPh03H9WiyewowkhMfS0H06rtGQ== gcpuser
Notice that you are in group google-sudoers
gcpuser@instance-1:~/.ssh$ id
uid=1000(gcpuser) gid=1001(gcpuser) groups=1001(gcpuser),4(adm),30(dip),44(video),46(plugdev),1000(google-sudoers)
sudo to root with no password
gcpuser@instance-1:~$ sudo -i -u root
root@instance-1:~#
Notice the sudoers file:
root@instance-1:~# cat /etc/sudoers.d/google_sudoers
%google-sudoers ALL=(ALL:ALL) NOPASSWD:ALL
Conclusion
Specifying the username in "Enter the entire key data" has these results:
the user is created in the virtual machine,
the public key is added to that user's ~/.ssh/authorized_keys,
the user gets membership in a passwordless sudo group.

using mysql-proxy to manipulate login information

Is it possible to intercept and change login information within a Lua script for mysql-proxy?
For example, if a user were to hit the proxy like this:
mysql -h localhost -P 4040 -u bob -D orders -p
I would want the connection not only redirected to a backend server, but also the username/database name changed, so that the above command would be the equivalent of this:
mysql -h production.server -P 3306 -u bob_production -D bob_orders -p
It seems that I can only get auth information in the script after authentication has passed, and even if I could get it before, I don't see a way to easily inject it.
Does anyone have an idea on how this would be possible within mysql-proxy, or with some other solution?
It is possible. In the share/docs directory of the installation bundle, have a look at the tutorial script tutorial-scramble.lua, which is an example that validates a hashed password from a remote client and substitutes the authentication credentials required by the server.
The function used in the tutorial example is: read_auth()
You might also want to monitor the authentication response from the server which can be done with read_auth_result().