I am trying to run this command:
az repos ref create --name tags/my_tag --object-id objectid --org org_url --project abc --repository xyz
But I see this error:
ERROR: The requested resource requires user authentication: https://dev.azure.com/org/abc/_apis/git/repositories/xyz/refs
I am trying to understand which setting in my PAT is associated with this error and what permission I need to grant to get rid of it.
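For what it's worth, here is a minimal sketch of authenticating the CLI with a PAT before retrying; it assumes the PAT was created with the Code (Read & write) scope, which creating refs generally requires, and the placeholders are hypothetical:
# The Azure DevOps CLI extension reads the PAT from this environment variable
export AZURE_DEVOPS_EXT_PAT=<your_pat>
az repos ref create --name tags/my_tag --object-id <objectid> --org <org_url> --project abc --repository xyz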
As per the official documentation by OpenShift, we can get the kubeadmin password as below:
crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443'
However, I can log in successfully with developer/developer, while kubeadmin fails with "Login failed (401 Unauthorized)". I have restarted CRC multiple times and it still does not work. Any idea about this?
$ oc login -u developer -p developer https://api.crc.testing:6443
Login successful.
You have one project on this server: "demo"
Using project "demo"
$ oc login -u kubeadmin -p gALwE-jY6p9-poc9U-gRcdu https://api.crc.testing:6443
Login failed (401 Unauthorized)
Verify you have provided correct credentials.
Any input will be appreciated. Thanks in advance.
You said you restarted CRC. Have you tried deleting and recreating the cluster?
One of the first steps in productionizing a cluster is to remove the kubeadmin account - is it possible that you've done that, and "crc console --credentials" is now only displaying what the password used to be?
If you have another admin account try:
$ oc get -n kube-system secret kubeadmin
The step to remove that account (see: https://docs.openshift.com/container-platform/4.9/authentication/remove-kubeadmin.html) is simply to delete that secret. If you've done that at some point in this cluster's history, you'll either need to use your other admin accounts in place of kubeadmin, or recreate the CRC instance (crc stop; crc delete; crc setup), as sketched below.
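If the secret is gone, the full recovery sequence would look roughly like this (a sketch; crc start prints fresh kubeadmin credentials when it finishes):
# Check whether the kubeadmin secret still exists (requires another admin account)
oc get -n kube-system secret kubeadmin
# If it has been deleted, recreate the CRC instance from scratch
crc stop
crc delete
crc setup
crc start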
Just in case others are having this issue and it persists even after trying crc stop, crc delete, crc cleanup, crc setup, and crc start: I was able to sign in as kubeadmin by NOT using the following command after crc start got my CodeReady Container up and running.
eval $(crc oc-env)
Instead, I ran the crc oc-env command by itself. In this example, the output shows the path /home/john.doe/.crc/bin/oc.
~]$ crc oc-env
export PATH="/home/john.doe/.crc/bin/oc:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)
I then listed the /home/john.doe/.crc/bin/oc directory, which shows that the oc binary there is symbolically linked to the /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc file.
~]$ ll /home/john.doe/.crc/bin/oc
lrwxrwxrwx. 1 john.doe john.doe 61 Jun 8 20:27 oc -> /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc
And I was then able to sign in using the absolute path to the oc command line tool.
~]$ /home/john.doe/.crc/cache/crc_libvirt_4.10.12_amd64/oc login -u kubeadmin -p 28Fwr-Znmfb-V6ySF-zUu29 https://api.crc.testing:6443
Login successful.
I'm sure I could dig a bit more into this, checking the contents of my user's $PATH, but suffice it to say, this workaround at least lets me sign in as kubeadmin.
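For anyone who wants to dig into the $PATH angle, a quick sketch for checking which oc the shell actually resolves (type -a lists every match in $PATH order; the first one wins):
~]$ type -a oc
~]$ echo $PATH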
I need to create a PostgreSQL application in OpenShift 4 using the oc command line.
I can see that by adding the app from the Catalog, the application is correctly added.
On the other hand, by using the following command, the Pod goes into CrashLoopBackOff:
oc new-app -e POSTGRESQL_USER=postgres -e POSTGRESQL_PASSWORD=postgres -e POSTGRESQL_DATABASE=demodb postgresql
The log file contains the following error:
fixing permissions on existing directory /var/lib/postgresql/data ... initdb: error: could not change permissions of directory "/var/lib/postgresql/data": Operation not permitted
What is the correct command to start up PostgreSQL?
Thanks
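A hedged sketch, not from the thread: /var/lib/postgresql/data is the data directory of the Docker Hub postgres image, which does not tolerate the arbitrary user IDs OpenShift assigns, while the catalog entry deploys Red Hat's image via a built-in template. Assuming the default postgresql-ephemeral template in the openshift namespace is available, the command-line equivalent would be:
oc new-app postgresql-ephemeral -p POSTGRESQL_USER=postgres -p POSTGRESQL_PASSWORD=postgres -p POSTGRESQL_DATABASE=demodb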
Using: MySQL 5.7
What I want to achieve:
To save the console output of Cloud SQL as a text file.
Error:
Cloud SQL returns this:
ERROR 1045 (28000): Access denied for user 'root'@'%' (using password: YES)
Things I tried:
Logging in with no password → it asks for a password anyway, and no password, including that of the server itself, works.
Creating various users with passwords → same error.
Creating a Cloud SQL instance with skip-grant-tables, so that no permission would be required to modify the tables → Cloud SQL does not support this flag.
I tried manually flagging the database with this option, but Cloud Shell doesn't even support root login without a password.
Possible solution:
If I could run mysql -u root against Cloud SQL with no password, then it should be able to do this just fine. It seems that any user besides root cannot even log in to the instance.
Thank you in advance. Any clues / help is appreciated.
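One thing worth trying before resorting to the export route below (an assumption on my part, not something confirmed in this thread): reset the root password through the Cloud SQL Admin API, which does not require logging in to MySQL first.
# Hypothetical instance name; resets root's password for any host ('%')
gcloud sql users set-password root --host=% --instance=your_instance_id --password=your_new_password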
I believe the most trivial solution is to use the Google Cloud SDK with the following commands.
You will export the results of the query in CSV format to a Google Cloud Storage bucket, copy them from the bucket to your system, and then parse the CSV file, which is a standard procedure.
There's a how-to guide here, and you can take a look at a concrete example below:
Have some variables that will be used in multiple commands
INSTANCE_ID=your_cloud_sql_instance_id
BUCKET=gs://your_bucket_here
Create a bucket if you don't have one, choosing the location accordingly
gsutil mb -l EU -p $DEVSHELL_PROJECT_ID $BUCKET
You can read the explanation of the following commands in the documentation, but the bottom line is that you will have a CSV file on your file system at the end. Also make sure to edit the name of the DATABASE variable below, as well as the corresponding query.
gsutil acl ch -u `gcloud sql instances describe $INSTANCE_ID --format json | jq -c ".serviceAccountEmailAddress" | cut -d \" -f 2`:W $BUCKET
DATABASE=db_visit
FILENAME=$DATABASE'_'`date +%Y%m%d%H%M%S`_query.csv
gcloud beta sql export csv $INSTANCE_ID $BUCKET/$FILENAME --database=$DATABASE --query="select * from visit"
gsutil cp $BUCKET/$FILENAME .
To automate logging in through the mysql client, running subsequent queries, and capturing their output, I encourage you to research a solution along the lines of pexpect; a shell-only alternative is sketched below.
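As that shell-only alternative to pexpect (a sketch with hypothetical host and credentials): the mysql client reads the password from the MYSQL_PWD environment variable and can run a query non-interactively with -e, so no prompt automation is needed.
# Hypothetical: connect via the Cloud SQL proxy on 127.0.0.1 and save tab-separated output
export MYSQL_PWD='your_password_here'
mysql -h 127.0.0.1 -u root --batch -e 'select * from visit' db_visit > visit.tsv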
I have been developing my application from a dev sandbox and want to push the reference data from "dev" to "prod". I thought I'd succeeded by executing the following commands:
On my OSX dev machine:
cbbackup http://127.0.0.1:8091 ~/couchbase-reference-data -b reference_data -u username -p password
Again on my OSX dev machine:
cbrestore ~/couchbase-reference-data http://prod.server.com:8091/ -u username -p password
Now when I go to the admin console on production I see this:
Looks good at this point. However, if I click any of the "Edit Document" buttons, things go tragically wrong:
Any help would be GREATLY appreciated!
UPDATE:
I've noticed that now when I run the cbrestore command I get the following errors:
2013-06-03 16:53:48,295: s0 error: CBSink.connect() for send: error: SASL auth exception: aws.internal-ip.com:11210, user: reference_data
2013-06-03 16:53:48,295: s0 error: async operation: error: SASL auth exception: aws.internal-ip.com:11210, user: reference_data on sink: http://prod.server.com:8091/(reference_data#127.0.0.1:8091)
error: SASL auth exception: aws.internal-ip.com:11210, user: reference_data
This reminds me that what I think I did was copy the ~/couchbase-reference-data directory to the production environment and then run cbrestore from there. I have just done that now and get the following confirmation:
[####################] 100.0% (189/189 msgs)
bucket: reference_data, msgs transferred...
: total | last | per sec
batch : 1 | 1 | 16.1
byte : 36394 | 36394 | 585781.0
msg : 189 | 189 | 3042.1
done
After this process, however, the problem still exists in the same manner as described before.
UPDATE 2
I decided to delete, re-create, and re-import the bucket on production. All steps completed, and I still have the same error, but I'm wondering whether the LOG file has any interesting information in it:
The things that stand out as interesting to me are:
The loading time was "0 seconds" ... as much as I'd like to believe that, it may be a little too quick? It's not a ton of data, but still.
The "module code" is named 'ns_memecached001' ... is that an issue? Memcached? I did double check that I set this up as a couchbase bucket. It is.
It seems as if your destination server is not OS X but, for example, Linux. In that case you have to use the rehash extra option:
Back up your data on your dev machine (using cbbackup)
Copy the data to your prod machine
Restore the data with the -x rehash=1 flag (using cbrestore -x rehash=1), as sketched below
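Putting the three steps together with the commands from the question (a sketch; paths and credentials as above, and scp is just one way to do the copy):
# 1. On the dev machine (OS X)
cbbackup http://127.0.0.1:8091 ~/couchbase-reference-data -b reference_data -u username -p password
# 2. Copy the backup to the prod machine
scp -r ~/couchbase-reference-data user@prod.server.com:~/
# 3. On the prod machine (Linux), restore with rehashing enabled
cbrestore ~/couchbase-reference-data http://localhost:8091 -u username -p password -x rehash=1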
I'm using Ubuntu 10.04 server, and I'm trying to configure OpenLDAP for authentication for SVN and other services. However, I don't quite understand how LDAP works, and after setting up an example config I tried to populate it, without success. This is the error:
ldap_bind: Invalid credentials (49)
It seems to be a problem with the example config, more precisely with the admin configuration. However, I tried changing it to use a hashed password and got no results. Config below:
# Load modules for database type
dn: cn=module,cn=config
objectclass: olcModuleList
cn: module
olcModuleLoad: back_bdb.la
# Create directory database
dn: olcDatabase=bdb,cn=config
objectClass: olcDatabaseConfig
objectClass: olcBdbConfig
olcDatabase: bdb
# Domain name (e.g. home.local)
olcSuffix: dc=home,dc=local
# Location on system where database is stored
olcDbDirectory: /var/lib/ldap
# Manager of the database
olcRootDN: cn=admin,dc=home,dc=local
olcRootPW: admin
# Indices in database to speed up searches
olcDbIndex: uid pres,eq
olcDbIndex: cn,sn,mail pres,eq,approx,sub
olcDbIndex: objectClass eq
# Allow users to change their own password
# Allow anonymous to authenciate against the password
# Allow admin to change anyone's password
olcAccess: to attrs=userPassword
by self write
by anonymous auth
by dn.base="cn=admin,dc=home,dc=local" write
by * none
# Allow users to change their own record
# Allow anyone to read directory
olcAccess: to *
by self write
by dn.base="cn=admin,dc=home,dc=local" write
by * read
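For reference, a sketch of how a cn=config LDIF like the one above is usually loaded on Ubuntu: the local SASL EXTERNAL mechanism over ldapi authenticates as the system root user and sidesteps the admin password entirely (this assumes the LDIF is saved as db.ldif):
sudo ldapadd -Y EXTERNAL -H ldapi:/// -f db.ldif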
Have you tried to connect via CLI?
ldapsearch -x -D "cn=admin,dc=home,dc=local" -W -h <hostname>
Do check your syslog, slapd by default logs its output there.
You can also use slapcat, which must be executed locally, to check whether your database was created (slapd would break otherwise, anyway). By default it outputs the first available database; use the -n flag to extract a specific database:
slapcat -n <database number>
My bets are that you're authenticating against the wrong database.
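To see which databases slapd actually knows about, and their numbering for slapcat -n, you can query cn=config directly; a sketch, run locally as root:
sudo ldapsearch -Y EXTERNAL -H ldapi:/// -b cn=config '(olcDatabase=*)' dn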