Failed to import extension mercurial_keyring - Rhodecode - Object has no attribute NullHandler - mercurial

Machine env: Windows 7 box with Cygwin/TortoiseHg, Linux box (RHEL).
Mercurial/Hg version 3.0.1.
I'm trying to integrate mercurial_keyring to perform operations without typing a username/password. mercurial_keyring should prompt the first time per user per repository URL, but after that it shouldn't prompt again.
Our Hg repository code recently migrated to RhodeCode.
I have the mercurial_keyring.py file available on my machine (obtained from https://pypi.python.org/pypi/mercurial_keyring and https://bitbucket.org/Mekk/mercurial_keyring/src/tip/mercurial_keyring.py).
When running hg clone or any other hg command at the command prompt (on Linux, or on Windows via Cygwin), I get the following error:
*** failed to import extension hgext.mercurial_keyring from /root/AKS/goga/mercurial_keyring.py: 'module' object has no attribute 'NullHandler'
My ~/.hgrc file looks like:
# example config (see "hg help config" for more info)
[ui]
# name and email, e.g.
# username = Jane Doe <jdoe@example.com>
username=koba <koba.loki@shenzi.com>
[extensions]
# uncomment these lines to enable some popular extensions
# (see "hg help extensions" for more info)
# pager =
# progress =
# color =
hgext.mercurial_keyring = /root/AKS/goga/mercurial_keyring.py
[paths]
default = http://hg-server.cm.shenzi.com:8082
[auth]
default1.schemes = http https
default1.prefix = hg-server:8082
default1.username = koba
default.schemes = http https
default.prefix = hg-server.cm.shenzi.com:8082
default.username = koba
default3.schemes = http https
default3.prefix = 12.112.91.112
default3.username = koba
In Cygwin, I also got another error:
*** failed to import extension hgext.mercurial_keyring from ~/MerKeyRing/mercurial_keyring.py: No module named keyring
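As an aside (not from the original post), the two errors point at different causes: logging.NullHandler only exists on Python 2.7+, so the first failure suggests hg is running against an older interpreter, while the second simply means the keyring package is not installed. A minimal diagnostic sketch, assuming it is run with the same Python that hg uses:
import logging
import sys

# NullHandler was added in Python 2.7; older interpreters produce the
# "'module' object has no attribute 'NullHandler'" import failure.
print(sys.version)
print("NullHandler available:", hasattr(logging, "NullHandler"))
try:
    import keyring  # installable via: pip install keyring
    print("keyring is importable")
except ImportError:
    print("keyring is not installed -> 'No module named keyring'")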

Related

Unable to retrieve Puppet agent SSL certificate from Puppet master

I have configured a Puppet master-agent setup (OS: Ubuntu). Both machines can ping/ssh each other and DNS is set up properly. The master is able to generate a new CA and cert, while the agent throws an error when 'puppet agent -t' is executed to request its certificate.
I received an error along with a suggested solution; I applied the suggestion and then received:
Exiting; failed to retrieve certificate and waitforcert is disabled
Kindly help in getting this one resolved.
Below is the /etc/puppet/puppet.conf (Same on Master-Agent)
#Settings in [main] are used if a more specific section does not set a value.
[main]
certname = puppetmaster01.example.com
logdir=/var/log/puppet
vardir=/var/lib/puppet
basemodulepath = /etc/puppetlabs/puppet/environments/production/modules:/opt/puppet/share/puppet/modules
ssldir=/var/lib/puppet/ssl
rundir=/var/run/puppet
factpath=$vardir/lib/facter
server = puppetmaster01.example.com
user = puppet
group = puppet
archive_files = true
archive_file_server = puppetmaster01.example.com
[master]
# This section is used by the Puppet master and Puppet cert applications.
dns_alt_names = puppet,puppet.example.com,puppetmaster01,puppetmaster01.example.com,puppetagent01,puppetagent01.example.com
certname = puppetmaster01.example.com
reports = http,puppetdb
reporturl = https://localhost:443/reports/upload
node_terminus = exec
external_nodes = /etc/puppetlabs/puppet-dashboard/external_node
ssl_client_header = SSL_CLIENT_S_DN
ssl_client_verify_header = SSL_CLIENT_VERIFY
storeconfigs_backend = puppetdb
storeconfigs = true
autosign = true
# This section is used by the Puppet agent application.
[agent]
report = true
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
graph = true
pluginsync = true
environment = production
In a puppet master/agent deployment, as described in the docs, the administrator needs to sign the client's cert on the puppet master. Have you signed the cert on your puppet master?
Depending on which version of Puppet you're on, try running:
sudo puppetserver ca sign fullnameOFhost.something.com
or:
sudo puppet cert sign <name of host>
You can look at outstanding client certs that need signing by running sudo puppet cert list or sudo puppetserver ca list, again depending on the version.

Boto3 Error: botocore.exceptions.NoCredentialsError: Unable to locate credentials

When I run the following code, I always get this error:
import uuid

import boto3

s3 = boto3.resource('s3')
bucket_name = "python-sdk-sample-%s" % uuid.uuid4()
print("Creating new bucket with name:", bucket_name)
s3.create_bucket(Bucket=bucket_name)
I have saved my credential file in
C:\Users\myname\.aws\credentials, from where Boto should read my credentials.
Is my setting wrong?
Here is the output from boto3.set_stream_logger('botocore', level='DEBUG').
2015-10-24 14:22:28,761 botocore.credentials [DEBUG] Skipping environment variable credential check because profile name was explicitly set.
2015-10-24 14:22:28,761 botocore.credentials [DEBUG] Looking for credentials via: env
2015-10-24 14:22:28,773 botocore.credentials [DEBUG] Looking for credentials via: shared-credentials-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: config-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: ec2-credentials-file
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: boto-config
2015-10-24 14:22:28,774 botocore.credentials [DEBUG] Looking for credentials via: iam-role
Try specifying the keys manually:
s3 = boto3.resource('s3',
                    aws_access_key_id=ACCESS_ID,
                    aws_secret_access_key=ACCESS_KEY)
Make sure you don't include your ACCESS_ID and ACCESS_KEY directly in the code, for security reasons.
Consider using environment configs and injecting them into the code, as suggested by @Tiger_Mike.
For Prod environments consider using rotating access keys:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey
I had the same issue and found out that the format of my ~/.aws/credentials file was wrong.
It worked with a file containing:
[default]
aws_access_key_id=XXXXXXXXXXXXXX
aws_secret_access_key=YYYYYYYYYYYYYYYYYYYYYYYYYYY
Note that there must be a profile named [default]. Some of the official documentation makes reference to a profile named [credentials], which did not work for me.
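As a quick sanity check (a minimal sketch, assuming the [default] section above is in place), a plain client should pick these credentials up with no further configuration:
import boto3

# No explicit keys: boto3 falls back to the [default] profile.
s3 = boto3.client('s3')
print([b['Name'] for b in s3.list_buckets()['Buckets']])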
If you are looking for an alternative way, try adding your credentials using the AWS CLI.
From the terminal, type:
aws configure
Then fill in your keys and region.
Make sure your ~/.aws/credentials file in Unix looks like this:
[MyProfile1]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
[MyProfile2]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
Your Python script should look like this, and it'll work:
from __future__ import print_function
import boto3
import os
os.environ['AWS_PROFILE'] = "MyProfile1"
os.environ['AWS_DEFAULT_REGION'] = "us-east-1"
ec2 = boto3.client('ec2')
# Retrieves all regions/endpoints that work with EC2
response = ec2.describe_regions()
print('Regions:', response['Regions'])
Source: https://boto3.readthedocs.io/en/latest/guide/configuration.html#interactive-configuration.
I also had the same issue; it can be solved by creating a config and a credentials file in the home directory. Below are the steps I took to solve it.
Create a config file:
touch ~/.aws/config
In that file, enter the region:
[default]
region = us-west-2
Then create the credentials file:
touch ~/.aws/credentials
and enter your credentials:
[Profile1]
aws_access_key_id = XXXXXXXXXXXXXXXXXXXX
aws_secret_access_key = YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY
After setting all of this up, here is my Python file to connect to the bucket; running it lists all the buckets.
import boto3
import os
os.environ['AWS_PROFILE'] = "Profile1"
os.environ['AWS_DEFAULT_REGION'] = "us-west-2"
s3 = boto3.client('s3', region_name='us-west-2')
print("[INFO:] Connecting to cloud")
# Retrieves the list of existing buckets
response = s3.list_buckets()
print('Buckets:', response)
You can also refer to the links below:
Amazon S3 with Python Boto3 Library
Boto 3 documentation
Boto3: Amazon S3 as Python Object Store
From the terminal, type:
aws configure
and fill in your keys and region.
After this, you can use named profiles for any environment: you can have multiple keys depending on your account, and manage multiple environments or keys this way.
import boto3
aws_session = boto3.Session(profile_name="prod")
# Create an S3 client
s3 = aws_session.client('s3')
Create an S3 client object with your credentials:
AWS_S3_CREDS = {
    "aws_access_key_id": "your access key",  # os.getenv("AWS_ACCESS_KEY")
    "aws_secret_access_key": "your aws secret key"  # os.getenv("AWS_SECRET_KEY")
}
s3_client = boto3.client('s3', **AWS_S3_CREDS)
It is always better to get credentials from the OS environment.
To set the environment variables, run the following commands in the terminal.
On Linux or macOS:
$ export AWS_ACCESS_KEY="aws_access_key"
$ export AWS_SECRET_KEY="aws_secret_key"
On Windows:
C:\> set AWS_ACCESS_KEY="aws_access_key"
C:\> set AWS_SECRET_KEY="aws_secret_key"
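A sketch of the env-driven variant of the dict above (note that AWS_ACCESS_KEY and AWS_SECRET_KEY are the names this answer chose, not the variable names boto3 reads natively):
import os

import boto3

s3_client = boto3.client(
    's3',
    aws_access_key_id=os.getenv('AWS_ACCESS_KEY'),
    aws_secret_access_key=os.getenv('AWS_SECRET_KEY'),
)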
Exporting the credentials also works. In Linux:
export AWS_SECRET_ACCESS_KEY="XXXXXXXXXXXX"
export AWS_ACCESS_KEY_ID="XXXXXXXXXXX"
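These two variables are the ones botocore reads natively, so a plain session should find them automatically. A minimal verification sketch (an aside, assuming Python is launched from the same shell that ran the exports):
import boto3

session = boto3.Session()
creds = session.get_credentials()
# creds.method reports which provider matched, e.g. 'env'
print("Credentials found via:", creds.method if creds else None)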
These instructions are for a Windows machine with a single user profile for AWS. Make sure your ~/.aws/credentials file looks like this:
[profile_name]
aws_access_key_id = yourAccessId
aws_secret_access_key = yourSecretKey
I had to set the AWS_DEFAULT_PROFILE environment variable to the profile_name found in your credentials file.
Then my Python code was able to connect, e.g.:
import boto3
# Let's use Amazon S3
s3 = boto3.resource('s3')
# Print out bucket names
for bucket in s3.buckets.all():
print(bucket.name)
I work for a large corporation and encountered this same error, but needed a different workaround. My issue was related to proxy settings: I had a proxy set up, so I needed to set my no_proxy to whitelist AWS before I was able to get everything to work. You can set it in your bash script as well if you don't want to muddy up your Python code with os settings.
Python:
import os
os.environ["NO_PROXY"] = "s3.amazonaws.com"
Bash:
export no_proxy="s3.amazonaws.com"
Edit: the above assumes the US East S3 region. For other regions, use s3.[region].amazonaws.com, where region is something like us-east-1 or us-west-2.
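For example, a hedged regional variant of the Python form above (assuming us-west-2):
import os

# Regional endpoint, per the note above.
os.environ["NO_PROXY"] = "s3.us-west-2.amazonaws.com"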
If you have multiple aws profiles in ~/.aws/credentials like...
[Profile 1]
aws_access_key_id = *******************
aws_secret_access_key = ******************************************
[Profile 2]
aws_access_key_id = *******************
aws_secret_access_key = ******************************************
Follow two steps:
Make the one you want to use the default by running export AWS_DEFAULT_PROFILE="Profile 1" in the terminal (note the quotes, since the profile name contains a space).
Make sure to run the above command in the same terminal from which you use boto3 or open your editor. Consider the following scenario:
Scenario:
If you have two terminals open, called t1 and t2, and you run the export command in t1 but open JupyterLab (or anything else) from t2, you will get the NoCredentialsError: Unable to locate credentials error.
Solution:
Run the export command in t1 and then open JupyterLab (or anything else) from the same terminal t1.
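A sketch of an alternative that sidesteps the terminal pitfall entirely (assuming the profile names from the credentials file above): pass the profile name directly to boto3 instead of relying on an exported AWS_DEFAULT_PROFILE.
import boto3

# Works regardless of which terminal launched the process.
session = boto3.Session(profile_name="Profile 1")
s3 = session.client("s3")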
In the case of MLflow, a call to mlflow.log_artifact() will raise this error if you cannot write to the AWS S3/MinIO data lake.
The cause is not having credentials set up in your Python environment (as these two env vars):
os.environ['DATA_AWS_ACCESS_KEY_ID'] = 'login'
os.environ['DATA_AWS_SECRET_ACCESS_KEY'] = 'password'
Note that you may also access MLflow artifacts directly using the MinIO client (which requires a separate connection to the data lake, apart from MLflow's connection). This client can be started like this:
import os

import minio

minio_client_mlflow = minio.Minio(
    os.environ['MLFLOW_S3_ENDPOINT_URL'].split('://')[1],
    access_key=os.environ['AWS_ACCESS_KEY_ID'],
    secret_key=os.environ['AWS_SECRET_ACCESS_KEY'],
    secure=False)
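For example (a usage sketch, assuming the client above connected successfully):
# List the buckets visible through the MinIO client created above.
for bucket in minio_client_mlflow.list_buckets():
    print(bucket.name)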
I have solved the problem like this:
aws configure
Afterwards I manually entered:
AWS Access Key ID [None]: xxxxxxxxxx
AWS Secret Access Key [None]: xxxxxxxxxx
Default region name [None]: us-east-1
Default output format [None]: just hit enter
After that, it worked for me.
Boto3 is looking for the credentials in a folder like:
C:\ProgramData\Anaconda3\envs\tensorflow\Lib\site-packages\botocore\.aws
You should save the two files, credentials and config, in this folder.
You may want to check out the general order in which boto3 searches for credentials in this link. Look under the Configuring Credentials subheading.
If you're sure you configured your AWS setup correctly, just make sure the user running the project can read from ~/.aws, or run your project as root.
I just had this problem. This is what worked for me:
pip install botocore==1.13.20
Source: https://github.com/boto/botocore/issues/1892
In the case of using AWS:
In my case I had to add the following policy to the IAM role to allow EC2 tags to be read by the EC2 instances. That eliminated the Unable to locate credentials error:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:DescribeTags",
      "Resource": "*"
    }
  ]
}
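A hedged sketch of what this enables, assuming the code runs on an EC2 instance with the role above attached, that a default region is configured, and that IMDSv1 is reachable for the instance-id lookup:
import urllib.request

import boto3

# The instance metadata service provides this instance's id.
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

ec2 = boto3.client("ec2")
# ec2:DescribeTags is exactly the action the policy grants.
tags = ec2.describe_tags(
    Filters=[{"Name": "resource-id", "Values": [instance_id]}]
)
print(tags["Tags"])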

TC7 (20939) : upgrade : mercurial : http auth : Test Connection Succeeds... but build checks fail (http auth)

Have been using EAP 7 for a couple of months; this is the 2nd upgrade.
Upgraded to build 20939 today, and now I get errors when builds try to check mercurial for changes (VCS problem: FOO Edit this VCS root>>). If I edit the VCS Root and click Test Connection, it succeeds. How do I go about debugging this issue?
I have tried re-saving the vcs root. I deleted and recreated the vcs root on one project and got the same result.
The recent entries in the teamcity-vcs log don't have domain\user:password; should they?
I now have both the teamcity and buildagent services running under my AD account. I don't remember what account the teamcity service was using before the upgrade (is that logged somewhere?).
If the vcs root is configured with an 'https://' URL and has a user/password, why don't I see the credentials in the log message (see above post)?
My user directory contains mercurial.ini / ssl cert (and was working pre-upgrade).
TeamCity hosted on Windows2k8, mercurial repo, using Active Directory credentials for authentication.
teamcity service is running as Local System
buildagent running as AD account (for builds that deploy to other machines)
newest errors:
[2012-01-11 17:12:39,578] WARN [cutor 4 {id=29}] - jetbrains.buildServer.VCS - Error while loading changes for root mercurial: https://mycompany.com/myproject {instance id=29, parent id=8}, cause: 'cmd /c hg pull https://mycompany.com/MyProject' command failed.
stderr: abort: http authorization required
older errors:
[2012-01-10 16:38:02,791] INFO [TeamCity Agent ] - jetbrains.buildServer.VCS - Patch applied for agent=computer {id=1, host=127.0.0.1:9090}, buildType=Project :: MVC3 {id=bt12}, root=mercurial: https://mycompany/myproject {instance id=12, parent id=1}, version=3775:7fc0ae5029e6
[2012-01-11 10:30:36,277] INFO [_Server_StartUp] - jetbrains.buildServer.VCS - Server-wide hg path is not set, will use path from the VCS root settings
The problem persisted after a complete uninstall/re-install.
In the VCS Root definition... I left the user/password fields blank and encoded the user:password into the 'Pull changes from' string (just like you'd do on the command line):
https://domain\user:password@hg.mycompany.com/Repo
To sort of clean up the plaintext password, I created a project-level property 'MyPassword' (type password) and used it in the connection string like this:
https://domain\user:%MyPassword%@hg.mycompany.com/Repo
Still not great, but I'm up and running and the password is not viewable by casual users.

Redmine svn repository - can't auth from remote

Here's my setup:
Ubuntu Server 11.04
Apache 2.2.17
MySQL 5.1.54
RAILS_ENV=production /usr/share/redmine/script/about
About your application's environment
Ruby version 1.8.7 (i686-linux)
RubyGems version 1.3.7
Rack version 1.1
Rails version 2.3.11
Active Record version 2.3.11
Active Resource version 2.3.11
Action Mailer version 2.3.11
Active Support version 2.3.11
Edge Rails revision unknown
Application root /usr/share/redmine
Environment production
Database adapter mysql
Database schema version 20110511000000
/etc/apache2/sites-available/default
<VirtualHost *:80>
DocumentRoot /mnt/data/vortex/Dev/Web
ErrorLog ${APACHE_LOG_DIR}/apache_error.log
CustomLog ${APACHE_LOG_DIR}/apache_access.log combined
</VirtualHost>
/etc/apache2/sites-available/redmine
DocumentRoot /mnt/data/vortex/Dev/Web/redmine
PassengerDefaultUser www-data
RailsEnv production
RailsBaseURI /redmine
ErrorLog ${APACHE_LOG_DIR}/redmine_error.log
CustomLog ${APACHE_LOG_DIR}/redmine_access.log combined
/etc/apache2/conf.d/redmine-svn.conf
PerlLoadModule Apache::Authn::Redmine
<Location /svn>
DAV svn
SVNParentPath "/mnt/data/svn"
AuthType Basic
AuthName Redmine
Require valid-user
PerlAccessHandler Apache::Authn::Redmine::access_handler
PerlAuthenHandler Apache::Authn::Redmine::authen_handler
RedmineDSN "DBI:mysql:database=redmine_default;host=localhost"
RedmineDbUser "redmine"
RedmineDbPass "***"
</Location>
/etc/cron.d/redmine
*/10 * * * * root ruby /usr/share/redmine/extra/svn/reposman.rb --redmine localhost/redmine --svn-dir /mnt/data/svn --owner www-data --url file:///mnt/data/svn --key=***
Everything in Redmine is working fine: the repositories get created by reposman and can be browsed from their project pages.
The problem arises when I try to access an svn repo from a remote PC.
If I type svn ls http://server-ip/svn/prj, it shows me the repo content without asking for a login.
With svn mkdir http://server-ip/svn/prj/dir, on the other hand, it asks for a password, but as soon as I enter it I get prompted for the login again. After the third try I get the following error:
svn: MKACTIVITY of '/svn/test1/!svn/act/25265483-dc10-4e3b-a7a5-a2e5bb84486f': authorization failed: Could not authenticate to server: rejected Basic challenge (http://192.168.1.201)
I can't figure out why authentication doesn't work.
I was expecting a login prompt for the svn ls command as well.
I also checked the sessions on the MySQL server, and I can't see any for the user 'redmine' when I try to access the repository, so it seems Apache/Redmine doesn't even try to connect to MySQL for authentication.
I followed this guide to set up svn access.
Does anyone know how to fix my problem?
Thank you.
I had the same issue. The problem is with accounts that use LDAP authentication. If you create an internal account and add it to the project as a developer, you will be able to commit.
To get Redmine, LDAP, and SVN to work together, you need to add "PerlLoadModule Authen::Simple::LDAP" to your Apache configuration, as mentioned here:
http://www.redmine.org/projects/redmine/wiki/Repositories_access_control_with_apache_mod_dav_svn_and_mod_perl#optional-LDAP-Authentication
You should have better luck on Ubuntu, but to get Authen::Simple::LDAP installed on OpenSUSE 11.3 inside our corporate firewall I had to:
get CPAN to use FTP in passive mode (http://www.netadmintools.com/art273.html)
configure CPAN and install the modules in the following order:
cpan> install Module::Implementation
cpan> install Attribute::Handlers
cpan> install Params::Validate
cpan> install Authen::Simple
cpan> install Authen::Simple::LDAP
After this it still was not working, so I started debugging. I installed tcpdump and figured out that it was not using the configured port for authentication. I modified Redmine.pm to pass a full URL instead of a hostname, and that fixed it:
# open (LEELOG, ">>/tmp/leelog");
# print LEELOG "-----------\n";
# print LEELOG "$rowldap[0]\n";
# print LEELOG "$rowldap[1]\n";
# print LEELOG "$rowldap[2]\n";
# print LEELOG "$rowldap[3]\n";
# print LEELOG "$rowldap[4]\n";
# print LEELOG "$rowldap[5]\n";
# print LEELOG "$rowldap[6]\n";
my $ldap = Authen::Simple::LDAP->new(
    host   => ($rowldap[2] eq "1" || $rowldap[2] eq "t") ? "ldaps://$rowldap[0]:$rowldap[1]" : "ldap://$rowldap[0]:$rowldap[1]",
    port   => $rowldap[1],
    basedn => $rowldap[5],
    binddn => $rowldap[3] ? $rowldap[3] : "",
    bindpw => $rowldap[4] ? $rowldap[4] : "",
    filter => "(".$rowldap[6]."=%s)"
);
This page is helpful:
http://www.rhonabwy.com/wp/2009/12/24/debugging-active-directory-ldap-authentication-in-redmine/
In my situation, I just unchecked Settings > Information > Public and it worked! Public projects don't require auth, which is why I couldn't commit.
I didn't use LDAP.

mercurial .hgrc notify hook

Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use Gmail to send an e-mail after each push and/or commit.
.hgrc
[paths]
default = ssh://www.domain.com/repo/hg
[ui]
username = intern <user@domain.com>
ssh="C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"
[extensions]
hgext.notify =
[hooks]
changegroup.notify = python:hgext.notify.hook
incoming.notify = python:hgext.notify.hook
[email]
from = user@domain.com
[smtp]
host = smtp.gmail.com
username = user@gmail.com
password = sure
port = 587
tls = true
[web]
baseurl = http://dev/...
[notify]
sources = serve push pull bundle
test = False
config = /path/to/subscription/file
template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
maxdiff = 300
Error
Incoming command failed for P/project. running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" user@domain.com "hg -R repo/hg serve --stdio""
sending hello command
sending between command
remote: FATAL ERROR: Server unexpectedly closed network connection
abort: no suitable response from remote hg!
, error code: -1
running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" user#domain.com "hg -R repo/hg serve --stdio""
sending hello command
sending between command
remote: FATAL ERROR: Server unexpectedly closed network connection
abort: no suitable response from remote hg!
Did you follow the steps detailed in "AccessingSshRepositoriesFromWindows"?
If yes, you can still try:
Plink.exe also has a -batch argument, which tells plink to run non-interactively.
Any activity that would normally require user interaction (a new host key, for instance) will cause plink to exit immediately rather than stall.
When an ssh operation fails, you can use the --debug argument to figure out what went wrong.
I believe you have to have the private key locally, and the public key goes on the target machine. It does seem strange that it would connect at all, though.
The problem may be with the push itself, not with sending email via the notify extension.
If you followed the instructions correctly, maybe you have a problem with the public and private keys.
You need to edit authorized_keys on your server, inside the .ssh folder of your user, and put the public key of your key pair in this file.
The private key of your key pair is used on the client with Pageant (the Add Key button).
I recommend using another email service instead of Gmail if you send a lot of automatic email; Gmail can put your IP on a blacklist and block the emails.