hg notify gives "interrupted" on push - mercurial

I have a TortoiseHg repository on my local machine, and I would like to send notification emails on push.
On pushing changes, all I get is the message "interrupted!". I could not figure out the exact issue. Can someone help me get more details on this error?
<< Output after push >>
% hg push http://localhost/mercurial/test_mail
pushing to http://localhost/mercurial/test_mail
searching for changes
remote: adding changesets
remote: adding manifests
remote: adding file changes
remote: added 3 changesets with 3 changes to 2 files
[command completed successfully Thu Jan 31 09:07:39 2019]
interrupted!
<<.hg/hgrc>> inside that repository
[extensions]
notify =
[hooks]
changegroup.notify = python:hgext.notify.hook
#commit.notify = python:hgext.notify.hook
[email]
from = Testing Email Notifications <mailid@company.com>
method = smtp
[smtp]
host = localhost
[notify]
sources = serve push pull bundle
test = False
template =
details: {baseurl}/{webroot}/rev/{node|short}
branches: {branches}
changeset: {rev}:{node|short}
user: {author}
date: {date|date}
description:
{desc}\n
maxdiff = 1000
[usersubs]
# key is subscriber email, value is comma-separated list of glob patterns
abc@gmail.com = *
[reposubs]
* = mailid@company.com
[web]
baseurl = http://localhost/mercurial/
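To get more detail than "interrupted!", Mercurial's --debug and --traceback flags are the usual first step (these are standard hg flags; the local clone path below is hypothetical):

```shell
# Re-run the push with full debug output and Python tracebacks:
hg push --debug --traceback http://localhost/mercurial/test_mail

# For an HTTP push the changegroup hook runs on the *server*, so its
# traceback often ends up in the web server's error log rather than
# here. Pushing to a local clone runs the hook in this process and
# prints the hook's traceback directly:
hg push --traceback /path/to/local/clone
```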

Related

Github actions repository redirect

When a repository name changes, GitHub automatically redirects the old URL to the new one.
E.g., https://github.com/nick-invision/retry redirects to https://github.com/nick-fields/retry.
But when the old name is used in an action, as below,
- name: Validate version number (domain)
  uses: nick-invision/retry@v2.6.0
  with:
    timeout_seconds: 20
    max_attempts: 15
    retry_wait_seconds: 10
    command: |
      set -x
      IP=$(curl "https://something")
      echo $IP | grep ${{ github.sha }}
this fails with a "repository not found" error.
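If the runner does not follow the rename redirect when resolving `uses:`, updating the reference to the new owner avoids depending on the redirect at all. A sketch of the same step, assuming the v2.6.0 tag still exists under the renamed repository:

```yaml
- name: Validate version number (domain)
  # new owner name from the redirect; tags usually survive a rename
  uses: nick-fields/retry@v2.6.0
  with:
    timeout_seconds: 20
    max_attempts: 15
    retry_wait_seconds: 10
    command: |
      set -x
      IP=$(curl "https://something")
      echo $IP | grep ${{ github.sha }}
```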

What does $.changes[0].toHash represent in a Bitbucket webhook when the event is a tag added?

Our Bitbucket server is configured to invoke webhooks (received by Jenkins) on push events, which include branch updates and added tags. The HTTP POST content included in this webhook is JSON describing the event. The event payloads are described here: https://confluence.atlassian.com/bitbucketserver076/event-payload-1026535078.html
(I'll use "$" to refer to the root of the received JSON)
When I perform a git push origin {my_branch}, the JSON included in the webhook gives values for $.changes[0].fromHash and $.changes[0].toHash that I can correlate to my git log.
E.g., if the received JSON is:
{
  "eventKey":"repo:refs_changed",
  "date":"2017-09-19T09:45:32+1000",
  "actor":{ ... },
  "repository":{ ... },
  "changes":[
    {
      "ref":{
        "id":"refs/heads/master",
        "displayId":"master",
        "type":"BRANCH"
      },
      "refId":"refs/heads/master",
      "fromHash":"ecddabb624f6f5ba43816f5926e580a5f680a932",
      "toHash":"178864a7d521b6f5e720b386b2c2b0ef8563e0dc",
      "type":"UPDATE"
    }
  ]
}
...then I'd be able to see {fromHash} and {toHash} in my git log, e.g.:
$ git log --oneline -n 4
178864a sit
dcbc68d dolor
ecddabb ipsum
b8bf8f0 lorem
But when I push a git tag, e.g.:
$ git tag -a 0.1.0 -m "0.1.0"
$ git push origin 0.1.0
...then {fromHash} is the obviously-invalid 0000..., but {toHash} is a not-obviously-invalid value that I cannot reconcile with anything in my git log. E.g.:
{
  "eventKey":"repo:refs_changed",
  "date":"2017-09-19T09:47:32+1000",
  "actor":{ ... },
  "repository":{ ... },
  "changes":[
    {
      "ref":{
        "id":"refs/tags/0.1.0",
        "displayId":"0.1.0",
        "type":"TAG"
      },
      "refId":"refs/tags/0.1.0",
      "fromHash":"0000000000000000000000000000000000000000",
      "toHash":"b82dd854c413d8e09aaf68c3c286f11ec6780be6",
      "type":"ADD"
    }
  ]
}
The git log output remains unchanged in my shell, so what does the {toHash} value of b82dd85... represent?
The toHash represents the SHA of the annotated tag object you created with git tag -a .... You can see both the commit id and the SHA of the tag object with git show-ref --tags -d.
In your case it should show something like this:
$ git show-ref --tags -d | grep 0.1.0
b82dd854c413d8e09aaf68c3c286f11ec6780be6 refs/tags/0.1.0
178864a7d521b6f5e720b386b2c2b0ef8563e0dc refs/tags/0.1.0^{}
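This is easy to reproduce in a scratch repository (a hypothetical demo, assuming git is installed; the tag name matches the question):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "lorem"
# -a creates an annotated tag: a separate tag *object* with its own SHA
git -c user.name=demo -c user.email=demo@example.com \
    tag -a 0.1.0 -m "0.1.0"
# -d (--dereference) prints both lines: first the tag object's SHA,
# then the peeled ^{} line with the commit it points to
git show-ref --tags -d
# ^{commit} peels the tag down to the underlying commit SHA directly
git rev-parse "0.1.0^{commit}"
```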

Problems executing an aws command via ssh in Jenkins

Good morning, how are you?
I have a problem with a command executed via ssh in my Jenkins.
Some characters now appear in the output that did not appear before, and we have not changed anything on the node.
The code we use is:
withCredentials([usernamePassword(credentialsId: 'id', passwordVariable: 'pass', usernameVariable: 'user')]) {
    def remote = [:]
    remote.name = 'id_nme'
    remote.host = 'ip_node'
    remote.user = user
    remote.password = pass
    remote.allowAnyHosts = true
    remote.timeoutSec = 300
    sshCommand remote: remote, command: "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg]"
}
When the command is launched, the job gets stuck and does not progress.
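One common cause of exactly this hang with AWS CLI v2 is its client-side pager: on a session without a TTY, aws pipes its output into less, which then waits forever for input. A sketch of the same step with the pager disabled (--no-cli-pager is a real CLI v2 global option; everything else is copied from the question, with the [name_asg] placeholder left as-is):

```groovy
withCredentials([usernamePassword(credentialsId: 'id', passwordVariable: 'pass', usernameVariable: 'user')]) {
    def remote = [:]
    remote.name = 'id_nme'
    remote.host = 'ip_node'
    remote.user = user
    remote.password = pass
    remote.allowAnyHosts = true
    remote.timeoutSec = 300
    // --no-cli-pager stops aws from invoking less on a tty-less session;
    // exporting AWS_PAGER="" before the command has the same effect
    sshCommand remote: remote, command: 'aws autoscaling describe-auto-scaling-groups --auto-scaling-group-name [name_asg] --no-cli-pager'
}
```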

Sidekiq server is not processing scheduled jobs when started using systemd

I have a Cuba application which I want to use Sidekiq with.
This is how I set up the config.ru:
require './app'
require 'yaml'
require 'sidekiq'
require 'sidekiq/web'

environment = ENV['RACK_ENV'] || "development"
config_vars = YAML.load_file("./config.yml")[environment]

Sidekiq.configure_client do |config|
  config.redis = { :url => config_vars["redis_uri"] }
end

Sidekiq.configure_server do |config|
  config.redis = { url: config_vars["redis_uri"] }
  config.average_scheduled_poll_interval = 5
end

# run Cuba
run Rack::URLMap.new('/' => Cuba, '/sidekiq' => Sidekiq::Web)
I started Sidekiq using systemd. This is the systemd unit file, which I adapted from the sidekiq.service example on the Sidekiq site:
#
# systemd unit file for CentOS 7, Ubuntu 15.04
#
# Customize this file based on your bundler location, app directory, etc.
# Put this in /usr/lib/systemd/system (CentOS) or /lib/systemd/system (Ubuntu).
# Run:
# - systemctl enable sidekiq
# - systemctl {start,stop,restart} sidekiq
#
# This file corresponds to a single Sidekiq process. Add multiple copies
# to run multiple processes (sidekiq-1, sidekiq-2, etc).
#
# See Inspeqtor's Systemd wiki page for more detail about Systemd:
# https://github.com/mperham/inspeqtor/wiki/Systemd
#
[Unit]
Description=sidekiq
# start us only once the network and logging subsystems are available,
# consider adding redis-server.service if Redis is local and systemd-managed.
After=syslog.target network.target
# See these pages for lots of options:
# http://0pointer.de/public/systemd-man/systemd.service.html
# http://0pointer.de/public/systemd-man/systemd.exec.html
[Service]
Type=simple
Environment=RACK_ENV=development
WorkingDirectory=/media/temp/bandmanage/repos/fall_prediction_verification
# If you use rbenv:
#ExecStart=/bin/bash -lc 'pwd && bundle exec sidekiq -e production'
ExecStart=/home/froy001/.rvm/wrappers/fall_prediction/bundle exec "sidekiq -r app.rb -L log/sidekiq.log -e development"
# If you use the system's ruby:
#ExecStart=/usr/local/bin/bundle exec sidekiq -e production
User=root
Group=root
UMask=0002
# if we crash, restart
RestartSec=1
Restart=on-failure
# output goes to /var/log/syslog
StandardOutput=syslog
StandardError=syslog
# This will default to "bundler" if we don't specify it
SyslogIdentifier=sidekiq
[Install]
WantedBy=multi-user.target
The code calling the worker is:
raw_msg = JSON.parse(req.body.read, {:symbolize_names => true})
if raw_msg
  ts = raw_msg[:ts]
  waiting_period = (1000*60*3) # wait 3 min before checking
  perform_at_time = Time.at((ts + waiting_period)/1000).utc
  FallVerificationWorker.perform_at((0.5).minute.from_now, raw_msg)
  my_res = { result: "success", status: 200}.to_json
  res.status = 200
  res.write my_res
else
  my_res = { result: "not found", status: 404}.to_json
  res.status = 404
  res.write my_res
end
I am only using the default queue.
My problem is that the job is not being processed at all.
After you run systemctl enable sidekiq (so it starts at boot) and systemctl start sidekiq (so it starts immediately), you should have some logs to review which will provide detail about any failure to start:
sudo journalctl -u sidekiq
Review the logs, review the systemd docs, and adjust your unit file as needed. You can find all the installed systemd documentation with apropos systemd. Some of the most useful man pages to review are systemd.service, systemd.exec and systemd.unit.
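In this particular unit, one thing worth checking against the journal is the ExecStart line: as written, systemd hands bundle exec the whole quoted string "sidekiq -r app.rb ..." as a single argument, and -r app.rb is a relative require path. A sketch of an unambiguous form (paths copied from the question; assuming the rvm wrapper directory really contains bundle):

```ini
[Service]
# each argument passed separately, and ./ so the file require resolves
ExecStart=/home/froy001/.rvm/wrappers/fall_prediction/bundle exec sidekiq -r ./app.rb -L log/sidekiq.log -e development
```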

My GitLab build on DigitalOcean cannot send mail to new users

I tried to configure SMTP in my GitLab instance (following this guideline), but I can't get it working.
gitlab.rb
gitlab_rails['gitlab_email_from'] = "admin@example.com"
gitlab_rails['gitlab_support_email'] = "admin@example.com"
#nginx['redirect_http_to_https'] = false
#nginx['ssl_certificate'] = "/etc/gitlab/ssl/gitlab.crt"
#nginx['ssl_certificate_key'] = "/etc/gitlab/ssl/gitlab.key"
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = 'smtp.exmail.qq.com'
gitlab_rails['smtp_port'] = 25
gitlab_rails['smtp_user_name'] = 'admin@example.com'
gitlab_rails['smtp_password'] = 'has been removed'
gitlab_rails['smtp_domain'] = 'smtp.qq.com'
gitlab_rails['smtp_authentication'] = :plain
gitlab_rails['smtp_enable_starttls_auto'] = true
production.log
Sent mail to i@example.com (8017.5ms)
mail.log
May 9 09:02:14 nday postfix/smtp[27203]: B16EF12019C: to=<i@example.com>, relay=none, delay=1049, delays=1017/0.04/32/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=mxbiz2.qq.com type=AAAA: Host not found, try again)
May 9 09:02:14 nday postfix/smtp[27202]: 40274120CA7: to=<i@example.com>, relay=none, delay=988, delays=955/0.04/32/0, dsn=4.4.3, status=deferred (Host or domain name not found. Name service error for name=mxbiz2.qq.com type=AAAA: Host not found, try again)
BTW: I have since changed the DNS and refreshed. mail.log hasn't logged that operation yet; the entries above are old.
Is your account new on DigitalOcean?
If yes, you need to ask them to unlock the sendmail functionality.
This unlock is per account, not per droplet.
After the unlock, you will be able to use sendmail in all other droplets you create.
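Note also that the status=deferred lines above are a DNS failure, not an SMTP rejection: Postfix could not resolve an AAAA (IPv6) record for the qq.com relay. The lookups can be reproduced on the droplet independently of GitLab (a diagnostic sketch; dig ships with the dnsutils/bind-utils package, and the host name is taken from the log):

```shell
dig +short mx qq.com            # which relays qq.com advertises
dig +short aaaa mxbiz2.qq.com   # the IPv6 lookup Postfix reported as failing
dig +short a mxbiz2.qq.com      # compare against the IPv4 record
```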