What I have:
GNU/Linux host
nginx is up and running
there is a cron-job scheduled to run immediately after a specific file has been removed (similar to run-crons)
GitHub sends a webhook when someone pushes to a repository
What I want:
I now want to run Lua (or anything comparable) to parse GitHub's request, validate it, and then delete a file (if the request was valid, of course).
Preferably all of this should happen without the hassle of maintaining an additional PHP installation (there is currently none), or the need to use fcgiwrap or similar.
Template:
On the nginx side I have something equivalent to
location /deploy {
    # execute lua (or equivalent) here
}
To read the JSON body of the GitHub webhook you need the JSON4Lua lib, and to validate the HMAC signature you can use luacrypto.
Preconfigure
Install the required modules:
$ sudo luarocks install JSON4Lua
$ sudo luarocks install luacrypto
In nginx, define a location for deploy:
location /deploy {
    client_body_buffer_size 3M;
    client_max_body_size 3M;
    content_by_lua_file /path/to/handler.lua;
}
client_max_body_size and client_body_buffer_size should be equal, to prevent the error
request body in temp file not supported
https://github.com/openresty/lua-nginx-module/issues/521
Process the webhook
Get the request payload data and check that it is present:
ngx.req.read_body()
local data = ngx.req.get_body_data()
if not data then
    ngx.log(ngx.ERR, "failed to get request body")
    return ngx.exit(ngx.HTTP_BAD_REQUEST)
end
Verify the GitHub signature using luacrypto:
local function verify_signature (hub_sign, data)
    local sign = 'sha1=' .. crypto.hmac.digest('sha1', data, secret)
    -- this is a simple comparison; it is better to use a constant-time
    -- comparison (see the const_eq example below)
    return hub_sign == sign
end
-- validate the GH signature
local headers = ngx.req.get_headers()
if not verify_signature(headers['X-Hub-Signature'], data) then
    ngx.log(ngx.ERR, "wrong webhook signature")
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end
Parse the data as JSON and check that the push is to the master branch, which is the one to deploy:
data = json.decode(data)
-- deploy only on the master branch, e.g. branch = "refs/heads/master"
if data['ref'] ~= branch then
    ngx.say("Skip branch ", data['ref'])
    return ngx.exit(ngx.HTTP_OK)
end
If everything is correct, call the deploy function:
local function deploy ()
    -- run the deploy command
    local handle = io.popen("cd /path/to/repo && sudo -u username git pull")
    local result = handle:read("*a")
    handle:close()
    ngx.say(result)
    return ngx.exit(ngx.HTTP_OK)
end
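For orientation, here is a minimal handler.lua sketch that wires the snippets above together. The values of secret, branch, and the repository path are placeholders, and the module names are what the luarocks packages above provide; treat this as a starting point rather than a drop-in script.
-- handler.lua -- minimal sketch; secret, branch and paths are placeholders
local crypto = require "crypto"   -- luacrypto
local json = require "json"       -- JSON4Lua

local secret = "your-webhook-secret"
local branch = "refs/heads/master"

local function verify_signature (hub_sign, data)
    local sign = 'sha1=' .. crypto.hmac.digest('sha1', data, secret)
    return hub_sign == sign
end

local function deploy ()
    local handle = io.popen("cd /path/to/repo && sudo -u username git pull")
    local result = handle:read("*a")
    handle:close()
    ngx.say(result)
    return ngx.exit(ngx.HTTP_OK)
end

ngx.req.read_body()
local data = ngx.req.get_body_data()
if not data then
    ngx.log(ngx.ERR, "failed to get request body")
    return ngx.exit(ngx.HTTP_BAD_REQUEST)
end

local headers = ngx.req.get_headers()
if not verify_signature(headers['X-Hub-Signature'], data) then
    ngx.log(ngx.ERR, "wrong webhook signature")
    return ngx.exit(ngx.HTTP_FORBIDDEN)
end

data = json.decode(data)
if data['ref'] ~= branch then
    ngx.say("Skip branch ", data['ref'])
    return ngx.exit(ngx.HTTP_OK)
end

deploy()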
Example
A constant-time string comparison:
local function const_eq (a, b)
    -- constant-time string equality check
    -- allow indexing a string as str[i]
    -- (note: this replaces the global string __index metamethod)
    getmetatable('').__index = function (str, i)
        return string.sub(str, i, i)
    end
    local diff = string.len(a) == string.len(b)
    for i = 1, math.min(string.len(a), string.len(b)) do
        diff = (a[i] == b[i]) and diff
    end
    return diff
end
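With that helper in place, the signature check above can swap the plain comparison for the constant-time one (same names as in the sketch above):
local function verify_signature (hub_sign, data)
    local sign = 'sha1=' .. crypto.hmac.digest('sha1', data, secret)
    return const_eq(hub_sign, sign)
end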
A complete example of how I use it is in this GitHub gist: https://gist.github.com/Samael500/5dbdf6d55838f841a08eb7847ad1c926
This solution does not implement verification of GitHub's hook signatures and assumes you have the Lua nginx module and the cjson module installed:
location = /location {
    default_type 'text/plain';
    content_by_lua_block {
        local cjson = require "cjson.safe"
        ngx.req.read_body()
        local data = ngx.req.get_body_data()
        if data then
            local obj = cjson.decode(data)
            -- checksum checking should go here
            if (obj and obj.repository and obj.repository.full_name) == "user/reponame" then
                local file = io.open("<your file>", "w")
                if file then
                    file:close()
                    ngx.say("success")
                else
                    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                end
            else
                ngx.exit(ngx.HTTP_UNAUTHORIZED)
            end
        else
            ngx.exit(ngx.HTTP_NOT_ALLOWED)
        end
    }
}
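Note that the question's cron job fires when a specific file is removed, while the block above creates one. If deletion is the trigger you need, here is a sketch of the inner branch using os.remove instead (the "<your file>" placeholder is kept as-is):
-- os.remove returns true on success, nil plus an error message on failure
if os.remove("<your file>") then
    ngx.say("success")
else
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end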
Related
I am new to Ruby on Rails and I want to read data from a JSON file in a specified directory, but I constantly get an error in chap3 (file name):
Errno::ENOENT in TopController#chap3. No such file or directory @ rb_sysopen - links.json.
In the console, I get the message
Failed to load resource: the server responded with a status of 500 (Internal Server Error)
How can I fix that?
Code:
require "json"
class TopController < ApplicationController
def index
#message = "おはようございます!"
end
def chap3
data = File.read('links.json')
datahash = JSON.parse(data)
puts datahash.keys
end
def getName
render plain: "名前は、#{params[:name]}"
end
def database
#members = Member.all
end
end
JSON file:
{ "data": [
{"link1": "http://localhost:3000/chap3/a.html"},
{"link2": "http://localhost:3000/chap3/b.html"},
{"link3": "http://localhost:3000/chap3/c.html"},
{"link4": "http://localhost:3000/chap3/d.html"},
{"link5": "http://localhost:3000/chap3/e.html"},
{"link6": "http://localhost:3000/chap3/f.html"},
{"link7": "http://localhost:3000/chap3/g.html"}]}
I would change these two lines
data = File.read('links.json')
datahash = JSON.parse(data)
in the controller to
datahash = JSON.parse(Rails.root.join('app/controllers/links.json').read)
Note: I would consider moving this kind of configuration file into the /config folder and creating a simple Ruby class to handle it. Additionally, you might want to consider paths instead of URLs with a host because localhost:3000 might work in the development environment but in production, you will need to return non-localhost URLs anyway.
To use the content of the file in the Rails controller:
@data = File.read("#{Rails.root}/app/controllers/links.json")
I am trying to write a simple Ada (with AWS) program to post data to a server. The following curl command works and returns a valid response in JSON after a successful login:
curl -XPOST -d '{"type":"m.login.password", "user":"xxx", "password": "xxxxxxxxxx"}' "https://matrix.org/_matrix/client/r0/login"
My Ada program:
with Ada.Exceptions; use Ada.Exceptions;
with Ada.Text_Io; use Ada.Text_IO;
with AWS.Client;
with AWS.Communication.Client;
with AWS.MIME;
with AWS.Net;
with AWS.Response;
use AWS;

procedure Communicate is
   Result : Response.Data;
   Data : String := "{""type"":""m.login.password"", ""user"":""xxx"", ""password"": ""xxxxxxxxxx""}";
begin
   Result := Client.Post
     ( URL => "https://matrix.org/_matrix/client/r0/login",
       Data => Data,
       Content_Type => AWS.MIME.Application_JSON ) ;
   Put_Line ( Response.Message_Body ( Result ) ) ;
end Communicate;
An exception was raised. I can't figure out what is wrong with this code.
$ ./Communicate
raised PROGRAM_ERROR : aws-client.adb:543 finalize/adjust raised exception
To test the code, you can create an account at http://matrix.org and replace the login credentials.
Thanks.
Adrian
After a few minor changes (mostly because I don't like compiler warnings), and an adaptation to the Debian/Jessie version of AWS, I got it to work.
Here's the adapted version:
with Ada.Text_IO; use Ada.Text_IO;
with AWS.Client;
-- with AWS.MIME;
with AWS.Response;
use AWS;

procedure Communicate is
   Result : Response.Data;
   Data   : constant String :=
     "{""type"":""m.login.password"", ""user"":""xxx"", " &
     """password"": ""xxxxxxxxxx""}";
begin
   Result := Client.Post
     (URL          => "https://matrix.org/_matrix/client/r0/login",
      Data         => Data,
      Content_Type => "application/json");
      -- Content_Type => AWS.MIME.Application_JSON);
   Put_Line (Response.Message_Body (Result));
end Communicate;
Here is my project file:
with "aws";
project Communicate is
for Main use ("communicate");
package Builder is
for Default_Switches ("Ada")
use ("-m");
end Builder;
package Compiler is
for Default_Switches ("Ada")
use ("-fstack-check", -- Generate stack checking code (part of Ada)
"-gnata", -- Enable assertions (part of Ada)
"-gnato13", -- Overflow checking (part of Ada)
"-gnatf", -- Full, verbose error messages
"-gnatwa", -- All optional warnings
"-gnatVa", -- All validity checks
"-gnaty3abcdefhiklmnoOprstux", -- Style checks
"-gnatwe", -- Treat warnings as errors
"-gnat2012", -- Use Ada 2012
"-Wall", -- All GCC warnings
"-O2"); -- Optimise (level 2/3)
end Compiler;
end Communicate;
I built the program with:
% gprbuild -P communicate
gnatgcc -c -fstack-check -gnata -gnato13 -gnatf -gnatwa -gnatVa -gnaty3abcdefhiklmnoOprstux -gnatwe -gnat2012 -Wall -O2 communicate.adb
gprbind communicate.bexch
gnatbind communicate.ali
gnatgcc -c b__communicate.adb
gnatgcc communicate.o -L/usr/lib/x86_64-linux-gnu -lgnutls -lz -llber -lldap -lpthread -o communicate
%
And then tested with:
% ./communicate
{"errcode":"M_FORBIDDEN","error":"Invalid password"}
%
It looks like the problem is located in your AWS version/installation.
Problem resolved by building AWS with GnuTLS from MacPorts. Apple has deprecated OpenSSL since OS X Lion in favor of CommonCrypto, so modern macOS does not come with OpenSSL. The solution is to download and install OpenSSL or GnuTLS from MacPorts or Homebrew.
Another problem is that Apple introduced SIP (System Integrity Protection) in El Capitan. With SIP enabled, even a user with administrator rights is unable to change the contents of /usr/include, /usr/lib, etc.
MacPorts installs to /opt/local, so I made references to /opt/local/include and /opt/local/lib so that AWS can build with either OpenSSL or GnuTLS.
I am trying to connect to a MySQL server using LuaSQL via a mysql-proxy. I try to execute a simple program (db.lua):
require("luasql.mysql")
local _sqlEnv = assert(luasql.mysql())
local _con = nil
function read_auth(auth)
local host, port = string.match(proxy.backends[1].address, "(.*):(.*)")
_con = assert(_sqlEnv:connect( "db_name", "username", "password", "hostname", "3306"))
end
function disconnect_client()
assert(_con:close())
end
function read_query(packet)
local cur = con:execute("select * from t1")
myTable = {}
row = cur:fetch(myTable, "a")
print(myTable.id,myTable.user)
end
This code executes well when I run it without mysql-proxy. When I connect through mysql-proxy, the error log displays these errors:
mysql.lua:8: bad argument #1 to 'insert' (table expected, got nil)
db.lua:1: loop or previous error loading module 'luasql.mysql'
mysql.lua is a default file of LuaSQL:
---------------------------------------------------------------------
-- MySQL specific tests and configurations.
-- $Id: mysql.lua,v 1.4 2006/01/25 20:28:30 tomas Exp $
---------------------------------------------------------------------
QUERYING_STRING_TYPE_NAME = "binary(65535)"
table.insert (CUR_METHODS, "numrows")
table.insert (EXTENSIONS, numrows)
---------------------------------------------------------------------
-- Build SQL command to create the test table.
---------------------------------------------------------------------
local _define_table = define_table
function define_table (n)
    return _define_table(n) .. " TYPE = InnoDB;"
end

---------------------------------------------------------------------
-- MySQL versions 4.0.x do not implement rollback.
---------------------------------------------------------------------
local _rollback = rollback
function rollback ()
    if luasql._MYSQLVERSION and string.sub(luasql._MYSQLVERSION, 1, 3) == "4.0" then
        io.write("skipping rollback test (mysql version 4.0.x)")
        return
    else
        _rollback ()
    end
end
As stated in my previous comment, the error indicates that table.insert(CUR_METHODS, ...) is getting nil as its first argument. Since the first argument is CUR_METHODS, that object has not been defined yet. Since this happens near the top of the luasql.mysql module, my guess is that the luasql initialization was incomplete, maybe because the MySQL DLL was not found. I suspect LUA_CPATH does not find the MySQL DLL for luasql, but I'm surprised that you wouldn't get a package error, so something odd is going on. You'll have to dig into the luasql module and C file to figure out why it is not being created.
Update: alternatively, update your post to show the output of print("LUA path:", package.path) and print("LUA cpath:", package.cpath) from your mysql-proxy script, and also show the path of the folder where luasql is installed and the contents of that folder.
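A small diagnostic sketch along those lines, assuming it is placed at the top of the mysql-proxy script (pcall is used so the loader error gets printed instead of aborting the script):
print("LUA path: ", package.path)
print("LUA cpath:", package.cpath)
local ok, err = pcall(require, "luasql.mysql")
if not ok then
    print("luasql.mysql failed to load: ", err)
end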
I compiled BIND 9 from source (see below) and set up BIND 9 with the MySQL DLZ driver.
I keep getting an error about a buffer overflow when I attempt to fetch anything from the server. I've googled many times but cannot find anything on how to fix this error.
Configure options:
root@anacrusis:/opt/bind9/bind-9.9.1-P3# named -V
BIND 9.9.1-P3 built with '--prefix=/opt/bind9' '--mandir=/opt/bind9/man'
'--infodir=/opt/bind9/info' '--sysconfdir=/opt/bind9/config'
'--localstatedir=/opt/bind9/var' '--enable-threads'
'--enable-largefile' '--with-libtool' '--enable-shared'
'--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr'
'--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=yes'
'--with-dlz-bdb=no' '--with-dlz-filesystem=yes' '--with-dlz-stub=yes'
'--with-dlz-ldap=yes' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2'
'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
using OpenSSL version: OpenSSL 1.0.1 14 Mar 2012
using libxml2 version: 2.7.8
This is the error I get when I dig example.com (with debug):
Query String: select ttl, type, mx_priority, case when lower(type)='txt' then concat('"', data, '"') else data end from dns_records where zone = 'example.com' and host = '@'
17-Sep-2012 01:09:33.610 dns_rdata_fromtext: buffer-0x7f5bfca73360:1: unexpected end of input
17-Sep-2012 01:09:33.610 dns_sdlz_putrr returned error. Error code was: unexpected end of input
17-Sep-2012 01:09:33.610 Query String: select ttl, type, mx_priority, case when lower(type)='txt' then concat('"', data, '"') else data end from dns_records where zone = 'example.com' and host = '*'
17-Sep-2012 01:09:33.610 query.c:2579: fatal error:
17-Sep-2012 01:09:33.610 RUNTIME_CHECK(result == 0) failed
17-Sep-2012 01:09:33.610 exiting (due to fatal error in library)
Named.conf
options {
    directory "/opt/bind9/";
    allow-query-cache { none; };
    allow-query { any; };
    recursion no;
};
dlz "Mysql zone" {
database "mysql
{host=localhost dbname=system ssl=false user=root pass=*password*}
{select zone from dns_records where zone = '$zone$'}
{select ttl, type, mx_priority, case when lower(type)='txt' then concat('\"', data, '\"')
else data end from dns_records where zone = '$zone$' and host = '$record$'}
{}
{}
{}
{}";
};
Do you run named single-threaded (with the "-n 1" parameter)? If not, named will crash in various places when working on more than one query in parallel, since the MySQL DLZ module is not thread-safe.
Manually log into the database and run the query. See what it comes up with. The error says it got an unexpected end of input, meaning it was expecting something and never got it. So the first thing is to see if you can get it manually. Maybe mysqld isn't running. Maybe the user isn't defined, the password is wrong, or permissions are not granted on that table. These could all account for the error.
Assuming all this works, then you have two options:
1. Enable more logging in your named.conf so you have more data to work with on what's happening.
2. Remove and reinstall BIND, ensuring that all hashes match on all libraries and that all dependencies are in place.
I have gotten BIND with DLZ working on CentOS 7. I do not get the error that is affecting you.
I realize this is an older post, but I thought I would share my conf files and configure options.
I am using BIND 9.11.0.
configure
./configure --prefix=/usr --sysconfdir=/etc/bind --localstatedir=/var --mandir=/usr/share/man --infodir=/usr/share/info --enable-threads --enable-largefile --with-libtool --enable-shared --enable-static --with-openssl=/usr --with-gssapi=/usr --with-gnu-ld --with-dlz-postgres=no --with-dlz-mysql=yes --with-dlz-bdb=no --with-dlz-filesystem=yes --with-dlz-stub=yes --enable-ipv6
named.conf
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
# commented out !!!
#include "/etc/bind/named.conf.options";
#include "/etc/bind/named.conf.local";
#include "/etc/bind/named.conf.default-zones";
key "rndc-key" {
// how was key encoded
algorithm hmac-md5;
// what is the pass-phrase for the key
secret "noway";
};
#options {
#default-key "rndc-key";
#default-server 127.0.0.1;
#default-port 953;
#};
controls {
    inet * port 953 allow { any; } keys { "rndc-key"; };
    #inet * port 53 allow { any; } keys { "rndc-key"; };
};
logging {
    channel query.log {
        file "/var/log/query.log";
        // Set the severity to dynamic to see all the debug messages.
        severity dynamic;
    };
    category queries { query.log; };
};
dlz "Mysql zone" {
database "mysql
{host=172.16.254.100 port=3306 dbname=dyn_server_db user=db_user pass=db_password}
{SELECT zone FROM dyn_dns_records WHERE zone = '$zone$'}
{SELECT ttl, type, mx_priority, IF(type = 'TXT', CONCAT('\"',data,'\"'), data) AS data
FROM dyn_dns_records
WHERE zone = '$zone$' AND host = '$record$' AND type <> 'SOA' AND type <> 'NS'}
{SELECT ttl, type, data, primary_ns, resp_person, serial, refresh, retry, expire, minimum
FROM dyn_dns_records
WHERE zone = '$zone$' AND (type = 'SOA' OR type='NS')}
{SELECT ttl, type, host, mx_priority, IF(type = 'TXT', CONCAT('\"',data,'\"'), data) AS data, resp_person, serial, refresh, retry, expire, minimum
FROM dyn_dns_records
WHERE zone = '$zone$' AND type <> 'SOA' AND type <> 'NS'}
{SELECT zone FROM xfr_table where zone='$zone$' AND client = '$client$'}";
};
I have set up Jenkins, but I would like to find out what files were added/changed between the current build and the previous build. I'd like to run some long running tests depending on whether or not certain parts of the source tree were changed.
Having scoured the Internet, I can find no mention of this ability within Hudson/Jenkins, though suggestions were made to use SVN post-commit hooks. Maybe it's so simple that everyone (except me) knows how to do it!
Is this possible?
I have done it the following way. I am not sure if it is the right way, but it seems to work. You need to install the Jenkins Groovy plugin and run the following script.
import hudson.model.*;
import hudson.util.*;
import hudson.scm.*;
import hudson.plugins.accurev.*
def thr = Thread.currentThread();
def build = thr?.executable;
def changeSet = build.getChangeSet();
changeSet.getItems();
changeSet.getItems() gives you the changes. Since I use AccuRev, I did List<AccurevTransaction> accurevTransList = changeSet.getItems();.
Here, the modified list contains duplicate files/names if they were committed more than once during the current build window.
The CI server will show you the list of changes, if you are polling for changes and using SVN update. However, you seem to want to be changing the behaviour of the build depending on which files were modified. I don't think there is any out-of-the-box way to do that with Jenkins alone.
A post-commit hook is a reasonable idea. You could parameterize the job, and have your hook script launch the build with the parameter value set according to the changes committed. I'm not sure how difficult that might be for you.
However, you may want to consider splitting this into two separate jobs - one that runs on every commit, and a separate one for the long-running tests that you don't always need. Personally I prefer to keep job behaviour consistent between executions. Otherwise traceability suffers.
Another approach is to ask svn for a summarized diff between the current revision and the revision of the last successful build, fetched from the Jenkins JSON API:
echo $SVN_REVISION
svn_last_successful_build_revision=`curl $JOB_URL'lastSuccessfulBuild/api/json' | python -c 'import json,sys;obj=json.loads(sys.stdin.read());print obj["'"changeSet"'"]["'"revisions"'"][0]["'"revision"'"]'`
diff=`svn di -r$SVN_REVISION:$svn_last_successful_build_revision --summarize`
You can use the Jenkins Remote Access API to get a machine-readable description of the current build, including its full change set. The subtlety here is that if you have a 'quiet period' configured, Jenkins may batch multiple commits to the same repository into a single build, so relying on a single revision number is a bit naive.
I like to keep my Subversion post-commit hooks relatively simple and hand things off to the CI server. To do this, I use wget to trigger the build, something like this...
/usr/bin/wget --output-document "-" --timeout=2 \
https://ci.example.com/jenkins/job/JOBID/build?token=MYTOKEN
The job is then configured on the Jenkins side to execute a Python script that leverages the BUILD_URL environment variable and constructs the URL for the API from that. The URL ends up looking like this:
https://ci.example.com/jenkins/job/JOBID/BUILDID/api/json/
Here's some sample Python code that could be run inside the shell script. I've left out any error handling or HTTP authentication stuff to keep things readable here.
import os
import json
import urllib2
# Make the URL
build_url = os.environ['BUILD_URL']
api = build_url + 'api/json/'
# Call the Jenkins server and figure out what changed
f = urllib2.urlopen(api)
build = json.loads(f.read())
change_set = build['changeSet']
items = change_set['items']
touched = []
for item in items:
    touched += item['affectedPaths']
Using the Build Flow plugin and Git:
final changeSet = build.getChangeSet()
final changeSetIterator = changeSet.iterator()
while (changeSetIterator.hasNext()) {
    final gitChangeSet = changeSetIterator.next()
    for (final path : gitChangeSet.getPaths()) {
        println path.getPath()
    }
}
With Jenkins pipelines (pipeline supporting APIs plugin 2.2 or above), this solution is working for me:
def changeLogSets = currentBuild.changeSets
for (int i = 0; i < changeLogSets.size(); i++) {
    def entries = changeLogSets[i].items
    for (int j = 0; j < entries.length; j++) {
        def entry = entries[j]
        def files = new ArrayList(entry.affectedFiles)
        for (int k = 0; k < files.size(); k++) {
            def file = files[k]
            println file.path
        }
    }
}
See How to access changelogs in a pipeline job.
Through Groovy:
<!-- CHANGE SET -->
<% changeSet = build.changeSet
   if (changeSet != null) {
     hadChanges = false %>
  <h2>Changes</h2>
  <ul>
    <% changeSet.each { cs ->
         hadChanges = true
         aUser = cs.author %>
      <li>Commit <b>${cs.revision}</b> by <b><%= aUser != null ? aUser.displayName : it.author.displayName %>:</b> (${cs.msg})
        <ul>
          <% cs.affectedFiles.each { %>
            <li class="change-${it.editType.name}"><b>${it.editType.name}</b>: ${it.path}</li>
          <% } %>
        </ul>
      </li>
    <% }
       if (!hadChanges) { %>
      <li>No Changes !!</li>
    <% } %>
  </ul>
<% } %>
#!/bin/bash
set -e
job_name="whatever"
JOB_URL="http://myserver:8080/job/${job_name}/"
FILTER_PATH="path/to/folder/to/monitor"
python_func="import json, sys
obj = json.loads(sys.stdin.read())
ch_list = obj['changeSet']['items']
_list = [ j['affectedPaths'] for j in ch_list ]
for outer in _list:
for inner in outer:
print inner
"
_affected_files=`curl --silent ${JOB_URL}${BUILD_NUMBER}'/api/json' | python -c "$python_func"`
if [ -z "`echo \"$_affected_files\" | grep \"${FILTER_PATH}\"`" ]; then
echo "[INFO] no changes detected in ${FILTER_PATH}"
exit 0
else
echo "[INFO] changed files detected: "
for a_file in `echo "$_affected_files" | grep "${FILTER_PATH}"`; do
echo " $a_file"
done;
fi;
My case was slightly different - I needed a check for Git on a particular folder, so I wrote the script above based on jollychang's answer.
It can be added directly to the job's exec shell script. If no files are detected it will exit 0, i.e. SUCCESS... this way you can always trigger on check-ins to the repository, but only build when files in the folder of interest change.
But... if you want to build on demand (i.e. clicking Build Now) with the changes since the last build, you would change _affected_files to:
_affected_files=`curl --silent $JOB_URL'lastSuccessfulBuild/api/json' | python -c "$python_func"`
Note: You have to use Jenkins' own SVN client to get a change list. Doing it through a shell build step won't list the changes in the build.
It's simple, but this works for me:
$DirectoryA = "D:\Jenkins\jobs\projectName\builds" #### Jenkins directory
$firstfolder = Get-ChildItem -Path $DirectoryA | Where-Object {$_.PSIsContainer} | Sort-Object LastWriteTime -Descending | Select-Object -First 1
$DirectoryB = $DirectoryA + "\" + $firstfolder
$sVnLoGfIle = $DirectoryB + "\" + "changelog.xml"
write-host $sVnLoGfIle
I wanted to add this as a comment, but there is no way to format code in comments.
Just to prettify the code from heroin's answer:
def changedFiles = []
def changeLogSets = currentBuild.changeSets
for (entries in changeLogSets) {
    for (entry in entries) {
        for (file in entry.affectedFiles) {
            echo "Found changed file: ${file.path}"
            changedFiles += "${file.path}"
        }
    }
}
Keep in mind that in some cases the git plugin returns an empty changeSet, for example:
the first run in a newly created branch
a build started via the 'Build Now' button
Refer to https://issues.jenkins-ci.org/browse/JENKINS-26354 for more details.