Artifactory flagging artifacts as 'integration' not working so far

I am using Artifactory to store my artifacts in a generic repo (I named it 'generic-local') with a layout that I have customized, based on the maven2 layout (I believe one of the default layouts).
The layout pattern itself is unchanged:
[orgPath]/[module]/baseRev/[module]-baseRev(-[classifier]).[ext]
My versions are of the following format:
myartifact-1.0.0
myartifact-1.0.0-develop
myartifact-1.0.0-branch1234
To detect and flag release artifacts, I understand Artifactory relies on certain regexes:
the Folder Integration Revision RegExp and the File Integration Revision RegExp.
For both I have set the regexp to 'branch.*|develop.*'.
I would expect Artifactory to now flag as 'integration' any artifact matching the last two versions in my list above, but it isn't working so far.
http://myrepo.com/artifactory/api/search/versions?g=My.Applications&a=myartifact&repos=generic-local
returns
{
"results": [
{
"version": "1.0.267-branch1234",
"integration": false
},
{
"version": "1.0.266-branch1234",
"integration": false
},
{
"version": "1.0.265-branch1234",
"integration": false
}
]
}
I tested the Test Artifact Path Resolution form in Artifactory; for each of the artifacts above, it returned:
Folder Integration Revision: branch1234
File Integration Revision: branch1234
which makes me think my regex is valid and the artifacts should be seen as integration. However, the API returns false.
What am I doing wrong?

The above is now working: I can finally see artifacts flagged with integration=true.
I can use this to, for example, run 'deploy latest stable version'.
The fix was to wait. It seems Artifactory does not apply the rule right away, even for new artifacts added after the rule change. Confusing, and I wish their documentation mentioned it.
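For 'deploy latest stable version' specifically, Artifactory's layout-based latest-version search should now be usable, since (if I read the docs correctly) it ignores integration revisions by default. A hedged sketch using the names from this question:
# Returns the latest non-integration version as plain text
curl "http://myrepo.com/artifactory/api/search/latestVersion?g=My.Applications&a=myartifact&repos=generic-local"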

Related

Can't Define Services in CAS Overlay Template v. 6.4 (Docker)

Using the cas-overlay-template, I am trying to access the CAS login screen from HTTP(s)://localhost/admin:
https://localhost:8443/cas/login?service=https%3A%2F%2F0.0.0.0%2Fadmin
To do this, I am trying to define services inside /etc/cas/services/services.json:
{
"#class" : "org.apereo.cas.services.RegexRegisteredService",
"serviceId" : "^http://.*",
"name" : "http_services",
"allowed": true,
"ssoEnabled": true,
"anonymousAccess": false,
"id" : 1,
"evaluationOrder" : 1
},
{
"#class" : "org.apereo.cas.services.RegexRegisteredService",
"serviceId" : "^https://.*",
"name" : "https_services",
"allowed": true,
"ssoEnabled": true,
"anonymousAccess": false,
"id" : 2,
"evaluationOrder" : 2
}
FWIW, I've also tried to define a service file according to the pattern described here.
In /etc/config/cas.properties, I have defined the following:
cas.server.name=https://cas.example.org:8443
cas.server.prefix=${cas.server.name}/cas
cas.service-registry.json.location=classpath:/services
logging.config=file:/etc/cas/config/log4j2.xml
Finally in build.gradle, I have added the support for JSON service registry:
dependencies {
...
implementation "org.apereo.cas:cas-server-support-json-service-registry:${casServerVersion}"
}
No matter what I do, after building and running the Docker image, I always get the same thing:
INFO [org.apereo.cas.services.AbstractServicesManager] - <Loaded [0] service(s) from [JsonServiceRegistry].>
When I go to the URL, I am told
"Application Not Authorized to Use CAS".
What am I doing wrong?
Bonus question: https://cas.example.org:8443 does not work in the URL. Do I need to edit something in the docker container to get this to map onto my local machine?
-- UPDATE --
As was said in the answer, I needed to create a single, named service:
// File: /etc/cas/services/today-12345.json
{
"#class":"org.apereo.cas.services.RegexRegisteredService",
"serviceId":"^(https|http|imaps)://.*",
"name":"today",
"id" :12345
}
Regarding part 2 of Misagh's answer: based on what I'm seeing in the Dockerfile, the /etc/cas/services directory simply doesn't exist by the time ./gradlew runs, so the services aren't registered.
If I put in my cas.properties file
cas.service-registry.json.location=/etc/cas/services
I get a stacktrace that includes:
Caused by: java.io.FileNotFoundException: class path resource [etc/cas/services] cannot be resolved to URL because it does not exist
If I /bin/sh into the container, I can see the service inside of the /etc/cas/services directory.
I've been getting around this by simply copying the .json file after the Docker containers have been built:
docker cp ~/emu/cas-overlay-template/etc/cas/services/today-12345.json [CONTID]:/tmp/services
(/tmp/services because that's where the console output says it's watching for services)
-- SOLUTION --
The path had to be:
cas.service-registry.json.location=file:/etc/cas/services
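The prefix matters here: Spring resource loading treats classpath: locations as resources packaged inside the CAS web application, while file: points at the container's filesystem. A minimal sketch of the resulting cas.properties, using the values from this question:
cas.server.name=https://cas.example.org:8443
cas.server.prefix=${cas.server.name}/cas
# file: prefix => load service definitions from the container filesystem
cas.service-registry.json.location=file:/etc/cas/services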
What am I doing wrong?
Multiple things.
You have your services in /etc/cas/services/services.json as a single JSON file. That is not correct: you need one file per application. Consult the documentation for the JSON service registry.
cas.service-registry.json.location should point to the directory where such JSON files are found. You need to make sure this location in your Docker setup points to, or contains, your service definitions.
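If the directory is missing from the image (as observed above), one hedged fix is to bake the definitions into the image at build time. A minimal Dockerfile sketch, assuming the overlay's build context contains an etc/cas/services directory with one JSON file per service:
# Copy service definitions into the location cas.properties points at
COPY etc/cas/services/ /etc/cas/services/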
I know I'm a bit late, but I think the location /etc/cas/services is defined in the Apereo CAS standalone profile. If you define another Spring profile, it will look in /tmp/services. It happened to me while configuring CAS in a Docker environment and wanting to use a test profile.

BabelHelpers is not defined using Polymer

I'm using Polymer and its build process. The bundled files are generated through my polymer.json file.
I'm not explicitly using Babel; I've just seen that it's used by "paper-autocomplete".
When going to the website, I get a JS error stating that BabelHelpers is not defined.
When I use Shift+F5 (a hard refresh), it works!
When I use F5, it doesn't work
(BabelHelpers is not defined)
When running it locally, it works fine. When I deploy it to my server, I face this issue.
I'm running it as a standalone Java application, as it has a Spring backend.
The website has multiple entry points; it works fine for all the other ones.
The command:
polymer build --js-minify --css-minify --html-minify
The polymer.json file:
{
"entrypoint": "pt.html",
"builds": [{
"bundle": true,
"js": {"compile": true, "minify": true},
"css": {"minify": true},
"html": {"minify": false},
"addServiceWorker": true
}],
"shell": "resources/elements-platform.html",
"fragments": [
"resources/html/lazy-resources.html",
"resources/html/ym-dashboard.html",
"resources/html/ym-partners.html",
"resources/html/ym-favorite.html",
"resources/html/ym-agenda.html",
"resources/html/ym-todos.html",
"resources/html/ym-profile.html",
"resources/html/ym-messages.html",
"resources/html/shop-list.html",
"resources/html/shop-detail.html"
],
"sources": [
"resources/src/**/*",
"resources/css/**/*",
"resources/data/**/*",
"resources/images/**/*",
"resources/img/**/*",
"resources/js/*",
"resources/js/cal/*",
"resources/js/countdown/*",
"resources/bower.json"
],
"extraDependencies": [
"resources/bower_components/webcomponentsjs/webcomponents-lite.min.js"
]
}
I ran into the same issue a few days ago after I updated polymer-cli to the newest version. In my case, though, my application threw the same error on a local virtual host (with a self-signed certificate), while the production website was fine.
Actually, babelHelpers are injected into the file that is selected as the entrypoint in your polymer.json file. Maybe look there and check that it points to a correct, existing file.
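A quick hedged sanity check, assuming polymer-cli's default build/default output directory and the pt.html entrypoint from this question's polymer.json:
# Count occurrences of the injected helpers in the compiled entrypoint;
# 0 means the helpers never made it into the bundle.
grep -c "babelHelpers" build/default/pt.html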
There are a few existing issues on GitHub with this problem. Unfortunately there is no verified answer (the Polymer team does not really pay much attention to GitHub issues):
https://github.com/Polymer/polymer-cli/issues/787
https://github.com/Polymer/polymer-cli/issues/765
There is also the same question on Stack Overflow, with an answer:
polymer-cli - getting "Can’t find variable: babelHelpers" when I set compile to true
I too am seeing 'babelHelpers' is undefined. In my case it's coming from redux.js:
q='object'==('undefined'===typeof self?'undefined':babelHelpers
.typeof(self))&&self&&self.Object===Object&&self,r=p||q||
Function('return this')(),s=r.Symbol,t=Object.prototype,
The babelHelpers problem was also raised in Issue #606, which says that it's resolved and closed. But it, or something similar, is back.
I made a few changes, but I don't know which one solved the issue. The problem came from paper-autocomplete: when I stopped using it, I didn't have the issue anymore.
I'm still using it now, but with several changes:
I made sure I was using the generated service worker.
I stopped using the version of jQuery that was in my bower_components (3.2.1) and loaded it from the jQuery CDN (2.3.1) instead, as I've seen it cause issues sometimes.
I added manifest.json to the polymer.json file.

Composer fails update on a require from a git repository where the branch name starts with a digit and contains a period

I've forked the repository at https://github.com/laravel-doctrine/orm and I'm trying to add it as a require in a composer.json script.
My json (the relevant bits) is as follows:
"repositories": [
{
"type": "git",
"url": "https://github.com/MyGHAccount/laravel-doctrine.git"
},
.
.
.
"require": {
.
.
.
"laravel-doctrine/orm": "dev-1.2"
Composer generates the following error:
Your requirements could not be resolved to an installable set of packages.
Problem 1
- The requested package laravel-doctrine/orm could not be found in any version, there may be a typo in the package name.
Potential causes:
- A typo in the package name
- The package is not available in a stable-enough version according to your minimum-stability setting see
https://getcomposer.org/doc/04-schema.md#minimum-stability for more
details.
Read https://getcomposer.org/doc/articles/troubleshooting.md for
further common problems.
The actual branch name from https://github.com/laravel-doctrine/orm is 1.2.
This SO question leads me to believe that Composer has no problem with periods in the branch name, but can't deal with branches starting with a digit.
I have found a workaround to this in that I simply renamed my branch on GitHub to master; I just want to know if there's a proper way to do this with Composer without the workaround.
Issue 1
This doesn't make sense: https://github.com/MyGHAccount/laravel-doctrine/orm.git.
If you forked the original repository, the fork has a different URL than the one you posted here.
There are only two levels: github.com/vendor/repo.
It's possibly https://github.com/MyGHAccount/orm.git.
Issue 2
You're not using Composer's verbose mode. Please use it (-vvv) and let Composer tell you the package resolution story and its problems; you might be able to figure the issue out yourself.
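For example:
# -vvv prints the full package resolution story
composer update laravel-doctrine/orm -vvv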
Issue 3
Before - using packagist package:
composer.json:
{
"require": {
"laravel-doctrine/orm": "1.2"
}
}
After - overriding with your own fork:
composer.json:
{
"repositories": [
{
"type": "git",
"url": "https://github.com/your-account/orm"
}
],
"require": {
"laravel-doctrine/orm": "1.2"
}
}
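A further hedged note on the original dev-1.2 constraint: as far as I know, Composer only prefixes VCS branch names with dev- when they do not look like versions; a branch named 1.2 is exposed as 1.2.x-dev instead, so dev-1.2 matches nothing. Requiring the branch's dev version directly should therefore work without renaming the branch:
{
    "repositories": [
        {
            "type": "git",
            "url": "https://github.com/your-account/orm"
        }
    ],
    "require": {
        "laravel-doctrine/orm": "1.2.x-dev"
    }
}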

How to process a github webhook payload in Jenkins?

I'm currently triggering my Jenkins builds through a GitHub webhook. How would I parse the JSON payload? If I try to parameterize my build and use the $payload variable, the GitHub webhook fails with the following error:
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
<title>Error 400 This page expects a form submission</title>
</head>
<body><h2>HTTP ERROR 400</h2>
<p>Problem accessing /job/Jumph-CycleTest/build. Reason:
<pre> This page expects a form submission</pre></p><hr /><i><small>Powered by Jetty://</small></i><br/>
</body>
</html>
How can I get my GitHub webhook to work with a parameterized Jenkins build, and how could I then parse the webhook payload to use certain lines, such as the username of the committer, as conditionals in the build?
There are a few tricks to get this to work, and I found the (now defunct) chloky.com blog post to be helpful for most of it. Since it sounds like you've gotten the webhook communicating with your Jenkins instance at least, I'll skip over those steps for now. But, if you want more detail, just scroll past the end of my answer to see the content I was able to salvage from chloky.com - I do not know the original author and the information might be out of date but I did find it helpful.
So to summarize, you can do the following to deal with the payload:
Set up a string parameter called "payload" in your Jenkins job. If you are planning on manually running the build, it might be a good idea to give it a default JSON document at some point but you don't need one right now. This parameter name appears to be case-sensitive (I'm using Linux so that's no surprise...)
Set up the webhook in github to use the buildWithParameters endpoint instead of the build endpoint, i.e.
http://<<yourserver>>/job/<<yourjob>>/buildWithParameters?token=<<yourtoken>>
Configure your webhook to use application/x-www-form-urlencoded instead of application/json. The former packs the JSON data into a form variable called "payload", which is presumably how Jenkins can assign it to an environment variable. The application/json approach just POSTs raw JSON, which does not seem to be mappable to anything (I couldn't get it to work). You can see the difference by pointing your webhook to something like requestbin and inspecting the results.
At this point, you should get your $payload variable when you kick off the build. To parse the JSON, I highly recommend installing jq on your Jenkins server and try out some of the parsing syntax here. JQ is especially nice because it's cross-platform.
From here, just parse what you need from the JSON into other environment variables. Combined with conditional build steps, this could give you a lot of flexibility.
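As an illustration, here is a minimal sketch of such a build step, assuming the push-event payload shape shown later in this answer; the ci-bot check is a hypothetical condition, and jq must be installed on the Jenkins node:
#!/bin/bash
# Pull the committer's username out of the webhook payload.
committer=$(echo "$payload" | jq -r '.head_commit.committer.username')
echo "Triggered by a commit from: $committer"

# Hypothetical conditional: skip the rest of the build for bot commits.
if [ "$committer" = "ci-bot" ]; then
    echo "Skipping build steps for $committer"
    exit 0
fi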
Hope this helps!
EDIT here's what I could grab from the original blog posts at http://chloky.com/tag/jenkins/, which has been dead for a while.
Hopefully this content is also useful for someone.
Post #1 - July 2012
Github provides a nice way to fire off notifications to a CI system like jenkins whenever a commit is made against a repository. This is really useful for kicking off build jobs in jenkins to test the commits that were just made on the repo. You simply need to go to the administration section of the repository, click on service hooks on the left, click ‘webhook URLs’ at the top of the list, and then enter the URL of the webhook that jenkins is expecting (look at this jenkins plugin for setting up jenkins to receive these hooks from github).
Recently though, I was looking for a way to make a webhook fire when a pull request is made against a repo, rather than when a commit is made to the repo. This is so that we could have jenkins run a bunch of tests on the pull request, before deciding whether to merge the pull request in – useful for when you have a lot of developers working on their own forks and regularly submitting pull requests to the main repo.
It turns out that this is not as obvious as one would hope, and requires a bit of messing about with the github API.
By default, when you configure a github webhook, it is configured to only fire when a commit is made against a repo. There is no easy way to see, or change, this in the github web interface when you set up the webhook. In order to manipulate the webhook in any way, you need to use the API.
To make changes on a repo via the github API, we need to authorize ourselves. We’re going to use curl, so if we wanted to we could pass our username and password each time, like this:
# curl https://api.github.com/users/mancdaz --user 'mancdaz'
Enter host password for user 'mancdaz':
Or, and this is a much better option if you want to script any of this stuff, we can grab an oauth token and use it in subsequent requests to save having to keep entering our password. This is what we’re going to do in our example. First we need to create an oauth authorization and grab the token:
curl https://api.github.com/authorizations --user "mancdaz" \
--data '{"scopes":["repo"]}' -X POST
You will be returned something like the following:
{
"app":{
"name":"GitHub API",
"url":"http://developer.github.com/v3/oauth/#oauth-authorizations-api"
},
"token":"b2067d190ab94698a592878075d59bb13e4f5e96",
"scopes":[
"repo"
],
"created_at":"2012-07-12T12:55:26Z",
"updated_at":"2012-07-12T12:55:26Z",
"note_url":null,
"note":null,
"id":498182,
"url":"https://api.github.com/authorizations/498182"
}
Now we can use this token in subsequent requests for manipulating our github account via the API. So let’s query our repo and find the webhook we set up in the web interface earlier:
# curl https://api.github.com/repos/mancdaz/mygithubrepo/hooks?access_token=b2067d190ab94698592878075d59bb13e4f5e96
[
{
"created_at": "2012-07-12T11:18:16Z",
"updated_at": "2012-07-12T11:18:16Z",
"events": [
"push"
],
"last_response": {
"status": "unused",
"message": null,
"code": null
},
"name": "web",
"config": {
"insecure_ssl": "1",
"content_type": "form",
"url": "http://jenkins-server.chloky.com/post-hook"
},
"id": 341673,
"active": true,
"url": "https://api.github.com/repos/mancdaz/mygithubrepo/hooks/341673"
}
]
Note the important bit from that json output:
"events": [
"push"
]
This basically says that this webhook will only trigger when a commit (push) is made to the repo. The github API documentation describes numerous different event types that can be added to this list – for our purposes we want to add pull_request, and this is how we do it (note that we get the id of the webhook from the json output above. If you have multiple hooks defined, your output will contain all these hooks so be sure to get the right ID):
# curl https://api.github.com/repos/mancdaz/mygithubrepo/hooks/341673?access_token=b2067d190ab94698592878075d59bb13e4f5e96 -X PATCH --data '{"events": ["push", "pull_request"]}'
{
"created_at": "2012-07-12T11:18:16Z",
"updated_at": "2012-07-12T16:03:21Z",
"last_response": {
"status": "unused",
"message": null,
"code": null
},
"events": [
"push",
"pull_request"
],
"name": "web",
"config": {
"insecure_ssl": "1",
"content_type": "form",
"url": "http://jenkins-server.chloky.com/post-hook"
},
"id": 341673,
"active": true,
"url": "https://api.github.com/repos/mancdaz/mygithubrepo/hooks/341673"
}
See!
"events": [
"push",
"pull_request"
],
This webhook will now trigger whenever either a commit OR a pull request is made against our repo. Exactly what you do in jenkins with this webhook is up to you. We use it to kick off a bunch of integration tests in jenkins to test the proposed patch, and then actually merge and close (again using the API) the pull request automatically. Pretty sweet.
Post #2 - September 2012
In an earlier post, I talked about configuring the github webhook to fire on a pull request, rather than just a commit. As mentioned, there are many events that happen on a github repo, and as per the github documentation, a lot of these can be used to trigger the webhook.
Regardless of what event you decide to trigger on, when the webhook fires from github, it essentially makes a POST to the URL configured in the webhook, including a json payload in the body. The json payload contains various details about the event that caused the webhook to fire. An example payload that fired on a simple commit can be seen here:
payload
{
"after":"c04a2b2af96a5331bbee0f11fe12965902f5f571",
"before":"78d414a69db29cdd790659924eb9b27baac67f60",
"commits":[
{
"added":[
"afile"
],
"author":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"committer":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"distinct":true,
"id":"c04a2b2af96a5331bbee0f11fe12965902f5f571",
"message":"adding afile",
"modified":[
],
"removed":[
],
"timestamp":"2012-09-03T02:35:59-07:00",
"url":"https://github.com/mancdaz/mygithubrepo/commit/c04a2b2af96a5331bbee0f11fe12965902f5f571"
}
],
"compare":"https://github.com/mancdaz/mygithubrepo/compare/78d414a69db2...c04a2b2af96a",
"created":false,
"deleted":false,
"forced":false,
"head_commit":{
"added":[
"afile"
],
"author":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"committer":{
"email":"myemailaddress#mydomain.com",
"name":"Darren Birkett",
"username":"mancdaz"
},
"distinct":true,
"id":"c04a2b2af96a5331bbee0f11fe12965902f5f571",
"message":"adding afile",
"modified":[
],
"removed":[
],
"timestamp":"2012-09-03T02:35:59-07:00",
"url":"https://github.com/mancdaz/mygithubrepo/commit/c04a2b2af96a5331bbee0f11fe12965902f5f571"
},
"pusher":{
"email":"myemailaddress#mydomain.com",
"name":"mancdaz"
},
"ref":"refs/heads/master",
"repository":{
"created_at":"2012-07-12T04:17:51-07:00",
"description":"",
"fork":false,
"forks":1,
"has_downloads":true,
"has_issues":true,
"has_wiki":true,
"name":"mygithubrepo",
"open_issues":0,
"owner":{
"email":"myemailaddress#mydomain.com",
"name":"mancdaz"
},
"private":false,
"pushed_at":"2012-09-03T02:36:06-07:00",
"size":124,
"stargazers":1,
"url":"https://github.com/mancdaz/mygithubrepo",
"watchers":1
}
}
This entire payload gets passed in the POST request as a single parameter, with the imaginative title payload. It contains a ton of information about the event that just happened, all or any of which can be used by jenkins when we build jobs after the trigger. In order to use this payload in Jenkins, we have a couple of options. I discuss one below.
Getting the $payload
In jenkins, when creating a new build job, we have the option of specifying the names of parameters that we expect to pass to the job in the POST that triggers the build. In this case, we would pass a single parameter payload, as seen here:
Passing parameters to a jenkins build job
Further down in the job configuration, we can specify that we would like to be able to trigger the build remotely (ie. that we want to allow github to trigger the build by posting to our URL with the payload):
Then, when we set up the webhook in our github repo (as described in the first post), we give it the URL that jenkins tells us to:
You can’t see it all in the screencap, but the URL I specified for the webhook was the one that jenkins told me to:
http://jenkins-server.chloky.com:8080/job/mytestbuild/buildWithParameters?token=asecuretoken
Now, when I built my new job in jenkins, for the purposes of this test I simply told it to echo out the contents of the ‘payload’ parameter (which is available in parameterized builds as a shell variable of the same name), using a simple script:
#!/bin/bash
echo "the build worked! The payload is $payload"
Now to test the whole thing we simply have to make a commit to our repo, and then pop over to jenkins to look at the job that was triggered:
mancdaz@chloky$ (git::master) touch myfile
mancdaz@chloky$ (git::master) git add myfile
mancdaz@chloky$ (git::master) git commit -m 'added my file'
[master 4810490] added my file
0 files changed, 0 insertions(+), 0 deletions(-)
create mode 100644 myfile
mancdaz@chloky$ (git::master) git push
Counting objects: 3, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 232 bytes, done.
Total 2 (delta 1), reused 0 (delta 0)
To git@github.com:mancdaz/mygithubrepo.git
c7ecafa..4810490 master -> master
And over in our jenkins server, we can look at the console output of the job that was triggered, and lo and behold, there is our ‘payload’ contained in the $payload variable and available for us to consume.
So great, all the info about our github event is here, and fully available in our jenkins job! True enough, it’s in a big json blob, but with a bit of crafty bash you should be good to go.
Of course, this example used a simple commit to demonstrate the principles of getting at the payload inside jenkins. As we discussed in the earlier post, a commit is one of many events on a repo that can trigger a webhook. What you do inside jenkins once you’ve triggered is up to you, but the real fun comes when you start interacting with github to take further actions on the repo (post comments, merge pull requests, reject commits etc) based on the results of your build jobs that got triggered by the initial event.
Look out for a subsequent post where I tie it all together and show you how to process, run tests for, and finally merge a pull request if successful – all automatically inside jenkins. Automation is fun!
There is a Generic Webhook Trigger plugin that can contribute values from the post content to the build.
If the post content is:
{
"app":{
"name":"GitHub API",
"url":"http://developer.github.com/v3/oauth/#oauth-authorizations-api"
}
}
You can configure the trigger with post-content parameters, each one a variable name plus a JSONPath expression; for example, a variable app_name resolved from $.app.name. Then, when triggering with some post content:
curl -v -H "Content-Type: application/json" -X POST -d '{ "app":{ "name":"GitHub API", "url":"http://developer.github.com/v3/oauth/" }}' http://localhost:8080/jenkins/generic-webhook-trigger/invoke?token=sometoken
It will resolve the variables and make them available in the build job.
{
"status":"ok",
"data":{
"triggerResults":{
"free":{
"id":2,
"regexpFilterExpression":"",
"regexpFilterText":"",
"resolvedVariables":{
"app_name":"GitHub API",
"everything_app_url":"http://developer.github.com/v3/oauth/",
"everything":"{\"app\":{\"name\":\"GitHub API\",\"url\":\"http://developer.github.com/v3/oauth/\"}}",
"everything_app_name":"GitHub API"
},
"searchName":"",
"searchUrl":"",
"triggered":true,
"url":"queue/item/2/"
}
}
}
}
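In the triggered job, the resolved variables then behave like ordinary build parameters. A minimal sketch of a shell build step consuming them, using the names from the response above:
#!/bin/bash
echo "App name: $app_name"
echo "App URL: $everything_app_url"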

How to expose OpenShift environment variables in a JSON file

I have installed node-push-server. The configuration is loaded from a JSON file like this:
{
"webPort": 8000,
"mongodbUrl": "mongodb://username:password#localhost/database",
"gcm": {
"apiKey": "YOUR_API_KEY_HERE"
},
"apn": {
"connection": {
"gateway": "gateway.sandbox.push.apple.com",
"cert": "/path/to/cert.pem",
"key": "/path/to/key.pem"
},
"feedback": {
"address": "feedback.sandbox.push.apple.com",
"cert": "/path/to/cert.pem",
"key": "/path/to/key.pem",
"interval": 43200,
"batchFeedback": true
}
}
}
How can I set the environment variables for my application in this JSON file?
I don't think it's possible. You should be able to change all these settings in the code, though. For example, in Node you can read an environment variable with process.env.OPENSHIFT_VARIABLENAME.
Example for MongoDB connection string from docs:
//provide a sensible default for local development
mongodb_connection_string = 'mongodb://127.0.0.1:27017/' + db_name;
//take advantage of openshift env vars when available:
if(process.env.OPENSHIFT_MONGODB_DB_URL){
mongodb_connection_string = process.env.OPENSHIFT_MONGODB_DB_URL + db_name;
}
As an alternative, there is a quick and easy deployable gear called AeroGear Push that might serve your needs.
Config files can be awkward because including them in your source repo isn't always a good move.
OpenShift deployments are mostly git push-driven, so there are several options for helping you correctly resolve your configs on the server.
Configuring your service using ENV vars is the most common approach, but since this one requires a flat file, you'll need to find a way to update the file with the correct values.
If you know what keys and values are needed, you should be able to write a script that updates the example json, or merges two json objects to produce a flat config file including the strings node-pushserver will expect.
It looks like mongodbUrl, webPort, (and domain?) would need to be populated with OpenShift-provided values (when available). config-multipaas might be able to help with that.
I would probably implement the config bootstrapping / merging work as a build step, allowing you to prep the config file and start node-pushserver in its usual way.
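Following that suggestion, here is a minimal sketch of such a build step for OpenShift v2 gears. The config.template.json with @...@ placeholder tokens is hypothetical; OPENSHIFT_MONGODB_DB_URL and OPENSHIFT_NODEJS_PORT are standard v2 cartridge variables:
#!/bin/bash
# .openshift/action_hooks/build -- render the real config from a template,
# substituting OpenShift-provided values before the app starts.
# 'mydb' is a hypothetical database name appended to the connection URL.
sed -e "s|@MONGODB_URL@|${OPENSHIFT_MONGODB_DB_URL}mydb|" \
    -e "s|@WEB_PORT@|${OPENSHIFT_NODEJS_PORT:-8000}|" \
    config.template.json > config.json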