Jenkins "No files included in stash" exception

I am using the stash command in a Groovy pipeline script, and I am getting:
Caught: hudson.AbortException: No files included in stash
However, the log lines before the exception say:
Stashed 1 file(s)
[Pipeline] stash
Stashed 1 file(s)
Can you please advise?

Based on your log, I'm guessing you're doing more than one stash. Perhaps one of them doesn't include any files; in that case you need allowEmpty: true:
stash allowEmpty: true, includes: 'foo', name: 'bar'

I had this problem. My mistake was specifying multiple files by name without a comma separator. The correct way is:
stash includes: "a.bin,a.log", name: "<name>"
Please use "Pipeline Syntax" link to generate command and read description of fields you want to use.


SaltStack - Unable to check if file exists on minion

I am trying to check whether a particular file exists on a CentOS host using SaltStack.
create:
  cmd.run:
    - name: touch /tmp/filex

{% set output = salt['cmd.run']("ls /tmp/filex") %}

output:
  cmd.run:
    - name: "echo {{ output }}"
Even if the file exists, I get the error below:
ls: cannot access /tmp/filex: No such file or directory
I see that you already accepted an answer that talks about Jinja being rendered first, which is true, but I wanted to add that you don't have to use cmd.run to check for the file; there is a state built into Salt for this.
file.exists will check for a file or directory's existence in a stateful way.
One of the guiding ideas with Salt is that you should look for ways to get away from cmd.run whenever you can.
create:
  file.managed:
    - name: /tmp/filex

check_file:
  file.exists:
    - name: /tmp/filex
    - require:
      - file: create
In SaltStack, Jinja is evaluated before the YAML is rendered, and the file creation (cmd.run) is executed only after that. So your Jinja variable is empty because the file hasn't been created yet.
See https://docs.saltproject.io/en/latest/topics/jinja/index.html
Jinja statements such as your set output line are evaluated when the SLS file is rendered, before any of the states in it are executed. It's not seeing the file because the file hasn't been created yet.
Moving the check to the state definition should fix it:
output:
  cmd.run:
    - name: ls /tmp/filex
    # if your underlying intent is to ensure something runs only
    # once the file exists, you can enforce that here
    - require:
      - cmd: create
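A different way to handle this at run time rather than render time (this is not from the answers above, and the state ID report_file is just an example) is the onlyif argument, which makes a state run only when the given shell command succeeds:

report_file:
  cmd.run:
    - name: echo "/tmp/filex exists"
    # only run this state if the shell test finds the file
    - onlyif: test -f /tmp/filex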

Setting Jenkins build name from package.json version value

I want to include the value of the "version" parameter in package.json as part of the Jenkins build name.
I'm using the Jenkins Build Name Setter plugin - https://wiki.jenkins-ci.org/display/JENKINS/Build+Name+Setter+Plugin
So far I've tried to use PROPFILE syntax in the "Build name macro template" step:
${PROPFILE,file="./mainline/projectDirectory/package.json",property="\"version\""}
This successfully creates a build, but includes the quotes and comma surrounding the value of the version property in package.json, for example:
"0.0.1",
I want just the value inside returned, so it reads
0.0.1
How can I do this? Is there a different plugin that would work better for parsing package.json and getting it into the template, or should I resort to some sort of regex for removing the characters I don't want?
UPDATE:
I tried using token transforms based on reading the Token Macro Plugin documentation, but it's not working:
${PROPFILE%\"\,#\",file="./mainline/projectDirectory/package.json",property="\"version\""}
still just returns
However, using only one escaped character and only one of # or % does work; no other combinations I tried work.
${PROPFILE%\,,file="./mainline/projectDirectory/package.json",property="\"version\""}
which returns "0.0.1" (comma removed)
${PROPFILE#\"%\"\,,file="./mainline/projectDirectory/package.json",property="\"version\""}
which returns "0.0.1", (no characters removed)
UPDATE:
I tried using the new Jenkins Token Macro plugin's JSON macro, with no luck.
Jenkins Build Name Setter set to update the build name with Macro:
${JSON,file="./mainline/pathToFiles/package.json",path="version"}-${P4_CHANGELIST}
Jenkins build logs for this job show:
10:57:55 Evaluated macro: 'Error processing tokens: Error while parsing action 'Text/ZeroOrMore/FirstOf/Token/DelimitedToken/DelimitedToken_Action3' at input position (line 1, pos 74):
10:57:55 ${JSON,file="./mainline/pathToFiles/package.json",path="version"}-334319
10:57:55 ^
10:57:55
10:57:55 java.io.IOException: Unable to serialize org.jenkinsci.plugins.tokenmacro.impl.JsonFileMacro$ReadJSON#2707de37'
I implemented a new JSON macro in token-macro 2.1; it takes a file and a path (the key hierarchy in the JSON leading to the value you want). Note that you can only use a single transform per macro usage.
Try the token transformations # and % (see the Token Macro Plugin):
${PROPFILE#"%",file="./mainline/projectDirectory/package.json",property="\"version\""}
(This will only help if you are using Pipelines. But for what it's worth...)
What works for me is a combination of readJSON from the Pipeline Utility Steps plugin and directly setting currentBuild.displayName, thusly:
script {
    // readJSON from "Pipeline Utility Steps"
    def packageJson = readJSON file: 'package.json'
    def version = packageJson.version
    echo "Setting build version: ${packageJson.version}"
    currentBuild.displayName = env.BUILD_NUMBER + " - " + packageJson.version
    // currentBuild.description = "other cool stuff"
}
Omitting error handling etc obvs.

Composer: "does not contain valid JSON"

When I run the composer search 'tokens' command, the IDE throws this error. I can neither search nor download packages from packagist.org:
C:\ProgramData\ComposerSetup\bin\composer.bat search fosuserbundle
[Seld\JsonLint\ParsingException]
"http://packagist.org/packages.json" does not contain valid JSON
Parse error on line 1:
<HTML><HEAD><meta h
^
Expected one of: 'STRING', 'NUMBER', 'NULL', 'TRUE', 'FALSE', '{', '['
search [-N|--only-name] tokens1 ... [tokensN]
On a Windows machine, I followed these steps and got back to correct behaviour (my problem was happening with composer require, but I believe it's the same issue as the composer search you describe, or as would happen with composer install, for example). The steps (a command-line sketch follows below):
1. Update Composer (composer self-update)
2. Disable IPv6 (as pointed out in the official docs, a misconfigured IPv6 setup is a common source of issues)
3. Delete (or rename to repo_temp) the folder %LOCALAPPDATA%\Composer\repo (so that all cached contents get refreshed)
4. Delete (or rename to vendor_temp) the vendor folder inside your project (to force Composer to download all components again, as pointed out in this thread's comments)
After doing these steps, in my case, the issue was gone!
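As a rough Windows command-line sketch of steps 1, 3 and 4 above (the project path and the *_temp names are just examples; step 2, disabling IPv6, is an OS-level setting done outside Composer):

REM 1. update Composer itself
composer self-update
REM 3. rename the Composer repo cache so it gets rebuilt
ren "%LOCALAPPDATA%\Composer\repo" repo_temp
REM 4. rename the project's vendor folder so packages are downloaded again
cd C:\path\to\your\project
ren vendor vendor_temp
REM then re-run the failing command (composer require / install / search)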
Message before (ERROR):
> composer require ...
...
"https://packagist.org/packages.json" does not contain valid JSON
Parse error on line 1:
▼\\\\\\♥��ݎ♀���
^
Expected one of: 'STRING', 'NUMBER', 'NULL', 'TRUE', 'FALSE', '{', '['
https://packagist.org could not be fully loaded, package information was loaded from the local cache and may be out of date
Message after doing the steps (OK):
> composer require ...
...
Writing lock file
Generating autoload files

Labels on Nodes and Relationships from a CSV file

I have a problem when I want to add a label to a Node or a Relationship.
I do this in Neo4j with Cypher:
LOAD CSV WITH HEADERS FROM "file:c:/Users/Test/test.csv" AS line
CREATE (n:line.FROM)
and I get this error:
Invalid input '.': expected an identifier character, whitespace, NodeLabel, a property map, ')' or a relationship pattern (line 2, column 15 (offset: 99))
"CREATE (n:line.FROM)"
If there is no way of doing this with the Cypher language, can you recommend another way to do the job?
It is very important for me to find a solution to this problem, whether in Cypher or in Java.
It depends on how dynamic you need it to be. For small variability:
LOAD CSV WITH HEADERS FROM "file:c:/Users/Test/test.csv" AS line
WITH line WHERE line.FROM = "Foo"
CREATE (n:Foo)
From Java you can use node.addLabel(DynamicLabel.label(line.from))
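As a minimal sketch of that embedded-API approach (the database path and the label value are made up; on Neo4j 3.x and later, DynamicLabel.label() is replaced by Label.label()):

import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;

public class DynamicLabelExample {
    public static void main(String[] args) {
        // open (or create) an embedded Neo4j 2.x database; the path is an example
        GraphDatabaseService db = new GraphDatabaseFactory()
                .newEmbeddedDatabase("C:/Users/Test/testdb");
        String labelFromCsv = "Foo"; // e.g. the value of the FROM column
        try (Transaction tx = db.beginTx()) {
            Node node = db.createNode();
            node.addLabel(DynamicLabel.label(labelFromCsv));
            tx.success();
        }
        db.shutdown();
    }
}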
Otherwise you can look into my neo4j-shell-tools, which allow dynamic labels and rel-types with #{FROM}.
see: https://github.com/jexp/neo4j-shell-tools#cypher-import
Thank you all for your answers, but none of them helped me solve my problem.
I found a solution that does exactly what I wanted: the Neo4jImport tool (see the import tool section of the official manual), not Cypher or Java.
So here is an example of what I did, which worked for me.
The test.csv file contains the headers "PropertyTest" and ":LABEL". The import first creates one node with the label "TEST" and then adds the "proptest" property to that node. So to add a label to your node you use the :LABEL header, and to add a property to the same node you use any header name you want in the .csv file.
Example of test.csv file:
PropertyTest,:LABEL
proptest,TEST
On Windows I used the Neo4jImport.bat command as described in the Neo4j manual. You can find Neo4jImport.bat at "C:\Program Files\Neo4j Community\bin" and run it from the command line (cmd).
In detail, I opened cmd, navigated to Neo4jImport.bat, and finally ran:
Neo4jImport.bat --into path-to-save-your-neo4j-database --nodes path-to-your-csv\test.csv
--delimiter ","
The default delimiter of Neo4jImport is "," but you can change it. For example, if the data in your .csv file is separated by tabs, you can do the following:
Neo4jImport.bat --into path-to-save-your-neo4j-database --nodes path-to-your-csv\test.csv
--delimiter "TAB"
That is how I dynamically loaded a whole model of almost 2,000 nodes with different labels and properties.
Keep in mind (from the manual) that you can add as many labels and as many properties as you want to a node by adding more headers to your CSV.
Example of two labels on a node:
PropertyTest,:LABEL,:LABEL
proptest,TEST,SECOND_LABEL
Example of Neo4jImport.bat for two labels and a comma-separated CSV file:
Neo4jImport.bat --into path-to-save-your-neo4j-database --nodes path-to-your-csv\test.csv
--delimiter ","
I hope you find this useful for this particular problem of labels from .csv files, and please read the official manual; it helped me a lot in finding a solution to my problem.
Below is a way to do it with two CSV files, MIP_nodes.csv and MIP_edges.csv:
//Load csv data into the database - with dynamic label(s)
WITH "file:///MIP_nodes.csv" AS uri
LOAD CSV WITH HEADERS FROM uri AS row
WITH * WHERE row.label <> ""
call apoc.merge.node ([row.label],{nodeId:row.nodeId, name: row.name, type: row.type, created: row.created, property1: row.property1, property2: row.property2})
YIELD node as n1
//RETURN n1
WITH * WHERE row.label = ""
call apoc.merge.node (['DefaultNode'],{nodeId:row.nodeId, name: row.name, type: row.type, created: row.created, property1: row.property1, property2: row.property2})
YIELD node as n2
RETURN n1, n2
//Load csv data into the database - with dynamic relationship(s)
//:auto USING PERIODIC COMMIT 500
LOAD CSV WITH HEADERS FROM 'file:///MIP_edges.csv' AS row
MATCH (s)
WHERE s.nodeId = row.sourceId
//RETURN s
MATCH (d)
WHERE d.nodeId = row.destinationId
//RETURN d
CALL apoc.merge.relationship(s, row.label,{type:row.type, created: row.created, property1: row.property1, property2: row.property2},{}, d,{})
YIELD rel
//REMOVE rel.noOp;
RETURN rel;

dpkg-shlibdeps: error: no dependency information found for

I'm compiling a deb package and when I run dpkg-buildpackage I get:
dpkg-shlibdeps: error: no dependency information found for /usr/local/lib/libopencv_highgui.so.2.3
...
make: *** [binary-arch] Error 2
This happens because I installed the dependency manually. I know the problem would be fixed if I installed the dependency properly (or used checkinstall), but I want to generate the package anyway because I'm not interested in dependency checking. I know I can give dpkg-shlibdeps the option --ignore-missing-info, which prevents a failure when dependency information can't be found. But I don't know how to pass this option to dpkg-shlibdeps, since I'm using dpkg-buildpackage and dpkg-buildpackage calls dpkg-shlibdeps...
I have already tried:
sudo dpkg-buildpackage -rfakeroot -d -B
And with:
export DEB_DH_MAKESHLIBS_ARG=--ignore-missing-info
as root.
Any ideas?
Use:
override_dh_shlibdeps:
	dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info
if your rules file doesn't already have a dh_shlibdeps call in it. That's usually the case if you have
%:
	dh $@
as the only rule in it. In the override above you must use a tab, not spaces, in front of dh_shlibdeps.
If you just want it to ignore the missing dependency information, change the debian/rules line from:
dh_shlibdeps
to:
dh_shlibdeps --dpkg-shlibdeps-params=--ignore-missing-info
Yet another way, without modifying build scripts, just creating one file.
You can specify local shlib overrides by creating debian/shlibs.local with the following format: library-name soname-version dependencies
For example, given the following (trimmed) ldd /path/to/binary output
libevent-2.0.so.5 => /usr/lib/libevent-2.0.so.5 (0x00007fc9e47aa000)
libgcrypt.so.20 => /usr/lib/libgcrypt.so.20 (0x00007fc9e4161000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fc9e3b1a000)
The contents of debian/shlibs.local would be:
libevent-2.0 5 libevent-2.0
libgcrypt 20 libgcrypt
libpthread 0 libpthread
The "dependencies" list (third column) doesn't need to be 100% accurate - I just use the library name itself again.
Of course this isn't needed in a sane debian system which has this stuff defined in /var/lib/dpkg/info (which can be used as inspiration for these overrides). Mine isn't a sane debian system.
Instead of merely ignoring the error, you might also want to fix its source, which is usually either a missing or an incorrect package.shlibs or package.symbols file in the package that contains the shared library triggering the error.
[1] documents how dpkg-shlibdeps uses the package.shlibs and package.symbols files; [2] documents the format of those files.
[1] https://manpages.debian.org/jessie/dpkg-dev/dpkg-shlibdeps.1.en.html
[2] https://www.debian.org/doc/debian-policy/ch-sharedlibs.html
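For example, a hedged sketch of what a shlibs entry for the library from the question could look like, following the "library-name soname-version dependencies" format from [2] (the binary package name libopencv-highgui2.3 is an assumption, not taken from the question):

# hypothetical debian/libopencv-highgui2.3.shlibs entry
libopencv_highgui 2.3 libopencv-highgui2.3 (>= 2.3)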
You've just misspelled your export. It should be like this:
export DEB_DH_SHLIBDEPS_ARGS_ALL=--dpkg-shlibdeps-params=--ignore-missing-info
dpkg-buildpackage uses make to process debian/rules; in this process, it might call dpkg-shlibdeps.
Thus, the proper way to modify a part of the package-building process is to edit debian/rules.
It's hard to give you any more hints without seeing the actual debian/rules.
Finally I did it the brute-force way:
I edited the script /usr/bin/dpkg-shlibdeps, changing this:
my $ignore_missing_info = 0;
to
my $ignore_missing_info = 1;
You can use this:
dh_makeshlibs -a -n
placed exactly after dh_install in your debian/rules (a sketch follows below).
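A hedged sketch of where that would sit in a classic (non-dh-sequencer) debian/rules binary target; the surrounding helper calls are just placeholders for whatever your rules file already runs:

# sketch of a classic debian/rules binary-arch target: dh_makeshlibs goes
# right after dh_install; -a acts on architecture-dependent packages and
# -n skips modifying the maintainer scripts
binary-arch: build
	dh_testroot
	dh_install
	dh_makeshlibs -a -n
	dh_shlibdeps
	dh_builddeb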