Can I make skipped tests more prominent in the Jenkins dashboard? - junit

I have a project under test in Jenkins ("test installation"; it does some regression tests to verify that the installer works). This project has a soft dependency on stuff outside of Jenkins' control: if the latest installer isn't available, then we can't test it. We can always re-test an old installer, though, and that seems worth doing (we've got the CPU cycles, so we may as well burn 'em).
What I'd like is to produce a loud warning if the installer isn't current, but then continue with the tests.
The first thing I tried was making a test that failed when the installer was out of date. That was prominent, but confusing because the installation tests weren't actually the thing that was failing.
Now I have the same test, but it uses a JUnit assumption instead of an assertion, which means that the test either skips or passes. This is also less-than-perfect, because Jenkins reports "9 tests, 0 failures" on the front page, and it's only when I drill down multiple layers into the test results that I see that 1 of the 9 tests was skipped.
Can I get Jenkins to report skipped tests on the front page? I didn't find an appropriate-looking plug-in for it. Is there a better method I should use to warn about the installer being out of date?

A bit late to answer the question for you, but it might help other people . . .
Two easy things that I think would work best for cases like this:
Add one extra test that checks the latest installer version and fails when it is out of date, so the job is marked as unstable.
Or use one of the post-build plugins to check the logs and mark the job as failed instead of unstable.
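For the second option, a minimal Groovy Postbuild sketch (the log marker "INSTALLER OUT OF DATE" and the badge text are assumptions; match whatever your job actually prints):
// Groovy Postbuild script: check the build log and set the result.
if (manager.logContains(".*INSTALLER OUT OF DATE.*")) {
    manager.addWarningBadge("Installer is not current")
    manager.buildFailure()   // or manager.buildUnstable() to only mark it unstable
}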
There is no easy way to make skipped tests more prominent, but you could do some post-processing on the test results.
We generate a VERSION.txt file in the job script/test scripts and put it in the job workspace. Then we use a "Groovy Postbuild" action to set the job description:
// Get the current build and the workspace path on the node
def currentBuild = Thread.currentThread().executable
def ws = manager.build.workspace.getRemote()
// Read the VERSION.txt the test scripts wrote (note: new File() reads locally,
// so this assumes the job ran on the built-in node)
String desc = new File(ws + "/VERSION.txt").text
currentBuild.setDescription(desc)
This is quite useful: we can see the version tested, or other details, in the job history.
To mark things more prominently . . . >;) you could use badges and some Groovy.
Plugins used:
https://wiki.jenkins.io/display/JENKINS/Groovy+Postbuild+Plugin
The Groovy Postbuild plugin is the only one really needed.
The badge API is part of the Groovy Postbuild plugin.
https://wiki.jenkins.io/display/JENKINS/Groovy+plugin
The Groovy plugin is useful for experimenting with Groovy or for writing jobs in Groovy.
https://wiki.jenkins.io/display/JENKINS/Build+Trigger+Badge+Plugin
The badges themselves come from the Groovy Postbuild plugin. The BuildTriggerBadge plugin, which I use, is useful to have anyway if a variety of triggers are used, but it is not actually needed for setting badges. I include it here because it is installed on my instance and I am not 100% sure my code would work without it (though I am maybe 98.5% sure). I do not have the Badge plugin installed.
Below is some Groovy experimentation with badges:
def currentBuild = Thread.currentThread().executable
def ws = manager.build.workspace.getRemote()
String desc = new File(ws + "/VERSION.txt").text
currentBuild.setDescription(desc)
if (desc.contains("ERROR")) {
coverageText="VarkeninK, SomethinK iz bad."
// Apologies :-7 I can only assume this was from Viktor in http://www.userfriendly.org/ web comic
manager.addShortText(coverageText, "black", "repeating-linear-gradient(45deg, yellow, yellow 10px, Orange 10px, Orange 20px)", "0px", "white")
}
manager.addShortText("GreyWhite0pxWhite", "grey", "white", "0px", "white")
manager.addShortText("BlackGreen0pxWhite", "black", "green", "0px", "white")
manager.addShortText("BlackGreen5pxWhite", "black", "green", "5px", "white")
manager.addShortText("VERSION WhiteGreen0pxWhite", "white", "green", "0px", "white")
manager.addShortText("WhiteGreen5pxWhite", "white", "green", "5px", "white")
manager.addShortText("VERSION Black on Lime Green", "black", "limegreen", "0px", "white")
// darkgrey is lighter than grey!! :-P
manager.addShortText("OBSOLETE YellowDarkGrey5pxGrey", "yellow", "darkgrey", "5px", "grey")
manager.addShortText("OBSOLETE YellowGrey5pxGrey", "yellow", "grey", "5px", "grey")
manager.removeBadges()
manager.addShortText("VERSION Black on Lime Green", "black", "limegreen", "0px", "white")
manager.addShortText(desc, "black", "limegreen", "5px", "white")
manager.addShortText("OBSOLETE YellowGrey5pxGrey", "yellow", "grey", "5px", "grey")
manager.addBadge("warning.gif", "Warning test")
manager.addWarningBadge("other warning test")
// https://wiki.jenkins.io/display/JENKINS/Groovy+Postbuild+Plugin
// contains(file, regexp) - returns true if the given file contains a line matching regexp.
// logContains(regexp) - returns true if the build log file contains a line matching regexp.
// getMatcher(file, regexp) - returns a java.util.regex.Matcher for the first occurrence of regexp in the given file.
// getLogMatcher(regexp) - returns a java.util.regex.Matcher for the first occurrence of regexp in the build log file.
// setBuildNumber(number) - sets the build with the given number as current build. The current build is the target of all methods that add or remove badges and summaries or change the build result.
// addShortText(text) - puts a badge with a short text, using the default format.
// addShortText(text, color, background, border, borderColor) - puts a badge with a short text, using the specified format.
// addBadge(icon, text) - puts a badge with the given icon and text. In addition to the 16x16 icons offered by Jenkins, groovy-postbuild provides the following icons:
// - completed.gif
// - db_in.gif
// - db_out.gif
// - delete.gif
// - error.gif
// - folder.gif
// - green.gif
// - info.gif
// - red.gif
// - save.gif
// - success.gif
// - text.gif
// - warning.gif
// - yellow.gif
// addBadge(icon, text, link) - like addBadge(icon, text), but the Badge icon then actually links to the given link (since 1.8)
// addInfoBadge(text) - puts a badge with info icon and the given text.
// addWarningBadge(text) - puts a badge with warning icon and the given text.
// addErrorBadge(text) - puts a badge with error icon and the given text.
// removeBadges() - removes all badges from the current build.
// removeBadge(index) - removes the badge with the given index.
// createSummary(icon) - creates an entry in the build summary page and returns a summary object corresponding to this entry. The icon must be one of the 48x48 icons offered by Jenkins. You can append text to the summary object by calling its appendText methods:
// appendText(text, escapeHtml)
// appendText(text, escapeHtml, bold, italic, color)
// removeSummaries() - removes all summaries from the current build.
// removeSummary(index) - removes the summary with the given index.
// buildUnstable() - sets the build result to UNSTABLE.
// buildFailure() - sets the build result to FAILURE.
// buildSuccess() - sets the build result to SUCCESS.
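Tying the list above back to the original question, here is a sketch (untested) that assumes your job writes the word OBSOLETE into VERSION.txt when the installer is out of date:
def ws = manager.build.workspace.getRemote()
String desc = new File(ws + "/VERSION.txt").text
if (desc.contains("OBSOLETE")) {
    // badge in the build history, summary on the build page, unstable result
    manager.addShortText(desc, "yellow", "grey", "5px", "grey")
    manager.createSummary("warning.gif").appendText("Installer was not current; an old installer was re-tested.", false)
    manager.buildUnstable()
}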

Related

How to generate dynamic files using a config file in Palantir Foundry

I have two columns, col1 and col2, in a config file.
Now I have to import this config file into my main Python transform and then extract the values of the columns, in order to create dynamic output paths from these values by iterating over all the possible values.
For example:
output_path1 = Constant + value1 + value2
output_path2 = Constant + value3 + value4
Please suggest a solution for generating the output files in Palantir Foundry (code repository).
What you probably want to use is a transform generator. In the "Python Transforms" chapter of the documentation, there's a section "Transform generation" which outlines the basics of this.
The most straightforward path is likely to generate multiple transforms, but if you want just one transform that outputs to multiple datasets, that would be possible too (if a little more complicated.)
For the former approach, you would add a .yaml file (or similar) to your repo, in which you define your values, and then you read the .yaml file and generate multiple transforms based on the values. The documentation gives an example that does pretty much exactly this.
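As an illustration of the former approach, a minimal sketch (untested; the file name values.yaml, the column names col1/col2, the constant path prefix, and the availability of pyyaml are all assumptions):
# generated_transforms.py -- transforms are generated at CI time
import yaml
from transforms.api import transform_df, Input, Output

def generate_transforms():
    # This code runs when the repo is checked/committed, not when datasets build.
    with open("values.yaml") as f:
        rows = yaml.safe_load(f)  # e.g. [{"col1": "value1", "col2": "value2"}, ...]
    transforms = []
    for row in rows:
        output_path = "/Constant/" + row["col1"] + row["col2"]
        @transform_df(
            Output(output_path),
            source_df=Input("/path/to/input/dataset"),
        )
        def compute(source_df, row=row):  # default arg binds the loop variable
            return source_df  # your real logic here, using row as needed
        transforms.append(compute)
    return transforms

TRANSFORMS = generate_transforms()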
For the latter approach, you would probably want to read the .yaml file in your pipeline definer, and then dynamically add outputs to a single transform. In your transforms code, you then need to be able to handle an arbitrary number of outputs in some way (which I presume you have a plan for.) I suspect you might need to fall back to manual transform registration for this, or you might need to construct a transforms object without using the decorator. If this is the solution you need, I can construct an example for you.
Before you proceed with this though, I want to note that the number of inputs and outputs is fixed at "CI-time" or "compile-time". When you press the "commit" button in Authoring (or you merge a PR), it is at this point that the code is run that generates the transforms/outputs. At a later time, when you build the actual dataset (i.e. you run the transforms) it is not possible to add/remove inputs, outputs and transforms anymore.
So to change the number of inputs/outputs/transforms, you will need to go to the repo, modify the .yaml file (or whatever you chose to use) and then press the commit button. This will cause the CI checks to run, and publish the new code, including any new transforms that might have been generated in the process.
If this doesn't work for you (i.e. you want to decide at dataset build-time which outputs to generate) you'll have to fundamentally re-think your approach. Otherwise you should be good with one of the two solutions I roughly outlined above.
You cannot programmatically create transforms based on another dataset's contents; the transforms are created at CI time.
You can, however, have a constants file inside your code repo, which can be read at CI time, and use that to generate transforms. For example:
myconfig.py:
dataset_pairs = [
    {
        "in": "/path/to/input/dataset",
        "out": "/path/to/output/dataset",
    },
    {
        "in": "/path/to/input/dataset2",
        "out": "/path/to/output/dataset2",
    },
    # ...
    {
        "in": "/path/to/input/datasetN",
        "out": "/path/to/output/datasetN",
    },
]
///////////////////////////
anotherfile.py
from transforms.api import transform_df, Input, Output
from myconfig import dataset_pairs

TRANSFORMS = []
for conf in dataset_pairs:
    @transform_df(Output(conf["out"]), my_input=Input(conf["in"]))
    def my_generated_transform(my_input):
        df = my_input  # ... your transformation logic here
        return df
    TRANSFORMS.append(my_generated_transform)
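Note (assuming the standard transforms.api setup): the transforms collected in TRANSFORMS still need to be registered with your project's Pipeline object, e.g. in pipeline.py:
from transforms.api import Pipeline
import anotherfile

my_pipeline = Pipeline()
my_pipeline.add_transforms(*anotherfile.TRANSFORMS)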
To reiterate: you cannot create the myconfig.py contents programmatically based on a dataset's contents, because this code runs at CI time, when it doesn't have access to the datasets.

How do I debug lua functions called from conky?

I'm trying to add some lua functionality to my existing conky setup so that repetitive "code" in my conky text can be cleaned up. For example, I have information for each mounted FS, each core, etc. where each row displayed in my panel differs ONLY by one parameter.
My first, skeletal attempt at using Lua functions for this seems to run, but it displays nothing in my panel. I've only found very simple examples to base this on, so I may have made a simple error, but I don't even know how to diagnose it. My code here is modeled after what I have been able to find on writing functions, such as How to implement a basic Lua function in Conky?, but that's about all the depth I've found on the topic, apart from drawing and cairo examples.
Here's the code added to my conky config, as well as the contents of my functions.lua file
conky.config = {
    ...
    lua_load = '/home/conky-manager/MyConky/functions.lua',
};
conky.text = [[
...
${voffset 5}${lua conky_test 'test'}
...
]]
file - functions.lua
function conky_test(parm1)
    return 'result text'
end
What I would expect is to see is "result text" displayed in my panel at the location where that function call appears, but nothing shows.
Is there a log created by conky as it runs, or a way to provide some debug output? Even if I'd made a simple error here, I'd still like to have the ability to diagnose things as my code gets more complex.
Success!
After cobbling together info from several articles, I figured out my basic flaws:
1. I was missing a 'conky_main' function,
2. I was missing a 'lua_draw_hook_post' to invoke it, and
3. I hadn't realized that if I invoke conky from a terminal, print statements in Lua appear there.
So, for anyone who sees this question and has the same issues, here's the corrected code.
conky.config = {
    ...
    lua_load = '/home/conky-manager/MyConky/functions.lua',
    lua_draw_hook_post = "main",
};
conky.text = [[
...
${lua conky_test 'test'}
...
]]
and the proper basics in my functions.lua file
function conky_test(parm1)
    return 'result text'
end
function conky_main()
    if conky_window == nil then
        return
    end
end
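For the original debugging question, a tiny sketch of print-based diagnostics (this assumes you launch conky from a terminal, which is where print output appears):
function conky_test(parm1)
    -- shows up in the terminal conky was started from
    print("conky_test called with: " .. tostring(parm1))
    return 'result text'
end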
A few notes:
I still haven't determined if using 'lua_draw_hook_pre' instead of 'lua_draw_hook_post' makes any difference, but it doesn't seem to in this example.
Also, some examples showed actually calling this 'test' function instead of writing a 'main', but the 'main' seemed to have value in checking to see if conky_window existed.
Some examples seemed to state that naming functions with the prefix 'conky_' was required, but then showed examples of calling those functions without the prefix, so I assume the prefix is inferred during the call.
A major note: you should run conky from the directory containing the Lua scripts.

SteamVR HDK JSON Error

I'm about at my wit's end right now dealing with the GUI. I'm making a controller to track in VR for a school project, and I am at the stage where I am running simulations to verify my JSON. I had to hand-make it because our group had already decided on sensor placements, so I followed the reference controller template. Unfortunately, there is a perpetual error at [34, 33] that I can't figure out. I will post a piece of the code here:
"plus_z" : [ 0, 1, 0 ], "position" : [ -0.05221, 0, 0 ] }, "render_model" : "KNIFE_MODEL.obj", "head" :
These are lines 31-35. The error complains about a leading decimal; however, when I run the file through several online JSON validators, it comes back valid. Earlier today the model visualized in OpenSCAD several times, but my sensor orientation was wrong, so when I fixed the coordinates the GUI gave me the same error as usual. No matter what version I reverted to, I got the same error.

PhpStorm - remove 200 responses from console when using built in PHP server

I am using PhpStorm and yesterday tried the built-in PHP server functionality that it provides.
It works, but I find the console showing the 200 responses in bright red quite annoying, as it makes it quite hard to spot real issues.
The picture below shows what I mean.
Is there any way to disable these and only show for instance warnings (maybe in yellow) and errors (maybe in red)?
Use the Grep Console plugin for this -- it should do the job fine (it does in similar consoles).
Based on your requirements, it allows you to:
change the color of the matching text (or the whole line if desired) based on the presence of some marker (matching text)
or even hide (filter out) such lines completely.
Your marker here could be [200]: for successful responses -- this is a simple "match the exact text" pattern. If that does not work (e.g. because the text is in the middle of the string, or because it looks like a regex, as [] have special meaning in regex), just convert it into a proper regex: .*\[200\]:.* -- something like that should do.
Example of how it works:
The rule for that is highlighted (a plain string, as it's a very simple rule: match the exact string).

How to use the recognized text with Google Assistant's hotword.py code

How do I get the spoken text from the hotword.py code and do my own actions on the recognised text, rather than Google going off and reacting to the text?
I've installed the Google Assistant on the Pi 3, and after some initial issues with USB mic/analogue audio settings and certain Python files missing, this got me going:
When installing Google Assistant, I get an error "...googlesamples.assistant' is a package and cannot be directly executed..."
I then followed Google's next steps: https://developers.google.com/assistant/sdk/prototype/getting-started-pi-python/run-sample and created a new project "myga/" with a hotword.py file that contains:
def process_event(event):
    """Pretty prints events.

    Prints all events that occur with two spaces between each new
    conversation and a single space between turns of a conversation.

    Args:
        event(event.Event): The current event to process.
    """
    if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
        print()
        # GPIO.output(25, True) -- see https://stackoverflow.com/questions/44219740/how-can-i-get-an-led-to-light-on-google-assistant-listening
    if event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED:
        print("got some work to do here with the phrase or text spoken!")
    print(event)
    if (event.type == EventType.ON_CONVERSATION_TURN_FINISHED and
            event.args and not event.args['with_follow_on_turn']):
        print()
        # GPIO.output(25, False) -- or also see https://blog.arevindh.com/2017/05/20/voice-activated-google-assistant-on-raspberry-pi-with-visual-feedback/
I'd like my code to react to the ON_RECOGNIZING_SPEECH_FINISHED event, I think, and either do my own action by matching simple requests or, if the phrase is not in my list, let Google handle it. How do I do that?
Eventually I'd be asking "OK Google, turn BBC1 on" or "OK Google, play my playlist" or "OK Google, show traffic" and hotword.py would run other applications to do those tasks.
Thanks, Steve
See the documentation here for all available methods -
https://developers.google.com/assistant/sdk/reference/library/python/
You can use the stop_conversation() method to stop Google Assistant handling that request and act on your own.
Here's what you need to do at a high level:
Build your own dictionary of commands that you'd like to handle: "turn BBC1 on", "play my playlist", etc.
On the EventType.ON_RECOGNIZING_SPEECH_FINISHED event, check whether the recognized command exists in your dictionary.
If the recognized command exists in your dictionary, call the assistant.stop_conversation() method and handle the command on your own. If not, do nothing (let Google handle it).
pseudo code -
local_commands = ['turn BBC1 on', 'play my playlist']

def turnBBCOn():
    # handle locally
    pass

def playLocalPlaylist():
    # handle locally
    pass

def process_event(event):
    if event.type == EventType.ON_CONVERSATION_TURN_STARTED:
        print()
    if event.type == EventType.ON_RECOGNIZING_SPEECH_FINISHED:
        print(event.args['text'])
        if event.args['text'] in local_commands:
            # 'assistant' is the Assistant instance from the enclosing scope
            assistant.stop_conversation()
            if event.args['text'] == 'turn BBC1 on':
                turnBBCOn()
            elif event.args['text'] == 'play my playlist':
                playLocalPlaylist()
    if (event.type == EventType.ON_CONVERSATION_TURN_FINISHED and
            event.args and not event.args['with_follow_on_turn']):
        print()
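For context, this is roughly how process_event is driven by the event loop in the hotword sample (a sketch; the exact Assistant constructor arguments depend on your SDK version, and credentials are loaded as in the samples):
from google.assistant.library import Assistant
from google.assistant.library.event import EventType

with Assistant(credentials) as assistant:  # 'assistant' is used by process_event above
    for event in assistant.start():
        process_event(event)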
I have recently integrated the Google Assistant SDK with a Raspberry Pi 3. I took the git repository below as a reference and created action.py and actionbase.py classes that can handle my custom commands. I found it a very clean and flexible way to create your own custom commands.
You can register your custom command in the action.py file like below:
actor = actionbase.Actor()
actor.add_keyword(
    _('ip address'), SpeakShellCommandOutput(
        say, "ip -4 route get 1 | head -1 | cut -d' ' -f8",
        _('I do not have an ip address assigned to me.')))
return actor
action.py
Write your custom code in action.py
"""Speaks out the output of a shell command."""
def __init__(self, say, shell_command, failure_text):
self.say = say
self.shell_command = shell_command
self.failure_text = failure_text
def run(self, voice_command):
output = subprocess.check_output(self.shell_command, shell=True).strip()
if output:
self.say(output.decode('utf-8'))
elif self.failure_text:
self.say(self.failure_text)
You can find the full source code here: https://github.com/aycgit/google-assistant-hotword
The text is contained within the event arguments; by reading event.args['text'] you can make use of it. Here is an example:
https://github.com/shivasiddharth/GassistPi/blob/master/src/main.py