We decided to use JUnit xml files from our tests and upload them during CI as artifacts to Gitlab.
For elm, I followed https://www.npmjs.com/package/elm-test?activeTab=readme and used
./node_modules/.bin/elm-test --compiler ./node_modules/.bin/elm app/frontend/elm/tests/ --reporter=junit
But it has no effect; the output is still plain text.
MacBook-Pro-6:enectiva admin$ ./node_modules/.bin/elm-test --compiler ./node_modules/.bin/elm app/frontend/elm/tests/ --reporter=junit
elm-test 0.19.0
---------------
Running 286 tests. To reproduce these results, run: elm-test --fuzz 100 --seed 369554180583103 /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/BareTime/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/ChartAndControlSet/AllEnergies/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/ChartAndControlSet/ChartPort/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/ChartAndControlSet/Data/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/ChartAndControlSet/SingleEnergy/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/Disableable/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/Enectiva/Elm/I18n.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/Enectiva/EntityTree/EntityTree.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/Exports/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/ExportStatePoller/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/ExportTemplate/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/HourFilter/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/PriceListForm/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/SelectionState/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/SourceSelection2/Tests.elm /Users/admin/go/src/enectiva.cz/enectiva/app/frontend/elm/tests/Visibility/Tests.elm
TEST RUN PASSED
Duration: 1049 ms
Passed: 286
Failed: 0
The same happens with json.
Judging from the "To reproduce these results" part of the output, it looks like elm-test did not even notice the reporter flag.
Does anyone have an idea for a solution?
My bad.
It was a typo: the flag should be --report, not --reporter.
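For reference, the working command is the same as above with only the flag name changed:
./node_modules/.bin/elm-test --compiler ./node_modules/.bin/elm app/frontend/elm/tests/ --report=junit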
When generating the HTML report I am getting the error below.
Please give a suggestion on how to overcome this issue.
Thanks in advance.
(screenshot of the error message)
There is a problem with your .jtl results file: JMeter expects to find a long value representing a timestamp in milliseconds since the beginning of the Unix epoch.
You should replace 1.65269E+12 with its "long" equivalent of 1652690000000.
If you opened and saved JMeter's .jtl results file using Excel or an equivalent tool, you should re-save it and configure the first column to contain numeric values without floating points.
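If re-saving by hand is impractical, something along these lines should also work (a sketch, assuming a comma-separated .jtl named results.jtl with the timestamp in the first column and a header row; precision already lost to Excel's rounding cannot be recovered):
awk -F',' -v OFS=',' 'NR==1 {print; next} {$1 = sprintf("%.0f", $1); print}' results.jtl > results-fixed.jtl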
Also be aware that you can run a JMeter test and generate the HTML reporting dashboard in command-line non-GUI mode in one shot, like:
jmeter -n -t /path/to/testplan.jmx -l /path/to/testresult.jtl -e -o /path/to/dashboard
More information: Generating Reports
Ok, let's save someone 8 hours of clueless debugging.
TL;DR: Apache Drill cannot correctly parse CSV files generated on Windows machines. That's because their EOL is \r\n by default, unlike on Unix systems, where it is \n. This leads to horribly undebuggable errors because the \r probably stays glued to the end of the last field's value. And what's funny, you won't notice it because it's invisible.
Let's have two files, one created on Linux and the other on Windows: hello.linux.csv and hello.win.csv. The content is the same (at least it looks like it is ...)
field_a,field_b
Hello,0.5
Let's have a query.
SELECT * from (...)/hello.linux.csv;
---
field_a, field_b
Hello, "0.5"
SELECT * from (...)/hello.win.csv;
---
field_a, field_b
Hello, "0.5"
Fine! Let's do something with the data. Casting "0.5" to a number should be fine (and necessary).
SELECT
field_a, CAST (field_b as DECIMAL(10, 2)) as test
from (...)/hello.linux.csv;
---
field_a, test
Hello, 0.5
-- ... aaand, here we go!
SELECT
field_a, CAST (field_b as DECIMAL(10, 2)) as test
from (...)/hello.win.csv;
[30038]Query execution error. Details:[
SYSTEM ERROR: NumberFormatException
Fragment 0:0
Please, refer to logs for more information. -- In the logs, there is only a useless Java stack trace, of course.
[Error Id: 3551c939-3f5b-42c1-9b58-d600da5f12a0 on drill-develop-7bdb45c597-52rnz:31010]
]
...
(And now, imagine how much time it would take to reveal this in a complex production setup where the queries, data and other factors are more complicated.)
The question: Is there a way to force Apache Drill (v1.15) to process CSV files created with Windows EOLs?
You can update the CSV format's line delimiter to \r\n, but this would apply to all CSV files in the scope of your text plugin. To change the delimiter per table, use a table function.
https://drill.apache.org/docs/plugin-configuration-basics/
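A sketch of the per-table variant (the path is a placeholder, and the option names assume the text-format plugin's options such as lineDelimiter; check the docs above for your Drill version):
SELECT field_a, CAST(field_b AS DECIMAL(10, 2)) AS test
FROM table(dfs.`/path/to/hello.win.csv`(
  type => 'text',
  fieldDelimiter => ',',
  lineDelimiter => '\r\n',
  extractHeader => true
));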
I run a job on a remote server with Ansible. The job writes to stdout, where errors sometimes show up. The error text is in the form of
#ERROR FMM0129E The following error was returned by the vSphere(TM) API: 'Cannot complete login due to an incorrect user name or password.'.
The thing is that some of these errors can safely be ignored, and only those that are not in my false-positive list should raise a failure.
My question is, can this be done in a pure Ansible way?
The only thing that comes to mind is a simple failed_when check, which in this case falls short. I am thinking that this "complex" output checking should be done outside of Ansible, invoking a Python / shell / etc. script to help.
If you are remotely executing a shell command anyway, then there's no reason why you couldn't wrap it in a shell script that returns a non-zero status code for the things you care about and then simply execute that via the script module.
example.sh
#!/bin/bash
# pick a random number between 1 and 10
randomInt=$(( 1 + RANDOM % 10 ))
echo "$randomInt"
# fail (non-zero exit code) roughly one time in ten
if [ "$randomInt" -eq 1 ]; then
  exit 1
else
  exit 0
fi
And then use it like this in your playbook:
- name: run example.sh
  script: example.sh
Ansible will automatically treat any non-zero return code as a task failure.
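Applied to the question, such a wrapper might look roughly like this (the job path and the whitelist of error codes are hypothetical placeholders):
#!/bin/bash
# run the real job and capture its output, but still show it in the Ansible log
output=$(/path/to/real_job.sh 2>&1)
echo "$output"
# error codes we consider false positives
whitelist='FMM0129E|FMM0130E'
# fail only if there is an #ERROR line that matches none of the whitelisted codes
if echo "$output" | grep '#ERROR' | grep -Evq "$whitelist"; then
  exit 1
fi
exit 0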
Instead of failed_when you could use ignore_errors: true, which would let you get past the failing task and forward its stdout to another task. But I would not recommend this, since in my opinion a task should never report a failed state by intent. If you feel this is an option for you, there is even a way to reset the error counter so the Ansible stats at the end are correct.
- some: task
  register: some_result
  ignore_errors: true
- name: Reset errors after intentional fail
  meta: clear_host_errors
  when: some_result | failed
- another: task
  check: "{{ some_result.stdout }}"
  when: some_result | failed
The last task would then check your stdout in a custom script or whatever you have, and should report a failed state itself (return code != 0).
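A hypothetical concrete version of that last placeholder task, using the script module (the script name is made up, and passing stdout as an argument is an assumption; for long output you might write it to a file instead):
- name: Check job output for real errors
  script: check_output.sh {{ some_result.stdout | quote }}
  when: some_result | failed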
As far as I know, the clear_host_errors feature is still undocumented and the commit is about a month old, so I guess it will only be available in Ansible 2.0.1.
Another idea would be to wrap your task in a script which checks the output, or to pipe the output to such a script. That obviously only works if you run a shell command, not with other Ansible modules.
Other than those two options I don't think there is anything else available.
I'm trying to execute the SwingLibrary demo available at https://github.com/robotframework/SwingLibrary/wiki/SwingLibrary-Demo
After setting everything up (Jython, RobotFramework, demo app), I can run the following command:
run_demo.py startapp
and it works (the demo app starts up).
Now if I try to run the sample tests, it fails:
run_demo.py example.txt
[ ERROR ] Error in file '/home/user1/python-scripts/gui_automation/sample-text.txt': Non-existing setting 'Library SwingLibrary'.
[ ERROR ] Error in file '/home/user1/python-scripts/gui_automation/sample-text.txt': Non-existing setting 'Suite Setup Start Test Application'.
==============================================================================
Sample-Text
==============================================================================
Test Add Todo Item | FAIL |
No keyword with name 'Insert Into Text Field description ${arg}' found.
------------------------------------------------------------------------------
Test Delete Todo Item | FAIL |
No keyword with name 'Insert Into Text Field description ${arg}' found.
------------------------------------------------------------------------------
Sample-Text | FAIL |
2 critical tests, 0 passed, 2 failed
2 tests total, 0 passed, 2 failed
==============================================================================
Output: /home/user1/python-scripts/gui_automation/results/output.xml
Log: /home/user1/python-scripts/gui_automation/results/log.html
Report: /home/user1/python-scripts/gui_automation/results/report.html
I suspect that it cannot find swinglibrary.jar, and therefore my plugin installation is probably messed up.
Any ideas?
Take a look at these error messages in the report:
[ ERROR ] Error in file '...': Non-existing setting 'Library SwingLibrary'.
[ ERROR ] Error in file '...': Non-existing setting 'Suite Setup Start Test Application'.
The first rule of debugging is to always assume error messages are telling you the literal truth.
They are telling you that you have an unknown setting. Robot thinks you are using a setting literally named "Library SwingLibrary" and one named "Suite Setup Start Test Application". Those are obviously incorrect setting names. The question is, why is it saying that?
My guess is that you are using the space-separated text format and you only have a single space between "Library" and "SwingLibrary". Because there is only one space, Robot thinks that whole line is in the first column of the settings table, and whatever is in the first column is treated as the setting name.
The fix should be as simple as inserting two or more spaces after "Library", and two or more spaces after "Suite Setup".
This type of error is why I always recommend using the pipe-separated format. It makes the boundaries between cells much easier to see.
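For example, the settings from the error messages above would need to look something like this in the space-separated format (two or more spaces between cells):
*** Settings ***
Library        SwingLibrary
Suite Setup    Start Test Application
or, in the pipe-separated format:
| *** Settings *** |
| Library     | SwingLibrary |
| Suite Setup | Start Test Application |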
Can I safely ignore these cmake compiler warnings?
I'm learning to compile packages from source and practicing on MySQL.
Should I be searching for and installing dev libraries when I see "notices" like this (referencing specific "not found" files):
$ cmake . -LA
...
-- Looking for include file cxxabi.h
-- Looking for include file cxxabi.h - not found.
-- Looking for include file dirent.h
-- Looking for include file dirent.h - found
-- Looking for include file dlfcn.h
-- Looking for include file dlfcn.h - found
And what should I do about notices referencing these "not found" messages:
-- Looking for bmove
-- Looking for bmove - not found
-- Looking for bsearch
-- Looking for bsearch - found
-- Looking for index
-- Looking for index - found
For example, cxxabi.h can be found in libstdc++6-4.7-dev on Debian. Do I need to install libstdc++6-4.7-dev to have a proper compile of MySQL?
I also have some (constant?) warnings that I'm unsure of:
-- Performing Test TIME_T_UNSIGNED
-- Performing Test TIME_T_UNSIGNED - Failed
-- Performing Test HAVE_GETADDRINFO
-- Performing Test HAVE_GETADDRINFO - Success
Overall, my build seems to work fine, but I want to be sure.
If the CMake configuration process doesn't fail, it means these headers are optional and there are workarounds in the MySQL code for these cases.
It might also be that, when some headers aren't present, some features are silently turned off. It makes sense to provide MySQL with as many optional headers as you can.
Be aware that some headers are OS-specific, so you can't and don't have to provide them.
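For instance, to get cxxabi.h picked up on a Debian system (the package name is taken from the question; whether MySQL actually benefits from it is a separate question), something along these lines should work:
# install the dev package that ships cxxabi.h
sudo apt-get install libstdc++6-4.7-dev
# remove the cache so CMake repeats its header checks, then reconfigure
rm CMakeCache.txt
cmake . -LA | grep -i cxxabi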