CODED UI TESTS: is there a way to add a warning in the HTML report?

In my Coded UI Test project, I need to check whether a few labels or messages are consistent with the context. Those checks are not critical when they fail, and I only need to report them as warnings.
Note that I'm using nested ordered tests so I can run a single global ordered test with vstest.console.exe and get the overall test coverage report in one shot.
Until now I was creating assertions to check those consistencies, but an assertion failure leads to a test failure, then to an ordered test failure, and then to a playback stop.
I tried to change the Playback.PlaybackSettings.ContinueOnError value before and after the assertion: this works as I expect, since the assertion is reported as a warning in the HTML report file. But it still causes the ordered test to stop, and then my global ordered test chaining fails...
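For reference, the toggle I mean looks roughly like this (simplified; expectedText and actualText stand in for the real UI lookups):

// What I tried: have the failed check show up as a warning in the report
// instead of stopping playback on the spot.
Playback.PlaybackSettings.ContinueOnError = true;
Assert.AreEqual(expectedText, actualText, "Label not consistent with context");
Playback.PlaybackSettings.ContinueOnError = false;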
I also tried using TestContext.WriteLine instead of creating an assert, but that does not seem to be written to the HTML report.
So my question is:
is there any way to create an assertion that is only reported as a warning in the HTML report file and does not lead to a test failure?
Thanks a lot for any answer and help on this ;)

I ended up solving this by developing my own warning engine to integrate warnings into the test report, because I found no existing way to do it with the current Coded UI Test assertion engine.
I'll try to take some time to post generic parts of the code structure with the comments translated into English (we're French, so the comments are in French for now), but here are the main steps:
1. Create a template based on the original UITestActionLog.html report structure of the Coded UI Test engine, keeping only the opening block and the JavaScript functions and CSS declarations.
2. Create an assertion class with a main function that manages inserting a warning HTML block into the HTML report first created from that template.
3. Create custom assert functions that call the main function wherever needed at runtime, plus a custom Stopwatch to inject the elapsed time into the report (because I couldn't find a way to get the elapsed time back directly from the Coded UI Test engine).
That's it.
This is just one suggestion for how to do it, maybe not the best one, but it worked for me. I'll try to take the time to post code blocks to make it clearer.
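In the meantime, here is a minimal C# sketch of the idea (the WarningAssert name, the paths, and the HTML snippet are placeholders, not my real code):

using System;
using System.Diagnostics;
using System.IO;

// Minimal sketch of the warning engine: append a warning block to an HTML
// report created from a template copy, without failing the test.
public static class WarningAssert
{
    private static readonly Stopwatch Watch = Stopwatch.StartNew();
    private static string reportPath;

    // Call once per test run: copy the HTML template (header, CSS, JS)
    // so warnings can be added to it afterwards.
    public static void InitReport(string templatePath, string outputPath)
    {
        File.Copy(templatePath, outputPath, overwrite: true);
        reportPath = outputPath;
    }

    // Custom "assert" that only logs a warning block instead of failing.
    public static void IsTrue(bool condition, string message)
    {
        if (condition || reportPath == null)
            return;

        string block = string.Format(
            "<div class=\"warning\"><span>{0:hh\\:mm\\:ss}</span> WARNING: {1}</div>",
            Watch.Elapsed, message);

        File.AppendAllText(reportPath, block + Environment.NewLine);
    }
}

A call like WarningAssert.IsTrue(actualText == expectedText, "Label not consistent with context") then replaces the blocking Assert; a more faithful version would insert the block just before the template's closing tags instead of appending at the end.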

Related

SolidJS: "computations created outside a `createRoot` or `render` will never be disposed" messages in the console log

When working on a SolidJS project you might start seeing the following warning message in your JS console:
computations created outside a `createRoot` or `render` will never be disposed
There is some information about this in the SolidJS GitHub repository issues. But after reading it, I was still not quite sure what this was all about and whether my code was really doing something wrong.
I managed to track down where it came from and find a fix for it based on the documentation. So I'm providing the explanation and the solution for those Googling this warning message.
In essence this is a warning about a possibility of a memory leak due to a reactive computation being created without the proper context which would dispose of it when no longer needed.
A proper context can be created in a few different ways. Here are the ones I know about:
By using the render function.
By using the createRoot function. Under the hood render uses this.
By using the createContext function.
The first is by far the most common way, because each app has at least one render function call to get the whole show started.
So what makes the code go "out of context"?
Probably the most common way is via async calls. The context, with its dependency tree, is built while the synchronous portion of the code runs. This includes all of the export default functions in your modules and the main app function.
But code that runs at a later time because of a setTimeout or by being in an async function will be outside of this context and any reactive computations created will not be tracked and might stick around without being garbage collected.
An example
Let's say you have a data input screen with a Save button that makes an API call to your server to save the data. And you want to give the user feedback on whether the operation succeeded or not, with a nicely HTML-formatted message.
import { createSignal } from "solid-js";

const [msg, setMsg] = createSignal(<></>)

async function saveForm() {
  ...
  setMsg(<p>Saving your data. <i>Please stand by...</i></p>)
  const result = await callApi('updateUser', formData)
  if (result.ok) {
    setMsg(<p>Your changes were <b>successfully</b> saved!</p>)
  } else {
    setMsg(<><p>There was a problem saving your data!<br/>Error:</p><pre>{result.error}</pre></>)
  }
}
...
<div>
  ...
  <button onClick={saveForm}>Save</button>
  {msg()}
</div>
This will produce the above-mentioned warning when the API call returns an error, but not the other times. Why?
The reason is that SolidJS treats expressions inserted inside JSX as reactive, i.e. they need to be watched and re-evaluated. So inserting the error message from the API call creates a reactive computation, and here it is created after the await, outside of any tracking context.
The solution
I found the solution at the very end of the SolidJS docs. It's a special JSX comment decorator: /*@once*/
It can be used at the beginning of a curly-brace expression, and it tells the SolidJS compiler explicitly not to make this a reactive expression. In other words: it will be evaluated once and only once, when the DOM nodes are created from the JSX.
In the above example here's how to use it:
setMsg(<><p>There was a problem saving your data!<br/>Error:</p><pre>{/*@once*/ result.error}</pre></>)
After this there will be no more warning messages :)
In my case, I had an input, and when that input changed I re-created an SVG drawing. Because the SVG creation was an expensive operation, I added a debounce in the createEffect function that ran when the input changed (debounce is a technique to defer processing until the input stops changing for at least X amount of time). That meant running the SVG generation code inside a setTimeout callback, and thus outside of the main context. Using the /*@once*/ modifier everywhere I inserted an expression in the generated JSX fixed the problem.
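To make that concrete, here's a rough sketch of the pattern (input, setDrawing, and generateSvg are placeholder names, not my actual code):

import { createSignal, createEffect } from "solid-js";

const [input, setInput] = createSignal("");
const [drawing, setDrawing] = createSignal(<svg></svg>);

let timer;
createEffect(() => {
  const value = input();   // tracked: the effect re-runs when the input changes
  clearTimeout(timer);
  timer = setTimeout(() => {
    // This callback runs outside the tracking context, so a reactive insert
    // created here would trigger the warning; /*@once*/ avoids that.
    setDrawing(<svg>{/*@once*/ generateSvg(value)}</svg>);
  }, 300);
});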

Making a custom reporter for JSCS results in Gulp4

Please correct me where I'm wrong (I'm still learning Gulp, streams, etc.). I'd like to create a custom reporter for my gulp-jscs results. For example, let's say I have 3 files in my gulp.src() stream. To my knowledge, each is piped one at a time through jscs, which attaches a .jscs object to the file with its results; one property of that object is .errorCount.
What I'd like is a variable I create, e.g. maxErrors, which I set to, say, 5. Since we're processing 3 files, let's say the first file passes with 0 errors, but the next has 3 errors. I don't want to stop processing prematurely, since the maxErrors tally has not been reached yet (3/5 at that point). So it should continue with the next file, which let's say has 3 errors as well, putting us over the max; at that point I want to interrupt jscs from processing more files, fail out, and let my custom reporter function access the files that have already been processed so I can look at their .jscs objects and customize some output.
My problem is that I don't understand the docs when they say .pipe(jscs.reporter('name-of-reporter')). How does a string value invoke my reporter, which currently exists as a function I've imported called libs.reporters.myJSCSReporter? I know pipe() expects stream objects, so I can't just put a function in the .pipe() call.
I hope I've explained myself well enough (please ask for clarifications otherwise).
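For what it's worth, a custom reporter in a gulp pipeline is usually just another transform stream rather than a plain function, which is why a bare function can't go into .pipe(). A rough sketch of that approach (through2, plugin-error, the task name and the maxErrors threshold are my own assumptions; .jscs/.errorCount are the properties described above):

const gulp = require('gulp');
const jscs = require('gulp-jscs');
const through = require('through2');
const PluginError = require('plugin-error');

// Hypothetical reporter: tallies errors across files and fails the stream
// once a maximum is exceeded, while letting earlier files pass through.
function myJSCSReporter(maxErrors) {
  let totalErrors = 0;
  return through.obj(function (file, enc, cb) {
    if (file.jscs && file.jscs.errorCount) {
      totalErrors += file.jscs.errorCount;
      // Custom per-file output goes here.
      console.log(file.relative + ': ' + file.jscs.errorCount + ' error(s)');
    }
    if (totalErrors > maxErrors) {
      return cb(new PluginError('my-jscs-reporter',
        'Too many errors (' + totalErrors + ' > ' + maxErrors + ')'));
    }
    cb(null, file);
  });
}

gulp.task('lint', function () {
  return gulp.src('src/**/*.js')
    .pipe(jscs())
    .pipe(myJSCSReporter(5));
});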

Custom control casting exception during DesignTime but not RunTime

I've been dealing with a casting exception that occurs only at design time, and it's driving me nuts.
I'm trying to create a custom control for WinRT (a week view control just like the Windows Phone 8.1 one). The control works fine at run time, but at design time it gives me a COMException. While investigating the cause of this COMException, I found a casting exception inside one of the components of my control. This component is a custom ListView that handles the ContainerContentChanging event, and a cast inside this event handler raises the exception.
Here is the custom ListView class:
(Code removed because the full source code is shared below.)
The TemplatedListViewEntry constructor looks like this:
(Code removed because the full source code is shared below.)
And the AppointmentModel:
(Code removed because the full source code is shared below.)
Note: while debugging with two instances of VS (or VS + Blend) and a breakpoint before this line, I can see that args.Item is of type ContentControl at design time, while at run time it is an AppointmentModel.
Could it be a problem with the ItemsSource, which is null at design time?
If yes, how should I proceed and assign it? If not, can anyone help me figure out what the problem is?
Note: if anyone needs more info, please ask and I will gladly share the entire code with you.
Edit 1
Even if I initialize my view model in the constructor of the custom control, it raises the casting exception at design time and not at run time.
Edit 2
After I wrote Edit 1 above, I noticed that I now get the cast exception in two different styles (the CustomWeekView style and the TemplatedListView style) when opening Generic.xaml inside Blend. It is getting really annoying and I'm out of ideas now. That is why I decided to share the source code of this project, hopefully someone will be able to help by looking at it. Below is the source code.
CustomWeekView
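For anyone hitting the same thing: a common way to keep a design-time-only cast from breaking the designer is to guard the ContainerContentChanging handler and use a safe cast. A rough sketch (the class shape is assumed from the description above, and this isn't confirmed to fix the Blend errors in this particular project):

using Windows.ApplicationModel;
using Windows.UI.Xaml.Controls;

// Hypothetical sketch of a design-time-safe ContainerContentChanging handler.
public class TemplatedListView : ListView
{
    public TemplatedListView()
    {
        ContainerContentChanging += OnContainerContentChanging;
    }

    private void OnContainerContentChanging(ListViewBase sender,
        ContainerContentChangingEventArgs args)
    {
        // At design time the designer may hand us placeholder items
        // (e.g. a ContentControl), so skip the real work there.
        if (DesignMode.DesignModeEnabled)
            return;

        // Safe cast instead of a hard cast, so an unexpected item type
        // doesn't throw even at run time.
        var appointment = args.Item as AppointmentModel;
        if (appointment == null)
            return;

        // ... position/measure the container based on the appointment ...
    }
}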

Test framework using spreadsheet for Selenium RC / JUnit / Java

We have a smoke test that we run every morning to check a number of applications which involves logging in, executing a simple operation and logging out.
The test at the moment is a collection of Selenium IDE scripts which were imported into Selenium RC as Java and run inside JUnit running inside Netbeans.
What we would like to do is drive the test from a spreadsheet, i.e. each line of the spreadsheet has the application URL, the login parameters, some titles and text to check, and the logout sequence.
At the moment, our simple POC simply has one JUnit Test class of the form:
@Test
public void testTestSmokeCheck() throws Exception { ... }
This calls a class which loops through the spreadsheet and does:
Selenium sel = new DefaultSelenium(...);
sel.start();
sel.open(...);
...
sel.close();
for each line of the spreadsheet.
This works but the problem is that many lines of the spreadsheet are compressed into one JUnit test which either all passes or all fails.
What we would like is for each line of the spreadsheet to be a separate JUnit test.
This way each line of the spreadsheet would result in either red or green which would be far more meaningful.
Any ideas how to achieve that?
Not exactly a spreadsheet, but you might want to look into FitNesse. It drives tests from tables on Wiki pages, and prints out red/green pass/fail.
You can do multiple pages and test suites, which should solve your problem.
You can use the Parameterized runner (the JavaDoc has an example). If a test fails, the failure message will include the index of the data, which could correspond to the row in the spreadsheet.
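For illustration, a Parameterized setup for this might look roughly like the sketch below (SpreadsheetReader.readRows is a hypothetical helper that turns each spreadsheet line into a row of parameters):

import static org.junit.Assert.assertEquals;

import java.util.Collection;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

@RunWith(Parameterized.class)
public class SpreadsheetSmokeTest {

    @Parameters
    public static Collection<Object[]> rows() throws Exception {
        // Hypothetical helper: one Object[] per spreadsheet line,
        // e.g. { url, login, password, expectedTitle }.
        return SpreadsheetReader.readRows("smoke-test.xls");
    }

    private final String url;
    private final String login;
    private final String password;
    private final String expectedTitle;

    public SpreadsheetSmokeTest(String url, String login,
                                String password, String expectedTitle) {
        this.url = url;
        this.login = login;
        this.password = password;
        this.expectedTitle = expectedTitle;
    }

    @Test
    public void smokeCheck() {
        Selenium sel = new DefaultSelenium("localhost", 4444, "*firefox", url);
        sel.start();
        try {
            sel.open("/");
            // ... log in with login/password, check titles and text ...
            assertEquals(expectedTitle, sel.getTitle());
        } finally {
            sel.stop();
        }
    }
}

Each spreadsheet row then shows up as its own red or green result, e.g. smokeCheck[0], smokeCheck[1], and so on.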
If the number of rows isn't huge, I recommend writing a test case per row and getting rid of the spreadsheet. Sooner or later you will want to do specific logic for some of the pages, and a spreadsheet will no longer model that well.

Is there a way to LOG RC Selenium test errors/failures into a database?

I'm using PHPUnit & phpUnderControl to run Selenium RC on every build.
PHPUnit allows you to implement your own TestListener. Custom test listeners implement the abstract methods in the PHPUnit_Framework_TestListener interface. Specifically, your listener will implement:
startTestSuite()
endTestSuite()
startTest()
endTest()
addError()
addFailure()
addSkippedTest()
addIncompleteTest()
Once you've attached the TestListener, these methods will be called each time the corresponding events occur in your test suite. These are the methods you'll write to perform the INSERTs and UPDATEs on the test results database that you'll create; a rough sketch follows.
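As an illustration (the table name, columns, connection details, and which methods actually write rows are all assumptions here), such a listener could look like this:

<?php
class DatabaseResultListener implements PHPUnit_Framework_TestListener
{
    private $pdo;

    public function __construct()
    {
        // Hypothetical connection and schema.
        $this->pdo = new PDO('mysql:host=localhost;dbname=test_results', 'user', 'pass');
    }

    public function addError(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->insertResult($test, 'error', $e->getMessage(), $time);
    }

    public function addFailure(PHPUnit_Framework_Test $test,
                               PHPUnit_Framework_AssertionFailedError $e, $time)
    {
        $this->insertResult($test, 'failure', $e->getMessage(), $time);
    }

    // The remaining interface methods must exist, but can be left empty here.
    public function addIncompleteTest(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function startTestSuite(PHPUnit_Framework_TestSuite $suite) {}
    public function endTestSuite(PHPUnit_Framework_TestSuite $suite) {}
    public function startTest(PHPUnit_Framework_Test $test) {}
    public function endTest(PHPUnit_Framework_Test $test, $time) {}

    private function insertResult($test, $status, $message, $time)
    {
        $name = $test instanceof PHPUnit_Framework_TestCase ? $test->getName() : get_class($test);
        $stmt = $this->pdo->prepare(
            'INSERT INTO test_result (name, status, message, time) VALUES (?, ?, ?, ?)');
        $stmt->execute(array($name, $status, $message, $time));
    }
}

The built-in PHPUnit_Util_Log_Database listener mentioned below does essentially this against its own schema.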
Attaching the listener class to your suite is as easy as adding a tag to the phpunit.xml configuration file. For example:
<phpunit>
<testsuites>[...]</testsuites>
<selenium>[...]</selenium>
<listeners>
<listener class="Database"
          file="/usr/local/share/pear/PHPUnit/Util/Log/Database.php"/>
</listeners>
</phpunit>
That's all you need!
In fact, PHPUnit already comes with a working version of the listener I just described (PHPUnit_Util_Log_Database), as well as two different database schema definitions.
On many systems this class will live at /usr/local/share/pear/PHPUnit/Util/Log/Database.php, and the schemas at /usr/local/share/pear/PHPUnit/Util/Log/Database/MySQL.sql and /usr/local/share/pear/PHPUnit/Util/Log/Database/SQLite3.sql. You may have to do some tweaking depending on the DBMS you're using.
See these sections of the documentation:
http://www.phpunit.de/manual/3.4/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener
http://www.phpunit.de/manual/3.4/en/api.html#api.testresult.tables.testlistener
I am working on the same problem.
I asked a related question here a few days ago.
My attempt uses Selenium IDE, Selenium RC and Perl.
General strategy:
You can make newer releases of PHPUnit generate TAP output (options --tap, --log-tap).
(TAP is the Test Anything Protocol, a standardized output format.)
Parse the log file to obtain the suite metadata from the TAP parser object and insert it into the database using Perl, e.g. "# Number of Passed", "Failed", "Unexpectedly succeeded", ...