Does anyone have an example of how to access the AndroidManifest.xml to test that several properties are set?
For example, I would like to test that an Activity has an action set in its intent filter.
Thanks a lot
If you browse Robolectric's source, and specifically [master]/src/test/java/org/robolectric/AndroidManifestTest.java, you will find inspiration. They've done a pretty good job of thoroughly testing the XML.
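If you'd rather assert on manifest contents from a test of your own, a simpler route than parsing the XML is to query the PackageManager that Robolectric builds from your manifest. A minimal sketch, assuming Robolectric 4.x with androidx.test; the package name and action are placeholders for your own manifest entries:

import static org.junit.Assert.assertFalse;

import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import androidx.test.core.app.ApplicationProvider;
import java.util.List;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.robolectric.RobolectricTestRunner;

@RunWith(RobolectricTestRunner.class)
public class ManifestIntentFilterTest {

    @Test
    public void someActivityDeclaresViewAction() {
        Context context = ApplicationProvider.getApplicationContext();
        PackageManager pm = context.getPackageManager();

        // Build an intent that should match the filter declared in AndroidManifest.xml
        Intent intent = new Intent(Intent.ACTION_VIEW);
        intent.setPackage("com.example.app"); // placeholder: your own package name

        List<ResolveInfo> matches = pm.queryIntentActivities(intent, 0);
        assertFalse("No activity declares an ACTION_VIEW intent filter",
                matches.isEmpty());
    }
}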
I know that this currently can't be done. I'm trying to run some integration tests on my action, which requires an issue event to be passed. I have a sample JSON file with the event data, and I would like to override both:
GITHUB_EVENT_NAME
GITHUB_EVENT_PATH
so that my action grabs the right event and the sample info. Is there a workaround or a better way to approach this issue? Thanks in advance
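To make it concrete, what I'm effectively trying to simulate is something like this when running the action's code outside the runner (the fixture path and entrypoint here are just from my local setup and are hypothetical):

# pretend to be the runner: point the action at the sample event
export GITHUB_EVENT_NAME=issues
export GITHUB_EVENT_PATH="$PWD/test/fixtures/issue-event.json"  # my sample JSON
node dist/index.js  # hypothetical entrypoint; yours may differ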
My team is just getting started with X-Ray, and we are setting up our pipelines. However, while doing this I noticed that if I submit a JUnit XML file to X-Ray via the REST API, it will create new tests for any test data that isn't already in the system.
Is there a way to have X-Ray ignore test results for tests that don't exist for the test execution? I don't want it constantly creating extra tests.
For example:
(Jira/X-Ray Server) TestExecution MyExecution has test testA
From the client, I submit a JUnit XML file containing results for testA and testB to the MyExecution TestExecution
testB now exists on the server under MyExecution
I would like to be able to submit the JUnit XML file without it creating extra tests.
Whenever you import automation results using the REST API, or any of the available CI plugins, Xray will auto-provision "generic" Test entities.
The flow is detailed here.
Xray tries to find a unique identifier for the automated test; in the case of JUnit, it's based on the full classname plus the name of the test method; this will become part of the Generic Definition field. The process for JUnit is described in more detail here.
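For example, in a report entry like the following (names are illustrative), the identifier would be built from com.example.LoginTests plus testInvalidPassword:

<testsuite name="com.example.LoginTests" tests="1">
  <testcase classname="com.example.LoginTests" name="testInvalidPassword" time="0.42"/>
</testsuite>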
How this works for other test automation frameworks/report formats is similar and is detailed on the respective documentation pages.
If a "generic" Test is found, then the Test is reused and a Test Run is created against it. Otherwise, the Test will be auto-provisioned.
This process isn't configurable. However, in theory, if the user you use to submit automation results isn't able to create Test issues, you may get what you need.
Things like this are usually not configurable because they are normally a consequence of applying good practices, usually discussed internally with the team(s).
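For reference, a typical submission to the Xray server JUnit import endpoint looks something like this (host, credentials and issue keys are placeholders; check the Xray documentation linked above for the exact endpoint and parameters of your deployment):

curl -u automation_user:password \
  -F "file=@results/junit.xml" \
  "https://jira.example.com/rest/raven/1.0/import/execution/junit?projectKey=CALC&testExecKey=CALC-42"

If that automation_user cannot create Test issues, the auto-provisioning step described above is the part that would be blocked.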
In my Coded UI Test project, I need to check whether a few labels or messages are consistent with the context. But those checks are not critical, and when they're inconsistent I only need to output them as warnings.
Note that I'm using nested ordered tests so I can run only one global ordered test with vstest.console.exe and get the overall test coverage report in one shot.
Until now I was creating assertions to check those consistencies, but an assertion failure leads to a test failure, which leads to an ordered test failure and then to a playback stop.
I tried changing the Playback.PlaybackSettings.ContinueOnError value before and after the assertion: this works as I expect, in that the assertion is reported as a warning in the HTML report file. But it still causes the ordered test to stop and my global ordered test chain to fail...
I also tried using TestContext.WriteLine instead of creating an assertion, but it seems that this is not output to the HTML report.
So my question is:
Is there any way to create an assertion that is output only as a warning in the HTML report file and that doesn't lead to a test failure?
Thanks a lot for any answer and help on this ;)
So I got my solution by developing my own warning engine to integrate warnings into the test report, because I found no existing way to do this with the current Coded UI Test assertion engine.
I'll try to take some time to post the generic parts of the code structure with comments translated into English (we're French, so the comments are in French for now...), but here are the main steps:
1. Create a template based on the UITestActionLog.html report structure of the Coded UI Test engine, with only the start block and the JavaScript functions and CSS declarations in it.
2. Create an assertion class with a main function that manages the insertion of a warning HTML block into the HTML report first created from the template.
3. Create custom assert functions that call the main function wherever needed at runtime, plus a custom Stopwatch to inject the elapsed time into the report (because I couldn't find a way to get the elapsed time back directly from the Coded UI Test engine).
That's it.
Just one proposed way of doing it, maybe not the best one, but it worked for me. I'll try to take the time to post code blocks to make it clearer.
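In the meantime, here is a minimal sketch of the custom assert part (the class name, report path and HTML block are all hypothetical; the real engine inserts the block at the proper place in a report pre-created from the UITestActionLog.html template):

using System;
using System.Collections.Generic;
using System.IO;
using System.Net;

// Sketch of a "warning-only" assertion: instead of throwing (which would fail
// the test and stop the ordered test chain), it appends an HTML warning block
// to a report file created beforehand from the template.
public static class WarningAssert
{
    // Hypothetical path; in practice this is the report built from the template.
    public static string ReportPath { get; set; } = @"C:\TestResults\WarningReport.html";

    public static void AreEqual<T>(T expected, T actual, string label)
    {
        if (!EqualityComparer<T>.Default.Equals(expected, actual))
        {
            Append($"{label}: expected '{expected}' but was '{actual}'");
        }
    }

    public static void IsTrue(bool condition, string message)
    {
        if (!condition) Append(message);
    }

    private static void Append(string message)
    {
        // The real version inserts the block into the report's DOM structure;
        // appending is enough to illustrate the idea.
        string block =
            $"<div class=\"warning\">[{DateTime.Now:HH:mm:ss}] " +
            $"{WebUtility.HtmlEncode(message)}</div>{Environment.NewLine}";
        File.AppendAllText(ReportPath, block);
    }
}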
I'm very new to MASM.
I was trying to read this source code I found online and I came across invokx,
which is not invoke. I can't find anything about it anywhere, which is strange. Can anybody explain? Could it just be a typo?
Code snippet here:
invoke Install
invoke EnumProcs
invokx _ExitProcess, 0
And another snippet from another part of the code:
#nomore:
;; Destroy handle
invokx _CloseHandle[ebx], hSnapshot
Any help will be much appreciated, thanks
Judging by your code snippets, it's probably the macro defined here.
As the code is from the Tinba banking trojan, there's this article that talks about it:
‘GetBaseDelta’ and ‘invokx’ are macros predefined in the code. As its
name suggests, the first one calculates the delta offset and puts the
result in ‘ebx’ register [...] The second macro calls an API function
based on the contents of ‘ebx’ register (i.e. by taking into account
the same delta offset).
It seems that invokx can also work like the standard invoke.
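Based on that description, an invokx call presumably expands to something like this (a sketch inferred from the quoted article, not the actual macro definition from the source):

; roughly what "invokx _CloseHandle[ebx], hSnapshot" boils down to;
; ebx holds the delta offset computed earlier by GetBaseDelta
push    hSnapshot                      ; push arguments right to left
call    dword ptr _CloseHandle[ebx]    ; call through the delta-adjusted pointer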
I'm using PHPUnit & phpUnderControl to run Selenium RC on every build.
PHPUnit allows you to implement your own TestListener. A custom test listener implements the methods declared in the PHPUnit_Framework_TestListener interface. Specifically, your listener will implement:
startTestSuite()
endTestSuite()
startTest()
endTest()
addError()
addFailure()
addSkippedTest()
addIncompleteTest()
Once you've attached the TestListener, these methods will be called each time the corresponding events occur in your test suite. You'll write these methods to perform the INSERTs and UPDATEs on a test results database that you'll create.
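As a sketch, such a listener might look like this (PHPUnit 3.4-era API; the PDO DSN and the results table are made up for illustration):

<?php
// Minimal sketch of a database-logging listener (PHPUnit 3.4-era API).
// The DSN and the "results" table layout are hypothetical.
class DatabaseResultListener implements PHPUnit_Framework_TestListener
{
    private $pdo;

    public function __construct()
    {
        $this->pdo = new PDO('mysql:host=localhost;dbname=testresults', 'user', 'secret');
    }

    public function addFailure(PHPUnit_Framework_Test $test,
                               PHPUnit_Framework_AssertionFailedError $e, $time)
    {
        $this->log($test->getName(), 'failure', $e->getMessage(), $time);
    }

    public function addError(PHPUnit_Framework_Test $test, Exception $e, $time)
    {
        $this->log($test->getName(), 'error', $e->getMessage(), $time);
    }

    public function endTest(PHPUnit_Framework_Test $test, $time)
    {
        $this->log($test->getName(), 'finished', '', $time);
    }

    private function log($name, $status, $message, $time)
    {
        $stmt = $this->pdo->prepare(
            'INSERT INTO results (test_name, status, message, duration) VALUES (?, ?, ?, ?)');
        $stmt->execute(array($name, $status, $message, $time));
    }

    // The remaining interface methods can be left empty if you don't need them:
    public function addIncompleteTest(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function addSkippedTest(PHPUnit_Framework_Test $test, Exception $e, $time) {}
    public function startTest(PHPUnit_Framework_Test $test) {}
    public function startTestSuite(PHPUnit_Framework_TestSuite $suite) {}
    public function endTestSuite(PHPUnit_Framework_TestSuite $suite) {}
}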
Attaching the listener class to your suite is as easy as adding a tag to the phpunit.xml configuration file. For example:
<phpunit>
  <testsuites>[...]</testsuites>
  <selenium>[...]</selenium>
  <listeners>
    <listener class="Database"
              file="/usr/local/share/pear/PHPUnit/Util/Log/Database.php"/>
  </listeners>
</phpunit>
That's all you need!
In fact, PHPUnit already comes with a working version of the listener I just described (PHPUnit_Util_Log_Database), as well as two different database schema definitions.
On many systems this class will live at /usr/local/share/pear/PHPUnit/Util/Log/Database.php, and the schemas at /usr/local/share/pear/PHPUnit/Util/Log/Database/MySQL.sql and /usr/local/share/pear/PHPUnit/Util/Log/Database/SQLite3.sql. You may have to do some tweaking depending on the DBMS you're using.
See these sections of the documentation:
http://www.phpunit.de/manual/3.4/en/extending-phpunit.html#extending-phpunit.PHPUnit_Framework_TestListener
http://www.phpunit.de/manual/3.4/en/api.html#api.testresult.tables.testlistener
I am working on the same problem.
I asked a related question here a few days ago.
My attempt uses Selenium IDE, Selenium RC and Perl.
General strategy:
You can make newer releases of PHPUnit generate TAP output (options --tap, --log-tap).
(TAP is the Test Anything Protocol, a standardized output format.)
Parse the log file to obtain the suite metadata from the TAP parser object and insert it into the database using Perl, e.g. the numbers of passed, failed, and unexpectedly succeeded tests.
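A minimal sketch of that parsing step with the TAP::Parser CPAN module (the log file name is up to you, and the DBI INSERT step is omitted):

#!/usr/bin/perl
use strict;
use warnings;
use TAP::Parser;

# parse a log produced with: phpunit --log-tap results.tap ...
my $parser = TAP::Parser->new({ source => 'results.tap' });
$parser->run;    # consume the whole TAP stream

# suite metadata from the parser object; the DBI INSERTs would go here
my @passed      = $parser->passed;
my @failed      = $parser->failed;
my @todo_passed = $parser->todo_passed;    # "unexpectedly succeeded"
printf "passed: %d, failed: %d, unexpectedly succeeded: %d\n",
    scalar @passed, scalar @failed, scalar @todo_passed;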