How can I debug TestCafe to see why it stopped reporting pass or fail midway through running tests? - testcafe

Overnight I have 250+ TestCafe tests running on a server, usually without any problem. However, last night results stopped being reported after the 90th test, even though the remaining tests continued to run.
When the tests are run, the console output is recorded to a file, and in this file I can see the result for each test, like below.
[ ✖ ] My automated test
However, after the result for the 90th test is logged, there are no more logs for the results of the remaining tests. I know that the remaining tests have run because I can see the messages that each test logs to the console as its various stages are hit.
Does anyone have an idea what may have caused the tests to stop reporting their results, or any suggestions for how I might debug this issue?
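One way to narrow this down is to attach two reporters at the same time, one writing to the console and one writing straight to its own file, so you can tell whether it is the reporter, the redirected console stream, or the tests themselves that go quiet. Below is a minimal sketch using TestCafe's programmatic runner; the test path, browser, and choice of the json reporter are assumptions about the setup, not taken from the question.

    // debug-run.js - a sketch only; adjust src/browsers to match the real suite
    const createTestCafe = require('testcafe');

    (async () => {
        const testcafe = await createTestCafe();              // local TestCafe instance
        try {
            const failedCount = await testcafe
                .createRunner()
                .src(['tests/'])                              // assumed test directory
                .browsers(['chrome'])                         // assumed browser
                // two reporters: 'spec' to stdout and 'json' to its own file,
                // so results survive even if one output stream stops flushing
                .reporter(['spec', { name: 'json', output: 'results.json' }])
                .run();
            console.log('Failed tests:', failedCount);
        } finally {
            await testcafe.close();
        }
    })();

If results.json keeps growing after the console log goes quiet, the problem is more likely in whatever redirects stdout to the log file than in TestCafe itself.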

Related

Issues with Postman Monitors

I am having an issue with running my collection on Monitors. It runs with no issues in the Collection Runner but does not on Monitors.
It gives me a 401 Unauthorized error only when I use Monitors. I am using two different hosts under the same monitor.
I want the collection to run automatically every x hours each day. I have made sure my workflow works with the runner; however, it does not with Monitors.
I’ve already tried:
I use OAuth 2.0 for the request that I am having issues with.
The request has pre-request scripts and tests.
The token is not expired.
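One thing worth checking here: Postman Monitors only see the initial (synced) values of environment variables, not unsynced current values, so a token that only exists in your local environment will be missing when the monitor runs and the request will come back 401. If the pre-request script obtains the token itself, the monitor no longer depends on a locally saved value. A rough sketch of such a pre-request script follows; the token URL and variable names are hypothetical.

    // Pre-request script sketch - token endpoint and variable names are hypothetical
    pm.sendRequest({
        url: pm.environment.get('token_url'),
        method: 'POST',
        header: [{ key: 'Content-Type', value: 'application/x-www-form-urlencoded' }],
        body: {
            mode: 'urlencoded',
            urlencoded: [
                { key: 'grant_type', value: 'client_credentials' },
                { key: 'client_id', value: pm.environment.get('client_id') },
                { key: 'client_secret', value: pm.environment.get('client_secret') }
            ]
        }
    }, (err, res) => {
        if (err) {
            console.log('Token request failed:', err);
            return;
        }
        // store the fresh token so the request's Authorization setup can reference it
        pm.environment.set('access_token', res.json().access_token);
    });

For this to work in a monitor, client_id and client_secret must have their initial values filled in, since those are the only values a monitor can read.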

SSIS package finished successfully while skipping several tasks

The ETL package finished successfully without executing the last several tasks:
Then I tried to run one task of the same type and skip the others:
After that I created a separate package with the last five tasks, and they ran as expected!
Question:
What happened with the flow in the first two figures? Why does the package skip several tasks without any warnings, errors, etc.?
Thanks a lot for answers and any ideas about this strange behavior!
[UPDATE] Answered by #Peter_R:
I changed both sp_updatestats inputs from AND to OR and everything is OK. The arrows changed to dotted ones:
The Logical AND constraint requires all preceding tasks to complete before the next one runs, so SP_UpdateStats will not run until both ProcessFull and the MeasureGroupSet Loop have completed.
I am guessing that after Deploy Data the Expression is designed to split the workflow depending on a condition you have set. In doing this you will never have both ProcessFull and the MeasureGroupSet Loop running in parallel, meaning that the SP_UpdateStats task will never run.
If you change both constraints connecting to SP_UpdateStats to Logical OR, it will run after either ProcessFull or the MeasureGroupSet Loop has completed.
This is still the case if one of the tasks is disabled as well; slightly odd, but still the case.

How to speed up integration tests using SQL Server Dev Edition

We have a suite of applications developed in C# and C++ and using SQL Server as the back end. Integration tests are developed with NUnit, and they take more than two minutes to run. To speed up integration tests, we are using the following:
Tests run on the same workstation, so no network delays
Test databases are created on DataRam RAM Disk, which is fast
Test fixtures run in parallel, currently up to four at a time
Most test data is bulk loaded using table-valued parameters.
What else can be done to speed up automated integration tests?
I know this question is very, very old but I'll post my answer anyway.
It may sound stupid, but: write fewer integration tests and more unit tests. Write integration tests only at your application's boundaries (as in "when you pass control to code you do not own").
My opinion on this is inspired by J.B. Rainsberger. If you want, you can listen to a talk he gave on this topic. He is much better at explaining this than I am. Here is a link to the video:
http://vimeo.com/80533536
I do not like this answer ("write fewer integration tests"), as it is wrong for us. Our application is data heavy. Most of our code is logic around the data, so without integration tests we would have only trivial unit tests (which I think should still be written).
Our integration tests run for one hour. Thousands of tests. They have brought us tremendous value.
I think you should analyse the slow tests and why they are slow. Check whether multiple tests can reuse the data without dropping and recreating it from scratch.
Divide tests into areas so you do not always need to run every test.
Use an existing database snapshot instead of recreating the database.

Complex web app multithreaded testing (not load)

I have a complex web application which interacts intensively with the database. I lock the DB (MySQL InnoDB) within a subset of requests to prevent data integrity violations (using a 'begin' ... 'commit' command sequence). While the number of requests is less than N, the app works fine, but when the number of requests exceeds N, locking errors appear ('Serialization failure: 1213 Deadlock found when trying to get lock; try restarting transaction').
I have a lot of functional tests. All of them use a 'single-client' scheme to emulate various usage scenarios of the app, and they all pass. But how can I test my app with multiple client connections (I want to be able to verify the DB state at any time while the test is running)? As far as I know, that means this is not simple load testing.
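For illustration, here is a rough sketch of what a multi-client test can look like when driven directly from code: several connections run the same transactional scenario in parallel while the test (or another connection) stays free to inspect the DB state mid-run. This uses Node.js with the mysql2 client, and the table and queries are made up for the example, not taken from the question.

    // multi-client sketch - hypothetical table and queries, Node.js + mysql2
    const mysql = require('mysql2/promise');

    const config = { host: 'localhost', user: 'test', password: 'test', database: 'app' };

    async function clientScenario(i) {
        const conn = await mysql.createConnection(config);
        try {
            await conn.beginTransaction();
            // touch two rows in opposite order on alternating clients to force contention
            await conn.query('UPDATE accounts SET balance = balance - 1 WHERE id = ?', [(i % 2) + 1]);
            await conn.query('UPDATE accounts SET balance = balance + 1 WHERE id = ?', [((i + 1) % 2) + 1]);
            await conn.commit();
        } catch (err) {
            await conn.rollback();
            if (err.errno === 1213) {                          // ER_LOCK_DEADLOCK
                console.log(`client ${i}: deadlock, transaction rolled back`);
            } else {
                throw err;
            }
        } finally {
            await conn.end();
        }
    }

    (async () => {
        // ten parallel "clients"; a separate connection can SELECT the current
        // state at any point while these transactions are in flight
        await Promise.all(Array.from({ length: 10 }, (_, i) => clientScenario(i)));
    })();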
You can use JMeter for that, using:
An HTTP sampler to start
Once you identify the queries involved, a JDBC (database) sampler if you want to reproduce the issue more simply or quickly and to test the fix
Regards

Lock wait timeout exceeded error happening only at an interval of one hour

During my performance tests I see the 'Lock wait timeout exceeded' error at an interval of roughly one hour. I have not seen this kind of issue apart from when the performance tests are carried out.
Is this something that can be fixed in the application, or should I retry those updates? Please suggest.
Note: my application has mostly read operations and only very few writes.
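On the "should I retry" part: InnoDB reports this as error 1205 ('Lock wait timeout exceeded'), and an application-level retry is a common mitigation while the underlying contention is investigated. A rough sketch of such a retry wrapper is below, written in Node.js with the mysql2 client purely for illustration; the query and retry limits are hypothetical, and the real application may well be in another language.

    // retry sketch - illustration only; query and limits are hypothetical
    const mysql = require('mysql2/promise');

    async function runWithRetry(conn, sql, params, attempts = 3) {
        for (let attempt = 1; attempt <= attempts; attempt++) {
            try {
                return await conn.query(sql, params);
            } catch (err) {
                const retryable = err.errno === 1205 || err.errno === 1213; // lock wait timeout / deadlock
                if (!retryable || attempt === attempts) throw err;
                await new Promise(r => setTimeout(r, 200 * attempt));       // brief backoff before retrying
            }
        }
    }

    // usage: await runWithRetry(conn, 'UPDATE jobs SET state = ? WHERE id = ?', ['done', 42]);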
If this issue only occurs in test and never in production, then you likely have an engineering ghost, unless the lock is directly related to new code in this release which is not yet in production.
Engineering ghosts are common when the test artifacts set up to exercise the system under test behave differently than a real user in production. This can be related to speed of execution, session abandonment, passing appropriate information back and forth, or potentially dozens of other items. The key here is to understand which of your test artifacts is causing the issue.
If you are using a traditional end-user interface test tool which records the conversation between client and server, then record twice and diff the test recordings. You may have missed a subtle element related to session or state which is directly related to the above errors. If you have hand-constructed the conversation, then you will want to examine, from a log or sniffer-trace perspective, how your hand construction varies from the actual client conversation, and then adjust your test artifact accordingly.