I have a running test project with JBehave which tests a piece of software on another machine.
My problem is that I would like to run the same stories for the same software on other machines.
To summarize: the web application I'm testing is installed on 20 different hosts, and I would like to run the tests against every instance.
The tests are simple smoke tests, where I daily check for installation, application and database issues.
The test project was configured for one instance and works properly, so I want to extend it to n instances.
Can I parameterize the test runner or something like that? Or can I call the tests many times with different parameters?
I'm a little bit confused.
You could use parametrised scenarios to run the same tests with different parameters.
see: http://jbehave.org/reference/stable/parametrised-scenarios.html
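A minimal sketch of what this could look like, with the story text shown in the comment; the host names and step wording are placeholders, not your actual steps:

```java
import org.jbehave.core.annotations.Given;
import org.jbehave.core.annotations.Named;
import org.jbehave.core.annotations.Then;

/*
 * smoke.story (parametrised scenario, run once per Examples row):
 *
 *   Scenario: daily smoke test
 *   Given the application is deployed on <host>
 *   Then the login page is reachable
 *
 *   Examples:
 *   |host|
 *   |app-host-01.example.com|
 *   |app-host-02.example.com|
 */
public class SmokeSteps {

    private String baseUrl;

    @Given("the application is deployed on <host>")
    public void givenApplicationHost(@Named("host") String host) {
        // each Examples row binds a different host to this step
        this.baseUrl = "http://" + host;
    }

    @Then("the login page is reachable")
    public void thenLoginPageIsReachable() throws Exception {
        java.net.HttpURLConnection conn = (java.net.HttpURLConnection)
                new java.net.URL(baseUrl + "/login").openConnection();
        if (conn.getResponseCode() != 200) {
            throw new AssertionError("Login page not reachable on " + baseUrl);
        }
    }
}
```

With 20 hosts you would list all 20 in the Examples table (or load the table from a file), and the same scenario runs once per host.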
I developed two eclipse plug-ins: a server and a client.
Either can be run separately in its own application, or both can run in the same application in "embedded" mode.
For each operation mode, I created an Eclipse product. The embedded product is basically the same as the client, but it also contains the server plug-in, which is then automatically activated.
Now, when it comes to setting up integration tests, I run into a problem: the product for the client and the embedded application use the same application ID. Is there any way to distinguish between the two? Basically, I want to integration-test exactly what I defined in my embedded.product (of course with additional JUnit testing plug-ins). Is this possible in any way?
Our Change Approval team is trying to get us to use Octopus to deploy SSIS packages to our Production environment. The problem is that the tool (Azure DevOps) we use to generate the build for Octopus doesn't create or populate Project parameters when it deploys a project. It does, however, create and populate Environment parameters.
Up to now we have not used Environment parameters at all because we don't use multiple environments on any one server.
The CA team is suggesting that we work around the inability to deploy Project parameters by converting them to Environment parameters, and create one Environment per project.
This feels wrong to me, but I haven't been able to come up with a reason we can't do this, nor have I found one anywhere on the internet so far. On the other hand, I also haven't found anybody saying that they are doing this and it's working fine, either.
So, does anybody have any arguments for or against using Environment parameters to replace Project parameters per se?
Assume that I don't have to worry about any possible future need for multiple environments on a server.
I am working with Hudson here and I am trying to create a single job that users with different access can run. Based on their access level, they would see different options.
For instance:
A Developer running this job would see the build stage, be able to watch the build process, and deploy it to a development server.
The Release Engineer would see the same options as the Developer, but would also be able to deploy the code to a different set of servers.
And so forth.
Is this even possible, something like role-based jobs? I know I can limit access and control who can do what, but this is a little different.
I am collecting data and storing it in a MySQL database using Java. Additionally, I use Maven for building the project, TestNG as a test framework, and Spring-Jdbc for accessing the database. I've implemented a DAO layer that encapsulates the access to the database. Besides adding data using the DAO classes, I want to execute some queries which aggregate the data and store the results in some other tables (like materialized views).
Now, I would like to write some test cases which check whether the DAO classes are working as they should. Therefore, I thought of using an in-memory database which is populated with some test data. Since I am also using MySQL-specific SQL queries for aggregating data, I ran into some trouble:
Firstly, I thought of simply using the embedded-database functionality provided by Spring-Jdbc to instantiate an embedded database. I decided to use the H2 implementation. There I ran into trouble because of the aggregation queries, which use MySQL-specific features (e.g. time-manipulation functions like DATE()). Another disadvantage of this approach is that I need to maintain two DDL files: the actual DDL file defining the tables in MySQL (here I define the encoding and add comments to tables and columns, both features being MySQL-specific), and the test DDL file that defines the same tables but without comments etc., since H2 does not support comments.
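For context, the embedded-database setup I tried looks roughly like this (the script names are placeholders):

```java
import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class EmbeddedTestDatabase {

    public static DataSource createTestDataSource() {
        // builds an in-memory H2 database and runs the test DDL and data scripts
        return new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.H2)
                .addScript("classpath:schema-h2.sql")   // H2-compatible copy of the MySQL DDL
                .addScript("classpath:test-data.sql")
                .build();
    }

    public static void main(String[] args) {
        JdbcTemplate jdbc = new JdbcTemplate(createTestDataSource());
        // "my_table" is just an example table name from the test schema
        System.out.println(jdbc.queryForObject("select count(*) from my_table", Integer.class));
    }
}
```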
I found a description of using MySQL as an embedded database which I can use within the test cases (http://literatitech.blogspot.de/2011/04/embedded-mysql-server-for-junit-testing.html). That sounded really promising to me. Unfortunately, it didn't work: a MissingResourceException occurred, "Resource '5-0-21/Linux-amd64/mysqld' not found". It seems that the driver is not able to find the database daemon on my local machine. But I don't know what I have to look for to find a solution to that issue.
Now I am a little bit stuck, and I am wondering if I should have created the architecture differently. Does someone have tips on how I should set up an appropriate system? I have two other options in mind:
Instead of using an embedded database, I go with a native MySQL instance and set up a database that is only used for the test cases. This option sounds slow. Actually, I might want to set up a CI server later on, and I thought that using an embedded database would be more appropriate since the tests run faster.
I strip all the MySQL-specific stuff out of the SQL queries and use H2 as an embedded database for testing. If this option is the right choice, I would need to find another way to test the SQL queries that aggregate the data into materialized views.
Or is there a 3rd option which I don't have in mind?
I would appreciate any hints.
Thanks,
XComp
I've created a Maven plugin exactly for this purpose: jcabi-mysql-maven-plugin. It starts a local MySQL server in the pre-integration-test phase and shuts it down in post-integration-test.
If it is not possible to get the in-memory MySQL database to work, I suggest using the H2 database for the "simple" tests and a dedicated MySQL instance to test the MySQL-specific queries.
Additionally, the tests for the real MySQL database can be configured as integration tests in a separate Maven profile so that they are not part of the regular Maven build. On the CI server you can create an additional job that runs the MySQL tests periodically, e.g. daily or every few hours. With such a setup you can keep and test your MySQL-specific queries without slowing down your regular build. You can also run a normal build even if the test database is not available.
There is a nice Maven plugin for integration tests called maven-failsafe-plugin. It provides pre- and post-integration-test phases that can be used to set up the test data before the tests and to clean up the database afterwards.
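A sketch of what such an integration test could look like; by default the Failsafe plugin picks up classes whose names match *IT.java, and the connection settings, procedure and table names below are placeholders for your own:

```java
import static org.testng.Assert.assertTrue;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;
import org.testng.annotations.Test;

// Named *IT so it is only run by maven-failsafe-plugin (integration-test phase),
// not by the regular unit-test run.
public class AggregationQueriesIT {

    private JdbcTemplate jdbc() {
        // points at the dedicated MySQL test instance; credentials are placeholders
        DriverManagerDataSource ds = new DriverManagerDataSource(
                "jdbc:mysql://localhost:3306/testdb", "test", "test");
        return new JdbcTemplate(ds);
    }

    @Test
    public void aggregationFillsMaterializedView() {
        JdbcTemplate jdbc = jdbc();
        // placeholder for your MySQL-specific aggregation query or procedure
        jdbc.execute("CALL refresh_daily_aggregates()");
        int rows = jdbc.queryForObject("select count(*) from daily_aggregates", Integer.class);
        assertTrue(rows > 0, "aggregation should produce at least one row");
    }
}
```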
I am building an application in Java that connects to a variety of databases: MySQL, Oracle, Firebird and a few others. The user (me) can select a test for that connection (I'm not sure whether I should predefine the tests or just allow a free-form text box for the user to input a statement).
The plan is to try the application (client) on Ubuntu, Solaris and Windows 7 machines (I will in fact be rebuilding a home computer several times).
On the server side I will be installing each RDBMS on a Windows, Ubuntu and Solaris server.
The idea is that I would like to collate some data on the RDBMS/operating-system combinations, client-side and server-side timings, etc.
What I could use some help with is generating some informative tests. Ideally I would like them to mimic a real-world environment. Seeing as I am using a two-tier architecture and have no idea how I would go about implementing a business layer, I would appreciate some thoughts on what kinds of tests I should run.
The application will be able to simulate multiple users accessing the database, so please be ruthless.
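A rough sketch of the kind of multi-user timing harness I have in mind; the JDBC URL, credentials and query are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ConcurrentQueryTimer {

    public static void main(String[] args) throws Exception {
        final String url = "jdbc:mysql://dbhost:3306/testdb";  // placeholder connection
        final String query = "select count(*) from orders";    // placeholder test statement
        final int users = 10;

        ExecutorService pool = Executors.newFixedThreadPool(users);
        List<Future<Long>> results = new ArrayList<Future<Long>>();

        // each task simulates one user: open a connection, run the query, report elapsed time
        for (int i = 0; i < users; i++) {
            results.add(pool.submit(new Callable<Long>() {
                public Long call() throws Exception {
                    long start = System.nanoTime();
                    Connection con = DriverManager.getConnection(url, "user", "password");
                    try {
                        Statement st = con.createStatement();
                        ResultSet rs = st.executeQuery(query);
                        while (rs.next()) { /* drain the result set */ }
                    } finally {
                        con.close();
                    }
                    return (System.nanoTime() - start) / 1000000L;  // elapsed milliseconds
                }
            }));
        }

        for (Future<Long> f : results) {
            System.out.println("query took " + f.get() + " ms");
        }
        pool.shutdown();
    }
}
```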
On a side note, any thoughts on the scope (more OS or RDBMS suggestions) would be great!
Kind regards
Simon
Assuming you aren't trying to write a whole benchmarking app yourself...
There are two tools that work well if it's a web app connecting to the database:
ab (Apache Bench), which is on every Unix machine I've used in the past 4 years:
http://httpd.apache.org/docs/2.0/programs/ab.html
And jmeter:
http://jmeter.apache.org/