public class MyTest {
    @Rule
    public final TextFromStandardInputStream systemInMock
            = emptyStandardInputStream();

    @Test
    public void shouldTakeUserInput() {
        systemInMock.provideLines("add 5", "another line");
        InputOutput inputOutput = new InputOutput();
        assertEquals("add 5", inputOutput.getInput());
    }
}
Here, given the input add 5, I want to verify the output that some method prints using System.out.println(). Is it possible to capture and assert on that output?
You can use the SystemOutRule (from the same System Rules library) to capture the output written to System.out and assert against it:
public class MyTest {
    @Rule
    public final TextFromStandardInputStream systemInMock =
            emptyStandardInputStream();

    @Rule
    public final SystemOutRule systemOutRule = new SystemOutRule().enableLog();

    @Test
    public void shouldTakeUserInput() {
        systemInMock.provideLines("add 5", "another line");
        MyCode myCode = new MyCode();
        myCode.doSomething();
        assertEquals("expected output", systemOutRule.getLog());
    }
}
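One caveat: println appends a line separator, so the captured log will contain it. Assuming the System Rules library's getLogWithNormalizedLineSeparator() method, a platform-independent assertion could look like this:

// The normalized log always uses \n as the separator, regardless of platform
assertEquals("expected output\n", systemOutRule.getLogWithNormalizedLineSeparator());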
I am new to Spock and am trying to achieve the scenario below:
@Test
public void asynchronousMethodTest() {
    JsonObject jsonObject = new JsonObject();
    jsonObject.put("name", "Lilly").put("city", "Glendale");
    AsyncResult<JsonObject> asyncResult = Mockito.mock(AsyncResult.class);
    when(asyncResult.succeeded()).thenReturn(true);
    when(asyncResult.result()).thenReturn(jsonObject);
    doAnswer(new Answer<AsyncResult<JsonObject>>() {
        @Override
        public AsyncResult<JsonObject> answer(InvocationOnMock invocation) throws Throwable {
            ((Handler<AsyncResult<JsonObject>>) invocation.getArguments()[1]).handle(asyncResult);
            return null;
        }
    }).when(someService).callSomeService(Mockito.any(), Mockito.any());
    childVerticle.asynchronousMethod();
    //verify(someService, times(1)).callSomeService(Mockito.any(), Mockito.any());
}
What is the Spock equivalent of the above code?
Yes, it's possible; please have a look at this part of the documentation. Here's the relevant part:
subscriber.receive(_) >> { args -> args[0].size() > 3 ? "ok" : "fail" }
where subscriber is defined as a Mock. The closure receives the invocation's arguments, so the same pattern can cast the handler argument and invoke it, just as your doAnswer does.
I have this custom matcher:
public class CofmanStringMatcher extends TypeSafeMatcher<String> {
    private List<String> options;

    private CofmanStringMatcher(final List<String> options) {
        this.options = Lists.newArrayList(options);
    }

    @Override
    protected boolean matchesSafely(final String sentResult) {
        return options.stream().anyMatch(option -> option.equals(sentResult));
    }

    public static CofmanStringMatcher isCofmanStringOnOfTheStrings(List<String> options) {
        return new CofmanStringMatcher(options);
    }

    @Override
    public void describeTo(final Description description) {
        System.out.println("in describeTo");
        // description.appendText("expected to be equal to of the list: " + options);
    }
}
which checks whether a string equals one of several candidate strings.
When I run this test code:
verify(cofmanService, times(1))
        .updateStgConfigAfterSimulation(
                argThat(isCofmanStringOnOfTheStrings(ImmutableList.of(expectedConditionsStrings, expectedConditionsStrings2))),
                eq(Constants.addCommitMsg + SOME_REQUEST_ID));
I get this error:
Argument(s) are different! Wanted:
cofmanService.updateStgConfigAfterSimulation(
,
"add partner request id = 1234"
);
-> at com.waze.sdkService.services.pubsub.callback.RequestToCofmanSenderTest.localAndRtValidationSucceeds_deployCofmanStg(RequestToCofmanSenderTest.java:131)
Actual invocation has different arguments:
cofmanService.updateStgConfigAfterSimulation(
"some text"
);
The test fails even though updateStgConfigAfterSimulation is called with a first argument that matches one of the list elements.
I'm using Mockito 1.10 and Hamcrest 1.3.
Here is the method's signature:
void updateStgConfigAfterSimulation(String conditionsMap, String commitMsg) throws Exception
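One observation (not necessarily the root cause of the mismatch): the bare "," in the Wanted: output above is a symptom of the empty describeTo(). Populating the Description makes verification failures state what was expected; a minimal sketch:

@Override
public void describeTo(final Description description) {
    // Gives Mockito/Hamcrest a readable expectation instead of a blank line
    description.appendText("a string equal to one of: " + options);
}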
I am trying to gather some information after every test method, and would like to analyze the gathered information after the test class completes. So I have a private member variable, a list, which I add to after each test method completes. However, the list always comes back empty.
Note: my test class implements the Callable interface.
Here is my code snippet:
public class MyLoadTest implements Callable<List<String>> {
    private List<String> statisticsCollector;
    private JUnitCore core = null;
    private int x = 0;

    public MyLoadTest() {
        this.core = new JUnitCore();
        this.statisticsCollector = new ArrayList<String>();
    }

    @Override
    public List<String> call() {
        log.info("Starting a new thread of execution with Thread# -" + Thread.currentThread().getName());
        core.run(this.getClass());
        return getStatisticsCollector(); // this is always returning a list of size 0
    }

    @After
    public void gatherSomeStatistics() {
        x = x + 1;
        String sb = "Currently executing ----" + x;
        log.info("Currently executing ----" + x);
        addToStatisticsCollector(sb);
    }

    @Test
    @FileParameters(value = "classpath:folder/testB.json", mapper = MyMapper.class)
    public void testB(MarsTestDefinition testDefinition) {
        runTests(testDefinition);
    }

    @Test
    @FileParameters(value = "classpath:folder/testA.json", mapper = MyMapper.class)
    public void testA(MyDefinition testDefinition) {
        runTests(testDefinition);
    }

    public List<String> getStatisticsCollector() {
        return this.statisticsCollector;
    }

    public void addToStatisticsCollector(String sb) {
        this.statisticsCollector.add(sb);
    }
}
So why is the list always getting reset, even though I am appending to it in my @After annotated method?
Any help will be highly appreciated. Thanks.
Try the following code. JUnitCore creates a fresh instance of the test class for every test method, so instance fields are reset each time; making the list static lets it accumulate across tests:
private static List<String> statisticsCollector = new ArrayList<String>();
private JUnitCore core = null;
private int x = 0;

public MyLoadTest() {
    this.core = new JUnitCore();
}

public List<String> getStatisticsCollector() {
    return statisticsCollector;
}
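To analyze the gathered data once the whole class has finished, one option is an @AfterClass hook that reads the static list; a minimal sketch, assuming plain JUnit 4:

// Runs once after all test methods in the class have completed.
// It must be static, matching the static statisticsCollector above.
@AfterClass
public static void analyzeStatistics() {
    for (String stat : statisticsCollector) {
        System.out.println(stat);
    }
}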
// DOC Datatype Constants
public enum DocDatatype {
    PROFILE("Profile"),
    SUPPORT_DETAIL("SupportDetail"),
    MISC_PAGE("MiscPage");

    String name;

    DocDatatype(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    // the identifierMethod
    public String toString() {
        return name;
    }

    // the valueOfMethod
    public static DocDatatype fromString(String value) {
        for (DocDatatype type : DocDatatype.values()) {
            if (type.getName().equals(value))
                return type;
        }
        throw new IllegalArgumentException(value + " is not a valid DocDatatype");
    }
}
I have written the JUnit test case this way. Is this the right way to write it?
public class DocDatatypeTest {
    private static final Log logger = LogFactory.getLog(DocDatatypeTest.class);

    @Test
    public void testDocDatatypeFromName() {
        DocDatatype d = DocDatatype.fromString("Profile");
        assertTrue(d.toString().compareToIgnoreCase("PROFILE") == 0);
    }

    @Test
    public void testDocDatatypeFromName1() {
        DocDatatype d = DocDatatype.fromString("SupportDetail");
        assertTrue(d.toString().compareToIgnoreCase("SUPPORT_DETAIL") == 0);
    }
}
A few things here:
Remove the logger from the test. A test should pass or fail; there's no need for logging.
Don't use assertTrue for this. If the test fails, it will give you no information about why it failed.
I would change this to:
@Test
public void testDocDatatypeFromName() {
    DocDatatype actualDocType = DocDatatype.fromString("Profile");
    assertSame(DocDatatype.PROFILE, actualDocType);
}
If you really want to assert the value of the toString, then do this:
@Test
public void testDocDatatypeFromName() {
    DocDatatype d = DocDatatype.fromString("Profile");
    assertEquals("Profile", d.toString());
}
You're missing tests for the case where the lookup doesn't match anything (see the sketch after this list).
I wouldn't even write these tests, as I see them adding no value whatsoever. The code that uses the enums should have the tests, not the enums themselves.
Your tests are named very badly. There's no need to start a test name with test, and the fact that you added a "1" to the end of the second test should tell you something. Test names should focus on action and behaviour. If you want to read more about this, get the December issue of JAX Magazine, which has a snippet about naming from my forthcoming book about testing.
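For that missing no-match case, here is a minimal sketch, assuming JUnit 4 (whose expected attribute asserts that the exception is thrown) and following the naming advice above:

// fromString throws IllegalArgumentException for unknown values (see the enum above)
@Test(expected = IllegalArgumentException.class)
public void fromStringRejectsValuesThatMatchNoConstant() {
    DocDatatype.fromString("NoSuchType");
}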
Can anyone please tell me whether there is a way in Apache Spark to store a JavaRDD in a MySQL database? I am taking input from two CSV files, and after performing a join on their contents I need to save the output (a JavaRDD) to a MySQL database. I can already save the output to HDFS, but I cannot find any information about an Apache Spark-to-MySQL connection. Below I am posting my Spark SQL code; it might serve as a reference for those looking for a Spark SQL example.
package attempt1;
import java.io.Serializable;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.api.java.JavaSQLContext;
import org.apache.spark.sql.api.java.JavaSchemaRDD;
import org.apache.spark.sql.api.java.Row;
public class Spark_Mysql {
@SuppressWarnings("serial")
public static class CompleteSample implements Serializable {
private String ASSETNUM;
private String ASSETTAG;
private String CALNUM;
public String getASSETNUM() {
return ASSETNUM;
}
public void setASSETNUM(String aSSETNUM) {
ASSETNUM = aSSETNUM;
}
public String getASSETTAG() {
return ASSETTAG;
}
public void setASSETTAG(String aSSETTAG) {
ASSETTAG = aSSETTAG;
}
public String getCALNUM() {
return CALNUM;
}
public void setCALNUM(String cALNUM) {
CALNUM = cALNUM;
}
}
@SuppressWarnings("serial")
public static class ExtendedSample implements Serializable {
private String ASSETNUM;
private String CHANGEBY;
private String CHANGEDATE;
public String getASSETNUM() {
return ASSETNUM;
}
public void setASSETNUM(String aSSETNUM) {
ASSETNUM = aSSETNUM;
}
public String getCHANGEBY() {
return CHANGEBY;
}
public void setCHANGEBY(String cHANGEBY) {
CHANGEBY = cHANGEBY;
}
public String getCHANGEDATE() {
return CHANGEDATE;
}
public void setCHANGEDATE(String cHANGEDATE) {
CHANGEDATE = cHANGEDATE;
}
}
@SuppressWarnings("serial")
public static void main(String[] args) throws Exception {
JavaSparkContext ctx = new JavaSparkContext("local[2]", "JavaSparkSQL");
JavaSQLContext sqlCtx = new JavaSQLContext(ctx);
JavaRDD<CompleteSample> cs = ctx.textFile("C:/Users/cyg_server/Documents/bigDataExample/AssetsImportCompleteSample.csv").map(
new Function<String, CompleteSample>() {
public CompleteSample call(String line) throws Exception {
String[] parts = line.split(",");
CompleteSample cs = new CompleteSample();
cs.setASSETNUM(parts[0]);
cs.setASSETTAG(parts[1]);
cs.setCALNUM(parts[2]);
return cs;
}
});
JavaRDD<ExtendedSample> es = ctx.textFile("C:/Users/cyg_server/Documents/bigDataExample/AssetsImportExtendedSample.csv").map(
new Function<String, ExtendedSample>() {
public ExtendedSample call(String line) throws Exception {
String[] parts = line.split(",");
ExtendedSample es = new ExtendedSample();
es.setASSETNUM(parts[0]);
es.setCHANGEBY(parts[1]);
es.setCHANGEDATE(parts[2]);
return es;
}
});
JavaSchemaRDD complete = sqlCtx.applySchema(cs, CompleteSample.class);
complete.registerAsTable("cs");
JavaSchemaRDD extended = sqlCtx.applySchema(es, ExtendedSample.class);
extended.registerAsTable("es");
JavaSchemaRDD fs= sqlCtx.sql("SELECT cs.ASSETTAG, cs.CALNUM, es.CHANGEBY, es.CHANGEDATE FROM cs INNER JOIN es ON cs.ASSETNUM=es.ASSETNUM;");
JavaRDD<String> result = fs.map(new Function<Row, String>() {
public String call(Row row) {
return row.getString(0);
}
});
result.saveAsTextFile("hdfs://path/to/hdfs/dir-name"); //instead of hdfs I need to save it on mysql database, but I am not able to find any Spark-MYSQL connection
}
}
Here, at the end, I am saving the result successfully to HDFS. But now I want to save it into a MySQL database. Kindly help me out. Thanks.
There are two approaches you can use for writing your results back to the database. One is to use something like DBOutputFormat and configure that; the other is to use foreachPartition on the RDD you want to save and pass in a function which creates a connection to MySQL and writes the result back (a sketch of this second approach appears at the end of this answer).
Here is an example using DBOutputFormat.
Create a class that represents your table row -
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
// old mapred API, matching the JobConf used below
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class TableRow implements DBWritable
{
    public String column1;
    public String column2;

    @Override
    public void write(PreparedStatement statement) throws SQLException
    {
        statement.setString(1, column1);
        statement.setString(2, column2);
    }

    @Override
    public void readFields(ResultSet resultSet) throws SQLException
    {
        // only needed when reading from the database; not used for output
        throw new RuntimeException("readFields not implemented");
    }
}
Then configure your job and write a mapToPair function. The value in the pair doesn't appear to be used; if anyone knows for sure, please post a comment.
// JobConf, DBConfiguration and DBOutputFormat come from the
// org.apache.hadoop.mapred(.lib.db) packages
String tableName = "YourTableName";
String[] fields = new String[] { "column1", "column2" };

JobConf job = new JobConf();
DBConfiguration.configureDB(job, "com.mysql.jdbc.Driver",
        "jdbc:mysql://localhost/DatabaseNameHere", "username", "password");
DBOutputFormat.setOutput(job, tableName, fields);

// map your rdd into a table row
JavaPairRDD<TableRow, Object> rows = rdd.mapToPair(...);
rows.saveAsHadoopDataset(job);
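And here, for completeness, is a rough sketch of the second approach (foreachPartition). The table and column names are placeholders; it assumes the MySQL JDBC driver is on the workers' classpath and a Spark version whose Java API exposes foreachPartition. Opening one connection per partition avoids a connection per record:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Iterator;
import org.apache.spark.api.java.function.VoidFunction;

// 'result' is the JavaRDD<String> from the question
result.foreachPartition(new VoidFunction<Iterator<String>>() {
    public void call(Iterator<String> rows) throws Exception {
        // one connection per partition, not per record
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/DatabaseNameHere", "username", "password");
        PreparedStatement stmt = conn.prepareStatement(
                "INSERT INTO YourTableName (column1) VALUES (?)");
        try {
            while (rows.hasNext()) {
                stmt.setString(1, rows.next());
                stmt.executeUpdate();
            }
        } finally {
            stmt.close();
            conn.close();
        }
    }
});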