I have this code:
csvWriter.writeCsv(new PrintWriter(csvPath), someList);
Is it possible to verify the string ".\xyz.csv" and TestUtil.getSomeList() exactly?
The Mockito code below gives me the error "Argument(s) are different! Wanted", complaining about different PrintWriter objects.
verify(csvWriter, times(1)).writeCsv(new PrintWriter(".\\xyz.csv"), TestUtil.getSomeList());
It works if I change this to
verify(csvWriter, times(2)).writeCsv(any(PrintWriter.class), any(List.class));
But I would like to verify that ".\xyz.csv" and TestUtil.getSomeList() are matched exactly. Is it possible?
You can use an ArgumentCaptor to get hold of the parameters passed to the writeCsv method:
ArgumentCaptor<PrintWriter> writerCaptor = ArgumentCaptor.forClass(PrintWriter.class);
ArgumentCaptor<List> listCaptor = ArgumentCaptor.forClass(List.class);
verify(csvWriter, times(2)).writeCsv(writerCaptor.capture(), listCaptor.capture());
// Then you can check the captured parameters:
List<...> actualList = listCaptor.getValue();
assertEquals(expectedList, actualList);
Sample: https://www.baeldung.com/mockito-argumentcaptor
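To pin down the arguments exactly, you can also combine a captor with Mockito's eq() matcher. Below is a minimal sketch (using the csvWriter mock and TestUtil from the question): the list is verified exactly via equals(), while the writer is only captured, because PrintWriter does not override equals() and does not expose the file it writes to, so the ".\xyz.csv" path cannot be asserted on the captured object. Verifying the path exactly would require refactoring writeCsv to take the path (or a writer factory) as a parameter.

import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;

import java.io.PrintWriter;
import org.mockito.ArgumentCaptor;

// Verify the list argument exactly, while capturing the PrintWriter for inspection
ArgumentCaptor<PrintWriter> writerCaptor = ArgumentCaptor.forClass(PrintWriter.class);
verify(csvWriter, times(1)).writeCsv(writerCaptor.capture(), eq(TestUtil.getSomeList()));
PrintWriter usedWriter = writerCaptor.getValue();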
I am trying to move data from a SPARQL endpoint to a JSONObject, using RDF4J.
RDF4J documentation does not address this directly (some info about using endpoints, less about converting to JSON, and nothing where these two cases meet up).
So far I have:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
Map<String, String> headers = new HashMap<String, String>();
headers.put("Accept", "SPARQL/JSON");
repo.setAdditionalHttpHeaders(headers);
try (RepositoryConnection conn = repo.getConnection())
{
String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
GraphQuery query = conn.prepareGraphQuery(queryString);
debug("Mark 2");
try (GraphQueryResult result = query.evaluate())
This fails because "Server responded with an unsupported file format: application/sparql-results+json".
I figured a SPARQLGraphQuery should take the place of GraphQuery, but RepositoryConnection does not have a relevant prepare statement.
If I exchange
try (RepositoryConnection conn = repo.getConnection())
with
try (SPARQLConnection conn = (SPARQLConnection)repo.getConnection())
I run into the problem that SPARQLConnection does not generate a SPARQLGraphQuery. The closest I can get is:
SPARQLGraphQuery query = (SPARQLGraphQuery)conn.prepareQuery(QueryLanguage.SPARQL, queryString);
which gives a runtime error as these types cannot be cast to each other.
I do not know how to proceed from here. Any help or advice is much appreciated. Thank you.
this fails because "Server responded with an unsupported file format: application/sparql-results+json"
In RDF4J, SPARQL SELECT queries are tuple queries, so named because each result is a set of bindings, which are tuples of the form (name, value). In contrast, CONSTRUCT (and DESCRIBE) queries are graph queries, so called because their result is a graph, that is, a collection of RDF statements.
Furthermore, setting additional headers for the response format as you have done here is not necessary (except in rare circumstances): the RDF4J client handles this for you automatically, based on the registered set of parsers.
So, in short, simplify your code as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
try (RepositoryConnection conn = repo.getConnection()) {
String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
TupleQuery query = conn.prepareTupleQuery(queryString);
debug("Mark 2");
try (TupleQueryResult result = query.evaluate()) {
...
}
}
If you want to write the result of the query in JSON format, you could use a TupleQueryResultHandler, for example the SPARQLResultsJSONWriter, as follows:
SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
try (RepositoryConnection conn = repo.getConnection()) {
String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
TupleQuery query = conn.prepareTupleQuery(queryString);
query.evaluate(new SPARQLResultsJSONWriter(System.out));
}
This will write the result of the query (in this example to standard output) using the SPARQL Query Results JSON format. If you have a non-standard format in mind, you could of course also create your own TupleQueryResultHandler implementation.
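Since the original goal was a JSONObject rather than printed output, here is a minimal sketch of that last step, assuming the org.json library is on the classpath (any JSON parser that accepts a string works the same way): evaluate into an in-memory buffer, then parse the buffered text.

import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import org.eclipse.rdf4j.query.TupleQuery;
import org.eclipse.rdf4j.query.resultio.sparqljson.SPARQLResultsJSONWriter;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sparql.SPARQLRepository;
import org.json.JSONObject;

SPARQLRepository repo = new SPARQLRepository(<My Endpoint>);
try (RepositoryConnection conn = repo.getConnection()) {
    String queryString = "SELECT * WHERE {GRAPH <urn:x-evn-master:mwadata> {?s ?p ?o}}";
    TupleQuery query = conn.prepareTupleQuery(queryString);

    // Write the SPARQL/JSON result into a buffer instead of System.out
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    query.evaluate(new SPARQLResultsJSONWriter(buffer));

    // Parse the buffered text (toString(Charset) is Java 10+; use toString("UTF-8") on older JDKs);
    // in the SPARQL/JSON layout the rows are under results.bindings
    JSONObject json = new JSONObject(buffer.toString(StandardCharsets.UTF_8));
}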
For more details on the various ways in which you can process the result (including iterating, streaming, adding to a List, or just directly sending to a result handler), see the documentation on querying a repository. As an aside, the javadoc on the RDF4J APIs is pretty extensive too, so if your Java editing environment has support for displaying that, I'd advise you to make use of it.
I just wonder if I can validate the value of a certain field in a JSON response using Gatling.
Currently the code only checks whether a field is present in the JSON response, as follows:
val searchTransaction: ChainBuilder = exec(http("search Transactions")
.post(APP_VERSION + "/transactionDetals")
.headers(BASIC_HEADERS)
.headers(SESSION_HEADERS)
.body(ElFileBody(ENV + "/bodies/transactions/searchTransactionByAmount.json"))
.check(status.is(200))
.check(jsonPath("$.example.current.transaction.results[0].amount.value")
If I want to verify that the transaction value equals 0.01, is it possible to achieve this?
I googled but didn't find any result; if a similar question has been asked before, please let me know and I will close this. Thanks.
I tried some assertions in the test, but I find that the assertion won't fail the performance test at all.
val searchTransaction: ChainBuilder = exec(http("search Transactions")
.post(APP_VERSION + "/transactionDetals")
.headers(BASIC_HEADERS)
.headers(SESSION_HEADERS)
.body(ElFileBody(ENV + "/bodies/transactions/searchTransactionByAmount.json"))
.check(status.is(200))
.check(jsonPath("$.example.current.transaction.results[0].amount.value").saveAs("actualAmount"))).
exec(session => {
val actualTransactionAmount = session("actualAmount").as[Float]
println(s"actualTransactionAmount: ${actualTransactionAmount}")
assert(actualTransactionAmount.equals(10)) // this should fail, as the amount is 0.01, but the performance test still passes
session
})
You're right, this is the normal solution:
.check(jsonPath("....").is("...")))
Such checks are standard practice: the service may respond without a 5xx status and yet return an error in the response body, so it's better to check this.
Example: I have an application which returns the status of user creation, and I check it:
.check(jsonPath("$.status").is("Client created"))
I figured out a way to verify it; not sure if it's the best way, but it works for me.
val searchTransaction: ChainBuilder = exec(http("search Transactions")
.post(APP_VERSION + "/transactionDetals")
.headers(BASIC_HEADERS)
.headers(SESSION_HEADERS)
.body(ElFileBody(ENV + "/bodies/transactions/searchTransactionByAmount.json"))
.check(status.is(200))
.check(jsonPath("$.example.current.transaction.results[0].amount.value").is("0.01")))
If I change the value to 0.02, the test will fail, and the simulation log will show something like this:
---- Errors --------------------------------------------------------------------
> jsonPath($.example.current.transaction.results[0].amount.value). 19 (100.0%)
find.is(0.02), but actually found 0.01
================================================================================
I'm aware that verifying a value in JSON makes this more like a functional test than a load test, and in a load test we are perhaps not supposed to validate every possible piece of information. But the functionality is there if anyone wants to verify something in a JSON response.
Just curious: I still don't know why using assert in the previous code doesn't fail the test, though.
when(/* some method call*/).thenReturn(mockFetchReturn).thenReturn(mockFetchReturn2)
.thenReturn(mockFetchReturn3);
This works fine, and I am able to call the mocked method three times with different outputs. But my output list can change for each test scenario, and I couldn't find how this can be done in a loop based on different returns.
For example, if I pass a list of 10 return objects (like mockFetchReturn3), then there should be 10 thenReturn calls.
Just the code for the answer provided in the comments:
// ReturnType and returnValues are placeholders for your return type and list of results
OngoingStubbing<ReturnType> stubbing = when(/* some method call */);
for (int i = 0; i < returnValues.size(); i++) {
    stubbing = stubbing.thenReturn(returnValues.get(i));
}
Alternatively, you can pass a list:
List<String> answers = Arrays.asList(mockFetchReturn, mockFetchReturn2, ...);
when(/* some method call */).thenAnswer(AdditionalAnswers.returnsElementsOf(answers));
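Put together, a minimal sketch with a hypothetical Fetcher interface (note that returnsElementsOf keeps returning the last element once the list is exhausted):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;
import org.mockito.AdditionalAnswers;

interface Fetcher { String fetch(); }

List<String> answers = Arrays.asList("first", "second", "third");
Fetcher fetcher = mock(Fetcher.class);
when(fetcher.fetch()).thenAnswer(AdditionalAnswers.returnsElementsOf(answers));

fetcher.fetch(); // "first"
fetcher.fetch(); // "second"
fetcher.fetch(); // "third"
fetcher.fetch(); // "third" again - the last element repeats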
I am using KeyValueTextInputFormat for reading/processing a comma-separated file:
100,56
89,586
123,68
However, the whole line ends up in the key and the value comes out as null, even after setting the separator to a comma (,). It is not picking up the separator, and I'm not sure what the issue is. Here is my driver code:
Configuration conf = new Configuration();
conf.set("key.value.separator.in.input.line", ",");
Job job = new Job(conf, "citation data");
job.setJarByClass(Citation.class);
job.setJobName("citation data");
job.setMapperClass(MapClass.class);
job.setReducerClass(ReduceClass.class);
job.setInputFormatClass(KeyValueTextInputFormat.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
KeyValueTextInputFormat.addInputPath(job, new Path("input/sample.txt"));
FileOutputFormat.setOutputPath(job, new Path("output2"));
System.exit(job.waitForCompletion(true)?0:1);
It works perfectly fine for me. In one of my MapReduce jobs, I just changed the following and it worked:
1. Changed the InputFormatClass to use KeyValueTextInputFormat
2. Added the config - conf.set("key.value.separator.in.input.line", ",");
3. Made sure that the mapper implements something like Mapper<Text,Text,K,V>, which makes the map() method's signature look like this:
public void map(Text key, Text value, OutputCollector<K, V> output, Reporter reporter)
        throws IOException {}
No other change is required, and you should get the first column's data as the key and the second column as the value.
I guess the only thing you might have missed is point 3.
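For reference, a minimal sketch of such a mapper, assuming the newer org.apache.hadoop.mapreduce API (which the Job-based driver in the question uses; the OutputCollector/Reporter signature in point 3 belongs to the older org.apache.hadoop.mapred API):

import java.io.IOException;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MapClass extends Mapper<Text, Text, Text, Text> {
    @Override
    protected void map(Text key, Text value, Context context)
            throws IOException, InterruptedException {
        // With the comma separator set, the line "100,56" arrives as key="100", value="56"
        context.write(key, value);
    }
}

Note that with the newer API the separator property is named mapreduce.input.keyvaluelinerecordreader.key.value.separator; key.value.separator.in.input.line is its older name.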
I would like to query a table based on a list of KeyValuePair. With a Model-First approach, I could do the following:
var context = new DataContext();
var whereClause = new StringBuilder();
var parameters = new List<ObjectParameter>();
foreach(KeyValuePair<string, object> pair in queryParameters)
{
if (whereClause.Length > 0)
whereClause.Append(" AND ");
whereClause.Append(string.Format("it.[{0}] = @{0}", pair.Key));
parameters.Add(new ObjectParameter(pair.Key, pair.Value));
}
var result = context.Nodes.Where(whereClause.ToString(), parameters.ToArray());
Now I'm using a Code-First approach and this Where method is not available anymore. Fortunately, I saw an article somewhere (I can't remember where) which suggested that I could convert the DbContext to an IObjectContextAdapter and then call CreateQuery like this:
var result = ((IObjectContextAdapter)context)
.ObjectContext.CreateQuery<Node>(whereClause.ToString(), parameters.ToArray());
Unfortunately, this throws an error:
'{ColumnName}' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly.
Where {ColumnName} is the column specified in the whereClause.
Any ideas how I can dynamically query a DbSet given a list of key/value pairs? All help will be greatly appreciated.
I think your very first problem is that in the first example you are using Where on the entity set, but in the second example you are using CreateQuery, so you must pass a full ESQL query and not only the where clause. Try something like:
...
.CreateQuery<Node>("SELECT VALUE it FROM ContextName.Nodes AS it WHERE " + yourWhere)
The most problematic part is the full entity set name in the FROM clause. I think it is defined as the name of the context class plus the name of the DbSet exposed on the context. Another way to do it is to create an ObjectSet:
...
.ObjectContext.CreateObjectSet<Node>().Where(yourWhere)