Jedis Pipeline Multi throws ClassCastException

public class JedisPipeline {
    private static JedisPool pool = new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6379);

    public static void main(String args[]) {
        Jedis jedis = pool.getResource();
        Pipeline pipeline = jedis.pipelined();
        pipeline.multi();
        //pipeline.hmset("Id", new HashMap<String,String>());
        for (int i = 0; i < 1000; i++) {
            pipeline.hincrBy("Id", i + "", i);
        }
        pipeline.exec();
        pool.returnResource(jedis);
        //pool.destroy();
        //pool = new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6379);
        jedis = pool.getResource();
        Map<String, String> map1 = jedis.hgetAll("Id");
        System.out.println("map1------->" + map1);
        pool.returnResource(jedis);
        //pool.destroy();
    }
}
I have a problem with the above code: it throws a ClassCastException, whereas if I destroy the pool and create a new pool object it works properly. Am I using the Pipeline API properly? Can anyone help me? I am using Jedis 2.1.0.
Exception in thread "main" java.lang.ClassCastException: [B cannot be cast to java.util.List
at redis.clients.jedis.Connection.getBinaryMultiBulkReply(Connection.java:189)
at redis.clients.jedis.Jedis.hgetAll(Jedis.java:861)
at com.work.jedisex.JedisFactory.main(JedisFactory.java:59)
Modified code to get the Map, which also throws the exception:
Response<Map<String,String>> map1 = pipeline.hgetAll("Id");
pipeline.exec();
pipeline.sync();
pool.returnResource(jedis);
Map<String,String> map2 = map1.get();

It looks like the pipeline isn't closed after the exec() call, so when you try to reuse the same Jedis object after returnResource it still contains the pipelined responses from the previous operation.
Try it this way:
pipeline.exec();
pipeline.sync();
pool.returnResource(jedis);
The sync() call should close the pipeline.
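Putting it together, the corrected flow from the question would look roughly like this (a sketch against Jedis 2.1.0, as used in the question):

Jedis jedis = pool.getResource();
Pipeline pipeline = jedis.pipelined();
pipeline.multi();
for (int i = 0; i < 1000; i++) {
    pipeline.hincrBy("Id", i + "", i);
}
pipeline.exec();
pipeline.sync();              // flushes the remaining responses and closes the pipeline
pool.returnResource(jedis);   // the connection is now clean and safe to reuse

jedis = pool.getResource();
Map<String, String> map1 = jedis.hgetAll("Id");  // no leftover pipelined replies to mis-cast
pool.returnResource(jedis);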


ElasticSearch Spring Data unit Test

I am trying to unit test my Spring Data Elasticsearch operation, but every time I get null pointers.
I want to do it for code coverage.
I am only using search operations, which is why I am using the Elasticsearch operations. Is there a way to unit test it with Mockito?
I will share a snippet of the code:
public Map<String, Object> articleRecommendation(SearchDocModel searchDocModel, int start, int size,
        FiltersForRecommendation filtersForRecommendation) throws IOException {
    Pageable pageable = PageRequest.of(start, size);
    XContentBuilder xContentBuilderTitle = XContentFactory.jsonBuilder()
            .startObject()
            .field("", searchDocModel.getTitle())
            .field("", searchDocModel.getAbstract())
            .endObject();
    MoreLikeThisQueryBuilder.Item titleItem = new MoreLikeThisQueryBuilder
            .Item("", xContentBuilderTitle);
    MoreLikeThisQueryBuilder.Item[] items = { titleItem };
    MoreLikeThisQueryBuilder mltQuery = QueryBuilders.moreLikeThisQuery(items)
            .minTermFreq(1)
            .maxQueryTerms(25)
            .minimumShouldMatch("30%")
            .minDocFreq(5);
    BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery().must(mltQuery);
    Query boolQuery = new NativeSearchQueryBuilder()
            .withQuery(filters(boolQueryBuilder, filtersForRecommendation))
            .withPageable(pageable)
            .build();
    return queryResult(boolQuery, pageable);
}
My test class
@Test
void articleRecommendation() throws IOException {
    ElasticsearchOperations elasticsearchOperations = Mockito.mock(ElasticsearchOperations.class);
    Pageable pageable = PageRequest.of(1, 1);
    ElasticServiceImpl elasticService1 = new ElasticServiceImpl(elasticsearchOperations);
    SearchHits searchHits = Mockito.mock(SearchHits.class);
    SearchPage searchPage = Mockito.mock(SearchPage.class);
    SearchHit searchHit = Mockito.mock(SearchHit.class);
    when(searchHits.iterator()).thenReturn(singleton(searchHit).iterator());
    when(elasticService1.articleRecommendation(apiModel, 1, 1, filtersForRecommendation))
            .thenReturn(result);
}
The exception I am getting:
java.lang.NullPointerException: Cannot invoke "org.springframework.data.elasticsearch.core.SearchHits.getSearchHits()" because "searchHits" is null
So how do I mock it properly, for code coverage?
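The NullPointerException points at the missing piece: nothing stubs the search call on the mocked ElasticsearchOperations, so the service gets null back and searchHits.getSearchHits() blows up. A minimal sketch of the stubbing, assuming the service internally calls elasticsearchOperations.search(query, Article.class) (the Article document class and the matchers are assumptions; match whatever queryResult actually invokes):

ElasticsearchOperations elasticsearchOperations = Mockito.mock(ElasticsearchOperations.class);
ElasticServiceImpl elasticService1 = new ElasticServiceImpl(elasticsearchOperations);
SearchHits<Article> searchHits = Mockito.mock(SearchHits.class);
SearchHit<Article> searchHit = Mockito.mock(SearchHit.class);
// give the service a non-null SearchHits so getSearchHits() has something to return
when(searchHits.getSearchHits()).thenReturn(Collections.singletonList(searchHit));
when(elasticsearchOperations.search(any(Query.class), eq(Article.class))).thenReturn(searchHits);
// call the real service method rather than when(...)-ing it, since elasticService1 is not a mock
Map<String, Object> result = elasticService1.articleRecommendation(apiModel, 1, 1, filtersForRecommendation);
assertNotNull(result);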

How to fix NullPointerException while loading a document to Elasticsearch 7.3

I want to load a JSON string to Elasticsearch version 7.3.
Following is the code I am using for this.
private RestHighLevelClient restHighLevelClient;

String jsonString = "//here the complete JSON string";
JSONObject jsonObject = new JSONObject(jsonString);
HashMap<String, Object> hashMap = new Gson().fromJson(jsonObject.toString(), HashMap.class);
IndexRequest indexRequest = new IndexRequest("index", "type").source(hashMap);
restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
Exception :
Exception in thread "main" java.lang.NullPointerException
at line restHighLevelClient.index(indexRequest, RequestOptions.DEFAULT);
If I post the same jsonString via Postman then it is loaded into Elasticsearch perfectly.
If you are not using Spring (as it's not mentioned), you can use the simple code below to create a RestHighLevelClient.
In the code below I am reading the Elasticsearch configuration from a config file; feel free to change it to the way you read your properties or config, or, if you just want to test quickly, hardcode the values of host and port.
RestHighLevelClient restHighLevelClient = new RestHighLevelClient(
        RestClient.builder(new HttpHost(configuration.getElasticsearchConfig().getHost(),
                configuration.getElasticsearchConfig().getPort(),
                "http")));
Based on your sample code, your restHighLevelClient has never actually been initialized. The snippet below shows how you could solve this:
@Bean
public RestHighLevelClient elasticRestClient() {
    String[] httpHosts = httpHostsProperty.split(";");
    HttpHost[] httpHostsAsArray = new HttpHost[httpHosts.length];
    int index = 0;
    for (String httpHostAsString : httpHosts) {
        HttpHost httpHost = new HttpHost(httpHostAsString.split(":")[0],
                Integer.parseInt(httpHostAsString.split(":")[1]), "http");
        httpHostsAsArray[index++] = httpHost;
    }
    RestClientBuilder restClientBuilder = RestClient.builder(httpHostsAsArray)
            .setRequestConfigCallback(builder -> builder
                    .setConnectTimeout(connectTimeOutInMs)
                    .setSocketTimeout(socketTimeOutInMs));
    return new RestHighLevelClient(restClientBuilder);
}
and your impl class uses the autowired RestHighLevelClient bean:
@Autowired
private RestHighLevelClient restClient;
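With that bean in place, the indexing call from the question runs against an initialized client (hashMap as built in the question):

IndexRequest indexRequest = new IndexRequest("index", "type").source(hashMap);
restClient.index(indexRequest, RequestOptions.DEFAULT);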

AWS EMR File Already exists: Hadoop Job reading and writing to S3

I have a Hadoop job running in EMR, and I am passing an S3 path as input and output to this job.
When I run locally everything works fine (as there is a single node).
However, when I run in EMR with a 5-node cluster I run into a "File already exists" IOException.
The output path has a timestamp in it, so the output path doesn't already exist in S3.
Error: java.io.IOException: File already exists:s3://<mybucket_name>/8_9_0a4574ca-96d0-47c8-8eb8-4deb82944d4b/customer/RawFile12.txt/1523583593585/TOKENIZED/part-m-00000
I have a very simple Hadoop app (primarily the mapper) which reads each line from a file and converts it (using an existing library).
Not sure why each node is trying to write with the same file name.
Here is the mapper:
public static class TokenizeMapper extends Mapper<Object, Text, Text, Text> {
    public void map(Object key, Text value, Mapper.Context context) throws IOException, InterruptedException {
        //TODO: Invoke Core Engine to transform the Data
        Encryption encryption = new Encryption();
        String tokenizedVal = encryption.apply(value.toString());
        context.write(new Text(tokenizedVal), new Text("1"));  // key/value must be Text to match the declared output types
    }
}
And my reducer:
public static class TokenizeReducer extends Reducer<Text, Text, Text, Text> {
    public void reduce(Text text, Iterable<Text> lines, Context context) throws IOException, InterruptedException {
        Iterator<Text> iterator = lines.iterator();
        int counter = 0;
        while (iterator.hasNext()) {
            iterator.next();  // advance the iterator, otherwise this loops forever
            counter++;
        }
        Text output = new Text("" + counter);
        context.write(text, output);
    }
}
And my main class
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
    long startTime = System.currentTimeMillis();
    try {
        Configuration config = new Configuration();
        String[] additionalArgs = new GenericOptionsParser(config, args).getRemainingArgs();
        if (additionalArgs.length != 2) {
            System.err.println("Usage: Tokenizer Input_File and Output_File ");
            System.exit(2);
        }
        Job job = Job.getInstance(config, "Raw File Tokenizer");
        job.setJarByClass(Tokenizer.class);
        job.setMapperClass(TokenizeMapper.class);
        job.setReducerClass(TokenizeReducer.class);
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);  // the original set the key class twice
        FileInputFormat.addInputPath(job, new Path(additionalArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(additionalArgs[1]));
        boolean status = job.waitForCompletion(true);
        if (status) {
            //System.exit(0);
            System.out.println("Completed Job Successfully");
        } else {
            System.out.println("Job did not Succeed");
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        System.out.println("Total Time for processing =[" + (System.currentTimeMillis() - startTime) + "]");
    }
}
I am passing the arguments when I launch the cluster as:
s3://<mybucket>/8_9_0a4574ca-96d0-47c8-8eb8-4deb82944d4b/customer/RawFile12.txt
s3://<mybucket>/8_9_0a4574ca-96d0-47c8-8eb8-4deb82944d4b/customer/RawFile12.txt/1523583593585/TOKENIZED
Appreciate any inputs.
Thanks
In the driver code, you have set the number of reducers to 0, so the reducer code is not needed.
In case you need to clear the output dir before job launch, you can use this snippet to clear the dir if it exists:
FileSystem fileSystem = FileSystem.get(<hadoop config object>);
if (fileSystem.exists(new Path(<pathTocheck>))) {
    fileSystem.delete(new Path(<pathTocheck>), true);
}
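Wired into the driver from the question, that could look like the sketch below; Path.getFileSystem(config) is used rather than FileSystem.get(config) so the check resolves the s3:// filesystem instead of the cluster default:

Path outputPath = new Path(additionalArgs[1]);
FileSystem fileSystem = outputPath.getFileSystem(config);  // resolves the filesystem for the s3:// scheme
if (fileSystem.exists(outputPath)) {
    fileSystem.delete(outputPath, true);  // true = recursive delete
}
FileOutputFormat.setOutputPath(job, outputPath);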

Quasar multi fibers warning

I am new to Quasar and I tried the following. Basically, I get a warning that the fiber is blocking a thread. Why? Can I not do something like the code below?
Thanks
//in my test class I have this
String websites[] = {"http://www.google.com", "http://www.lol.com", "http://www.somenoneexistantwebsite.com"};
for (int i = 0; i < websites.length; i++) {
    TestApp.getWebsiteHTML(websites[i]);
}
//in TestApp
public static void getWebsiteHTML(String webURL) throws IOException, InterruptedException, Exception {
    new Fiber<Void>(new SuspendableRunnable() {
        @Override
        public void run() throws SuspendExecution, InterruptedException {
            WebInfo mywi = new WebInfo();
            mywi.getHTML(webURL);
        }
    }).start().join();
}
//in WebInfo
public static String getHTML(String urlToRead) throws Exception {
    StringBuilder result = new StringBuilder();
    URL url = new URL(urlToRead);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("GET");
    BufferedReader rd = new BufferedReader(new InputStreamReader(conn.getInputStream()));
    String line;
    while ((line = rd.readLine()) != null) {
        result.append(line);
    }
    rd.close();
    return result.toString();
}
Have a look at the "Runaway fibers" sub-section in the docs.
HttpURLConnection is thread-blocking, so to avoid stealing threads from the fiber scheduler for too long (which risks killing your Quasar-based application's performance) you should instead use an HTTP client integrated with Quasar (or integrate one yourself).
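For example, the Comsat project ships a fiber-blocking wrapper around Apache HttpClient. A minimal sketch, assuming comsat-httpclient is on the classpath (the builder class name may differ between Comsat versions, so treat this as an illustration rather than the exact API):

HttpClient client = FiberHttpClientBuilder.create().build();
new Fiber<Void>(new SuspendableRunnable() {
    @Override
    public void run() throws SuspendExecution, InterruptedException {
        try {
            // execute() fiber-blocks instead of thread-blocking, so the scheduler thread stays free
            String html = EntityUtils.toString(client.execute(new HttpGet(webURL)).getEntity());
            System.out.println(html.length());
        } catch (IOException e) {  // never catch SuspendExecution here
            e.printStackTrace();
        }
    }
}).start().join();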

Camel route loop not working

I am trying to insert JSON data into a MySQL database using Camel and Hibernate.
Everything works except the loop:
for (Module module : modules) {
    from("timer://foo?delay=10000")
        .loop(7) //not working
        .to(module.getUrl() + "/api/json")
        .convertBodyTo(String.class)
        .process(new Processor() {
            @Override
            public void process(Exchange exchange) throws Exception {
                int index = (Integer) exchange.getProperty("CamelLoopIndex"); // not working
                ObjectMapper mapper = new ObjectMapper();
                JsonNode root = mapper.readTree(exchange.getIn().getBody().toString());
                String[] lijst = {"lastBuild", "lastCompletedBuild", "lastFailedBuild", "lastStableBuild", "lastSuccessfulBuild", "lastUnstableBuild", "lastUnsuccessfulBuild"};
                JSONObject obj = new JSONObject();
                JsonNode node = root.get(lijst[index]);
                JsonNode build = node.get("number");
                obj.put("description", lijst[index]);
                obj.put("buildNumber", build);
                exchange.getIn().setBody(obj.toString());
            }
        })
        .unmarshal(moduleDetail)
        .to("hibernate:be.kdg.teamf.model.ModuleDetail")
        .end();
}
When I debug, my CamelLoopIndex remains 0, so it is not incremented each time through the loop.
All help is welcome!
In your case only the first instruction is processed in the scope of the loop: .to(module.getUrl() + "/api/json"). You can add more instructions to a loop using the Spring DSL, but I don't know how to declare a loop scope explicitly using the Java DSL. I hope experts will explain more about loop scope in the Java DSL.
As a workaround I suggest moving all the iteration instructions into a separate direct: route.
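A sketch of that workaround (the direct: endpoint name is made up for illustration; the CamelLoopIndex property travels with the exchange, so the separate route still sees it):

from("timer://foo?delay=10000")
    .loop(7)
        .to("direct:handleBuild")  // the whole body of the separate route now runs once per iteration
    .end();

from("direct:handleBuild")
    .to(module.getUrl() + "/api/json")
    .convertBodyTo(String.class)
    .process(buildProcessor)  // the same processor as above, extracted to a variable
    .unmarshal(moduleDetail)
    .to("hibernate:be.kdg.teamf.model.ModuleDetail");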
I can't reproduce your problem. This works:
from("restlet:http://localhost:9010}/loop?restletMethod=get")
.loop(7)
.process(new Processor() {
#Override
public void process(Exchange exchange) throws Exception {
int index = (int) exchange.getProperty("CamelLoopIndex");
exchange.getIn().setBody("index=" + index);
}
})
.convertBodyTo(String.class)
.end();
Output:
index=6