Spring RestTemplate with JDK 11 errors when posting data above a certain limit - apache-httpclient-4.x

I am using OAuth2RestTemplate with JDK 11 to make a POST request with JSON data (860 lines, about 26 KB). Strangely, the code works fine with fewer than 700 JSON lines (about 20 KB) on the production server and fewer than 500 lines (about 15 KB) on my local machine, but as soon as I add a few more data blocks to the JSON it starts throwing an exception.
The exception depends on the ClientHttpRequestFactory implementation used with the RestTemplate:
with HttpComponentsClientHttpRequestFactory it is NoHttpResponseException: XXX.XXX:443 failed to respond, and with SimpleClientHttpRequestFactory it is java.net.SocketException: Unexpected end of file from server.
restTemplate.postForEntity(Url, dataBytes, byte[].class);
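For context, this is roughly how the two request factories are plugged in (a simplified sketch, not the exact wiring in our project):

import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.http.client.SimpleClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

public class FactoryWiring {
    // Variant 1: Apache HttpClient-backed; large POSTs fail with NoHttpResponseException.
    RestTemplate viaHttpComponents = new RestTemplate(new HttpComponentsClientHttpRequestFactory());

    // Variant 2: plain HttpURLConnection-backed; large POSTs fail with SocketException.
    RestTemplate viaSimple = new RestTemplate(new SimpleClientHttpRequestFactory());
}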
Strangely, this works on lower JDK versions (8, 9, and 10). I have also tried other HTTP clients: Spring WebClient on JDK 11 works with the same data, and the same data also works via curl and Postman.
I have not been able to identify why RestTemplate runs into trouble beyond a certain data size.
Below are the main dependencies I am using (I can't change dependencies much in this existing project):
Spring-core 5.1.6.RELEASE
org.apache.httpcomponents.httpclient 4.5.6
spring-security-core 5.1.4.RELEASE
spring-security-oauth2-client 5.1.4.RELEASE
JDK11
Any help or ideas will be much appreciated. TIA

I have had the same issue with the following JDK 11 versions:
IMPLEMENTOR="AdoptOpenJDK"
IMPLEMENTOR_VERSION="AdoptOpenJDK"
JAVA_VERSION="11.0.2"
IMPLEMENTOR="AdoptOpenJDK"
IMPLEMENTOR_VERSION="AdoptOpenJDK"
JAVA_VERSION="11.0.4"
but the issue no longer appears in 11.0.9.11. I have not yet found what the fix was.
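One thing worth testing (purely an assumption, not confirmed in this thread): early JDK 11 builds enabled TLS 1.3 by default, and forcing TLS 1.2 is a common way to rule that in or out. Both system properties below are standard JDK switches:

public class ForceTls12 {
    public static void main(String[] args) {
        // Hypothetical diagnostic: assumes (unconfirmed) the regression is TLS 1.3-related.
        // Both properties must be set before the JVM performs its first TLS handshake.
        System.setProperty("jdk.tls.client.protocols", "TLSv1.2"); // JSSE default SSLContext
        System.setProperty("https.protocols", "TLSv1.2");          // HttpsURLConnection
        // ...then re-run the failing restTemplate.postForEntity(...) call with the large payload.
    }
}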


Laravel - Json Encode Works Fine But Returns Malformed UTF-8 Error Regardless

Problem
The JSON is returned correctly encoded, but json_last_error() equates to 5 (JSON_ERROR_UTF8), so Laravel throws this exception during response creation in JsonResponse->setData().
Related References
A resolved bug report for Symfony and PHP 7.3 (https://github.com/symfony/symfony/issues/31447) mentions this exact issue, namely:
If a json_encode()/json_decode() call without the JSON_THROW_ON_ERROR option set throws an error, any subsequent call with that flag set will not reset the error of the previous call.
The JSON_THROW_ON_ERROR RFC describes this behavior as by design: https://wiki.php.net/rfc/json_throw_on_error
Unfortunately, Laravel's code uses the flag but does not cater for this behavior: it throws an exception if json_last_error() returns a non-zero value, which in this case should be ignored because it always refers to the previous error. Perhaps a version check should be added.
Investigation
I have also searched the bowels of my application and haven't found a misbehaving json_encode/json_decode without JSON_THROW_ON_ERROR set, so I am sure it happens inside Laravel before the middleware calls.
Additional Context
This suddenly started happening today on our server. It happens when users connect to the server through the web app from certain systems. The kind of client system should not affect server-side response generation, yet the web app always produces this error for specific PCs and mobile devices.
Proposed Hack
After sifting through the Symfony bug report, the easiest workaround I have found is to clear the error with an inconsequential json_encode('1') call, which resets the error code to 0 so that the later JSON methods work fine.
Fortunately, we have our own response generation class, so adding this call just before the hand-off to Laravel works fine.
Still, this is a hack. I am more interested in a better solution or any available bug fix.

503 Service Unavailable: No registered leader was found after waiting for 4000 ms

I've recently started using Solr; I'm on the latest version, v6.1.0. I followed the quick-start tutorial to get a feel for it. Being a Windows user, I had to resort to the alternative way of importing my .csv data, using the Post tool for Windows.
I am primarily interested in seeing how Solr handles and searches large data sets like the one I have: a 522 MB my_db.csv file that is properly formatted (I ran various Python scripts to check that).
I started SolrCloud by the usual procedure. Then I imported part of this dataset (to be specific, 29 lines of my_db.csv) to see if it works.
Shell:
C:\Users\MAC\Downloads\solr-6.1.0\solr-6.1.0>java -Dc=gettingstarted -Ddata=files -Dauto=yes -jar example\exampledocs\post.jar example\exampledocs\29lines.csv
Result was:
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file 29lines.csv (text/csv) to [base]
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:01:28.106
Fortunately, it worked perfectly, and I was able to use the default Velocity search wrapper they provide by going to http://localhost:8983/solr/gettingstarted_shard2_replica1/browse. It had all my data stored so far, 29 rows to be precise.
Now I wanted to see whether the whole 522 MB of data would be imported, for which I used the same command (just replacing the .csv file, of course) and ran it. I did expect it to take a while; after nearly 10 minutes it had inserted around 32,674 of 1,300,000 rows and then threw this error.
Result was:
SimplePostTool version 5.0.0
Posting files to [base] url http://localhost:8983/solr/gettingstarted/update...
Entering auto mode. File endings considered are xml,json,jsonl,csv,pdf,doc,docx,ppt,pptx,xls,xlsx,odt,odp,ods,ott,otp,ots,rtf,htm,html,txt,log
POSTing file omdbFull.csv (text/csv) to [base]
SimplePostTool: WARNING: Solr returned an error #503 (Service Unavailable) for url: http://localhost:8983/solr/gettingstarted/update
SimplePostTool: WARNING: Response: <?xml version="1.0" encoding="UTF-8"?>
<response>
<lst name="responseHeader"><int name="status">503</int><int name="QTime">128191</int></lst><lst name="error"><lst name="metadata"><str name="error-class">org.apache.solr.common.SolrException</str><str name="root-error-class">org.apache.solr.common.SolrException</str></lst><str name="msg">No registered leader was found after waiting for 4000ms , collection: gettingstarted slice: shard2</str><int name="code">503</int></lst>
</response>
SimplePostTool: WARNING: IOException while reading response: java.io.IOException: Server returned HTTP response code: 503 for URL: http://localhost:8983/solr/gettingstarted/update
1 files indexed.
COMMITting Solr index changes to http://localhost:8983/solr/gettingstarted/update...
Time spent: 0:08:36.342
Summary
This was strange, and I wasn't exactly sure why it had happened. Do I perhaps have to change some kind of "timeout" parameter for the commit? Unfortunately, I wasn't able to see any such option for the Windows Post tool.
I found the solution to my problem. The problem wasn't that the file was huge (in my case around 500 MB of CSV); I'm sure even larger files will go through.
The thing is, Solr appears to auto-detect the kind of values being fed into an index. For instance, my CSV had a "Year" column with values like "2015", "2014", "1970", etc., but, unknown to me, the column also contained improper values such as "2014-2015" and "1980-1988".
Solr would stop and throw an exception because these were year ranges rather than years; it wasn't expecting values of that sort.
Summary
To fix the problem, I simply filtered out the faulty year rows and voilà! It processed my 500 MB CSV in around 15 minutes. After that, I had a nice database ready to be searched!
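As an illustration of that filtering step (the question mentions Python scripts; this is a minimal, hypothetical Java sketch instead, and the assumption that the year sits in the third CSV column is mine):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.ArrayList;
import java.util.List;

public class FilterYearRows {
    public static void main(String[] args) throws IOException {
        List<String> lines = Files.readAllLines(Paths.get("my_db.csv"));
        List<String> kept = new ArrayList<>();
        kept.add(lines.get(0)); // keep the CSV header
        for (String row : lines.subList(1, lines.size())) {
            // Naive comma split (fine only if fields contain no embedded commas).
            // Assumption: the year is the third column (index 2).
            String year = row.split(",", -1)[2].replace("\"", "");
            // Keep plain 4-digit years; drop ranges like "2014-2015" that
            // clash with the numeric field type Solr guessed from earlier rows.
            if (year.matches("\\d{4}")) {
                kept.add(row);
            }
        }
        Files.write(Paths.get("my_db_clean.csv"), kept);
    }
}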

Spinnaker Jenkins Integration unable to fetch jobs from Jenkins

We have completed all the steps described in the hello-spinnaker example below, using the AWS Spinnaker image to configure Spinnaker directly in AWS:
www.spinnaker.io/docs/hello-spinnaker
I am trying to create a sample pipeline as described in that example, but when I create a trigger in the first step and select Jenkins, the jobs do not get populated and I get the error below in the browser.
GET http://localhost:8084/v2/builds/Jenkins/jobs 429 (Too Many Requests)
The actual issue looks like this: while Retrofit is trying to map the response from the Jenkins getJobs call into the JobList class, it finds an attribute _class in the Jenkins response XML that is not present in the JobList Groovy class. Below is how we tracked down the issue:
1) Log in to the AWS Spinnaker instance.
2) The Gate service is exposed on port 8084:
curl http://localhost:8084/v2/builds/Jenkins/jobs
{"failureCause":"retrofit.RetrofitError: 429 Too Many Requests","error":"Too Many Requests","message":"429 Too Many Requests","status":429,"url":"http://localhost:8088/jobs/Jenkins","timestamp":1462793944530}
3) The Igor service is exposed on port 8088:
curl http://localhost:8088/jobs/Jenkins
{"fallbackException":"java.lang.UnsupportedOperationException: No fallback available.","failureType":"COMMAND_EXCEPTION","failureCause":"retrofit.converter.ConversionException: org.simpleframework.xml.core.AttributeException: Attribute '_class' does not have a match in class com.netflix.spinnaker.igor.jenkins.client.model.JobList at line 1","error":"Hystrix Failure","message":"jenkins-Jenkins-getJobs failed and no fallback available.","status":429,"timestamp":1462793896853}
When I check the Igor logs, there are a few exceptions occurring during the getProjects call made by the Jenkins poll:
Caused by: retrofit.converter.ConversionException: org.simpleframework.xml.core.AttributeException: Attribute '_class' does not have a match in class com.netflix.spinnaker.igor.jenkins.client.model.ProjectsList at line 2
at retrofit.converter.SimpleXMLConverter.fromBody(SimpleXMLConverter.java:38)
at retrofit.RestAdapter$RestHandler.invokeRequest(RestAdapter.java:367)
... 39 common frames omitted
Caused by: org.simpleframework.xml.core.AttributeException: Attribute '_class' does not have a match in class com.netflix.spinnaker.igor.jenkins.client.model.ProjectsList at line 2
4) Connect to Jenkins and fetch the jobs the same way the Spinnaker code does (https://github.com/spinnaker/igor/blob/master/igor-web/src/main/groovy/com/netflix/spinnaker/igor/jenkins/client/JenkinsClient.groovy):
resp = requests.get('http://jenkinserverip:8080/api/xml?tree=jobs[name,jobs[name,jobs[name,jobs[name,jobs[name,jobs[name,jobs[name,jobs[name,jobs[name,jobs[name]]]]]]]]]]',auth=('admin','password'))
print resp.text
<hudson _class='hudson.model.Hudson'><job _class='hudson.model.FreeStyleProject'><name>Hello Build</name></job><job _class='hudson.model.FreeStyleProject'><name>Hello Poll</name></job></hudson>
So, because the Jenkins response contains the _class attribute, Retrofit throws an error at this line: http://grepcode.com/file/repo1.maven.org/maven2/com.squareup.retrofit/retrofit/1.9.0/retrofit/RestAdapter.java#383
I want to know how we can quickly fix this, as it looks like a version incompatibility with Jenkins.
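For what it's worth, the underlying XML library (Simple XML) can be told to ignore unmapped attributes such as _class by parsing in non-strict mode. A minimal standalone sketch of that idea (the model classes here are illustrative stand-ins, not Igor's actual classes):

import java.io.StringReader;
import java.util.List;
import org.simpleframework.xml.Element;
import org.simpleframework.xml.ElementList;
import org.simpleframework.xml.Root;
import org.simpleframework.xml.core.Persister;

// Illustrative stand-ins for Igor's model classes.
@Root(name = "hudson", strict = false)
class Hudson {
    @ElementList(inline = true, entry = "job")
    List<Job> jobs;
}

@Root(strict = false)
class Job {
    @Element
    String name;
}

public class NonStrictParse {
    public static void main(String[] args) throws Exception {
        String xml = "<hudson _class='hudson.model.Hudson'>"
                + "<job _class='hudson.model.FreeStyleProject'><name>Hello Build</name></job>"
                + "</hudson>";
        // The third argument (strict = false) makes the parser skip
        // attributes like _class that have no match in the model.
        Hudson h = new Persister().read(Hudson.class, new StringReader(xml), false);
        System.out.println(h.jobs.get(0).name); // prints "Hello Build"
    }
}

Whether this can be wired into the Retrofit converter Igor uses is a separate question; moving to a Spinnaker build with compatible Jenkins handling is the cleaner route.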
I'm seeing a similar issue in Spinnaker 1.8.5. I had to reformat the Jenkins URL from myjenkins.server.com:8080 to http://myjenkins.server.com/ and that corrected the issue.
This is a bug around the Jenkins API in later versions. I believe 2.2 is the last compatible version; we run 1.6 internally.

Elasticsearch does not return jsonp

I'm trying to connect my Polymer element to my own Elasticsearch server.
My first problem was that they are on two different ports, so I had to choose JSONP because of cross-domain problems.
I found out that I just have to add
http.jsonp.enable: true
to elasticsearch.yml.
I'm starting the server simply by executing elasticsearch.bat, and I've indexed data.
When I try to load the API via iron-jsonp-library, I always get an "unexpected token" error.
<iron-jsonp-library id="libraryLoader"
library-url="http://127.0.0.1:9200/data/_search?pretty%%callback%%"
notify-event="api-load"
callbackName="jsonpCallback">
</iron-jsonp-library>
In Google Chrome, I get the following result from Elasticsearch:
{"took":2,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":5,"max_score":1.0,"hits":[{"_index":"data","_type":"data","_id":"5","_score":1.0,"_source":{"id":5,"name":"Meyr","manufacturer":"Meyr","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Meyr"}},{"_index":"data","_type":"data","_id":"2","_score":1.0,"_source":{"id":2,"name":"Meier","manufacturer":"Meier","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Meier"}},{"_index":"data","_type":"data","_id":"4","_score":1.0,"_source":{"id":4,"name":"Mair","manufacturer":"Mair","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Mair"}},{"_index":"data","_type":"data","_id":"1","_score":1.0,"_source":{"id":1,"name":"Maier","manufacturer":"Maier","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Maier"}},{"_index":"data","_type":"data","_id":"3","_score":1.0,"_source":{"id":3,"name":"Mayr","manufacturer":"Mayr","weight":1.0,"price":1.0000,"popularity":1,"instock":true,"includes":"Mayr"}}]}}
From what I've read about JSONP on the internet, that is not JSONP.
Why is my Elasticsearch server not formatting it correctly?
Are you prior to v2.0? It looks like they removed JSONP support in 2.0 (elastic.co/guide/en/elasticsearch/reference/2.2/…).
Also, pretty%%callback%% doesn't look right; the %%callback%% macro usually needs to be the value of a parameter name (like onload=%%callback%%). The element replaces %%callback%% with the name of a global function that is generated for you.

Fault NetConnection.Call.Failed using ActionScript RemoteObject with lots of data

I have an AIR (4.5.1) mobile project that sends an ArrayCollection to the server (Tomcat/BlazeDS).
The server processes the object and returns a string containing the result (ok/error/etc.).
Everything worked fine until I tried to send an ArrayCollection with length > 35000 (I'm not sure of the exact limit).
After sending the ArrayCollection, the UI seems frozen for a little while, and after that I get a FaultEvent error:
NetConnection.Call.Failed: HTTP: Failed
The server, however, received the request, parsed it, and returned the result string.
So, because the program gets the FaultEvent, I cannot be sure (from the client) that the request finished correctly...
How can I fix this? Is the problem caused by the length of the ArrayCollection?
Any other ideas?
Thanks
This is an ongoing issue with Flex/AIR/Flash. The problem you are running into is the default requestTimeout value of 30 seconds. Even if you change the value on your RemoteObject, it is not used correctly. There are many documented bugs on Adobe's tracker regarding this issue. Below is a link to a site that has collected some info about this problem from around the web. To date, Adobe has yet to fix the problem, even though they claim to have in previous versions.
RemoteObject Issue