java.lang.NoSuchFieldError: defaultReader (JsonSmartJsonProvider.java:39) - json

I am using the json-path 2.4.0 library in Spark jobs. It depends on json-smart 2.x, but Spark's default classpath folder (/usr/hdp/2.6.5.0-292/spark2/jars/) contains json-smart 1.x, which always takes precedence, so I am unable to use json-path 2.x.
I get the following error every time I run the job:
java.lang.NoSuchFieldError: defaultReader
at com.jayway.jsonpath.spi.json.JsonSmartJsonProvider.<init>(JsonSmartJsonProvider.java:39)
at com.jayway.jsonpath.internal.DefaultsImpl.jsonProvider(DefaultsImpl.java:21)
at com.jayway.jsonpath.Configuration.defaultConfiguration(Configuration.java:174)
A similar issue has been reported earlier:
JSON Path 2.3.0 conflicts with hadoop 2.7 Environment JSON-smart1.2.0.jar
but I haven't found a working solution. Please help.
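
One common workaround for this kind of conflict, sketched below on the assumption that you control the spark-submit invocation (the userClassPathFirst settings are real but experimental Spark options, and the jar names here are placeholders), is to ask Spark to prefer your jars over the cluster's defaults:

spark-submit \
  --conf spark.driver.userClassPathFirst=true \
  --conf spark.executor.userClassPathFirst=true \
  --jars json-path-2.4.0.jar,json-smart-2.3.jar \
  my_spark_job.jar

Alternatively, relocating json-smart inside your application jar (e.g. with the Maven Shade plugin's relocation feature) sidesteps the classpath precedence problem entirely.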

Related

Error implementing lib module dependency by creating .aar file in build.gradle (app) file

I am facing the following issue whenever I try to implement the .aar file instead of the direct lib module dependency.
Caused by: org.gradle.api.internal.artifacts.ivyservice.DefaultLenientConfiguration$ArtifactResolveException: Could not resolve all files for configuration ':app:debugCompileClasspath'.
Caused by: org.gradle.internal.resolve.ModuleVersionNotFoundException: Could not find :xxxlib-release:.
N.B.: I have also configured the library project for JitPack with the publishing config. A few days back there was no issue, but now I am getting errors for both the .aar and JitPack versions of my library, which I guess is related. Additionally, I have upgraded to Android Studio Bumblebee and am using targetSdkVersion 29 or 30, kotlin = 1.6.0, and classpath 'com.android.tools.build:gradle:7.0.3'.
I can't figure out what I am missing here. Any help will be appreciated, thanks.
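
For reference, a minimal way to consume a local .aar that still resolves on Gradle 7 / AGP 7 (a sketch; xxxlib-release and the libs/ path are placeholders, since the actual build files aren't shown):

// app/build.gradle
dependencies {
    // Reference the .aar file directly; newer AGP versions warn against
    // the older flatDir-repository approach, which tends to produce exactly
    // this kind of "Could not find :xxxlib-release:" resolution error.
    implementation files('libs/xxxlib-release.aar')
}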

MySQL Data Provider .NET Core 2.0

I'm looking for help with switching the project's default data provider from MS SQL to MySQL, with the eventual intent of deploying the solution to Aurora on AWS.
After installing the NuGet package I get something along the lines of:
System.TypeLoadException: Method 'Clone' in type 'MySql.Data.EntityFrameworkCore.Infraestructure.MySQLOptionsExtension' from assembly 'MySql.Data.EntityFrameworkCore, Version=6.10.5.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d' does not have an implementation.
This led me to believe that there is no .NET Core 2.0 Entity Framework Core extension that works for MySQL. Do I have to roll back to a different version?
For EntityFrameworkCore, it's suggested to use Pomelo.EntityFrameworkCore.MySql.
You may refer to their Getting Started documentation.
A fellow member of the community has kindly summarised the essential steps here:
Put Pomelo.EntityFrameworkCore.MySql into the xxx.EntityFrameworkCore project's .csproj file (see step 2 in the Pomelo getting started guide)
In your xxxDbContextConfigurer class put builder.UseMySql(...) rather than builder.UseSqlServer(...)
Change the connection string found in the appsettings.json file in the xxx.Web.Host project
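
To illustrate those steps (a sketch; the project names, the 2.0.1 package version, and the connection details are assumptions, not taken from the original answer):

<!-- step 1: xxx.EntityFrameworkCore.csproj -->
<ItemGroup>
  <PackageReference Include="Pomelo.EntityFrameworkCore.MySql" Version="2.0.1" />
</ItemGroup>

// step 2: xxxDbContextConfigurer
builder.UseMySql(connectionString);

// step 3: appsettings.json in xxx.Web.Host
"ConnectionStrings": {
  "Default": "Server=localhost;Port=3306;Database=mydb;Uid=root;Pwd=secret;"
}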

Adding Spark CSV dependency to Zeppelin

I'm running an EMR cluster with Spark on AWS.
The Spark version is 1.6.
When running the following command:
proxy = sqlContext.read.load("/user/zeppelin/ProxyRaw.csv",
                             format="com.databricks.spark.csv",
                             header="true",
                             inferSchema="true")
I get the following error:
Py4JJavaError: An error occurred while calling o162.load.
: java.lang.ClassNotFoundException: Failed to find data source: com.databricks.spark.csv. Please find packages at
http://spark-packages.org
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.lookupDataSource(ResolvedDataSource.scala:77)
How can I solve this? I assume I should add a package, but how do I install it, and where?
There are several ways to add packages in Zeppelin:
One of them is to change the conf/zeppelin-env.sh configuration file, adding the packages you need (e.g. com.databricks:spark-csv_2.10:1.4.0 in your case) to the submit options, since Zeppelin uses the spark-submit command under the hood:
export SPARK_SUBMIT_OPTIONS="--packages com.databricks:spark-csv_2.10:1.4.0"
But let's say that you don't actually have access to that configuration. You can then use dynamic dependency loading via the %dep interpreter (deprecated):
%dep
z.load("com.databricks:spark-csv_2.10:1.4.0")
This requires that you load the dependencies before the Spark interpreter starts (restart the interpreter first if needed).
Another way is to add the dependency you need via the interpreter dependency manager, as described in the following link: Dependency Management for Interpreter.
Well,
First you need to download the CSV lib from the Maven repository:
https://mvnrepository.com/artifact/com.databricks/spark-csv_2.10/1.5.0
Check the Scala version that you are using: is it 2.10 or 2.11?
When you call spark-shell, spark-submit, or pyspark (or even Zeppelin), you need to add the --jars option with the path to your lib, like this:
pyspark --jars /path/to/jar/spark-csv_2.10-1.5.0.jar
Then you can call it as you did above.
You can see another closely related issue here: How to add third party java jars for use in pyspark

Getting 'java.sql.SQLException: com.mysql.jdbc.Driver' with grails run-app (when BuildConfig.groovy doesn't need to be recompiled)

I've upgraded my Grails application from 1.3.9 to 2.2.3 and then to 2.3.3. I read the release and upgrade notes for 1.3.9 -> 2.2.3 and then for 2.2.3 -> 2.3.3.
I am using OpenJDK 6, Jetty 6 with the Jetty plugin 1.1, and MySQL 5.5, and I have the connector library under lib.
Now my issue: if I run grails clean and then grails run-app, the application runs without any problems, but if I stop it and run grails run-app again, I get a gigantic error (see here: http://pastebin.com/36MpXhir).
I also found that changing something, like adding a space somewhere in BuildConfig.groovy (anything that forces it to be recompiled), makes the application run normally.
Looking at the stacktrace the first thing that puzzles me is
[02.12.13 16:13:59.919] [main] pool.ConnectionPool Unable to create initial connections of pool.
java.sql.SQLException: com.mysql.jdbc.Driver
at org.apache.tomcat.jdbc.pool.PooledConnection.connectUsingDriver(PooledConnection.java:254)
at org.apache.tomcat.jdbc.pool.PooledConnection.connect(PooledConnection.java:182)
at org.apache.tomcat.jdbc.pool.ConnectionPool.createConnection(ConnectionPool.java:701)
at org.apache.tomcat.jdbc.pool.ConnectionPool.borrowConnection(ConnectionPool.java:635)
at org.apache.tomcat.jdbc.pool.ConnectionPool.init(ConnectionPool.java:486)
at org.apache.tomcat.jdbc.pool.ConnectionPool.<init>(ConnectionPool.java:144)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.pCreatePool(DataSourceProxy.java:116)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.createPool(DataSourceProxy.java:103)
at org.apache.tomcat.jdbc.pool.DataSourceProxy.getConnection(DataSourceProxy.java:127)
at org.springframework.jdbc.datasource.LazyConnectionDataSourceProxy.afterPropertiesSet(LazyConnectionDataSourceProxy.java:162)
There are references to org.apache.tomcat even though I'm using Jetty (and removed Tomcat from BuildConfig.groovy).
Did anyone else encounter such a problem?
Don't put jar files in the lib directory if they're available in a public Maven repo. It's far better to download jars once and keep them in a local cache, and reuse them as needed.
The MySQL driver is used as the commented-out example in the generated BuildConfig.groovy - just un-comment it :) You might want to bump up the version to the latest, e.g.
dependencies {
    runtime 'mysql:mysql-connector-java:5.1.27'
}
This is a good site for finding Maven artifacts: http://mvnrepository.com/artifact/mysql/mysql-connector-java
If you do have a jar that's not in a Maven repo (e.g. one with shared code at your company) then you can put it in the lib directory, but it's not auto-discovered. Run grails compile --refresh-dependencies to get it added to the classpath.
The same error occurred for me while running the Grails application. I debugged it and reviewed the history of my recently committed code.
I found that the issue was that, inside a controller, I was rendering an instance that wasn't constructed properly, e.g.:
def list = [personInstance.]   // ---> error occurred
render list as JSON
I corrected my mistake, cleaned the app, and ran the app again.
Now it's working fine.
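
Presumably the corrected controller code (the original answer doesn't show it) was just the completed expression, with the stray trailing dot removed:

def list = [personInstance]
render list as JSON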

Jruby log4j integration

I am currently working on integrating a Java application, General Architecture for Text Engineering (GATE), with a Rails application using JRuby. When we worked on integrating JRuby with log4j, I got the following error:
0 [main] DEBUG Main.class - Hello world
gate/Gate.java:80:in `<clinit>': java.lang.NoClassDefFoundError: org/apache/log4j/Logger (NativeException)
from gateapp/Main.java:86:in `main'
from test.rb:12
test.rb is the name of the Ruby program.
I tried importing all the Apache log4j libraries and included the class file in the test.rb file.
When I run the Java program alone it runs fine, but when I generate the jar file and include it in the Ruby file (test.rb), the java.lang.NoClassDefFoundError: org/apache/log4j/Logger (NativeException) problem occurs. How do I deal with this problem?
You need to make sure the log4j JAR file is on your classpath. One way to do this is to set the CLASSPATH variable in your environment. Another way is to call require in your Ruby code, like:
require "/some/path/MyStuff.jar"
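
For example (a sketch; the jar path and version are placeholders):

require "java"
require "/some/path/log4j-1.2.17.jar"  # hypothetical location of the log4j jar

# Once the jar is on the classpath, the log4j classes resolve normally.
logger = Java::OrgApacheLog4j::Logger.getLogger("test")
logger.info("log4j is now visible to JRuby")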
Here is my config to set it up with the Couchbase Java SDK:
include Java

def setup_log4j
  # Route the Couchbase client's spymemcached logging through log4j.
  java.lang.System.setProperty("net.spy.log.LoggerImpl", "net.spy.memcached.compat.log.Log4JLogger")
  # Build a file appender that writes to the Rails log for the current environment.
  fa = Java::OrgApacheLog4j::FileAppender.new
  fa.setName("FileLogger")
  fa.setFile("./log/#{Rails.env}.log")
  fa.setLayout(Java::OrgApacheLog4j::PatternLayout.new("%d %-5p [%c{1}] %m%n"))
  fa.setThreshold(Java::OrgApacheLog4j::Level::INFO)
  fa.setAppend(true)
  fa.activateOptions
  Java::OrgApacheLog4j::Logger.getRootLogger.addAppender(fa)
end
Just be aware that I required the log4j.jar file earlier.
It's worth mentioning that there is also a project named log4jruby.