Cobertura gives Argument list too long - junit

I am trying to generate code coverage reports using the Cobertura plugin.
I have this plugin configuration in my pom.xml:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>cobertura-maven-plugin</artifactId>
  <version>2.6</version>
  <executions>
    <execution>
      <phase>test</phase>
      <goals>
        <goal>cobertura</goal>
      </goals>
      <configuration>
        <formats>
          <format>html</format>
          <format>xml</format>
        </formats>
      </configuration>
    </execution>
  </executions>
  <configuration>
    <formats>
      <format>html</format>
      <format>xml</format>
    </formats>
  </configuration>
</plugin>
When I build my project with the goal -U -B clean install cobertura:cobertura, I get the error below on my Jenkins CI:
16:37:31 [ERROR] Failed to execute goal org.codehaus.mojo:cobertura-maven-plugin:2.6:instrument (default-cli) on project TestModule: Unable to execute Cobertura. Error while executing process. Cannot run program "/bin/sh": error=7, Argument list too long -> [Help 1]
16:37:31 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal org.codehaus.mojo:cobertura-maven-plugin:2.6:instrument (default-cli) on project TestModule: Unable to execute Cobertura.
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.executeForkedExecutions(MojoExecutor.java:364)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:198)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
16:37:31 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
16:37:31 at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
16:37:31 at org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
16:37:31 at org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
16:37:31 at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:317)
16:37:31 at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:152)
16:37:31 at org.apache.maven.cli.MavenCli.execute(MavenCli.java:555)
16:37:31 at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:214)
16:37:31 at org.apache.maven.cli.MavenCli.main(MavenCli.java:158)
16:37:31 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
16:37:31 at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:76)
16:37:31 at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
16:37:31 at java.lang.reflect.Method.invoke(Method.java:602)
16:37:31 at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
16:37:31 at org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
16:37:31 at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
16:37:31 at org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
16:37:31 Caused by: org.apache.maven.plugin.MojoExecutionException: Unable to execute Cobertura.
16:37:31 at org.codehaus.mojo.cobertura.tasks.AbstractTask.executeJava(AbstractTask.java:244)
16:37:31 at org.codehaus.mojo.cobertura.tasks.InstrumentTask.execute(InstrumentTask.java:139)
16:37:31 at org.codehaus.mojo.cobertura.CoberturaInstrumentMojo.execute(CoberturaInstrumentMojo.java:162)
16:37:31 at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:106)
16:37:31 at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
16:37:31 ... 23 more
16:37:31 Caused by: org.codehaus.plexus.util.cli.CommandLineException: Error while executing process.
16:37:31 at org.codehaus.plexus.util.cli.Commandline.execute(Commandline.java:656)
16:37:31 at org.codehaus.plexus.util.cli.CommandLineUtils.executeCommandLine(CommandLineUtils.java:144)
16:37:31 at org.codehaus.plexus.util.cli.CommandLineUtils.executeCommandLine(CommandLineUtils.java:107)
16:37:31 at org.codehaus.mojo.cobertura.tasks.AbstractTask.executeJava(AbstractTask.java:240)
16:37:31 ... 27 more
16:37:31 Caused by: java.io.IOException: Cannot run program "/bin/sh": error=7, Argument list too long
16:37:31 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1042)
16:37:31 at java.lang.Runtime.exec(Runtime.java:615)
16:37:31 at java.lang.Runtime.exec(Runtime.java:526)
16:37:31 at org.codehaus.plexus.util.cli.Commandline.execute(Commandline.java:636)
16:37:31 ... 30 more
16:37:31 Caused by: java.io.IOException: error=7, Argument list too long
16:37:31 at java.lang.UNIXProcess.<init>(UNIXProcess.java:139)
16:37:31 at java.lang.ProcessImpl.start(ProcessImpl.java:152)
16:37:31 at java.lang.ProcessBuilder.start(ProcessBuilder.java:1023)
16:37:31 ... 33 more
The build succeeds on my Windows machine but fails on Jenkins. When I downgrade Cobertura to version 2.5.1 this error goes away, but then I get parse exceptions, since the 2.5.1 parser is not up to date with current Java syntax.
Can someone help me get this working with Cobertura versions 2.6.0 and higher?

This is only tangentially relevant, but it may offer assistance.
This is caused by a Linux limitation whereby a single argument or environment string can't exceed 128 KiB.
See the Linux kernel constant MAX_ARG_STRLEN (32 * PAGE_SIZE, i.e. 128 KiB with 4 KiB pages):
https://github.com/torvalds/linux/blob/master/include/uapi/linux/binfmts.h
Within Jenkins, once you read from or write to a variable whose value exceeds this limit, you will hit this error.
In my case, I had a GitHub webhook that launched a Jenkins job and set a payload parameter to a string larger than this limit. Attempting to read this parameter would throw this error.
To work around the problem, I have a child job that uses a REST API call to read the value from the parent.
You can let the parent job throw a failure, but allow the child job to be launched in all cases.
Below is a slightly refined function I used to pull the information (error checking and comments stripped out for brevity):
import json
import os
import sys

import requests

# JENKINS_USER and JENKINS_TOKEN are assumed to be defined elsewhere
# (they were stripped from the original along with the error checking).

def get_parameter_value_from_parent():
    host = 'https://[YOUR_COMPANY].ci.cloudbees.com'
    this_build_url = os.environ.get('BUILD_URL')
    request_auth = (JENKINS_USER, JENKINS_TOKEN)
    url = '{0}/api/json'.format(this_build_url)
    parameter_name = 'payload'
    payload = ''
    #
    # Get the upstreamBuild number, and the upstreamUrl
    # so we can put together a link to the upstream job
    #
    response = requests.get(url, auth=request_auth)
    this_build = json.loads(response.text)
    build_number = ''
    short_url = ''
    actions = this_build['actions']
    for action in actions:
        if action.get('causes'):
            for cause in action.get('causes'):
                build_number = cause['upstreamBuild']
                short_url = cause['upstreamUrl']
    parent_url = '{host}/{short_url}{build}/api/json'.format(
        host=host, short_url=short_url, build=build_number)
    #
    # Now get the payload from the parent job by making a REST API call
    #
    response = requests.get(parent_url, auth=request_auth)
    upstream_build = json.loads(response.text)
    actions = upstream_build['actions']
    for action in actions:
        if action.get('parameters'):
            for parameter in action.get('parameters'):
                if parameter['name'] == parameter_name:
                    value = parameter['value']
                    payload = value
                    return payload
    print('Error: Unable to return payload from parent jenkins job: {0}'
          .format(parent_url))
    sys.exit(1)

All shells have a limit on the length of the command line. UNIX/Linux/BSD systems limit how many bytes can be used for command-line arguments and environment variables.
When you start a new process or type a command and these limits are exceeded, you will see an error message like this on screen:
Argument list too long
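To see the same failure in isolation, here is a minimal sketch (my own repro, not from the plugin) that exceeds the kernel's per-string limit from Java:

import java.io.IOException;
import java.util.Arrays;

public class ArgLimitRepro {
    public static void main(String[] args) throws IOException {
        // Build a single argument larger than MAX_ARG_STRLEN
        // (32 * PAGE_SIZE = 128 KiB on typical 4 KiB-page Linux systems).
        char[] big = new char[256 * 1024];
        Arrays.fill(big, 'x');
        // On Linux this fails with: java.io.IOException: error=7,
        // Argument list too long -- the same error Cobertura hits.
        new ProcessBuilder("/bin/echo", new String(big)).start();
    }
}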
Cobertura is trying to execute a shell command:
getLog().debug( "Working Directory: " + cl.getWorkingDirectory() );
getLog().debug( "Executing command line:" );
getLog().debug( cl.toString() );

int exitCode;
try
{
    exitCode = CommandLineUtils.executeCommandLine( cl, stdout, stderr );
}
catch ( CommandLineException e )
{
    throw new MojoExecutionException( "Unable to execute Cobertura.", e );
}
In fact, the plugin is launching a separate java process to run Cobertura:
Commandline cl = new Commandline();
File java = new File( SystemUtils.getJavaHome(), "bin/java" );
cl.setExecutable( java.getAbsolutePath() );
cl.addEnvironment( "CLASSPATH", createClasspath() );

String log4jConfig = getLog4jConfigFile();
if ( log4jConfig != null )
{
    cl.createArg().setValue( "-Dlog4j.configuration=" + log4jConfig );
}

cl.createArg().setValue( "-Xmx" + maxmem );
cl.createArg().setValue( taskClass );

if ( cmdLineArgs.useCommandsFile() )
{
    String commandsFile;
    try
    {
        commandsFile = cmdLineArgs.getCommandsFile();
    }
    catch ( IOException e )
    {
        throw new MojoExecutionException( "Unable to obtain CommandsFile location.", e );
    }
    if ( FileUtils.fileExists( commandsFile ) )
    {
        cl.createArg().setValue( "--commandsfile" );
        cl.createArg().setValue( commandsFile );
    }
    else
    {
        throw new MojoExecutionException( "CommandsFile doesn't exist: " + commandsFile );
    }
}
else
{
    Iterator<String> it = cmdLineArgs.iterator();
    while ( it.hasNext() )
    {
        cl.createArg().setValue( it.next() );
    }
}
So, first of all, enable Cobertura DEBUG traces (for example, run Maven with the -X flag) to see the exact command line being executed.
I suspect the problem is the classpath used in version 2.6 versus the one used in version 2.5.1: note the cl.addEnvironment( "CLASSPATH", ... ) call above, and that environment strings are subject to the same kernel limit as arguments.
Please enable debug traces and post the result:
https://wiki.jenkins-ci.org/display/JENKINS/Logging

Related

Spark SQL error read JSON file : java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class

I am trying to read a JSON file using Spark SQL in Java.
This is my code:
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
...
JavaSparkContext jsc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(jsc);
DataFrame df = sqlContext.jsonFile("~/test.json");
df.printSchema();
df.registerTempTable("test");
...
I made a simple JSON file, "test.json", to keep it simple:
{
"name": "myname"
}
and when I try to run the code, I get this error message:
17/03/30 10:02:26 INFO BlockManagerMasterEndpoint: Registering block manager 10.6.86.82:36824 with 1948.2 MB RAM, BlockManagerId(driver, 10.6.86.82, 36824)
17/03/30 10:02:26 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 10.6.86.82, 36824)
17/03/30 10:02:26 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
Exception in thread "main" java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
at org.apache.spark.sql.sources.CaseInsensitiveMap.<init>(ddl.scala:344)
at org.apache.spark.sql.sources.ResolvedDataSource$.apply(ddl.scala:219)
at org.apache.spark.sql.SQLContext.load(SQLContext.scala:697)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:572)
at org.apache.spark.sql.SQLContext.jsonFile(SQLContext.scala:553)
at sugi.kau.sparkonjava.SparkSQL.main(SparkSQL.java:32)
Caused by: java.lang.ClassNotFoundException: scala.collection.GenTraversableOnce$class
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 6 more
17/03/30 10:02:26 INFO SparkContext: Invoking stop() from shutdown hook
...
thanks
In the Spark docs, for the function jsonFile(String path):
"Loads a JSON file (one object per line), returning the result as a DataFrame." (Note that jsonFile is replaced by read().json().)
So you should have one object per line, and your source file should look like this:
{"name": "myname"}
{"name": "myname2"}
.....
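For reference, here is a minimal sketch of the replacement API (assuming a Spark 1.4+ SQLContext, where read() is available; the class name and file path are placeholders):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class ReadJsonLines {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf().setAppName("ReadJsonLines");
        JavaSparkContext jsc = new JavaSparkContext(sparkConf);
        SQLContext sqlContext = new SQLContext(jsc);

        // read().json() expects one JSON object per line (JSON Lines format).
        DataFrame df = sqlContext.read().json("test.json");
        df.printSchema();
        df.registerTempTable("test");

        jsc.stop();
    }
}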

How to configure rsyslog template for Exception error for remote logging?

I'm using rsyslog to ship logs to a remote Logstash server, and Logstash on that server expects input data in JSON format. How can I configure an rsyslog template to JSON-ify an exception? For example, I want to send the following exception as a single message:
2017-02-08 21:59:51,727 ERROR :localhost-startStop-1 [jdbc.sqlonly] 1. PreparedStatement.executeBatch() batching 1 statements:
1: insert into CR_CLUSTER_REGISTRY (Cluster_Name, Url, Update_Dttm, Node_Id) values ('customer', 'rmi://ip-10-53-123.123.eu-west-1.compute.internal:1199/2', '02/08/2017 21:59:51.639', '2')
java.sql.BatchUpdateException: [Teradata JDBC Driver] [TeraJDBC 15.00.00.35] [Error 1338] [SQLState HY000] A failure occurred while executing a PreparedStatement batch request. Details of the failure can be found in the exception chain that is accessible with getNextException.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:148)
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeBatchUpdateException(ErrorFactory.java:137)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:272)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatch(TDPreparedStatement.java:2584)
at com.teradata.tal.qes.StatementProxy.executeBatch(StatementProxy.java:186)
at net.sf.log4jdbc.StatementSpy.executeBatch(StatementSpy.java:539)
at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:70)
at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:268)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:266)
at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
at com.teradata.tal.common.persistence.dao.SessionWrapper.flush(SessionWrapper.java:920)
at com.teradata.trm.common.persistence.dao.DaoImpl.save(DaoImpl.java:263)
at com.teradata.trm.common.service.AbstractService.save(AbstractService.java:509)
at com.teradata.trm.common.cluster.Cluster.init(Cluster.java:413)
at com.teradata.trm.common.cluster.NodeConfiguration.initialize(NodeConfiguration.java:182)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:73)
at com.teradata.trm.common.context.Initializer.onApplicationEvent(Initializer.java:30)
at org.springframework.context.event.SimpleApplicationEventMulticaster.multicastEvent(SimpleApplicationEventMulticaster.java:97)
at org.springframework.context.support.AbstractApplicationContext.publishEvent(AbstractApplicationContext.java:324)
at org.springframework.context.support.AbstractApplicationContext.finishRefresh(AbstractApplicationContext.java:929)
at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:467)
at org.springframework.web.context.ContextLoader.configureAndRefreshWebApplicationContext(ContextLoader.java:385)
at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:284)
at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:111)
at org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:4973)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5467)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
at org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
at org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
at org.apache.catalina.core.StandardHost.addChild(StandardHost.java:632)
at org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1247)
at org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1898)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.sql.SQLException: [Teradata Database] [TeraJDBC 15.00.00.35] [Error -2801] [SQLState 23000] Duplicate unique prime key error in CIM_META.CR_CLUSTER_REGISTRY.
at com.teradata.jdbc.jdbc_4.util.ErrorFactory.makeDatabaseSQLException(ErrorFactory.java:301)
at com.teradata.jdbc.jdbc_4.statemachine.ReceiveInitSubState.action(ReceiveInitSubState.java:114)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.subStateMachine(StatementReceiveState.java:311)
at com.teradata.jdbc.jdbc_4.statemachine.StatementReceiveState.action(StatementReceiveState.java:200)
at com.teradata.jdbc.jdbc_4.statemachine.StatementController.runBody(StatementController.java:137)
at com.teradata.jdbc.jdbc_4.statemachine.PreparedBatchStatementController.run(PreparedBatchStatementController.java:58)
at com.teradata.jdbc.jdbc_4.TDStatement.executeStatement(TDStatement.java:387)
at com.teradata.jdbc.jdbc_4.TDPreparedStatement.executeBatchDMLArray(TDPreparedStatement.java:252)
... 37 more
I have the following rsyslog configuration file. The startmsg.regex is meant to flag the start of a new message when it sees the "YYYY-mm-dd" date format; until it sees that format again, it treats any following text as part of the current message.
input(type="imfile"
      File="/usr/share/tomcat/dist/logs/trm-error.log*"
      Facility="local3"
      Tag="trm-error:"
      Severity="error"
      startmsg.regex="^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}"
      escapeLF="on"
)

if $programname == 'trm-error:' then {
  action(
    type="omfwd"
    Target="10.53.234.234"
    Port="5514"
    Protocol="udp"
    template="textLogTemplate"
  )
  stop
}
...and the following template:
# Template for non json logs, just sends the message wholesale with extra
# # furniture.
template(name="textLogTemplate" type="list") {
  constant(value="{ ")
  constant(value="\"type\":\"")
  property(name="programname")
  constant(value="\", ")
  constant(value="\"host\":\"")
  property(name="hostname")
  constant(value="\", ")
  constant(value="\"timestamp\":\"")
  property(name="timestamp" dateFormat="rfc3339")
  constant(value="\", ")
  constant(value="\"#version\":\"1\", ")
  constant(value="\"customer\":\"customer\", ")
  constant(value="\"role\":\"app2\", ")
  constant(value="\"sourcefile\":\"")
  property(name="$!metadata!filename")
  constant(value="\", ")
  constant(value="\"message\":\"")
  property(name="rawmsg" format="json")
  constant(value="\"}\n")
}
However, Logstash complains about a "jsonparseerror" when it tries to parse the log as JSON. Any clues?
The rsyslog configuration files I'm using are correct; the Java exception log is indeed wrapped into a valid JSON document. However, Logstash is complaining about a _jsonparsefailure, so this problem is probably related to the Logstash side (Ruby code), not the rsyslog side.
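One quick way to double-check this yourself (a sketch, assuming the org.json parser on the classpath; any JSON library works) is to feed a captured output line to a parser. If it parses here but Logstash still reports _jsonparsefailure, the issue is on the Logstash side:

import org.json.JSONException;
import org.json.JSONObject;

public class JsonCheck {
    public static void main(String[] args) {
        // Paste one line captured from the rsyslog output (e.g. written to a
        // local file via an additional action) between the quotes below.
        String line = "{ \"type\":\"trm-error:\", \"message\":\"example\"}";
        try {
            new JSONObject(line); // throws JSONException if the line is not valid JSON
            System.out.println("valid JSON");
        } catch (JSONException e) {
            System.out.println("invalid JSON: " + e.getMessage());
        }
    }
}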

spark throws exception when querying large amount of data in mysql

When I submit my task, which queries MySQL, to YARN in Spark's cluster mode, like below:
./spark-submit --class org.com.scala.test.ScalaTestFile --master yarn --deploy-mode cluster --driver-memory 8g --executor-memory 5g --jars /usr/local/spark/lib/datanucleus-api-jdo-3.2.6.jar,/usr/local/spark/lib/datanucleus-core-3.2.10.jar,/usr/local/spark/lib/datanucleus-rdbms-3.2.9.jar,/usr/local/spark/lib/mysql-connector-java-5.1.26-bin.jar /data/tmp/snodawn/svn/scalaScript/scalaMavenTest/out/artifacts/scalaMavenTest_jar/scalaMavenTest.jar
org.com.scala.test.ScalaTestFile queries a large amount of data from MySQL (about 1 billion rows) and saves it to Hive:
val conf = new SparkConf().setAppName("ScalaTestFile")
val spark = new SparkContext(conf)
val sqlContext = new SQLContext(spark)
val hiveContext = new HiveContext(spark)

val reader = hiveContext.read.format("jdbc")
val url = "jdbc:mysql://xx.xx.xx.xx:3307/databases"
reader.option("url", url)
reader.option("driver", "com.mysql.jdbc.Driver")
reader.option("user", "admin")
reader.option("password", "admin")
reader.option("dbtable", "(select * from gold) as a")

val df = reader.load()
val nTable = df.toDF()
val nWrite = nTable.write
hiveContext.sql("use testment")
nWrite.saveAsTable("gold_test")
The task fails after running for a while with this error:
16/02/24 18:46:43 INFO DAGScheduler: Job 0 failed: saveAsTable at ScalaTestFile.scala:88, took 1697.970350 s
16/02/24 18:46:43 ERROR InsertIntoHadoopFsRelation: Aborting job.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:150)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.datasources.ResolvedDataSource$.apply(ResolvedDataSource.scala:256)
at org.apache.spark.sql.hive.execution.CreateMetastoreDataSourceAsSelect.run(commands.scala:258)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:251)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221)
at org.com.scala.test.ScalaTestFile$.main(ScalaTestFile.scala:88)
at org.com.scala.test.ScalaTestFile.main(ScalaTestFile.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
16/02/24 18:46:44 ERROR DefaultWriterContainer: Job job_201602241818_0000 aborted.
16/02/24 18:46:44 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Job aborted.
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:156)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation.run(InsertIntoHadoopFsRelation.scala:108)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.hive.execution.CreateMetastoreDataSourceAsSelect.run(commands.scala:258)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:55)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:251)
at org.apache.spark.sql.DataFrameWriter.saveAsTable(DataFrameWriter.scala:221)
at org.com.scala.test.ScalaTestFile$.main(ScalaTestFile.scala:88)
at org.com.scala.test.ScalaTestFile.main(ScalaTestFile.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, slave4.4399data.com): ExecutorLos
tFailure (executor 4 exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 122713 ms
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1922)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelation$$anonfun$run$1.apply$mcV$sp(InsertIntoHadoopFsRelation.scala:150)
... 33 more
16/02/24 18:46:44 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.SparkException: Job aborted.)
It seems that because the MySQL query takes a long time and nothing is returned for a long time, the application finishes with a failed status.
So, how can I solve my problem of querying a large amount of data from MySQL using Spark?
P.S. When I do such a query in plain Java, I do it like this:
Connection conn = DriverManager.getConnection(hiveConnectString, username, password);
com.mysql.jdbc.Statement statement = (com.mysql.jdbc.Statement) conn.createStatement();
// Stream rows one at a time instead of buffering the whole result set.
statement.enableStreamingResults();
ResultSet rs = statement.executeQuery("select * from gold");
So, is there a solution in Spark for handling big data queries like this?
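One way to avoid issuing a single giant query (a sketch of my own, assuming Spark 1.4+; the partition column "id" and its bounds are placeholders for a real numeric column in the gold table) is to let Spark split the JDBC read into parallel range queries:

import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

// Assumes an existing HiveContext named hiveContext, as in the Scala code above.
DataFrame df = hiveContext.read()
    .format("jdbc")
    .option("url", "jdbc:mysql://xx.xx.xx.xx:3307/databases")
    .option("driver", "com.mysql.jdbc.Driver")
    .option("user", "admin")
    .option("password", "admin")
    .option("dbtable", "gold")
    // Split the scan into parallel range queries over a numeric column;
    // "id" and the bounds are hypothetical and must match your data.
    .option("partitionColumn", "id")
    .option("lowerBound", "1")
    .option("upperBound", "1000000000")
    .option("numPartitions", "100")
    .load();

hiveContext.sql("use testment");
df.write().saveAsTable("gold_test");

Each partition then issues a bounded query (select * from gold where id >= ? and id < ?), so no single executor has to hold a billion-row result set and time out.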

neo4j creating multiple nodes json

Please let me know where I am going wrong.
I am using neo4j to store my routers' interface and link information. A link is to be created between two interfaces.
I have successfully created the nodes and interfaces but am having issues creating the links.
This is the query I use to create a link:
MATCH (I:Interface), (I2:Interface)
FOREACH(p in FILTER(z in {props} WHERE z.OrigIPAddress = I.IfIPAddress or z.TermIPAddress = I.IfIPAddress) |
MERGE (I {IfIPAddress:p.OrigIPAddress})-[r:link]->(I2 {IfIPAddress:p.TermIPAddress})
ON CREATE SET r = p
ON MATCH SET r = p)
I have an array of maps called props, which I pass as a JSON parameter; it contains the properties of each link, i.e. OrigIPAddress (originating interface IP) and TermIPAddress (terminating interface IP).
In the FOREACH I first filter for the links whose source or destination interfaces are already present, and then create links out of the props.
When I run this it completes without error, but no links are created, even though both the source and destination interfaces are present.
EDIT 1:
I modified the query, and when I run this version:
MATCH (I:Interface), (I2:Interface)
FOREACH(p in FILTER(z in {props} WHERE z.OrigIPAddress = I.IfIPAddress and z.TermIPAddress = I2.IfIPAddress) |
MERGE (I {IfIPAddress:p.OrigIPAddress})-[r:link]->(I {IfIPAddress:p.TermIPAddress})
ON CREATE SET r = p
ON MATCH SET r = p)
I do not get any response back from neo4j, and in the neo4j web console I can see this message:
"Neo4j disconnected, check you socket..."
Here are the logs
Apr 09, 2014 11:06:46 AM org.neo4j.server.logging.Logger log
WARNING: You are using an unsupported Java runtime. Please use Oracle(R) Java(TM) Runtime Environment 7.
SEVERE: The response of the WebApplicationException cannot be utilized as the response is already committed. Re-throwing to the HTTP container
javax.ws.rs.WebApplicationException: javax.ws.rs.WebApplicationException: org.eclipse.jetty.io.EofException
at org.neo4j.server.rest.repr.OutputFormat$1.write(OutputFormat.java:174)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:698)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1506)
at org.neo4j.server.rest.security.SecurityFilter.doFilter(SecurityFilter.java:112)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1477)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:211)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1096)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:432)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1030)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:445)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:268)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:229)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:601)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:532)
at java.lang.Thread.run(Thread.java:744)
Caused by: javax.ws.rs.WebApplicationException: org.eclipse.jetty.io.EofException
at org.neo4j.server.rest.repr.formats.StreamingJsonFormat$StreamingRepresentationFormat.flush(StreamingJsonFormat.java:401)
at org.neo4j.server.rest.repr.formats.StreamingJsonFormat$StreamingRepresentationFormat.complete(StreamingJsonFormat.java:389)
at org.neo4j.server.rest.repr.MappingRepresentation.serialize(MappingRepresentation.java:43)
at org.neo4j.server.rest.repr.OutputFormat$1.write(OutputFormat.java:160)
... 30 more
Caused by: org.eclipse.jetty.io.EofException
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:186)
at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:335)
at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:125)
at org.eclipse.jetty.server.HttpConnection$ContentCallback.process(HttpConnection.java:784)
at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:79)
at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:356)
at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:631)
at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:661)
at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:151)
at com.sun.jersey.spi.container.servlet.WebComponent$Writer.flush(WebComponent.java:315)
at com.sun.jersey.spi.container.ContainerResponse$CommittingOutputStream.flush(ContainerResponse.java:145)
at org.codehaus.jackson.impl.Utf8Generator.flush(Utf8Generator.java:1091)
at org.neo4j.server.rest.repr.formats.StreamingJsonFormat$StreamingRepresentationFormat.flush(StreamingJsonFormat.java:397)
... 33 more
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.writev0(Native Method)
at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:51)
at sun.nio.ch.IOUtil.write(IOUtil.java:148)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:524)
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:167)
... 45 more
Apr 09, 2014 11:10:17 AM com.sun.jersey.server.impl.application.WebApplicationImpl _handleRequest
SEVERE: The response of the WebApplicationException cannot be utilized as the response is already committed. Re-throwing to the HTTP container
javax.ws.rs.WebApplicationException: javax.ws.rs.WebApplicationException: org.eclipse.jetty.io.EofException
at org.neo4j.server.rest.repr.OutputFormat$1.write(OutputFormat.java:174)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:71)
at com.sun.jersey.core.impl.provider.entity.StreamingOutputProvider.writeTo(StreamingOutputProvider.java:57)
at com.sun.jersey.spi.container.ContainerResponse.write(ContainerResponse.java:306)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1437)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1349)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1339)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:416)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:537)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:699)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:848)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:698)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1506)
at org.neo4j.server.rest.security.SecurityFilter.doFilter(SecurityFilter.java:112)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1477)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:503)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:211)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1096)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:432)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:175)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1030)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:136)
at org.eclipse.jetty.server.handler.HandlerList.handle(HandlerList.java:52)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
at org.eclipse.jetty.server.Server.handle(Server.java:445)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:268)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:229)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.run(AbstractConnection.java:358)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:601)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:532)
at java.lang.Thread.run(Thread.java:744)
Caused by: javax.ws.rs.WebApplicationException: org.eclipse.jetty.io.EofException
at org.neo4j.server.rest.repr.formats.StreamingJsonFormat$StreamingRepresentationFormat.flush(StreamingJsonFormat.java:401)
at org.neo4j.server.rest.repr.formats.StreamingJsonFormat$StreamingRepresentationFormat.complete(StreamingJsonFormat.java:389)
at org.neo4j.server.rest.repr.MappingRepresentation.serialize(MappingRepresentation.java:43)
at org.neo4j.server.rest.repr.OutputFormat$1.write(OutputFormat.java:160)
... 30 more
Caused by: org.eclipse.jetty.io.EofException
at org.eclipse.jetty.io.ChannelEndPoint.flush(ChannelEndPoint.java:186)
at org.eclipse.jetty.io.WriteFlusher.write(WriteFlusher.java:335)
at org.eclipse.jetty.io.AbstractEndPoint.write(AbstractEndPoint.java:125)
at org.eclipse.jetty.server.HttpConnection$ContentCallback.process(HttpConnection.java:784)
at org.eclipse.jetty.util.IteratingCallback.iterate(IteratingCallback.java:79)
at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:356)
at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:631)
at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:661)
at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:151)
at
Let me know where I am going wrong.
Is this closer to what you are trying to do?
MATCH (I:Interface)
FOREACH(
p in FILTER(z in {props}
WHERE z.OrigIPAddress = I.IfIPAddress or z.TermIPAddress = I.IfIPAddress) |
MERGE (I)-[r:link]->(:Interface {IfIPAddress:p.TermIPAddress})
SET I.IfIPAddress = p.OrigIPAddress, r = p
);
Finally, after googling, I found the issue.
It's a server timeout: since the query takes some time to execute, the server times out and closes the connection.
I increased the timeout and it worked.
Here is the reference on how to increase it:
http://docs.neo4j.org/chunked/stable/server-configuration.html

How to connect to MySQL using OpenJDK in Linux Mint

I am having issues trying to get a database connection using the code below:
import java.lang.*;
import java.sql.*;

public class Demo {
    public static void main(String[] args) {
        try {
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://127.0.0.1:3306/test?" + "user=test&password=123456");
        } catch (SQLException ex) {
            System.out.println("SQLException: " + ex.getMessage());
            System.out.println("SQLState: " + ex.getSQLState());
            System.out.println("VendorError: " + ex.getErrorCode());
        } catch (Exception ex) {
            System.out.println("Exception: " + ex.getMessage());
        }
    }
}
The error message that is outputted is:
SQLException: No suitable driver found for jdbc:mysql://127.0.0.1:3306/test?user=test&password=123456
SQLState: 08001
VendorError: 0
java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:186)
at Demo.main(Demo.java:9)
My java version is below:
java -> /usr/lib/jvm/java-6-openjdk-i386/bin/java
javac -> /usr/lib/jvm/java-6-openjdk-i386/bin/javac
javac.1.gz -> /usr/lib/jvm/java-6-openjdk-i386/man/man1/javac.1.gz
javadoc -> /usr/lib/jvm/java-6-openjdk-i386/bin/javadoc
javadoc.1.gz -> /usr/lib/jvm/java-6-openjdk-i386/man/man1/javadoc.1.gz
javah -> /usr/lib/jvm/java-6-openjdk-i386/bin/javah
javah.1.gz -> /usr/lib/jvm/java-6-openjdk-i386/man/man1/javah.1.gz
javap -> /usr/lib/jvm/java-6-openjdk-i386/bin/javap
javap.1.gz -> /usr/lib/jvm/java-6-openjdk-i386/man/man1/javap.1.gz
javaws -> /usr/lib/jvm/java-6-openjdk-i386/jre/bin/javaws
javaws.1.gz -> /usr/lib/jvm/java-6-openjdk-i386/jre/man/man1/javaws.1.gz
I've literally no idea how to troubleshoot this error message. The database exists, and the username and password exist. I currently haven't added any tables to the database, but I don't think that can be the issue, since I'm only making a connection, after all...
The driver is installed under /usr/share/java:
-rw-r--r-- 1 root root 822524 10月 20 2011 mysql-connector-java-5.1.16.jar
-rw-r--r-- 1 root root 827942 9月 9 22:40 mysql-connector-java-5.1.21-bin.jar
lrwxrwxrwx 1 root root 35 9月 9 22:40 mysql-connector-java.jar -> mysql-connector-java-5.1.21-bin.jar
lrwxrwxrwx 1 root root 24 10月 20 2011 mysql.jar -> mysql-connector-java.jar
Does anybody know how to fix it?
Thanks for your help!
:)
Make sure you have the MySQL connector driver (i.e. the JAR file) on your project's classpath. This error message appears when there is no appropriate driver.
You can download the MySQL connector driver from this site.
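As a quick check, here is a minimal sketch that loads the driver class explicitly (with pre-JDBC-4.0 setups such as the 5.1.x connector on Java 6, this step is often required); the classpath in the comment assumes the symlink shown in the question:

import java.sql.Connection;
import java.sql.DriverManager;

public class Demo {
    public static void main(String[] args) throws Exception {
        // Explicitly register the driver; "No suitable driver" usually means
        // the driver class was never loaded or the JAR is not on the classpath.
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://127.0.0.1:3306/test", "test", "123456");
        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}

// Compile and run with the connector JAR on the classpath, e.g.:
//   javac Demo.java
//   java -cp .:/usr/share/java/mysql-connector-java.jar Demo

If this works from the command line but still fails from your build, the JAR is missing from the build's classpath rather than from /usr/share/java.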
I was having the same problem, and curiouss' answer helped me figure out a new solution.
Note: I was using Maven.
So I decided to try this dependency: http://mvnrepository.com/artifact/mysql/mysql-connector-java
After I added it to the pom.xml and ran mvn compile exec:java, it worked.