Could not start Jmeter properly - mysql

I am trying to run JMeter on a Linux machine. It throws this warning and JMeter does not work properly:
[warning] /usr/bin/jmeter: No JAVA_CMD set for run_java, falling back to JAVA_CMD = java
java.lang.Throwable: Could not access /usr/share/jmeter/lib/junit
at org.apache.jmeter.NewDriver.<clinit>(NewDriver.java:97)
I have already installed JUnit and the MySQL JDBC connector.
Also, when I set up TEST USERS, the JDBC Configuration window does not show.
I want to know what I am missing.

You may have to create a symbolic link from /usr/share/jmeter/lib/junit to your installation of JUnit.
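For example, a minimal sketch assuming a Debian-style layout where the JUnit jar lives at /usr/share/java/junit.jar (adjust both paths to your installation):
# create the directory JMeter expects and link the JUnit jar into it
sudo mkdir -p /usr/share/jmeter/lib/junit
sudo ln -s /usr/share/java/junit.jar /usr/share/jmeter/lib/junit/junit.jar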


DAML sandbox error: Error: Registry key 'Software\JavaSoft\Java Runtime Environment'\CurrentVersion' has value '1.8', but '1.7' is required

I am getting the error below when I start the sandbox. I think we need to use JDK 8, as per the DAML docs.
daml sandbox --scenario Main:setup .daml/dist/quickstart-0.0.1.dar
Error: Registry key 'Software\JavaSoft\Java Runtime Environment'\CurrentVersion'
has value '1.8', but '1.7' is required.
Error: could not find java.dll
Error: Could not find Java SE Runtime Environment.
daml-helper: Received ExitFailure 2 when running
Raw command: java -jar "C:\Users\santh\AppData\Roaming\daml\sdk\0.13.21\sandbox/sandbox.jar" --scenario Main:setup .daml/dist/quickstart-0.0.1.dar
During my initial setup, I faced a similar issue. It may be caused by an existing Java installation on your machine.
You can try the options below to fix this issue.
Option 1:
1. Check if you have more than one version of Java on your machine (see the quick check below).
2. If yes, uninstall everything and do a clean installation, and make sure your environment path variables are set properly.
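A minimal way to check from a Windows command prompt (output will vary by machine):
rem list every java.exe found on the PATH
where java
rem show which Java version actually runs
java -version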
or
Option 2:
1) Download the latest JDK in zip format, e.g. "jdk-12.0.2_windows-x64_bin.zip".
2) Extract it manually to your local drive.
3) Manually update your environment path variables (ref. https://javatutorial.net/set-java-home-windows-10); see the sketch below.
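For example, a minimal sketch assuming the JDK was extracted to C:\Java\jdk-12.0.2 (adjust the path to your machine; note that setx only affects newly opened command prompts):
rem persist JAVA_HOME for the current user
setx JAVA_HOME "C:\Java\jdk-12.0.2"
rem append the JDK's bin folder to the user PATH
setx PATH "%PATH%;C:\Java\jdk-12.0.2\bin"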
I hope this helps you fix your problem.
Cheers,
Augustine

Dropped rows in Spark when modifying database in MySQL

I've been following the 5-minute how-to for setting up an HTAP database with tidb_tispark, and everything works until I get to the Launch TiSpark section. My first issue occurs when executing the line:
docker-compose exec tispark-master /opt/spark-2.1.1-bin-hadoop2.7/bin/spark-shell
But I got around that by changing the Spark version to the one I found inside the container:
docker-compose exec tispark-master /opt/spark-2.3.3-bin-hadoop2.7/bin/spark-shell
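(A quick way to check which Spark version the image actually ships, assuming the same compose service name, is to list the installations inside the container:)
docker-compose exec tispark-master ls /opt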
My second issue occurs when executing the three-line block:
import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)
ti.tidbMapDatabase("TPCH_001")
When I run the last statement, I get the following output:
scala> ti.tidbMapDatabase("TPCH_001")
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar."
2019-07-11 16:14:36 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
This doesn't prevent me from running the query:
spark.sql("select * from nation").show(30);
But when I follow the further steps of the tutorial to modify the db from MySQL, the changes are not reflected immediately in Spark. Furthermore, at some point in the future (I believe > 5 minutes later), the row that was modified stops showing up in Spark SQL queries.
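To make the symptom concrete, the round trip looks roughly like this (a sketch; the TiDB port, credentials, and row values are assumptions based on the tutorial defaults):
# update a row through TiDB's MySQL endpoint
mysql -h 127.0.0.1 -P 4000 -u root -D TPCH_001 -e "UPDATE nation SET n_name = 'TEST' WHERE n_nationkey = 0"
Re-running spark.sql("select * from nation").show(30) in the still-open spark-shell first returns stale data, and eventually the modified row stops appearing at all.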
I'm rather new to this kind of setup and don't really know how to debug this issue. Searches for the warnings I received weren't illuminating.
I don't know if it's helpful, but this is the server version I get when I connect with MySQL:
Server version: 5.7.25-TiDB-v3.0.0-rc.1-309-g8c20289c7 MySQL Community Server (Apache License 2.0)
I'm one of the main developers of TiSpark. Sorry for your bad experience with it.
Due to a Docker problem on my side I cannot reproduce your issue directly, but it seems you hit one of the bugs fixed recently:
https://github.com/pingcap/tispark/pull/862/files
The tutorial document is not quite up to date and points to an older version; that's why it didn't work with Spark 2.1.1 as written in the tutorial. We will update it ASAP.
Newer versions of TiSpark don't use tidbMapDatabase anymore but hook into the catalog directly instead; the tidbMapDatabase method remains only for backward compatibility. Unfortunately, tidbMapDatabase had a bug (introduced when we ported it from an older version): it retrieves the timestamp for queries only once, at the moment you call the function. That causes TiSpark to always use that old timestamp for snapshot reads, so it never sees newer data.
In newer versions of TiSpark (TiSpark 2.0+ with Spark 2.3+), databases and tables are hooked directly into the catalog services, and you can simply call:
spark.sql("use TPCH_001").show
spark.sql("select * from nation").show
This should give you fresh data.
So restart your Spark driver, run just the two lines of code above, and see if it works.
Let me know if this fixes your problem. Meanwhile, we will check our Docker image to make sure it already contains the fix.
If things still go wrong, please run the code below and let us know the TiSpark version:
spark.sql("select ti_version()").show
Again, sorry for causing you trouble, and thanks for trying.
EDIT
To address your comment:
The warning appears because Spark itself first tries to locate the database in its native catalog, which produces the "Failed to get database" warning. The failover process then delegates the search to TiSpark and behaves correctly, so the warning can be ignored. It's recommended to add the line below to the log4j.properties in the conf folder of your Spark installation:
log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR
We will polish the Docker tutorial image soon. Thank you so much for trying.

Play Framework 2.3 on OpenShift database token substitution not working

I am trying to deploy a Play Framework 2.3 application to OpenShift.
I am following this example: https://github.com/JamesSullivan/play2-openshift-quickstart
Building and deploying the application is working (by that I mean the push to the git repository is working and the build is completing successfully), but during startup I see this error in play.log:
AbstractConnectionHook -
Failed to obtain initial connection Sleeping for 0ms and trying again.
Attempts left: 0. Exception: null.
Message:No suitable driver found for jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}
Oops, cannot start the server.
Configuration error: Configuration error[Cannot connect to database [default]]
at play.api.Configuration$.play$api$Configuration$$configError(Configuration.scala:94)
at play.api.Configuration.reportError(Configuration.scala:743)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:247)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:238)
at scala.collection.immutable.List.map(List.scala:272)
at play.api.db.BoneCPPlugin.onStart(DB.scala:238)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:91)
at play.api.Play$$anonfun$start$1$$anonfun$apply$mcV$sp$1.apply(Play.scala:91)
at scala.collection.immutable.List.foreach(List.scala:381)
at play.api.Play$$anonfun$start$1.apply$mcV$sp(Play.scala:91)
at play.api.Play$$anonfun$start$1.apply(Play.scala:91)
at play.api.Play$$anonfun$start$1.apply(Play.scala:91)
at play.utils.Threads$.withContextClassLoader(Threads.scala:21)
at play.api.Play$.start(Play.scala:90)
at play.core.StaticApplication.<init>(ApplicationProvider.scala:55)
at play.core.server.NettyServer$.createServer(NettyServer.scala:244)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:280)
at play.core.server.NettyServer$$anonfun$main$3.apply(NettyServer.scala:275)
at scala.Option.map(Option.scala:145)
at play.core.server.NettyServer$.main(NettyServer.scala:275)
at play.core.server.NettyServer.main(NettyServer.scala)
Caused by: java.sql.SQLException: No suitable driver found for jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}
at java.sql.DriverManager.getConnection(DriverManager.java:596)
at java.sql.DriverManager.getConnection(DriverManager.java:215)
at com.jolbox.bonecp.BoneCP.obtainRawInternalConnection(BoneCP.java:363)
at com.jolbox.bonecp.BoneCP.<init>(BoneCP.java:416)
at com.jolbox.bonecp.BoneCPDataSource.getConnection(BoneCPDataSource.java:120)
at play.api.db.BoneCPPlugin$$anonfun$onStart$1.apply(DB.scala:240)
... 18 more
So it looks like the ${OPENSHIFT_POSTGRESQL_DB_URL} environment variable token-substitution is not working.
If I log in to my application, I see this via env (obviously I replaced the username, password, IP and port for the purposes of posting here):
OPENSHIFT_POSTGRESQL_DB_URL=postgresql://xxxx:yyyy#ip:port
I have also tried using the other environment variables, like OPENSHIFT_POSTGRESQL_DB_HOST, but those do not get substituted either.
The relevant part of my openshift.conf looks like this:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}"
db.default.user=myappuser
db.default.password=myapppassword
From the linked quickstart project, the following command is used to start the Play server (again, I replaced server-ip for the purposes of this post):
/app-root/runtime/repo/target/universal/stage/bin/myapp
"-DapplyEvolutions.default=true"
-Dhttp.port=8080 -Dhttp.address=server-ip
-Dconfig.resource=openshift.conf
You can see the openshift.conf file being referenced.
I tried a lot of things; eventually I found something that worked:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://"${OPENSHIFT_POSTGRESQL_DB_HOST}":"${OPENSHIFT_POSTGRESQL_DB_PORT}/mydb
db.default.user=(((db-user)))
db.default.password=(((dp-password)))
The upshot is, it seems, that you need to watch very carefully for correct usage of the quotation characters.
It looks "wrong" at first glance, since each quotation character closes the string just before an environment variable, but that is exactly the point: in HOCON, the configuration format Play uses, ${...} substitutions are only resolved outside quoted strings, so "jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}" is treated as a literal, while quoted fragments and unquoted substitutions are concatenated together.
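For illustration, the broken and working forms side by side (mydb is a placeholder database name):
# substitution is NOT resolved inside a quoted string, so this stays a literal:
db.default.url="jdbc:${OPENSHIFT_POSTGRESQL_DB_URL}"
# quoted literals concatenated with unquoted substitutions are resolved:
db.default.url="jdbc:postgresql://"${OPENSHIFT_POSTGRESQL_DB_HOST}":"${OPENSHIFT_POSTGRESQL_DB_PORT}/mydb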

Xcode: MySQL connector library not working on iPhone 5 (armv7s), any solution?

I built my app using MySQL Connector/C to connect to a remote MySQL database. It works fine on the simulator (no errors, no warnings), but when I try to run it on my device (iPhone 5) I get this error:
No architectures to compile for (ARCHS=armv7 armv7s, VALID_ARCHS=armv7 armv7s)
I tried, as suggested in some answers, changing the settings (Architectures, Build Active Architectures, Valid Architectures), but the error remains. Only when I change Architectures and Valid Architectures to "armv6" does it build without error, but many warnings appear saying:
warning: no rule to process file '(my App dir)/main.m' of type sourcecode.c.objc for architecture armv6
and the same for all other .m files. When I try to start the app, I get the message:
Xcode cannot run using selected device
I know that the connector library needs to be updated, but is there any solution?
You need to compile the connector library in Xcode for iOS (armv6, armv7, armv7s, i386), then use the lipo tool to combine the output libraries.
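A minimal sketch of the combine step (the library names are illustrative; use your actual per-architecture builds):
# merge per-architecture static libraries into one universal (fat) library
lipo -create libmysqlclient-armv7.a libmysqlclient-armv7s.a libmysqlclient-i386.a -output libmysqlclient-universal.a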
Also, connecting directly to your MySQL database from the app is not safe; a suggested approach is to set up an Apache+PHP+MySQL server and use ASIHTTPRequest on the iPhone to talk to that server.

Connecting a MySQL database to Glassfish: classpath is not set or classname is wrong

I'm swapping out a Derby database for a MySQL one. I had everything working before, but after what I thought was the proper configuration I'm getting the error:
Caused by: javax.resource.ResourceException: Class name is wrong or classpath is not set for : com.mysql.jdbc.jdbc2.optional.MysqlDataSource
Full error output from console:
Caused by: javax.resource.ResourceException: Class name is wrong or classpath is not set for : com.mysql.jdbc.jdbc2.optional.MysqlDataSource
at com.sun.gjc.common.DataSourceObjectBuilder.getDataSourceObject(DataSourceObjectBuilder.java:292)
at com.sun.gjc.common.DataSourceObjectBuilder.constructDataSourceObject(DataSourceObjectBuilder.java:114)
at com.sun.gjc.spi.ManagedConnectionFactory.getDataSource(ManagedConnectionFactory.java:1292)
at com.sun.gjc.spi.DSManagedConnectionFactory.getDataSource(DSManagedConnectionFactory.java:148)
at com.sun.gjc.spi.DSManagedConnectionFactory.createManagedConnection(DSManagedConnectionFactory.java:101)
at com.sun.enterprise.resource.allocator.LocalTxConnectorAllocator.createResource(LocalTxConnectorAllocator.java:87)
I've double-checked some of the names, the connection pool, and the other resources. I've also added the MySQL driver .jars to the GlassFish library in both projects. The database was definitely working through Eclipse, because I was able to view tables and display the resources inside Eclipse's database context, so I know that at least THOSE drivers are working correctly. Also, the persistence.xml file looks good: it references the jdbc/mydatabase JNDI name like it should, and default JTA is selected as the management type.
Does anyone have another suggestion? Thank you
I've also added the MySQL driver .jars to the library of glassfish in both projects.
It was apparently not done right. The JAR has to go in the /glassfish/domains/[domainname]/lib/ext folder of the GlassFish installation, where [domainname] usually defaults to domain1. You cannot, and should not, configure it from the Eclipse side.
It looks like I am replying very late; however, people referring to this thread may find the following information useful, so I am posting it here:
Download the connector JAR from http://dev.mysql.com/downloads/connector/j/5.0.html
Unzip the pack and copy mysql-connector-java-[version]-bin.jar
Paste it into the [GlassFish installation directory]/domains/[domain name]/lib folder
Restart your domain and ping to check your connection under JDBC Connection Pools (see the commands below)
There you go. If your MySQL server is running, the DB ping will succeed.
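A sketch of those last two steps from the command line (the domain and pool names are illustrative):
# restart the domain so the new driver JAR is picked up
asadmin restart-domain domain1
# ping the JDBC connection pool configured for MySQL
asadmin ping-connection-pool mysql-pool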
I copied the JAR file to $glassfish_install_folder\glassfish\lib; after that it worked. I use GlassFish 4.0.
Check this link from oracle.
I encountered this issue in 2018 and would like to note that if you are using GlassFish 4 (current is 5), it seems you have to use Connector/J 5.1.47 for it to work. If you use the current version (Connector/J 8.0.13), the exception mentioned in the original question keeps appearing, no matter where you place the .jar file.
With Connector/J 5.1.47 it works perfectly.
The solution is:
> asadmin add-library /path/to/mysql-connector-java-bin.jar
Check this link:
https://blog.payara.fish/using-mysql-with-payara
I encountered this issue in 2019 and would like to note that if, like me, you are using the Docker image payara/server-full (5.194 as of this writing), the location to place the driver JAR is:
/opt/payara/appserver/glassfish/domains/production/lib/
In the end I am doing something like this in the Dockerfile of the Payara server:
RUN wget http://central.maven.org/maven2/org/mariadb/jdbc/mariadb-java-client/2.2.0/mariadb-java-client-2.2.0.jar \
-O /opt/payara/appserver/glassfish/domains/production/lib/mariadb-java-client-2.2.0.jar