Why did the connection test fail? - MySQL

When I select custom settings, choose SQL, and enter all the fields, I get:
Connection test failed: Cannot find module\AppData\Local\Temp\strapi9a2b8146f759\node_modules\strapi-connector-bookshelf\lib\utils\connectivity.js'

I know it's old, but I just want to reply for future people who have the same issue.
Try yarn create strapi-app --quickstart or, using npx, npx create-strapi-starter --quickstart; it will use the default template.
Then, once it finishes installing dependencies and setting up your project, open the file called database.js and add your database details.
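For reference, since the error above mentions strapi-connector-bookshelf (Strapi v3), a minimal sketch of what that file (config/database.js) might look like for MySQL; the host, port, database name, and credentials below are placeholders for your own values:
// config/database.js -- minimal sketch for Strapi v3 with the bookshelf connector and MySQL.
// Host, port, database name, and credentials are placeholders; replace them with your own.
module.exports = ({ env }) => ({
  defaultConnection: 'default',
  connections: {
    default: {
      connector: 'bookshelf',
      settings: {
        client: 'mysql',
        host: env('DATABASE_HOST', '127.0.0.1'),
        port: env.int('DATABASE_PORT', 3306),
        database: env('DATABASE_NAME', 'strapi'),
        username: env('DATABASE_USERNAME', 'strapi'),
        password: env('DATABASE_PASSWORD', 'strapi'),
      },
      options: {},
    },
  },
});
You'll also need the mysql client package in the project (npm install mysql or yarn add mysql) so the connector can reach the database.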
Got this info from an issue opened on GitHub: Here

Related

I have a problem importing a Bit compiler. How can I resolve the following error?

I'm working with 'Bit' to create reusable React components. I have created my 'Bit' account, followed tutorials on the web to log in to Bit from the terminal, and initialized the Bit workspace. I am encountering the following error when importing the React compiler:
$ bit import bit.envs/compilers/react --compiler
fatal: unable to connect to a remote legacy SSH server from Harmony client
Any advice would be appreciated.
The --compiler flag was removed in 0.0.537.
Harmony became the default in 0.0.438 (so legacy was the last default in 0.0.437).
But ... if you try to run this on 0.0.437 now, you're likely to get an error:
server responded with: "Please update your Bit client.
For additional information: https://docs.bit.dev/docs/installation#latest-version"
The best place to get bit.dev answers is their community Slack: https://join.slack.com/t/bit-dev-community/shared_invite/zt-o2tim18y-UzwOCFdTafmFKEqm2tXE4w

Dropped rows in Spark when modifying database in MySQL

I've been following the 5-minute how-to for setting up an HTAP database with tidb_tispark, and everything works until I get to the section "Launch TiSpark". My first issue occurs when executing the line:
docker-compose exec tispark-master /opt/spark-2.1.1-bin-hadoop2.7/bin/spark-shell
But I got around that by changing the Spark version to the one I found inside the container:
docker-compose exec tispark-master /opt/spark-2.3.3-bin-hadoop2.7/bin/spark-shell
My second issue occurs when executing the three line block:
import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)
ti.tidbMapDatabase("TPCH_001")
When I run the last statement, I get the following output:
scala> ti.tidbMapDatabase("TPCH_001")
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar."
2019-07-11 16:14:36 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
This doesn't prevent me from running the query:
spark.sql("select * from nation").show(30);
But when I follow the further steps of the tutorial to modify the db from MySQL, the changes are not reflected immediately in Spark. Furthermore, at some point in the future (I believe > 5 minutes later), the row that was modified stops showing up in Spark SQL queries.
I'm rather new to this kind of setup and don't really know how to debug this issue. Searches for the warnings I received weren't illuminating.
I don't know if it's helpful, but when I connect to MySQL this is the server version I get:
Server version: 5.7.25-TiDB-v3.0.0-rc.1-309-g8c20289c7 MySQL Community Server (Apache License 2.0)
I'm one of the main developers of TiSpark. Sorry for your bad experience with it.
Due to a Docker problem on my side, I cannot directly reproduce your issue, but it seems you hit one of the bugs fixed recently:
https://github.com/pingcap/tispark/pull/862/files
The tutorial document is not quite up to date and points to an older version; that's why it didn't work with Spark 2.1.1 as in the tutorial. We will update it ASAP.
Newer versions of TiSpark don't use tidbMapDatabase anymore; they hook into the catalog directly instead. The tidbMapDatabase method remains for backward compatibility. Unfortunately, tidbMapDatabase had a bug (introduced when we ported it from an older version): it retrieves the timestamp for queries only once, when you call the function. That causes TiSpark to always use the old timestamp for snapshot reads, so newer data is never seen.
In newer versions of TiSpark (TiSpark 2.0+ with Spark 2.3+), databases and tables are hooked directly into the catalog services, and you can simply call:
spark.sql("use TPCH_001").show
spark.sql("select * from nation").show
This should give you fresh data.
So restart your Spark driver, try just the two lines of code above, and see if it works.
Let me know if this fixes your problem. Meanwhile, we will check our Docker image to make sure it already contains the fix.
If things still go wrong, please run the code below and let us know the version of TiSpark:
spark.sql("select ti_version()").show
Again, sorry for causing you trouble and thanks for trying.
EDIT
To address your comment:
The warning occurs because Spark itself first tries to locate the database in its native catalog, which causes the "Failed to get database" warning. The failover process then delegates the search to TiSpark, which behaves correctly, so the warning can be ignored. It's recommended to add the line below to your log4j.properties in the conf folder of your Spark installation:
log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR
We will polish the Docker tutorial image soon. Thank you so much for trying.

Telescope Error when I run "meteor add my-custom-package" command

I get the error below when I run the "meteor add my-custom-package" command, and I am not sure what the problem is.
=> Errors while parsing arguments:
While adding package my-custom-package:
error: no such package
I experienced the same problem while trying to create a private package. I was following guides online and creating all the files manually; a better way is to run:
meteor create --package example
and then adding the package will work:
meteor add example

Could not start Jmeter properly

I am trying to run JMeter on a Linux machine. It throws this warning and JMeter does not work properly:
[warning] /usr/bin/jmeter: No JAVA_CMD set for run_java, falling back to JAVA_CMD = java
java.lang.Throwable: Could not access /usr/share/jmeter/lib/junit
at org.apache.jmeter.NewDriver.<clinit>(NewDriver.java:97)
I have already installed JUnit and the MySQL JDBC connector.
Also, when I am setting up test users, it does not show the JDBC Configuration window.
I want to know what I am missing.
You may have to create a symbolic link from /usr/share/jmeter/lib/junit to your installation of JUnit.
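A rough sketch of that, assuming JUnit is installed somewhere like /path/to/your/junit (a placeholder; point it at your actual JUnit location):
# /usr/share/jmeter/lib/junit is the path from the error above; the source path is a placeholder for your JUnit installation.
sudo ln -s /path/to/your/junit /usr/share/jmeter/lib/junit
Then start JMeter again.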

Pentaho integration with MySQL 5.x

When I was trying to replace HSQLDB with MySQL 5.x, I got the following error (Quartz failed to initialize):
Pentaho Initialization Exception
The following errors were detected
One or more system listeners failed. These are set in the systemListeners.xml.
PentahoSystem.ERROR_0014 - Error while trying to execute startup sequence for org.pentaho.platform.scheduler.QuartzSystemListener
Please see the server console for more details on each error detected.
Did you run the Quartz scripts which set up the Quartz DB? They are provided in the solution repository.
Otherwise, pastebin the full log; it's impossible to tell without more info. I suspect you either have an authentication issue somewhere, or no MySQL driver in your classpath.
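If the Quartz tables haven't been created yet, running the bundled script against MySQL looks roughly like this; the path and file name below are assumptions based on a typical biserver-ce layout, so locate the script in your own installation:
# Path and file name are assumptions (typical biserver-ce layout); find the MySQL Quartz script in your install.
mysql -u root -p < biserver-ce/data/mysql5/create_quartz_mysql.sql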
For a clear guide on how to do this, see here:
http://www.prashantraju.com/2010/12/pentaho-3-7-with-mysql-postgresql-oracle-and-sql-server/
I also received the error message
PentahoSystem.ERROR_0014 – Error while trying to execute startup sequence for org.pentaho.platform.scheduler.QuartzSystemListener
when trying to bring up the service. I found this solution after searching a few different threads:
Uncomment these properties (or copy and paste from here, modifying as necessary) in quartz.properties (located in pentaho-solutions/system/quartz):
org.quartz.dataSource.quartz.driver = com.mysql.jdbc.Driver
org.quartz.dataSource.quartz.URL = jdbc:mysql://localhost:3306/quartz
org.quartz.dataSource.quartz.user = pentaho_user
org.quartz.dataSource.quartz.password = password
org.quartz.dataSource.quartz.maxConnections = 5
org.quartz.dataSource.quartz.validationQuery = select 1
Also, comment out the JNDI URL:
#org.quartz.dataSource.myDS.jndiURL = Quartz