I am using the most recent version of Apache Drill on Windows machines in embedded mode. However, I always get "No current connection" errors when sending any queries, and the web UI at localhost:8047 does not work either.
I have tried both JDK 8 and JDK 9, on two separate Windows machines, and got the same error. I have searched for this issue but have not found any workaround so far.
Any fix for this? Thanks a lot!!
Problem solved by setting JAVA_HOME correctly.
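For anyone hitting the same thing, here is a minimal sketch of what "setting JAVA_HOME correctly" looks like in a Windows command prompt; the install path below is illustrative, so point it at your actual JDK folder:
set JAVA_HOME=C:\Program Files\Java\jdk1.8.0_161
set PATH=%JAVA_HOME%\bin;%PATH%
REM Verify it points at a real JDK before starting Drill
"%JAVA_HOME%\bin\java" -version
Note that set only affects the current console session; for a permanent setting, use the System Properties dialog or setx.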
Here's another clue that might help others!! I had the same problem "No current connection" on Windows 10 and went around in MANY circles trying to solve it by:
Making sure JAVA_HOME was set properly - ok
Checking in the cmd window to make sure I could cd %JAVA_HOME% - ok
Making sure I had added %JAVA_HOME%\bin to the PATH - ok
Checking in the cmd window to make sure I could cd %JAVA_HOME%\bin - ok
Trying to find conflicting JARs - there were none anywhere
Finally, I solved the problem: USE JDK VERSION 7 or 8
I blew away my JDK 9 install, reinstalled JDK 8, set JAVA_HOME again, and then started Drill with:
sqlline.bat -u "jdbc:drill:zk=local"
I tested it and got this!!!
0: jdbc:drill:zk=local> SELECT version FROM sys.version;
+----------+
| version |
+----------+
| 1.12.0 |
+----------+
1 row selected (1.326 seconds)
Hadoop 3.1.2: DataNode and NodeManager shut down
Stuxen in the above thread provided the exact solution and it worked for me.
I am getting the error below whenever I open the terminal. This began happening after upgrading from Fedora Workstation 32 to 33.
ERROR: ld.so: object '/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.265.b01-1.fc32.x86_64/jre/lib/amd64/libzip.so' from LD_PRELOAD cannot be preloaded (cannot open shared object file): ignored.
I still have OpenJDK 8 installed, and even after uninstalling it, the above error keeps popping up, making my terminal untidy.
The answer here helped me get to a solution. I had the line below in my .bash_profile:
. /home/admin/Adempiere/postgresql/8.1.5/PGSQL.env
I simply commented it out, as shown below, and my issue was solved. Read the above answer for the details of why this caused me issues. In my case, the program "Adempiere" had been uninstalled, but the uninstall did not remove this line, and the path it references no longer existed.
# . /home/admin/Adempiere/postgresql/8.1.5/PGSQL.env
If you get any error like the above, review the paths set in your .bash_profile or ~/.bashrc; if one of them references a missing path, it will definitely throw this kind of error.
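A quick way to track down the offending line — a minimal sketch, assuming the stale entry lives in one of the usual startup files (the Adempiere path is just this thread's example):
# Find lines that set LD_PRELOAD or source other files
grep -n 'LD_PRELOAD\|^\s*\.\s\|^\s*source\s' ~/.bash_profile ~/.bashrc
# Then check whether each referenced file still exists
ls -l /home/admin/Adempiere/postgresql/8.1.5/PGSQL.env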
I've been following the 5-minute how-to for setting up an HTAP database with tidb_tispark, and everything works until I get to the section "Launch TiSpark". My first issue occurs when executing this line:
docker-compose exec tispark-master /opt/spark-2.1.1-bin-hadoop2.7/bin/spark-shell
But I got around that by changing the Spark version to the one I found inside the container:
docker-compose exec tispark-master /opt/spark-2.3.3-bin-hadoop2.7/bin/spark-shell
My second issue occurs when executing the three-line block:
import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)
ti.tidbMapDatabase("TPCH_001")
When I run the last statement I get the following output:
scala> ti.tidbMapDatabase("TPCH_001")
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar."
2019-07-11 16:14:36 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
This doesn't prevent me from running the query:
spark.sql("select * from nation").show(30);
But when I follow the tutorial's further steps and modify the database from MySQL, the changes are not reflected immediately in Spark. Furthermore, at some point later (I believe more than 5 minutes), the modified row stops showing up in Spark SQL queries entirely.
I'm rather new to this kind of setup and don't really know how to debug this issue. Searches for the warnings I received weren't illuminating.
I don't know if it's helpful, but when I connect via MySQL this is the server version I get:
Server version: 5.7.25-TiDB-v3.0.0-rc.1-309-g8c20289c7 MySQL Community Server (Apache License 2.0)
I'm one of the main developers of TiSpark. Sorry for your bad experience with it.
Due to a Docker problem on my side, I cannot directly reproduce your issue, but it seems you hit one of the bugs fixed recently:
https://github.com/pingcap/tispark/pull/862/files
The tutorial document is not quite up to date and points to an older version; that's why it didn't work with Spark 2.1.1 as written in the tutorial. We will update it ASAP.
Newer versions of TiSpark don't use tidbMapDatabase anymore but hook into the catalog directly instead. The method tidbMapDatabase remains for backward compatibility. Unfortunately, tidbMapDatabase had a bug (introduced when we ported it from an older version): it retrieves the timestamp for queries only once, at the moment you call the function. That causes TiSpark to always use that old timestamp for snapshot reads, so newer data is never seen by it.
In newer versions of TiSpark (TiSpark 2.0+ with Spark 2.3+), databases and tables are hooked into the catalog services directly, and you can simply call:
spark.sql("use TPCH_001").show
spark.sql("select * from nation").show
This should give you fresh data.
So restart your Spark driver, then just try the two lines of code above and see if it works.
Let me know if this fixes your problem. Meanwhile, we will check our Docker image to make sure it already contains the fix.
If things still go wrong, please run the code below and let us know the version of TiSpark:
spark.sql("select ti_version()").show
Again, sorry for causing you trouble and thanks for trying.
EDIT
To address your comment:
The warning occurs because Spark itself first tries to locate the database in its native catalog, which produces the "Failed to get database" warning. The failover process then delegates the search to TiSpark and behaves correctly, so this warning can be ignored. It's recommended to add the line below to log4j.properties in your Spark conf folder:
log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR
We will polish the Docker tutorial image soon. Thank you so much for trying it.
I'm seeing dozens of entries, daily, in my cgi_error_log file like this:
20151214T183710: www.example/index.php
Failed loading /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so: /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so: wrong ELF class: ELFCLASS32
Failed loading /usr/local/Zend/lib/ZendExtensionManager.so: /usr/local/Zend/lib/ZendExtensionManager.so: wrong ELF class: ELFCLASS32
I changed the URL in the error message because the site is not yet ready for the public, and I don't want any links to the site, or even its name in posts like this, lying around.
As the site is still in testing, these messages come from my own page accesses, and all of the pages have loaded correctly.
What does this message mean? I have no clue, other than that ionCube is some sort of PHP encoder.
Could this have something to do with a 32-bit version of ionCube being run on a 64-bit system? I've found references to that sort of problem.
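If that's the cause, one way to confirm would be to compare the ELF class of the loader against the PHP binary that loads it (the first path is from the error message; the second assumes the CLI php matches the CGI build):
file /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so   # reports ELF 32-bit or 64-bit
file "$(which php)"                                     # the PHP binary doing the loading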
Did some more digging - could this be happening because the version of PHP on the server is not 5.2? The error about ionCube shows 5.2 in the file name.
The version I'm using on the server is 5.5. They also have 5.3 and 5.6.
If they had 5.2 I'd give it a try, but it is not available.
More digging - I found this set in the php.ini file for all my sites:
[Zend]
zend_extension = /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so
zend_extension_manager.optimizer=/usr/local/Zend/lib/Optimizer-3.2.0
zend_extension_manager.optimizer_ts=/usr/local/Zend/lib/Optimizer_TS-3.2.0
zend_optimizer.version=3.2.0
zend_extension=/usr/local/Zend/lib/ZendExtensionManager.so
zend_extension_ts=/usr/local/Zend/lib/ZendExtensionManager_TS.so
So - do I need to install newer versions of one or more of the items listed above?
If so, where do I start, and can I do it myself, or does the hosting company's staff have to do it?
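For context, ionCube loader file names encode the PHP branch they target (the 5.2 above is the loader's PHP version, not the server's), so a matching setup would presumably need a 64-bit loader built for PHP 5.5. A hypothetical sketch, with the real path and file name depending on what the host provides:
[Zend]
; hypothetical: a 64-bit ionCube loader built for the PHP 5.5 branch
zend_extension = /usr/local/lib/ioncube/ioncube_loader_lin_5.5.so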
I keep getting the error below while running functional tests with a test runner and the following setup:
- Selenium 2.44
- ChromeDriver
- Windows Server 2008 R2 Enterprise
Error Description: Listening on 0.0.0.0:7000
Starting tunnel...
UnknownError: [POST http://test.com/wd/hub/session / {"desiredCapabilities":{"browserName":"chrome","name":"tests/intern","idle-timeout":60,"selenium-version":"2.44.0"}}] unknown error: failed to write prefs file
(Driver info: chromedriver=2.12.301325 (962dea43ddd90e7e4224a03fa3c36a421281abb7),platform=Windows NT 6.1 SP1 x86_64) (WARNING: The server did not provide any stacktrace information)
Command duration or timeout: 1.06 seconds
Has anyone come across this issue? How do I fix it? Suggestions, please.
I recently had the same issue. The problem was caused by a full C drive. Apparently ChromeDriver needs some space on the C drive (or whichever drive holds the Chrome binary) to create temporary profile files and so on.
One solution could be to move the Chrome installation to some other drive. You could use the mklink command in a command prompt window.
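A minimal sketch of that idea (run as Administrator; the source and target paths are illustrative, so point them at your actual Chrome install):
REM Move the Chrome folder to another drive, then leave a junction at the old path
move "C:\Program Files (x86)\Google\Chrome" "D:\Google\Chrome"
mklink /J "C:\Program Files (x86)\Google\Chrome" "D:\Google\Chrome"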
It can also be caused by running ChromeDriver in parallel. Other errors such as "failed to write first run file" or "cannot create default profile directory" may happen in that case.
My solution was to specify the user-data-dir option: two concurrent ChromeDriver instances should not use the same user data directory.
chromeOptions.AddArgument("--user-data-dir=C:\\tmp\\chromeprofiles\\profile" + someKindOfIdOrIndex);
You can of course change the path to whatever you want :)
This issue occurs if the C drive runs out of space. The best solution is to clear temp files; this worked for me.
1. Open the Run dialog.
2. Type %temp%.
3. Click OK.
4. Select all files and delete them permanently.
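The same cleanup from a command prompt - a rough equivalent, assuming the per-user temp folder is the one that filled up:
REM Recursively delete files under the user's temp folder; in-use files are skipped with errors
del /q /f /s "%TEMP%\*"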
You may have different versions of Chrome on the server and on the node.
In my case, it was a console application that had to run as Administrator to gain access to the hard drive.
Follow these steps:
1) Press Windows key + R to open the Run dialog.
2) Type %temp%.
3) Click OK.
4) Press Ctrl+A to select all files.
5) Press Shift+Delete to delete them permanently.
So, I've tried everything I can think of and am not getting anywhere with this, so I am turning to the folks on SO for some assistance.
System Details:
Fedora 17 x86_64
Intel® Pentium® Dual CPU E2160 @ 1.80GHz × 2
1.9 GiB memory
KCacheGrind 0.7.1
KDE Platform Version 4.9.4
Procedure Details:
I get an Xdebug log from the server, or via a Chrome extension called Xdebug Helper, and I run KCacheGrind either directly from its icon or from a shell script I created:
#!/bin/bash
export $(dbus-launch)
kcachegrind
And I get an error "No Profile Data Loaded"
Any ideas?
SORRY: error reads "(No function selected)"
Forgive me. I am a n00b at Linux and KCacheGrind.
I ran it as root and that fixed it.
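For anyone who lands here later: KCacheGrind starts empty ("(No function selected)") until a profile is loaded, so it can help to pass the Xdebug output file directly. A small sketch, where the file name is illustrative (Xdebug typically writes cachegrind.out.<pid> to its configured output directory):
#!/bin/bash
export $(dbus-launch)
# Open KCacheGrind directly on a specific Xdebug profile file
kcachegrind /tmp/cachegrind.out.12345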