Out-of-core messages config in Giraph

I was working with Pregel and noticed that when jobs finish, they don't deallocate memory. I searched and learned that the latest version, 1.1.0-SNAPSHOT, doesn't have this problem, so I downloaded the latest version of Giraph from Apache Giraph. My algorithm produces a lot of messages, and because all of them are kept in memory, my RAM fills up. I searched around and found the out-of-core messages config. I want to set "giraph.useOutOfCoreMessages=true", but this config doesn't exist in the latest version. What's the problem?
Thanks.
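For reference, here is how a boolean Giraph option is typically set before submitting a job. This is only a sketch: the key below is the one from the question, which reportedly no longer exists in 1.1.0-SNAPSHOT, so it may have been renamed or removed in that version, and the class name is just a placeholder.

import org.apache.giraph.conf.GiraphConfiguration;

public class OutOfCoreMessagesSketch {
    public static void main(String[] args) {
        // Sketch only: set a boolean Giraph option programmatically before job submission.
        // "giraph.useOutOfCoreMessages" is the key from the question; the question reports
        // it is missing in 1.1.0-SNAPSHOT, so it may have been renamed or removed there.
        GiraphConfiguration conf = new GiraphConfiguration();
        conf.setBoolean("giraph.useOutOfCoreMessages", true);

        // The same option can also be passed to GiraphRunner as a custom argument:
        //   -ca giraph.useOutOfCoreMessages=true
        System.out.println(conf.getBoolean("giraph.useOutOfCoreMessages", false));
    }
}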

Related

Could not start a new session. Response code 500. Message: session not created: This version of ChromeDriver only supports Chrome version 100

I am in the learning phase of automation.
In my first program, when I try to run it, I get the error below.
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Starting ChromeDriver 100.0.4896.60 (6a5d10861ce8de5fce22564658033b43cb7de047-refs/branch-heads/4896@{#875}) on port 57583
Only local connections are allowed.
Please see https://chromedriver.chromium.org/security-considerations for suggestions on keeping ChromeDriver safe.
ChromeDriver was started successfully.
Exception in thread "main" org.openqa.selenium.SessionNotCreatedException: Could not start a new session. Response code 500. Message: session not created: This version of ChromeDriver only supports Chrome version 100
Current browser version is 102.0.5005.63 with binary path C:\Program Files\Google\Chrome\Application\chrome.exe
Build info: version: '4.2.0', revision: '86eb611648'
It would be a great help if anybody could help me resolve this.
Thanks in advance.
So the first thing you should do is update your Google Chrome.
Click on the 3 dots > Help > About Google Chrome > and check whether it is already up to date.
After updating, search for the WebDriver for Google Chrome (ChromeDriver).
Download the version that is the same as yours. For example, mine is 103, so I download driver 103.
When you've downloaded the driver, unzip it and put it into your drivers folder under your user directory on C:.
Example:
C:\Users\lucas\driver
After putting it there, try to run your program again.
Note: I found this out by going to my system variables and checking the paths listed there.
You can probably check beforehand. If one of your PATH entries already contains something like this:
C:\Users\lucas\driver
then place the driver exactly in that path, and it will work.
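If the driver folder is not already on your PATH, you can also point Selenium at the binary explicitly. A minimal sketch, using the example folder from this answer (adjust the path to wherever you unzipped chromedriver):

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ChromeSmokeTest {
    public static void main(String[] args) {
        // Point Selenium at the chromedriver binary you downloaded; the path below is
        // just the example folder from this answer and will differ on your machine.
        System.setProperty("webdriver.chrome.driver", "C:\\Users\\lucas\\driver\\chromedriver.exe");

        WebDriver driver = new ChromeDriver();
        driver.get("https://www.google.com");
        System.out.println("Started browser, page title: " + driver.getTitle());
        driver.quit();
    }
}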

Dropped rows in Spark when modifying database in MySQL

I've been following the 5-minute how-to for setting up an HTAP database with tidb_tispark, and everything works until I get to the section Launch TiSpark. My first issue occurs when executing the line:
docker-compose exec tispark-master /opt/spark-2.1.1-bin-hadoop2.7/bin/spark-shell
But I got around that by changing the Spark version to the one I found inside the container:
docker-compose exec tispark-master /opt/spark-2.3.3-bin-hadoop2.7/bin/spark-shell
My second issue occurs when executing the three-line block:
import org.apache.spark.sql.TiContext
val ti = new TiContext(spark)
ti.tidbMapDatabase("TPCH_001")
When I run the last statement I get the following output
scala> ti.tidbMapDatabase("TPCH_001")
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-core-3.2.10.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-core-3.2.10.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.api.jdo" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-api-jdo-3.2.6.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-api-jdo-3.2.6.jar."
2019-07-11 16:14:32 WARN General:96 - Plugin (Bundle) "org.datanucleus.store.rdbms" is already registered. Ensure you dont have multiple JAR versions of the same plugin in the classpath. The URL "file:/opt/spark/jars/datanucleus-rdbms-3.2.9.jar" is already registered, and you are trying to register an identical plugin located at URL "file:/opt/spark-2.3.3-bin-hadoop2.7/jars/datanucleus-rdbms-3.2.9.jar."
2019-07-11 16:14:36 WARN ObjectStore:568 - Failed to get database global_temp, returning NoSuchObjectException
This doesn't prevent me from running the query:
spark.sql("select * from nation").show(30);
But when I follow the further steps of the tutorial to modify the db from MySQL, the changes are not reflected immediately in Spark. Furthermore, at some point in the future (I believe > 5 minutes later), the row that was modified stops showing up in Spark SQL queries.
I'm rather new to this kind of setup and don't really know how to debug this issue. Searches for the warnings I received weren't illuminating.
I don't know if it's helpful, but when I connect to MySQL this is the server version I get:
Server version: 5.7.25-TiDB-v3.0.0-rc.1-309-g8c20289c7 MySQL Community Server (Apache License 2.0)
I'm one of the main devs of TiSpark. Sorry for your bad experience with it.
Due to a Docker problem on my side, I cannot directly reproduce your issue, but it seems you hit one of the bugs fixed recently:
https://github.com/pingcap/tispark/pull/862/files
The tutorial document is not quite up to date and points to an older version. That's why it didn't work with Spark 2.1.1 as in the tutorial. We will update it ASAP.
Newer versions of TiSpark don't use tidbMapDatabase anymore but hook into the catalog directly instead. The method tidbMapDatabase remains for backward compatibility. Unfortunately, tidbMapDatabase had a bug (introduced when we ported it from an older version): it retrieves the timestamp for queries only once, when you call the function. That causes TiSpark to always use that old timestamp for snapshot reads, so newer data is never seen.
In newer versions of TiSpark (TiSpark 2.0+ with Spark 2.3+), databases and tables are hooked directly into the catalog services, and you can simply call:
spark.sql("use TPCH_001").show
spark.sql("select * from nation").show
This should give you fresh data.
So try restarting your Spark driver, run just the two lines of code above, and see if it works.
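For anyone not working from the interactive shell, the same check can be written as a tiny standalone job. This is only a sketch and assumes the TiSpark 2.x jars are on the classpath and the usual settings from the TiSpark docs (spark.sql.extensions, spark.tispark.pd.addresses) are already configured for your cluster:

import org.apache.spark.sql.SparkSession;

public class TiSparkFreshReadCheck {
    public static void main(String[] args) {
        // Sketch only: relies on TiSpark being configured so that TiDB databases and
        // tables show up directly in the Spark catalog (TiSpark 2.0+ with Spark 2.3+).
        SparkSession spark = SparkSession.builder()
                .appName("tispark-fresh-read-check")
                .getOrCreate();

        spark.sql("use TPCH_001");
        spark.sql("select * from nation").show(30);

        spark.stop();
    }
}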
Let me know if this fixes your problem. On our side, we will check our Docker image to make sure it already contains the fix.
If things still go wrong, would you please run the code below and let us know the version of TiSpark:
spark.sql("select ti_version()").show
Again, sorry for causing you trouble, and thanks for trying.
EDIT
To address your comment:
The warning appears because Spark itself first tries to locate the database in its native catalog, which produces the "Failed to get database" warning. The failover process then delegates the search to TiSpark, which behaves correctly, so this warning can be ignored. It's recommended to add the line below to the log4j.properties in the conf folder of your Spark installation:
log4j.logger.org.apache.hadoop.hive.metastore.ObjectStore=ERROR
We will polish the Docker tutorial image soon. Thank you so much for trying.

Trying to decr ref count of Tcl_Obj allocated in another thread

I compiled the SourceForge Tcl executable; it passes all the supplied tests, and it segfaults the same way as my downloaded executable, 8.6.9. I'm running on Ubuntu 16.04 (for legacy reasons) on an AMD processor. (I have also run on Ubuntu 18.04 on my laptop; it has the same failure.)
So, next I recompiled with "--enable-symbols=mem" turned on to see if a memory leak is causing the segfault, and now it fails immediately with:
Trying to decr ref count of Tcl_Obj allocated in another thread
./runMeg.sh: line 3: 29972 Aborted (core dumped) ../source/main_megatron.
I'm not finding any answers on what to do with this message. Can someone advise on what it means I need to fix?
All my threads are of the form:
set graphDisplayThread [ thread::create {
    after [expr {int(1000) }]
    .....
    puts "...Initialized graphDisplayUpdate_02 ID $c update."
    thread::wait
}]
and:
thread::send $::graphDisplayThread {
    incr b
    graphDisplayUpdate .c
}
All shared variables are referenced AFTER the mutex is captured, and through TSV variables. There are 5 threads in the application, which has no C code in it at all. Around 2000 lines of code in total.
The app runs thousands of cycles and then segfaults at random points with the prior ActiveState 8.6.9 pre-compiled version. So now I'm trying to isolate the failure point with the compiled SourceForge 8.6.9 memory checks as a first step, but the issue above is the first one I encounter, and it occurs immediately after starting.
Update (5/16/19 8:28 EST): New detail to answer the comments below: this application has no C code in it, and the Tcl_Obj error ONLY appears in the two SourceForge-based 8.6.9 compiles I did myself, not in the ActiveState 8.6.9 pre-built download. The error in the SourceForge code occurs in both of the twin builds I made in tandem and tested, with and without "MEM_DEBUG". Both passed all install tests.
To summarize:
SourceForge 8.6.9 compile w/ MEM_DEBUG option: Tcl_Obj abort error
SourceForge 8.6.9 compile w/o MEM_DEBUG option: Tcl_Obj abort error
ActiveState 8.6.9 build: does not abort, random segfault
Why should I trust the SourceForge build I made myself more than the ActiveState pre-built executable, which does not exhibit the problem? And if we do trust the SourceForge compiled version, how do I isolate where the Tcl_Obj error is created by the offending Tcl code?
Update 5/16/19 13:34 EST: The same segfault appears with ActiveState 8.6.9 on Ubuntu 18.04. I haven't checked my SourceForge builds yet to see how they behave.
By methodically hacking out code blocks and watching whether the Tcl_Obj error disappeared, I found 2 errors:
I had declared my mutex and condition variables more than once. Now they are declared once and referenced from all other places.
A code remnant removing a TSV was found in a place I no longer wanted it.
This also fixed the segfault.
Thanks for all the help and hints, mrcalvin.

I'm getting dozens of "Failed loading" errors in cgi_error_log daily - what do they mean?

I'm seeing dozens of entries, daily, in my cgi_error_log file like this:
20151214T183710: www.example/index.php
Failed loading /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so: /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so: wrong ELF class: ELFCLASS32
Failed loading /usr/local/Zend/lib/ZendExtensionManager.so: /usr/local/Zend/lib/ZendExtensionManager.so: wrong ELF class: ELFCLASS32
I changed the URL in the error message because the site is not yet ready for the public and I don't want any links to the site, or even its text in posts like this, lying around.
As the site is still in testing, these messages come from my own page accesses, and all of the pages have loaded correctly.
What does this message mean? I have no clue other than that ionCube is some sort of PHP encoder.
Could this have something to do with the 32-bit version of ionCube being run on a 64-bit system? I've found references to that sort of problem.
Did some more digging - could this be happening because the version of PHP on the server is not 5.2? The error about ionCube shows 5.2 in the file name.
The version I'm using on the server is 5.5. They also have 5.3 and 5.6.
If they had 5.2 I'd give it a try, but it is not available.
More digging - I found this set in my php.ini file for all my sites:
[Zend]
zend_extension = /usr/local/lib/ioncube/ioncube_loader_lin_5.2.so
zend_extension_manager.optimizer=/usr/local/Zend/lib/Optimizer-3.2.0
zend_extension_manager.optimizer_ts=/usr/local/Zend/lib/Optimizer_TS-3.2.0
zend_optimizer.version=3.2.0
zend_extension=/usr/local/Zend/lib/ZendExtensionManager.so
zend_extension_ts=/usr/local/Zend/lib/ZendExtensionManager_TS.so
So - do I need to install newer versions of one or more of the items listed above?
If so, where do I start, and can I do it myself or does the hosting company staff have to do it?

Hudson | NullPointerException while loading jobs

We are using Hudson version 2.0.0.
A few days back, after restarting Hudson, I found that some of the jobs were missing.
I found that it's due to the following NPE.
Dec 9, 2011 11:39:34 AM hudson.model.Hudson$5 onTaskFailed
SEVERE: Failed Loading job ABC
java.lang.NullPointerException
at hudson.model.Project.createTransientActions(Project.java:206)
at hudson.model.AbstractProject.updateTransientActions(AbstractProject.java:627)
at hudson.model.AbstractProject.onLoad(AbstractProject.java:287)
at hudson.model.Project.onLoad(Project.java:87)
at hudson.model.Items.load(Items.java:109)
at hudson.model.Hudson$13.run(Hudson.java:2376)
at org.jvnet.hudson.reactor.TaskGraphBuilder$TaskImpl.run(TaskGraphBuilder.java:146)
at org.jvnet.hudson.reactor.Reactor.runTask(Reactor.java:259)
at hudson.model.Hudson$4.runTask(Hudson.java:707)
at org.jvnet.hudson.reactor.Reactor$2.run(Reactor.java:187)
at org.jvnet.hudson.reactor.Reactor$Node.run(Reactor.java:94)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Upgrading to 2.1.2 didn't help. I removed all the plugins and re-installed them, but in vain.
Has anybody else faced this?
Thanks,
Gayathri
OK, I found the problem.
To fix this issue, I had to add the line containing <publishers/> to the config.xml of each project that failed to load (towards the end of the XML file):
  </builders>
  <buildWrappers/>
  <publishers/>
</project>
Somehow without this, Hudson/Jenkins barfs and hides in the corner crying.
Jobs will go missing from the dashboard when there is a parse error in the config.xml.
You can inspect these errors in Hudson / Manage Hudson / System Log, looking for:
Caused by: com.thoughtworks.xstream.converters.ConversionException: Could not call hudson.scm.CVSSCM.readResolve() : null :
It means the automatic parameter migration failed and you have to edit the config.xml in the job's root directory by hand; the debug information in the log provides the offending line number. Commenting out the item usually works. Sometimes manual reconfiguring is needed.
Jobs will go missing from the history between restarts when the file build.xml is not generated in the build/<jobref> directory. If there is an exception in any plugin, it may cause the creation of this file to be skipped, and the build will not show in the history after a restart. Again, looking in the log via Hudson / Manage Hudson / System Log is the quickest way to find the offending plugin. Disabling it usually resolves the problem.
Please note that searching for "Missing Hudson Builds" will point to many different plugins/versions that all cause this problem. You should look in YOUR log to find what is causing the problem in YOUR installation, and disable only the plugin that is causing YOUR problem.
A warning about "failed to generate build.xml" would save a lot of hours. I should open a bug on that.