Unable to restart Hadoop daemons after quit from hive - mysql

I have installed Hadoop and Hive with a MySQL metastore. I started the Hadoop daemons and then the Hive shell. The problem I am facing is that when I quit the Hive shell, using the "quit" command, my Hadoop daemons also get stopped. After that, when I try to restart the Hadoop daemons using start-dfs.sh and start-yarn.sh, the NameNode, DataNode and ResourceManager do not start. What is the problem with my configuration? Can anyone help me out?

Oh! I got it!
The thing is: first I started the Hadoop daemons, then I checked them with jps, and all of them were listed.
After that, I ran a Hive query to check the available tables.
After the Hive query, I checked the Hadoop daemons with jps again, but this time it printed nothing on the terminal.
So whenever I run anything Hive-related, the daemons disappear from the jps output; I am no longer able to see them with jps.
Although the daemons no longer show up in jps, they are in fact still running in the background. I confirmed this by creating a directory in HDFS, which succeeded.
I also checked the NameNode and cluster web UIs, and they were showing all the information.
Okay! But my concern is: how do I stop those background Hadoop daemons without restarting my machine?
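One hedged way to locate and stop daemons that jps no longer reports (a sketch assuming a Linux shell; note that jps only lists JVMs owned by the current user, so daemons started under a different account can "disappear" from its output while still running):

```shell
# ps sees Java processes regardless of which user owns them, unlike jps.
# List anything that looks like a Hadoop daemon:
ps -ef | grep -Ei 'namenode|datanode|resourcemanager|nodemanager|secondarynamenode' \
  | grep -v grep || echo "no Hadoop daemons found"

# Prefer the bundled stop scripts, run as the same user that started the daemons:
#   stop-yarn.sh && stop-dfs.sh
# As a last resort, stop a daemon by the PID shown in the second column above:
#   kill <PID>
```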

Related

Sqoop Access Control of MySQL issue

In our network, we have very strict access control for the MySQL database. After writing a Sqoop command we discovered that Sqoop tries to connect to MySQL from one of the servers in the Hadoop cluster. Servers in the Hadoop cluster are not allowed to connect to the MySQL database.
Is there any way to tell Sqoop to connect to MySQL from the local machine where we execute the Sqoop command?
Sqoop normally runs as a MapReduce job. Therefore the process that copies the data from MySQL could run on any host in the cluster, and often there will be several processes reading data from MySQL concurrently.
However, I think you can run Sqoop in "local" mode, which stops it from running on MapReduce, by passing -jt local to the command, e.g.:
sqoop [tool-name] -jt local [tool-arguments]
However, if you were relying on running the export in parallel, this may be much slower.
Note that I have not tested this myself.
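As an untested sketch, a complete import run this way might look like the following; the host, database, credentials and table names are placeholders, not details from the question:

```shell
# Run the whole import on the local machine (-jt local), with a single
# mapper (-m 1) since there is no cluster parallelism to exploit anyway.
sqoop import -jt local \
  --connect jdbc:mysql://mysql.example.com:3306/salesdb \
  --username sqoop_user -P \
  --table customers \
  --target-dir /user/me/customers \
  -m 1
```

The machine you run this on still needs network access to the MySQL host, and the MySQL JDBC driver jar must be available to Sqoop.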

Hive Meta Store Failure on Cloudera QuickStart VM 5.12 with Cloudera Manager

Cloudera claims to have a Quick Start approach. I note that it is not working for me.
When I invoke spark-shell I get:
... WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version
I find this confusing; this is, after all, a Quick Start, and it looks odd.
So:
I see that MySQL is running with the metastore db; I can access this fine.
Do I need to start the Hive metastore service if I am using MySQL as the Hive metastore? I think so, but ...
Do I need HiveServer2 running locally now, or can I run without it?
The Hive tab in Cloudera Manager tells me I am using MySQL, and I see an auto-generated hive-site.xml.
In short, I am not sure how to proceed to fix this. One of the logs mentions a failure to create Derby, e.g.:
Caused by: java.sql.SQLException: Failed to create database 'metastore_db', see the next exception for details.
Before one of the numerous crashes I have had, I had an sbt assembly of Spark/Scala working fine against a remote MySQL db, so I am wondering whether that is the way to go and whether spark-shell and the local Cloudera VM are simply too unstable.
Seeking guidance amidst frustration; Databricks works like a dream.
Thanks in advance.
Installing 5.13 brought other problems, but these ones disappeared. I did, however, note the cause.
When a clean install is done and
sudo jps
is executed, all the Hadoop services are fine and working. I checked this.
What I then noticed is that the Cloudera Manager console (CMS) never shows up. The advice on the Internet is to execute the command that invokes CM Express.
Once you do that, the CMS shows, but many Hadoop services need to be (re-)started. At that point spark-shell goes haywire and the metastore is no longer accessible. All in all a sorry mess for which the solution is not so obvious.
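For what it's worth, a sketch of restarting the Hive services by hand on the QuickStart VM (these are the usual packaged service names; if Cloudera Manager has taken over the roles, restart them from the CM UI instead):

```shell
sudo service hive-metastore restart
sudo service hive-server2 restart
# The metastore listens on port 9083 by default; check that it came up:
sudo netstat -tlnp | grep 9083
```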
A manual install of Hadoop may well be the best option, but a definitive, integrated spec is needed. There are also issues with Spark 2.x not being supported, Kudu not being available, and parcels vs. packages.

Error 2003 (HY000): Can't connect to MySQL server on 'localhost' (10061)

Please don't close this question as a duplicate. I have been trying to configure MySQL for about three weeks now; someone should really help me.
I recently installed MySQL 5.1 on a Lenovo laptop running Windows 8, to work on my project.
The installation was fine, but when I tried to configure MySQL it worked until the last page.
There I get Error Nr. 2003.
I tried through the command prompt and through Services in the Control Panel, but the MySQL service is not starting at all. Why is it not starting? What could be blocking it from starting?
First you need to start the MySQL service; that is the cause of the error above.
If you cannot start the MySQL service, you need to install it first.
Steps to install the MySQL service:
Step 1: Open a command prompt and go to the MySQL install location (for example C:\Program Files\MySQL\MySQL Server 5.0\bin\).
Step 2: Run mysqld --install (mysqld is the server binary; mysql is only the client).
Step 3: Start the MySQL service using the command NET START MYSQL.
Then connect to MySQL using your username and password.
Assuming the service is already running and you still get this error when connecting to localhost with the mysql client, make sure you have an entry for "localhost" in your hosts file. This was the case I ran into.
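For reference, on Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts, and the entry in question is the standard loopback line:

```
127.0.0.1    localhost
```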
I resolved this situation as follows. After adding the MySQL path to the environment, I kept invoking the program and then checking the Event Viewer's Application log for MySQL errors, which referenced old commands in the ini file. After removing those, what was still hanging me up was that the installer was looking for errmsg.sys in a folder that didn't exist, \bin\share. Those folders DO exist, but at the same level, not nested. So I created a share folder inside bin and copied errmsg.sys from share into the new nested share, and it worked.
Now that it's running, I intend to redo a proper configuration using Workbench, just to get my ducks in a row.
HTH
Go to Run and type services.msc. Check whether the MySQL service is running; if not, start it manually. Once it has started, run mysqlshow to test the service.

How to start multiple MySQL instances on boot

I currently have three MySQL instances on my 64-bit CentOS Linux server. When the server boots, it only starts the first MySQL instance. At that point I have to run "mysqld_multi stop" and then "mysqld_multi start" to ensure all three are started. Is there any way Linux can start all three at boot so I don't have to do this every time I reboot the server?
You can use the crontab @reboot option:
http://unixhelp.ed.ac.uk/CGI/man-cgi?crontab+5
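A sketch of the entry (added with crontab -e as root; the /usr/bin path to mysqld_multi is an assumption, so check it with which mysqld_multi):

```shell
# @reboot runs the command once, at system startup
@reboot /usr/bin/mysqld_multi start
```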
After researching this a little, it looks like you don't have many options, and they aren't as elegant as I would like.
Creating a separate mysql server instance
Running Multiple MySQL 5.6 Instances on one server in CentOS 6/RHEL 6/Fedora
This link explains pretty well how to create another MySQL server instance that starts at boot time. You could definitely do things better than he describes, but this is a start. Essentially he copies the /etc/init.d/mysqld startup script and the /etc/my.cnf configuration file and has the new startup script reference the new configuration file.
Creating a unified startup script
You could also run chkconfig mysqld off so that MySQL's built-in startup script is not used, and create your own script that runs your mysqld_multi commands at boot time.
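As a rough sketch of that second approach (untested; it assumes mysqld_multi is on PATH and that your instances are defined in [mysqldN] groups in /etc/my.cnf):

```shell
# Create a minimal init script that delegates to mysqld_multi,
# then register it to run at boot (CentOS/RHEL, SysV-style).
sudo tee /etc/init.d/mysqld_multi_all <<'EOF'
#!/bin/sh
# chkconfig: 345 64 36
# description: Start/stop all mysqld_multi-managed MySQL instances
case "$1" in
  start) mysqld_multi start ;;
  stop)  mysqld_multi stop ;;
  *) echo "Usage: $0 {start|stop}"; exit 1 ;;
esac
EOF
sudo chmod +x /etc/init.d/mysqld_multi_all
sudo chkconfig --add mysqld_multi_all
sudo chkconfig mysqld_multi_all on
```

The "# chkconfig: 345 64 36" header line is what chkconfig --add reads to set the runlevels and start/stop priorities.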
Let me know if you are looking for more information.

What is forcing my system to look for MySQL in "/opt/local/var/run/mysqld/mysqld.sock"

I have been trying for a while now to get a Ruby on Rails app going on my machine for a project I need to work on for work. The issue is that my system must be setting the default location of MySQL somewhere, because every time I start the Rails web server (WEBrick) and open localhost:3000, I get the following error:
"Can't connect to local MySQL server through socket '/opt/local/var/run/mysqld/mysqld.sock' (2)"
I deleted the installed version of MySQL and removed all its folders, then reinstalled MySQL using Homebrew. I now have a running MySQL instance. The output of "which mysql" prints "/usr/local/bin/mysql", which is actually a symlink to "/usr/local/Cellar/mysql/5.5.14/bin/mysql".
Does anyone know what might be forcing my Rails app to look for MySQL in "/opt/local/var/run/mysqld/mysqld.sock"?
NOTE: I deleted my current my.cnf because it was messing up the Homebrew version of MySQL I currently have running, so I know it has nothing to do with that.
/opt/local is the default location for MacPorts-installed packages. Chances are you installed Rails and/or MySQL via MacPorts. Try:
sudo /opt/local/bin/port installed
to see what is installed.
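If MacPorts turns out to be the culprit, one hedged fix is to tell Rails explicitly which socket to use rather than relying on the client's compiled-in default (the /tmp/mysql.sock path below is a common Homebrew default, not taken from the question, so verify it first):

```shell
# Ask the installed MySQL client which socket path it was built with:
mysql_config --socket
# Put that path in config/database.yml, e.g.:
#   development:
#     adapter: mysql2
#     socket: /tmp/mysql.sock   # use the path reported above
```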