ShardingSphere-Proxy DistSQL, Invalid data source: Communications link failure - mysql

I am trying to connect to existing tables in the MySQL databases db0 and db1 by registering them as storage units with DistSQL, on a ShardingSphere-Proxy instance running in a container, using the following command:
REGISTER STORAGE UNIT ds_0 (
HOST="127.0.0.1",
PORT=3306,
DB="db0",
USER="root",
PASSWORD="blah"
),ds_1 (
HOST="127.0.0.1",
PORT=3306,
DB="db1",
USER="root",
PASSWORD="blah"
);
Connecting to the MySQL instance with the details above works from a terminal, but through the ShardingSphere-Proxy running in Docker it fails with the error below:
ERROR 19000 (44000): Can not process invalid storage units, error message is: [Invalid data source ds_0, error message is: Communications link failure, Invalid data source ds_1, error message is: Communications link failure]
Steps to reproduce
On my local DB:
mysql -h 127.0.0.1 --user=root --password=blah
mysql>create database db0;
mysql>create database db1;
Create and connect to ShardingSphere-Proxy:
docker run -d -v /Users/pavankrn/Documents/tech/sspheredock/pgsphere/apache-shardingsphere-5.3.1-shardingsphere-proxy-bin/conf:/opt/shardingsphere-proxy/conf -v /Users/pavankrn/Documents/tech/sspheredock/pgsphere/apache-shardingsphere-5.3.1-shardingsphere-proxy-bin/ext-lib:/opt/shardingsphere-proxy/ext-lib -e PORT=3308 -p13308:3308 apache/shardingsphere-proxy:latest
mysql --host=127.0.0.1 --user=root -p --port=13308 sharding_db
On ShardingSphere-Proxy's MySQL terminal:
use sharding_db;
REGISTER STORAGE UNIT ds_0 (
HOST="127.0.0.1",
PORT=3306,
DB="db0",
USER="root",
PASSWORD="blah"
),ds_1 (
HOST="127.0.0.1",
PORT=3306,
DB="db1",
USER="root",
PASSWORD="blah"
);

Since I'm running Docker on a Mac, I replaced HOST="127.0.0.1" with HOST="host.docker.internal" in the REGISTER STORAGE UNIT DistSQL command.
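The full corrected command, sketched under the assumption that the proxy container can reach the host through Docker's host.docker.internal alias, would look like this when run through the mysql client against the proxy:

```shell
# From inside the proxy container, 127.0.0.1 is the container itself,
# so register the storage units against the Docker host instead.
mysql --host=127.0.0.1 --user=root -p --port=13308 sharding_db -e '
REGISTER STORAGE UNIT ds_0 (
    HOST="host.docker.internal",
    PORT=3306,
    DB="db0",
    USER="root",
    PASSWORD="blah"
), ds_1 (
    HOST="host.docker.internal",
    PORT=3306,
    DB="db1",
    USER="root",
    PASSWORD="blah"
);'
```

This requires a running proxy and MySQL instance, so it is a sketch rather than a self-contained script.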
For other workarounds refer to the thread here.

Related

IM004 error when using unixodbc to connect to database (macos)

On my Mac I'm trying to connect to databases with unixodbc (v. 2.3.7 from Homebrew).
odbcinst -j shows:
DRIVERS............: /usr/local/etc/odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini
FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources
USER DATA SOURCES..: /Users/homer/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
Partial contents of ~/.odbc.ini and /usr/local/etc/odbc.ini:
[mysql-local]
description = local server
Driver = MySQLDriver
SERVER = localhost
USER = testuser
PASSWORD = testpass
DATABASE = testdb
Partial contents of /usr/local/etc/odbcinst.ini
[MySQLDriver]
Driver = /usr/local/lib/libodbc.dylib
Setup = /usr/local/lib/libodbc.dylib
FileUsage = 1
The Driver/Setup entry links to a file that in turn links to the actual driver: /usr/local/Cellar/unixodbc/2.3.7/lib/libodbc.2.dylib. I have set the permissions on this file to 755.
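One thing worth double-checking here, offered only as a hypothesis: /usr/local/lib/libodbc.dylib is unixODBC's driver manager library, not a MySQL driver, and a Driver= line in odbcinst.ini that points back at the driver manager itself is a classic cause of the IM004 SQLAllocHandle failure. Assuming MySQL Connector/ODBC is installed (e.g. via Homebrew, an assumption), the actual driver library could be located with something like:

```shell
# Look for a real MySQL ODBC driver library to point odbcinst.ini's
# Driver= line at (libmyodbc*.dylib/.so is the usual Connector/ODBC name).
find /usr/local -name 'libmyodbc*' 2>/dev/null
```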
Then I try to connect:
isql mysql-local testuser testpass -v
The result is:
[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
[ISQL]ERROR: Could not SQLConnect
For some reason I have osql, which the Web tells me is used to connect to SQL Server. (Perhaps it comes with Homebrew's unixodbc?) I can use it to verify that the .ini files are being parsed correctly. Thus
osql -I /usr/local/etc -S mysql-local -U testuser -P testpass
results in:
"" is NOT a directory, overridden by
"/usr/local/etc".
checking odbc.ini files
reading /Users/homer/.odbc.ini
[mysql-local] found in /Users/homer/.odbc.ini
found this section:
[mysql-local]
description = local server
Driver = MySQLDriver
Server = 127.0.0.1
USER = testuser
PASSWORD = testpass
DATABASE = testdb
looking for driver for DSN [mysql-local] in /Users/homer/.odbc.ini
found driver line: " Driver = MySQLDriver"
driver "MySQLDriver" found for [mysql-local] in .odbc.ini
found driver named "MySQLDriver"
"MySQLDriver" is not an executable file
looking for entry named [MySQLDriver] in /usr/local/etc/odbcinst.ini
found driver line: " Driver = /usr/local/lib/libodbc.dylib"
found driver /usr/local/lib/libodbc.dylib for [MySQLDriver] in odbcinst.ini
/usr/local/lib/libodbc.dylib is an executable file
"Server" found, not using freetds.conf
Server is "127.0.0.1"
looking up hostname for ip address 127.0.0.1
Configuration looks OK. Connection details:
DSN: mysql-local
odbc.ini: /Users/homer/.odbc.ini
Driver: /usr/local/lib/libodbc.dylib
Server hostname: localhost
Address: 127.0.0.1
Attempting connection as testuser ...
+ isql mysql-local testuser testpass -v
[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
[ISQL]ERROR: Could not SQLConnect
sed: /tmp/osql.dump.44362: No such file or directory
Everything I try always comes down to the same error:
[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
For good measure, here are the logs from isql mysql-local testuser testpass:
[ODBC][54953][1538867223.117217][__handles.c][460]
Exit:[SQL_SUCCESS]
Environment = 0x7f9829010400
[ODBC][54953][1538867223.117416][SQLAllocHandle.c][377]
Entry:
Handle Type = 2
Input Handle = 0x7f9829010400
[ODBC][54953][1538867223.117521][SQLAllocHandle.c][493]
Exit:[SQL_SUCCESS]
Output Handle = 0x7f982903e800
[ODBC][54953][1538867223.117601][SQLConnect.c][3721]
Entry:
Connection = 0x7f982903e800
Server Name = [mysql-local][length = 11 (SQL_NTS)]
User Name = [testuser][length = 8 (SQL_NTS)]
Authentication = [********][length = 8 (SQL_NTS)]
UNICODE Using encoding ASCII 'UTF-8' and UNICODE 'UCS-2-INTERNAL'
[ODBC][54953][1538867223.126854][SQLConnect.c][1380]Error: IM004
[ODBC][54953][1538867223.127046][SQLFreeHandle.c][290]
Entry:
Handle Type = 2
Input Handle = 0x7f982903e800
[ODBC][54953][1538867223.127191][SQLFreeHandle.c][339]
Exit:[SQL_SUCCESS]
[ODBC][54953][1538867223.127276][SQLFreeHandle.c][220]
Entry:
Handle Type = 1
Input Handle = 0x7f9829010400
Notes:
I have seen the same error discussed elsewhere, where ODBC-mediated connections to other databases (e.g., SQL Server) are desired. The solutions proposed in those cases do not appear to apply to MySQL connections.
I am hoping to make these connections with unixodbc, as this is the tool said to be required for maximum SQL integration in the RStudio IDE.
On Linux I find that unixodbc works fine.
Much thanks in advance to anyone who can point me in the right direction.

2002 Code Error When Trying Import / Export MariaDB Databases in Bitnami's WordPress Multi-tier Stack

I'm trying to do database management via SSH for Bitnami's WordPress Multi-tier Stack. Specifically, I want to export and do an initial import (though I will probably just create a new database).
When I run the following commands, I get the following errors:
Command: mysqldump -u root -p bitnami_wordpress > bitnami_wordpress.sql
Output: mysqldump: Got error: 2002: "Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)" when trying to connect
This also creates a 0 B SQL file in my home directory.
Command: mysqladmin -u root -p status (I enter my password)
Output:
mysqladmin: connect to server at 'localhost' failed
error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)'
Check that mysqld is running and that the socket:
'/opt/bitnami/mariadb/tmp/mysql.sock' exists!
Command: cd /opt/bitnami/mariadb/ && ls
Output:
bin CREDITS include licenses README-wsrep
COPYING data INSTALL-BINARY plugin sbin
COPYING.thirdparty EXCEPTIONS-CLIENT lib README.md share
Command: sudo find . -name mysql
Output:
./root/.nami/components/com.bitnami.mysql-client/lib/databases/mysql
./root/.nami/components/com.bitnami.mysql-client/lib/handlers/databases/mysql
./root/.nami/components/com.bitnami.libphp/lib/databases/mysql
./root/.nami/components/com.bitnami.libphp/lib/handlers/databases/mysql
./root/.nami/components/com.bitnami.wordpress/lib/databases/mysql
./root/.nami/components/com.bitnami.wordpress/lib/handlers/databases/mysql
./root/.nami/components/com.bitnami.php/lib/databases/mysql
./root/.nami/components/com.bitnami.php/lib/handlers/databases/mysql
./root/.nami/components/com.bitnami.apache/lib/databases/mysql
./root/.nami/components/com.bitnami.apache/lib/handlers/databases/mysql
./root/.nami/components/com.bitnami.mariadb/lib/databases/mysql
./root/.nami/components/com.bitnami.mariadb/lib/handlers/databases/mysql
./opt/bitnami/mysql
./opt/bitnami/mysql/bin/mysql
./opt/bitnami/mariadb/include/mysql
./opt/bitnami/mariadb/include/mysql/server/mysql
./opt/bitnami/mariadb/include/mysql/mysql
./opt/bitnami/mariadb/bin/mysql
./usr/share/bash-completion/completions/mysql
Commands:
find /opt/bitnami/mysql/ -name "*.cnf"
Output: Nothing
find /opt/bitnami/mariadb/ -name "my.cnf"
Output:
/opt/bitnami/mariadb/share/my-medium.cnf
/opt/bitnami/mariadb/share/my-small.cnf
/opt/bitnami/mariadb/share/my-large.cnf
/opt/bitnami/mariadb/share/my-innodb-heavy-4G.cnf
/opt/bitnami/mariadb/share/my-huge.cnf
/opt/bitnami/mariadb/share/wsrep.cnf
Command: nano /opt/bitnami/mariadb/share/my-medium.cnf (what's the difference between my-medium, my-small, and my-large?)
Output:
# The following options will be passed to all MariaDB clients
[client]
#password = your_password
port = 3306
socket = /opt/bitnami/mariadb/tmp/mysql.sock
NOTE: /opt/bitnami/mariadb/tmp/mysql.sock does not exist.
I've poked around a bit and came across MariaDB's documentation about 2002 errors, but I don't seem to have the same .cnf file (nor do I know where to look).
...from here I have no idea where to go; I've only done limited database management via a shell.
Concise questions:
How do I export my database without getting the 2002 error?
How do I overwrite / update my database?
Any help would be much appreciated and thanks in advance!
The folks at Bitnami came through. I was connecting to the wrong host.
Find the host:
sudo cat /opt/bitnami/wordpress/wp-config.php | grep 'DB_HOST'
To export:
mysqldump -h provisioner-peer -u root -p bitnami_wordpress > bitnami_wordpress.sql
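To answer the second question (overwriting/updating the database), the import is presumably the mirror image of the export, again pointing at the DB_HOST value found in wp-config.php rather than at localhost; a sketch:

```shell
# Import the dump back into the remote MariaDB host (here the
# DB_HOST value found in wp-config.php, e.g. provisioner-peer).
mysql -h provisioner-peer -u root -p bitnami_wordpress < bitnami_wordpress.sql
```

This overwrites matching tables with the contents of the dump, so it is worth taking a fresh export first.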

How to truncate (delete all rows from) a table on a remote SQL Server via a local Linux machine

In my application (on Linux) I receive a .DAT file daily which I'm supposed to load into a remote SQL Server table. As per the requirement, I should truncate (or delete all rows from) this table before I load the .DAT file into it.
I have tried below command
mysql -h <servername>,<port-number> -u <username> -p <password> -e "DELETE from <DB_name>.<Schema_name>.<Table_name>"
but it fails with error
-ksh: mysql: not found [No such file or directory]
I reached out to a Linux expert on my team; he did a find for "mysql" and from the result concluded that MySQL is not installed on the local Linux machine.
Further I tried BCP
bcp "Delete from [DB_name].[schema_name].[table_name]" queryout <output_file_full_path> -S <servername>,<port-number> -m 0 -e <error_file_full_path> -T -c -t '|'
which failed with below message
Starting copy... SQLState = S1000, NativeError = 0 Error =
[Microsoft][ODBC Driver 11 for SQL Server]BCP host-files must contain
at least one column SQLState = S1000, NativeError = 0 Error =
[Microsoft][ODBC Driver 11 for SQL Server]Unable to resolve column
level collations
I understand that BCP's queryout requires some result to be stored in a file, but DELETE does not return any rows, hence the BCP failure.
So I want to know whether there is an alternative way to delete data from a remote SQL Server table via a local Linux machine.
If you are trying to connect to Microsoft SQL Server, I recommend you use sqlcmd/bcp + ODBC Driver.
To do so:
Download the ODBC Driver from here: https://www.microsoft.com/en-us/download/details.aspx?id=53339
Download the sqlcmd/bcp tools from here: https://www.microsoft.com/en-us/download/details.aspx?id=53591
Restart your Windows machine and connect to SQL Server using the following:
sqlcmd -S myserver -d Adventure_Works -U myuser -P myP#ssword -I
If you would like to use bcp, check out some examples here:
https://msdn.microsoft.com/en-us/library/ms162802.aspx
If you are on Linux, follow this: https://blogs.msdn.microsoft.com/sqlnativeclient/2017/02/04/odbc-driver-13-1-for-linux-released/
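Putting the pieces together for the original task, a truncate from the Linux machine could look like the following sketch, with the placeholder server, credentials, and table names standing in for the real ones:

```shell
# Run a single statement against the remote SQL Server and exit.
# TRUNCATE TABLE is typically faster than DELETE for emptying a table,
# though it requires ALTER permission on the table.
sqlcmd -S "<servername>,<port-number>" -U "<username>" -P "<password>" \
  -d "<DB_name>" -Q "TRUNCATE TABLE <Schema_name>.<Table_name>"
```

Unlike bcp queryout, sqlcmd -Q does not expect a result set, so a statement that returns no rows is fine.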

Using Sqoop to import data from MySQL into Hadoop fails

I tried to import data through Sqoop using the following command.
sqoop import --connect jdbc:mysql://localhost/test_sqoop --username root --table test
but I got the connection refuse error.
I then found that I can't connect to MySQL at all, getting this error:
Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock'
I also found that if I don't execute start-dfs.sh, mysql.sock exists at /var/lib/mysql/mysql.sock. After I execute start-dfs.sh, mysql.sock is gone and I can't connect to MySQL.
Below is the /etc/my.cnf configuration:
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
The JDBC string should be jdbc:mysql://localhost:3306/test_sqoop. Best practice is to use the server name instead of localhost or 127.0.0.1; you can get the server name from the command hostname -f. So the JDBC string should be jdbc:mysql://servername:3306/test_sqoop, with servername replaced by the output of hostname -f.
You also need -P, --password, or --connection-param-file to pass the password to the sqoop command; Sqoop doesn't read it from the .my.cnf file. See usage here.
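Combining both suggestions, the corrected invocation might look like this sketch (assuming the MySQL server listens on the default port 3306 and that -P should prompt for the password interactively):

```shell
# Use the fully qualified host name from `hostname -f` in the JDBC URL
# and prompt for the password with -P instead of relying on .my.cnf.
sqoop import \
  --connect "jdbc:mysql://$(hostname -f):3306/test_sqoop" \
  --username root -P \
  --table test
```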

Sqoop imports into secure hbase fails

I am using hadoop-2.6.0 with Kerberos security. I have installed HBase with Kerberos security and am able to create a table and scan it.
I can also run a Sqoop job to import data from MySQL into HDFS, but the Sqoop job fails when trying to import from MySQL into HBase.
Sqoop Command
sqoop import --hbase-create-table --hbase-table newtable --column-family ck --hbase-row-key id --connect jdbc:mysql://localhost/sample --username root --password root --table newtable -m 1
Exception
15/01/21 16:30:24 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x734c0647, quorum=localhost:2181, baseZNode=/hbase
15/01/21 16:30:24 INFO zookeeper.ClientCnxn: Opening socket connection to server 127.0.0.1/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknownerror)
15/01/21 16:30:24 INFO zookeeper.ClientCnxn: Socket connection established to 127.0.0.1/127.0.0.1:2181, initiating session
15/01/21 16:30:24 INFO zookeeper.ClientCnxn: Session establishment complete on server 127.0.0.1/127.0.0.1:2181, sessionid = 0x14b0ac124600016, negotiated timeout = 40000
15/01/21 16:30:25 ERROR tool.ImportTool: Error during import: Can't get authentication token
Could you please try the following:
In the connection string add the port number as :
jdbc:mysql://localhost:3306/sample
Remove --hbase-create-table; create the required table in HBase first with the column family.
Mention --split-by id.
Finally, mention a specific --fetch-size, as the Sqoop client for MySQL has an internal bug that attempts to set the default MIN fetch size, which runs into an exception.
Could you attempt the import again and let us know ?
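For reference, a sketch of the command with those suggestions applied. The --fetch-size value here is an arbitrary placeholder, the HBase table and column family are assumed to already exist, and --table is kept because Sqoop still needs the MySQL source table:

```shell
# Port added to the JDBC URL; --hbase-create-table dropped (table
# pre-created in HBase); --split-by and an explicit --fetch-size supplied.
sqoop import \
  --connect jdbc:mysql://localhost:3306/sample \
  --username root --password root \
  --table newtable \
  --hbase-table newtable --column-family ck --hbase-row-key id \
  --split-by id --fetch-size 1000 \
  -m 1
```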