Export data from FOUNDRY to external source - palantir-foundry

My aim is to export a dataset saved as a CSV in Foundry to an external SFTP server.
I have created a source following the documentation, but I keep getting the error:
Could not load file listing
The agent failed to execute this explorer command: com.palantir.magritte.plugin.sftp.SftpClientException:Borrowing connection for ls failed:
{exception message=UnknownHostKey: XXXXXXXXXXXXXXXX. RSA key fingerprint is XXXXXXXXXXXXXXXXXXXXXXX, path=/}
My YAML looks like:
type: magritte-sftp
hostname: XXXX.XXXX.XXX.XXX.XXX
port: 22
username: XXXXXX
password: '{{PASSWORD}}'
rootDirectory: /
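The UnknownHostKey message means the agent does not yet trust the SFTP server's host key. This isn't covered in the question, but for reference, ssh-keyscan (a standard OpenSSH tool) can retrieve the server's public RSA key so you can compare its fingerprint against the one in the error before registering it with whatever known-hosts mechanism your agent uses; the hostname below is the placeholder from the YAML:
# print the server's RSA host key in known_hosts format
ssh-keyscan -t rsa -p 22 XXXX.XXXX.XXX.XXX.XXX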

Related

Django Google App Engine Server Error 500

I have deployed my Django app to Google Cloud. It worked fine when I hosted it locally and throughout the steps outlined in this post.
It raises a Server Error (500) when I try to view the live link.
When I enable Debug in settings.py, this is the full traceback (Torque is the name of my project, and showroom is my app).
The traceback refers to a views attribute (num_manufacturers) that I never had a problem with when hosting locally.
OperationalError at /showroom/
(2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
Request Method: GET
Request URL: https://torque-256805.appspot.com/showroom/
Django Version: 2.2.5
Exception Type: OperationalError
Exception Value:
(2002, "Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)")
Exception Location: /env/lib/python3.7/site-packages/MySQLdb/connections.py in __init__, line 166
Python Executable: /env/bin/python3.7
Python Version: 3.7.4
Python Path:
['/srv',
'/env/bin',
'/opt/python3.7/lib/python37.zip',
'/opt/python3.7/lib/python3.7',
'/opt/python3.7/lib/python3.7/lib-dynload',
'/env/lib/python3.7/site-packages']
Server time: Thu, 24 Oct 2019 09:45:29 +0300
...
/env/lib/python3.7/site-packages/MySQLdb/__init__.py in Connect
return Connection(*args, **kwargs) …
/env/lib/python3.7/site-packages/MySQLdb/connections.py in __init__
super(Connection, self).__init__(*args, **kwargs2) …
...
num_manufacturers = Manufacturer.objects.all().count() …
I'm new to Google Cloud, so I don't know how to start debugging this.
Here are some possible issues:
The server instance I created on cloud.google.com uses europe-west3 as its region, but when I was deploying I thought a completely new server was being created and chose europe-west6 as the better option (closer proximity, better reliability, etc.).
I changed my project's settings.py for better security, following Django's check --deploy command.
Otherwise, I can't think of anything else. Can anyone help?
Check out this example settings.py from the Django on App Engine sample:
import os

if os.getenv('GAE_APPLICATION', None):
    # Running on production App Engine, so connect to Google Cloud SQL using
    # the unix socket at /cloudsql/<your-cloudsql-connection-string>
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'HOST': '/cloudsql/[YOUR-CONNECTION-NAME]',
            'USER': '[YOUR-USERNAME]',
            'PASSWORD': '[YOUR-PASSWORD]',
            'NAME': '[YOUR-DATABASE]',
        }
    }
else:
    # Running locally, so connect either to a local MySQL instance or to
    # Cloud SQL via the proxy. To start the proxy from the command line:
    #
    #   $ cloud_sql_proxy -instances=[INSTANCE_CONNECTION_NAME]=tcp:3306
    #
    # See https://cloud.google.com/sql/docs/mysql-connect-proxy
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.mysql',
            'HOST': '127.0.0.1',
            'PORT': '3306',
            'NAME': '[YOUR-DATABASE]',
            'USER': '[YOUR-USERNAME]',
            'PASSWORD': '[YOUR-PASSWORD]',
        }
    }
Last but not least, if you're running your app locally, make sure you have ALLOWED_HOSTS = ['localhost'] if your database host is HOST='localhost'; otherwise you can just use 'HOST': '127.0.0.1' as shown in the code sample above.
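One more thing worth checking, though it's outside the original answer: the socket path in the traceback (/var/run/mysqld/mysqld.sock) is MySQL's default local socket, which suggests the production branch above isn't being taken or HOST isn't set to the /cloudsql/... path. Also, on the App Engine flexible environment the Cloud SQL instance must additionally be declared in app.yaml; a sketch with a placeholder connection name (on the standard environment the /cloudsql socket should be available without this, provided the Cloud SQL Admin API is enabled):
beta_settings:
  cloud_sql_instances: [YOUR-CONNECTION-NAME]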

How do I load a CSV file into a Db2 Event Store remotely using a Db2 client?

I see in the Db2 Event Store documentation (https://www.ibm.com/support/knowledgecenter/en/SSGNPV_2.0.0/local/loadcsv.html) that a CSV file can be loaded into the system when the file is already on the system. I also found that you can connect to a Db2 Event Store database using the standard Db2 client, as in How do I connect to an IBM Db2 Event Store instance from a remote Db2 instance?. What I am trying to do now is load a CSV file using that connection. Is it possible to load it remotely?
This should be doable with an extra keyword, REMOTESOURCE YES, e.g.:
db2 "INSERT INTO import_test SELECT * FROM EXTERNAL '/home/db2v111/data.del' USING (DELIMITER ',' REMOTESOURCE YES)"
See an example here:
IMPORT script on IBM DB2 Cloud using RUN SQL Interface
The other answers covered the connection and loading using the traditional db2 client. I have to add some details that are required specifically for Db2 Event Store.
Assume we are using a Db2 client container, which can be found on Docker Hub under the tag ibmcom/db2.
Basically we have to go through following steps:
1/ Establish a remote connection from the db2 client container to the remote Db2 Event Store database.
2/ Use db2 CLP commands to load the CSV file with db2's external table load feature, which loads the CSV file from the db2 client container into the remote Event Store database.
Step 1:
Run the following commands, or run them as a script. Note that the commands need to be run as the db2 user in the db2 client container; the db2 user name is typically db2inst1.
#!/bin/bash -x
NODE_NAME=eventstore
. /database/config/db2inst1/sqllib/db2profile
### create new keydb used for authentication
# remove old keydb files
rm -rf $HOME/mydbclient.kdb $HOME/mydbclient.sth $HOME/mydbclient.crl $HOME/mydbclient.rdb
$HOME/sqllib/gskit/bin/gsk8capicmd_64 -keydb -create -db $HOME/mydbclient.kdb -pw ${SSL_KEY_DATABASE_PASSWORD} -stash
KEYDB_PATH=/var/lib/eventstore/clientkeystore
# get the target eventstore cluster's SSL public certificate using REST api
bearerToken=`curl --silent -k -X GET "https://$IP/v1/preauth/validateAuth" -u $EVENT_USER:$EVENT_PASSWORD | python -c "import sys, json; print (json.load(sys.stdin)['accessToken']) "`
curl --silent -k -X GET -H "authorization: Bearer $bearerToken" "https://${IP}:443/com/ibm/event/api/v1/oltp/certificate" -o $HOME/server-certificate.cert
# insert eventstore cluster's SSL public cert into new gskit keydb
$HOME/sqllib/gskit/bin/gsk8capicmd_64 -cert -add -db $HOME/mydbclient.kdb -pw ${SSL_KEY_DATABASE_PASSWORD} -label server -file $HOME/server-certificate.cert -format ascii -fips
# let db2 client use the new keydb
$HOME/sqllib/bin/db2 update dbm cfg using SSL_CLNT_KEYDB $HOME/mydbclient.kdb SSL_CLNT_STASH $HOME/mydbclient.sth
# configure connection from db2Client to remote EventStore cluster.
$HOME/sqllib/bin/db2 UNCATALOG NODE ${NODE_NAME}
$HOME/sqllib/bin/db2 CATALOG TCPIP NODE ${NODE_NAME} REMOTE ${IP} SERVER ${DB2_CLIENT_PORT_ON_EVENTSTORE_SERVER} SECURITY SSL
$HOME/sqllib/bin/db2 UNCATALOG DATABASE ${EVENTSTORE_DATABASE}
$HOME/sqllib/bin/db2 CATALOG DATABASE ${EVENTSTORE_DATABASE} AT NODE ${NODE_NAME} AUTHENTICATION GSSPLUGIN
$HOME/sqllib/bin/db2 terminate
# Ensure to use correct database name, eventstore user credential in remote
# eventstore cluster
$HOME/sqllib/bin/db2 CONNECT TO ${EVENTSTORE_DATABASE} USER ${EVENT_USER} USING ${EVENT_PASSWORD}
Some important variables:
EVENTSTORE_DATABASE: database name in the remote Event Store cluster
EVENT_USER: Event Store user name on the remote cluster
EVENT_PASSWORD: Event Store user password on the remote cluster
IP: public IP of the remote Event Store cluster
DB2_CLIENT_PORT_ON_EVENTSTORE_SERVER: JDBC port of the remote Event Store cluster, typically 18730
SSL_KEY_DATABASE_PASSWORD: password of the gskit keydb file in the db2 client container; you can set it as you like
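A minimal sketch of setting these before running the script; all values below are placeholders, not from the original answer:
export IP=192.0.2.10
export EVENTSTORE_DATABASE=EVENTDB
export EVENT_USER=admin
export EVENT_PASSWORD=passw0rd
export DB2_CLIENT_PORT_ON_EVENTSTORE_SERVER=18730
export SSL_KEY_DATABASE_PASSWORD=changeit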
After running the commands above, you should have established the connection between the local db2 client container and the remote Event Store cluster.
2/ Load the CSV file using db2's external table feature
After the connection between the db2 client and the remote Event Store cluster is established, we can issue db2 CLP commands just as we would against any local db2 database.
For example:
-- establish remote connection to the eventstore database
-- (replace the variables in ${} with the same values used above)
CONNECT TO ${EVENTSTORE_DATABASE} USER ${EVENT_USER} USING ${EVENT_PASSWORD}
SET CURRENT ISOLATION UR
-- create table in the remote eventstore database
CREATE TABLE db2cli_csvload (DEVICEID INTEGER NOT NULL, SENSORID INTEGER NOT NULL, TS BIGINT NOT NULL, AMBIENT_TEMP DOUBLE NOT NULL, POWER DOUBLE NOT NULL, TEMPERATURE DOUBLE NOT NULL, CONSTRAINT "TEST1INDEX" PRIMARY KEY(DEVICEID, SENSORID, TS) INCLUDE (TEMPERATURE)) DISTRIBUTE BY HASH (DEVICEID, SENSORID) ORGANIZE BY COLUMN STORED AS PARQUET
-- external table load into the remote eventstore database
INSERT INTO db2cli_csvload SELECT * FROM EXTERNAL '${DB_HOME_IN_CONTAINER}/${CSV_FILE}' LIKE db2cli_csvload USING (delimiter ',' MAXERRORS 10 SOCKETBUFSIZE 30000 REMOTESOURCE 'YES' LOGDIR '/database/logs' )
CONNECT RESET
TERMINATE
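If you save those statements to a file, you can run them non-interactively with the db2 CLP; a sketch, assuming the file is named load_csv.sql and each statement is terminated with a semicolon:
$HOME/sqllib/bin/db2 -tvf load_csv.sql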
For more information, you can check the Db2 Event Store public GitHub repo:
https://github.com/IBMProjectEventStore/db2eventstore-IoT-Analytics/tree/master/db2client_remote/utils

IM004 error when using unixodbc to connect to database (macos)

On my Mac I'm trying to connect to databases with unixodbc (v. 2.3.7 from Homebrew).
odbcinst -j shows:
DRIVERS............: /usr/local/etc/odbcinst.ini
SYSTEM DATA SOURCES: /usr/local/etc/odbc.ini
FILE DATA SOURCES..: /usr/local/etc/ODBCDataSources
USER DATA SOURCES..: /Users/homer/.odbc.ini
SQLULEN Size.......: 8
SQLLEN Size........: 8
SQLSETPOSIROW Size.: 8
Partial contents of ~/.odbc.ini and /usr/local/etc/odbc.ini:
[mysql-local]
description = local server
Driver = MySQLDriver
SERVER = localhost
USER = testuser
PASSWORD = testpass
DATABASE = testdb
Partial contents of /usr/local/etc/odbcinst.ini:
[MySQLDriver]
Driver = /usr/local/lib/libodbc.dylib
Setup = /usr/local/lib/libodbc.dylib
FileUsage = 1
The Driver/Setup entries point to a file that links to the actual driver: /usr/local/Cellar/unixodbc/2.3.7/lib/libodbc.2.dylib. I have set the permissions on this file to 755.
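Not part of the original setup, but for reference, the symlink chain and the library's load commands can be inspected with standard macOS tools (paths as above):
# follow the symlink and list what the dylib links against
ls -l /usr/local/lib/libodbc.dylib
otool -L /usr/local/lib/libodbc.dylib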
Then I try to connect:
isql mysql-local testuser testpass -v
The result is:
[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
[ISQL]ERROR: Could not SQLConnect
For some reason I have osql, which the Web tells me is used to connect to SQL Server. (Perhaps it comes with Homebrew's unixodbc?) I can use it to verify that the .ini files are being parsed correctly. Thus
osql -I /usr/local/etc -S mysql-local -U testuser -P testpass
results in:
"" is NOT a directory, overridden by
"/usr/local/etc".
checking odbc.ini files
reading /Users/homer/.odbc.ini
[mysql-local] found in /Users/homer/.odbc.ini
found this section:
[mysql-local]
description = local server
Driver = MySQLDriver
Server = 127.0.0.1
USER = testuser
PASSWORD = testpass
DATABASE = testdb
looking for driver for DSN [mysql-local] in /Users/homer/.odbc.ini
found driver line: " Driver = MySQLDriver"
driver "MySQLDriver" found for [mysql-local] in .odbc.ini
found driver named "MySQLDriver"
"MySQLDriver" is not an executable file
looking for entry named [MySQLDriver] in /usr/local/etc/odbcinst.ini
found driver line: " Driver = /usr/local/lib/libodbc.dylib"
found driver /usr/local/lib/libodbc.dylib for [MySQLDriver] in odbcinst.ini
/usr/local/lib/libodbc.dylib is an executable file
"Server" found, not using freetds.conf
Server is "127.0.0.1"
looking up hostname for ip address 127.0.0.1
Configuration looks OK. Connection details:
DSN: mysql-local
odbc.ini: /Users/homer/.odbc.ini
Driver: /usr/local/lib/libodbc.dylib
Server hostname: localhost
Address: 127.0.0.1
Attempting connection as testuser ...
+ isql mysql-local testuser testpass -v
[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
[ISQL]ERROR: Could not SQLConnect
sed: /tmp/osql.dump.44362: No such file or directory
Everything I try always comes down to the same error:
[IM004][unixODBC][Driver Manager]Driver's SQLAllocHandle on SQL_HANDLE_HENV failed
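For what it's worth, unixODBC trace output like the log below can be enabled by adding a trace section to odbcinst.ini (a sketch; the log path is arbitrary):
[ODBC]
Trace = Yes
TraceFile = /tmp/odbc.log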
For good measure, here are the logs from isql mysql-local testuser testpass:
[ODBC][54953][1538867223.117217][__handles.c][460]
Exit:[SQL_SUCCESS]
Environment = 0x7f9829010400
[ODBC][54953][1538867223.117416][SQLAllocHandle.c][377]
Entry:
Handle Type = 2
Input Handle = 0x7f9829010400
[ODBC][54953][1538867223.117521][SQLAllocHandle.c][493]
Exit:[SQL_SUCCESS]
Output Handle = 0x7f982903e800
[ODBC][54953][1538867223.117601][SQLConnect.c][3721]
Entry:
Connection = 0x7f982903e800
Server Name = [mysql-local][length = 11 (SQL_NTS)]
User Name = [testuser][length = 8 (SQL_NTS)]
Authentication = [********][length = 8 (SQL_NTS)]
UNICODE Using encoding ASCII 'UTF-8' and UNICODE 'UCS-2-INTERNAL'
[ODBC][54953][1538867223.126854][SQLConnect.c][1380]Error: IM004
[ODBC][54953][1538867223.127046][SQLFreeHandle.c][290]
Entry:
Handle Type = 2
Input Handle = 0x7f982903e800
[ODBC][54953][1538867223.127191][SQLFreeHandle.c][339]
Exit:[SQL_SUCCESS]
[ODBC][54953][1538867223.127276][SQLFreeHandle.c][220]
Entry:
Handle Type = 1
Input Handle = 0x7f9829010400
Notes:
I have seen the same error discussed elsewhere, where odbc-mediated connections to other databases (e.g., SQL Server) are desired. Solutions proposed in those cases do not appear to apply to MySQL connections.
I am hoping to make these connections with unixodbc, as it is said to be required for full SQL integration in the RStudio IDE.
On Linux I find that unixodbc works fine.
Much thanks in advance to anyone who can point me in the right direction.

Create MySQL database from a config file for deploying a Spring app on AWS OpsWorks with a Chef cookbook via S3

I am deploying a Spring application on AWS OpsWorks via a Chef cookbook read from an S3 bucket. Currently, I first have to create the MySQL database called messagegateway manually. My application.yml file is as follows:
# database configuration
spring:
  jpa:
    show-sql: false
    generate-ddl: false
    hibernate:
      ddl-auto: none
  datasource:
    url: jdbc:mysql:thin://localhost:3306/messagegateway
    username: root
    password: mysql
    driver-class-name: org.drizzle.jdbc.DrizzleDriver
However, I want to create the messagegateway database from a script instead of creating it manually. I tried adding the following snippet at the top of the InitialSetup.sql script (which creates the tables required by the application):
CREATE DATABASE IF NOT EXISTS messagegateway;
However, I get the following error:
java.lang.IllegalStateException
Caused by: org.springframework.beans.factory.BeanCreationException
Caused by: org.flywaydb.core.internal.dbsupport.FlywaySqlScriptException
Caused by: org.h2.jdbc.JdbcSQLException
Any leads on how I can create the database via a script or configuration file?
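One possible approach (a hedged sketch, not from the original thread): with MySQL Connector/J, the driver itself can create the schema on first connect via the createDatabaseIfNotExist URL parameter; the Drizzle driver shown above may not support it, so this assumes switching to Connector/J:
spring:
  datasource:
    # createDatabaseIfNotExist is a MySQL Connector/J URL parameter
    url: jdbc:mysql://localhost:3306/messagegateway?createDatabaseIfNotExist=true
    driver-class-name: com.mysql.cj.jdbc.Driver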

Ruby SSH MySQL Sequel (or DataMapper) remote connection with keys using Net::SSH gem

How would I connect to my VPS based MySQL database remotely (from a cloud based app) using the Ruby Net::SSH or Net::SSH::Gateway gems and key, not password, authentication?
And then connect to the database with Sequel or DataMapper. I'm assuming that after I manage to get the SSH connection working, I would just set up a Sequel/DM connection to 'sql_user@localhost:3306/database'.
I did locate a couple of similar questions here, but they all use password authentication, not keys, and only demonstrate executing raw commands to query the database.
UPDATE: I just cannot seem to get this (Net::SSH with key manager) to work.
UPDATE 2: Alright, I have managed to get authorization when logging in from a computer that has authorized keys stored in the user's local .ssh folder, with the following (the port is my custom SQL port on the VPS):
sql_gate = Net::SSH::Gateway.new('192.xxx.xxx.xx','sqluser', port: 26000)
However, I will not be able to create a .ssh folder in the app's VM, so I need to somehow pass the path and filename (I will be creating a public key just for SQL access for the specified user) as an option ... but I haven't been able to figure out how.
UPDATE: Just need to figure out DataMapper access now. Current code being tested (remote_user_sql is my Ubuntu user, sql_user is the MySQL database user with localhost/127.0.0.1 privileges):
require 'net/ssh/gateway'
require 'data_mapper'
require 'dm-mysql-adapter'

class User
  include DataMapp......
  .
  .
end

ssh_gate = Net::SSH::Gateway.new('192.n.n.n', 'remote_user_sql', {port: 25000, keys: ["sql_rsa"], keys_only: true})
port = ssh_gate.open('localhost', 3306, 3307)
child = fork do
  DataMapper.setup(:default, {
    adapter: 'mysql',
    database: 'sql_test',
    username: 'sql_user',
    password: 'passwd',
    host: 'localhost',
    port: port})
  DataMapper.auto_upgrade!
  exit
end
puts "child: #{child}"
Process.wait
ssh_gate.close(port)
My solution, in two parts:
Well, I have figured out how to make the Net::SSH::Gateway gem use a specified keyfile, and then connect to the VPS through SSH on a port other than 22:
Part 1: Net::SSH::Gateway key authentication
First you must generate the keyfiles you want to use, copy the .pub file to the remote server and append it to the ~/.ssh/authorized_keys file (cat sql_rsa.pub >> authorized_keys), and then make sure user_sql (the user I created on the VPS to be used only for this purpose) has been added to the AllowUsers list in sshd_config. Make note of the port used for ssh (25000 for this example) and use the following code to establish the connection:
ssh_gate = Net::SSH::Gateway.new('192.n.n.n','user_sql', {port: 25000, keys: ["sql_rsa"], keys_only: true})
That will read the keyfile sql_rsa in the same directory as the script file, then create a new SSH gateway for 'user_sql'@'192.n.n.n' on port 25000.
I can successfully execute raw shell commands on the remote VPS with:
ssh_gate.exec("ls -la")
To close:
ssh_gate.shutdown!
Unfortunately I am still having problems getting DataMapper (do-mysql-adapter) to use the gateway. I will update this answer if I figure that part out, but at least the first half of the problem has been solved.
These are the errors that DataMapper::Logger has reported:
When 127.0.0.1 was used:
Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2) (code: 2002, sql state: HY000, query: , uri: )
When localhost was used:
Access denied for user 'user_sql'@'localhost' (using password: YES) (code: 1045, sql state: 28000, query: , uri: )
When the VPS hostname was used:
Unknown MySQL server host 'hostname' (25) (code: 2005, sql state: HY000, query: , uri: )
UPDATE (no success yet): So far the only way I can access the remote MySQL database is by using Net::SSH::Gateway to establish a gateway, and then using the .ssh method to open a new Net::SSH connection over that gateway, like so:
ssh_gate.ssh('192.n.n.n', 'user_sql', {port: 25000, keys: ["sql_rsa"], keys_only: true}) do |ssh|
  ssh.exec("mysql -u sql_user -p'passwd' -h localhost -P 3306 -e 'SELECT DATABASE();'")
end
In other words, I can only execute SQL commands using the mysql command line. I cannot figure out how to get Sequel or DataMapper to use the gateway to connect.
Part 2: DataMapper/Sequel/mysql2 connection through Net::SSH::Gateway
Make sure your MySQL server is bound to 127.0.0.1 in /etc/mysql/my.cnf, then set up your connection. DataMapper example:
DataMapper.setup(:default, {
  adapter: 'mysql',
  database: 'DATABASE',
  username: 'username',
  password: 'passwd',
  host: '127.0.0.1',
  port: 3307}) # local port being forwarded via Net::SSH::Gateway
Follow this with any model class definitions and DataMapper.finalize if required. Note that DataMapper doesn't actually connect to the remote MySQL server until an auto_upgrade!, auto_migrate!, or query is executed, so there is no need to create the forwarded port yet.
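For Sequel, the equivalent setup points at the same forwarded local port; a sketch assuming the mysql2 adapter is installed (same placeholder names as above). Note that, unlike DataMapper's lazy setup, Sequel.connect tests the connection immediately by default, so run it only after the forwarded port is open (or pass test: false):
require 'sequel'

DB = Sequel.connect(
  adapter: 'mysql2',
  host: '127.0.0.1',
  port: 3307, # local port being forwarded via Net::SSH::Gateway
  database: 'DATABASE',
  user: 'username',
  password: 'passwd')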
Then create a new Net::SSH::Gateway, and whenever you need DataMapper/Sequel to access the remote database, open a forwarded port for the process, like so:
port = ssh_gate.open('127.0.0.1', 3306, 3307)
child = fork do
  DataMapper.auto_upgrade! # DM call that accesses the MySQL server
  exit
end
Process.wait
ssh_gate.close(port)
You may want to put the Net::SSH::Gateway open/close code in a begin..ensure..end block, ensuring the port is closed and the gateway shut down; a sketch follows below.
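A minimal sketch of that pattern, reusing the placeholder names from above:
require 'net/ssh/gateway'

ssh_gate = Net::SSH::Gateway.new('192.n.n.n', 'user_sql',
                                 port: 25000, keys: ['sql_rsa'], keys_only: true)
port = nil
begin
  port = ssh_gate.open('127.0.0.1', 3306, 3307) # forward local 3307 to remote 3306
  child = fork do
    DataMapper.auto_upgrade! # any call that touches the remote MySQL goes here
    exit
  end
  Process.wait
ensure
  ssh_gate.close(port) if port # always release the forwarded port
  ssh_gate.shutdown!           # and tear down the gateway
end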
I had to use a fork and Process.wait to establish the connection; without them, the method just hangs.