Connecting from Rails to an external Azure database to import data - mysql

Hello Stack Overflow members,
I need to import data from an external database, which is an Azure database, and I am getting the following error:
TinyTds::Error: USE statement is not supported to switch between databases. Use a new connection to connect to a different database.
require 'tiny_tds'

class Exact
  def connect
    dbadmin  = ""
    password = "!"
    server   = ""
    database = ""
    a = true

    # NOTE: :azure is passed as the string 'true' here (see the fix below)
    client = TinyTds::Client.new(username: '', password: password,
                                 dataserver: server, port: 1433,
                                 database: database, azure: 'true')
    puts "connecting"
    results = client.execute("select * from table")
    puts "results retrieved"
    puts results.first
    client.close
    puts "client is closed"
  end
end
I think I need to open a new connection to connect to the database, but I am having trouble figuring this out. Could anyone point me in the right direction or assist me with the problems I am having?
Kind regards,
Yoeri Huitema

I fixed the issue as described in this thread: https://github.com/rails-sqlserver/tiny_tds/issues/249

client = TinyTds::Client.new(username: '', password: password, dataserver: server, port: 1433, database: database, azure: 'true')

should have been

client = TinyTds::Client.new(username: '', password: password, dataserver: server, port: 1433, database: database, azure: true)

The :azure option must be the boolean true, not the string 'true' (lowercase true is preferred over the deprecated TRUE constant). When the flag is not recognized as a boolean, TinyTds selects the database with a USE statement after connecting, which Azure SQL Database rejects; with a real boolean, the database is chosen at login instead.

Related

Python script won't quit because SSHTunnelForwarder hangs

Python: 3.8.5
sshtunnel: 0.2.1
mysqlclient: 1.4.6
mysql-connector: 2.2.9
I am using SSHTunnelForwarder to retrieve data from a MySQL database.
Here is the script I use to connect via SSH to the DB:
elif self._remote == 1:
    with SSHTunnelForwarder(
            (self._host, 22),
            ssh_password=self._ssh_password,
            ssh_username=self._ssh_login,
            remote_bind_address=(self._remote_bind_address, 3306)) as server:
        print('Connection:', server.local_bind_address)
        cnx = MySQLdb.connect(host='127.0.0.1',
                              port=server.local_bind_port,
                              user=self._db_user,
                              passwd=self._db_password,
                              db=self._db_name)
        cursor = cnx.cursor()
        res = pd.read_sql(request, con=cnx)
        cursor.close()
        cnx.close()
An example request could be in the following form:
request = 'SELECT * FROM conjunctions AS c LEFT JOIN events AS e ON e.eventId=c.eventId ORDER BY e.eventId;'
The script returns a valid response but will not exit to the shell.
threading.enumerate() prints this:
[<_MainThread(MainThread, started 139701046208320)>, <paramiko.Transport at 0x74850ac0 (unconnected)>, <paramiko.Transport at 0xae9e4e80 (unconnected)>]
I have found this issue relating to the same problem; however, the suggested solutions are not working for me.
Manually closing the tunnel with server.stop() does not work.
Adding ssh_server.daemon_forward_servers = True, as suggested in the issue mentioned above, does not work.
Notably, the problem appears roughly four out of five times the script is launched.
Any help to understand what is going on would be greatly appreciated.
Thank you.
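For readers hitting the same hang: the lingering paramiko.Transport threads are non-daemon threads, which is what keeps the interpreter alive after the main thread finishes. Below is a sketch of the usual mitigations, not a confirmed fix: the daemon flags are set on the forwarder instance before start(), the tunnel is stopped explicitly in a finally block, and a hard process exit is left as a last resort. All hosts, credentials, and database names are placeholders.

import MySQLdb
import pandas as pd
from sshtunnel import SSHTunnelForwarder

# Placeholder connection details
server = SSHTunnelForwarder(
    ('ssh.example.com', 22),
    ssh_username='user',
    ssh_password='secret',
    remote_bind_address=('127.0.0.1', 3306),
)
# Mark the tunnel's internal threads as daemons before start(),
# so they cannot keep the interpreter alive on their own.
server.daemon_forward_servers = True
server.daemon_transport = True
server.start()
try:
    cnx = MySQLdb.connect(host='127.0.0.1', port=server.local_bind_port,
                          user='db_user', passwd='db_password', db='db_name')
    try:
        res = pd.read_sql('SELECT 1;', con=cnx)  # placeholder query
    finally:
        cnx.close()
finally:
    server.stop()

# Blunt last resort if Transport threads still linger after stop():
# os._exit(0) terminates the process immediately, skipping normal
# interpreter shutdown (import os first).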

How to use sparklyr spark_write_jdbc to connect to MySQL

spark_write_jdbc(members_df,
                 name = "Mbrs",
                 options = list(
                   url = paste0("jdbc:mysql://", mysql_host, ":", mysql_port, "/", dbname),
                   user = mysql_user,
                   password = mysql_password),
                 mode = "append")
Results in the following exception:
Error: java.sql.SQLException: No suitable driver
at java.sql.DriverManager.getDriver(DriverManager.java:315)
The .jar file is in a folder on the server where RStudio is running; config details are below. We're able to access MySQL via the RMySQL package, so MySQL itself is working and accessible.
config$`spark.sparklyr.shell.driver-class-path` <- "/dev/shm/temp/mysql-connector-java-5.1.44-bin.jar"
Even though the question is different, I think the answer to it also applies here:
How to use a predicate while reading from JDBC connection?
Below is code to connect to MySQL over JDBC through sparklyr (originally written by Jake Russ; I made some slight changes to simplify it). If the "No suitable driver" error persists, explicitly naming the driver class in the options list (for example, driver = "com.mysql.jdbc.Driver", a standard Spark JDBC option) usually resolves it, since DriverManager cannot always locate the driver on its own.
library(sparklyr)
library(dplyr)

config <- spark_config()
# config$`sparklyr.shell.driver-class-path` <- "E:\\spark232_hadoop27\\jars\\mysql-connector-java-5.1.47-bin.jar"
# in my case, using RStudio and sparklyr, this seemed to be optional

sc <- spark_connect(master = "local")

db_tbl <- spark_read_jdbc(sc,
                          name = "table_name",
                          options = list(url = "jdbc:mysql://localhost:3306/schema_name",
                                         user = "root",
                                         password = "password",
                                         dbtable = "table_name"))

How to store MQTT Mosquitto publish events into MySQL? [duplicate]

This question already has an answer here:
Is there a way to store Mosquitto payload into a MySQL database for history purposes? (1 answer)
Closed 4 years ago.
I've connected a device that communicates with my Mosquitto MQTT server (RPi) and sends out publications to a specified topic. What I want to do now is store the messages published on that topic into a MySQL database. I know how MySQL works, but I don't know how to listen for these incoming publications. I'm looking for a lightweight solution that runs in the background. Any pointers or ideas on libraries to use are very welcome.
I've done something similar in the last few days:
live-collecting weather-station data with pywws
publishing with pywws.service.mqtt to an MQTT broker
a Python script on a NAS collecting the data and writing it to MariaDB
#!/usr/bin/python -u
# NOTE: written for Python 2 (str.translate with two arguments below)
import mysql.connector as mariadb
import paho.mqtt.client as mqtt
import ssl

mariadb_connection = mariadb.connect(user='USER', password='PW', database='MYDB')
cursor = mariadb_connection.cursor()

# MQTT settings
MQTT_Broker = "192.XXX.XXX.XXX"
MQTT_Port = 8883
Keep_Alive_Interval = 60
MQTT_Topic = "/weather/pywws/#"

# Subscribe on connect
def on_connect(client, userdata, flags, rc):
    mqttc.subscribe(MQTT_Topic, 0)

def on_message(mosq, obj, msg):
    # Prepare data: separate columns and values
    msg_clear = msg.payload.translate(None, '{}""').split(", ")
    msg_dict = {}
    for i in range(0, len(msg_clear)):
        msg_dict[msg_clear[i].split(": ")[0]] = msg_clear[i].split(": ")[1]
    # Prepare dynamic SQL statement
    placeholders = ', '.join(['%s'] * len(msg_dict))
    columns = ', '.join(msg_dict.keys())
    sql = "INSERT INTO pws ( %s ) VALUES ( %s )" % (columns, placeholders)
    # Save data into DB table
    try:
        cursor.execute(sql, msg_dict.values())
    except mariadb.Error as error:
        print("Error: {}".format(error))
    mariadb_connection.commit()

def on_subscribe(mosq, obj, mid, granted_qos):
    pass

mqttc = mqtt.Client()

# Assign event callbacks
mqttc.on_message = on_message
mqttc.on_connect = on_connect
mqttc.on_subscribe = on_subscribe

# Connect
mqttc.tls_set(ca_certs="ca.crt", tls_version=ssl.PROTOCOL_TLSv1_2)
mqttc.connect(MQTT_Broker, int(MQTT_Port), int(Keep_Alive_Interval))

# Run the network loop; close the db connection when the loop ends
mqttc.loop_forever()
mariadb_connection.close()
If you are familiar with Python, the Paho MQTT library is simple, light on resources, and interfaces well with Mosquitto. To use it, simply subscribe to the topic and set up a callback that passes the payload to MySQL using peewee, as shown in this answer and sketched below. Run the script in the background and call it good!
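A minimal sketch of that approach, assuming a broker on localhost, a hypothetical sensors/# topic, and an illustrative messages table managed by peewee (none of these names come from the linked answer):

import paho.mqtt.client as mqtt
from peewee import MySQLDatabase, Model, CharField, TextField

db = MySQLDatabase('sensordata', user='user', password='pw', host='localhost')

class Message(Model):
    topic = CharField()
    payload = TextField()

    class Meta:
        database = db

db.connect()
db.create_tables([Message])  # no-op if the table already exists

def on_connect(client, userdata, flags, rc):
    client.subscribe('sensors/#')

def on_message(client, userdata, msg):
    # One row per incoming publication
    Message.create(topic=msg.topic, payload=msg.payload.decode())

client = mqtt.Client()  # paho-mqtt 1.x style, matching the script above
client.on_connect = on_connect
client.on_message = on_message
client.connect('localhost', 1883, 60)
client.loop_forever()

peewee creates the table on the first run, and every publication on the topic becomes one row, which covers the history use case from the question.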

Multiple DB connection in R

I was wondering if someone could help with this annoying issue.
I'm trying to create multiple connections to different databases.
I have a data.frame named conf with three sets of connection credentials. It works if I manually enter the connection variables like so:
conn <- dbConnect(MySQL(), user=conf$user, password=conf$passws, host=conf$host, dbname=conf$db)
which ends up creating a single connection.
However, what I want is to be able to refer to the connection as:
conf$conn <- dbConnect(MySQL(), user=conf$user, password=conf$passws, host=conf$host, dbname=conf$db)
Here is the error message I'm getting:
Error in rep(value, length.out = nrows) :
attempt to replicate an object of type 'S4'
I think the problem is how I'm adding conf$conn.
I used a combination of the pool and config packages to solve a similar problem, setting up a number of simultaneous PostgreSQL connections. (As for the error itself: a data.frame column cannot hold S4 objects such as DBI connections, which is why the assignment fails; keep the connections in a list instead, as lapply() returns below.) Note that this solution needs a config.yml file with the connection properties for db1 and db2.
library(pool)
library(RPostgreSQL)

connect <- function(cfg) {
  config <- config::get(config = cfg)
  dbPool(
    drv = dbDriver("PostgreSQL", max.con = 100),
    dbname = config$dbname,
    host = config$host,
    port = config$port,
    user = config$user,
    password = config$password
  )
}

conn <- lapply(c("db1", "db2"), connect)

Dashing: Ruby: CentOS: Not closing MySQL processes

I am having trouble with my server.
It is a CentOS (Red Hat) Linux server running "Dashing", a Ruby/Sinatra-based dashboard.
I am trying to close the active connections listed by my MySQL database's SHOW PROCESSLIST;
Example.rb file:
require 'mysql2'

SCHEDULER.every '10s' do
  # NOTE: this requires the mysql2 gem but calls the legacy Mysql API;
  # with mysql2 the equivalent is Mysql2::Client.new(host: ..., username: ..., ...)
  db = Mysql.new('host_name', 'database_name', 'password', 'table')
  mysql1 = "SELECT `VAR` FROM `TABLE` ORDER BY `VAR` DESC LIMIT 1"
  result1 = db.query(mysql1)
  result1.each do |row|
    strrow1 = row[0]
    $num1 = strrow1.to_i
  end
  ...
  db.close
  LINK[0] = { label: 'LABEL', value: $num1 }
  ...
  send_event('LABEL FOR HTML', { items: LINK.values })
end
However, after a few clicks back and forth, it is clear that the database does not drop the connections but keeps them. This slows the browser down to the point that loading a page becomes impossible, and the log reads:
"max_user_connections" reached
Can anyone think of a way to fix this?
It is a best practice for DB/file/handle work to live in a begin/rescue/ensure block. It could be that something is going wrong and Rufus/Dashing is just being quiet about the error, since they trap exceptions and go on their merry way; that would prevent your db connection from closing. The symptoms you are seeing could come from a similar problem, and either way it's a good idea.
SCHEDULER.every '10s' do
  begin
    db = Mysql.new('host_name', 'database_name', 'password', 'table')
    # .... stuff ....
  rescue
    # what happens if an error happens? log it, toss it, ignore it?
  ensure
    db.close if db  # guard against db never being assigned
  end
  # ... more stuff if you want ...
end