sqlalchemy database query timing out every time

Hoping someone can help out!
Python 3.4
I have run the following script on the local machine that's hosting several MySQL databases:
from sqlalchemy import create_engine, MetaData, inspect
username = ***
password = ***
port = '22'
database = db_name
engine = create_engine('mysql+mysqlconnector://' + username + ':' + password + '@127.0.0.1:' + port + '/' + database)
Whatever I do from here on that interacts with the database times out, with the same error every time. For example:
inspector = inspect(engine) or metadata.reflect(engine) (run separately)
will both return:
Traceback (most recent call last):
File "/usr/local/lib/python3.4/dist-packages/SQLAlchemy-1.2.0b2dev-py3.4-linux-x86_64.egg/sqlalchemy/pool.py", line 1165, in _do_get
File "/usr/local/lib/python3.4/dist-packages/SQLAlchemy-1.2.0b2dev-py3.4-linux-x86_64.egg/sqlalchemy/util/queue.py", line 145, in get
self.not_full.wait(remaining)
sqlalchemy.util.queue.Empty
/*whole bunch other error messages here*/
sqlalchemy.exc.InterfaceError: (mysql.connector.errors.InterfaceError) 2013: Lost connection to MySQL server during query
I have also tried
engine = create_engine('mysql+pymysql://username:password@127.0.0.1:port/database')
with the same results.
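For reference, SQLAlchemy's URL format puts `@` between the credentials and the host, and MySQL's default port is 3306 (22 is SSH). A minimal URL-building sketch, with hypothetical credentials:

```python
def mysql_url(user, password, host='127.0.0.1', port=3306, database='mydb',
              driver='mysqlconnector'):
    """Build a SQLAlchemy MySQL URL; note the '@' separating credentials from host."""
    return 'mysql+{}://{}:{}@{}:{}/{}'.format(
        driver, user, password, host, port, database)

url = mysql_url('someuser', 'somepass', database='somedb')
# url == 'mysql+mysqlconnector://someuser:somepass@127.0.0.1:3306/somedb'
```

The resulting string can be handed to create_engine(); newer SQLAlchemy versions also provide a URL helper object that handles quoting of special characters in passwords.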
Many thanks!

Related

How to connect a MySQL instance from a GCP project to an AWS Lambda function?

I've hosted my MySQL instance in a GCP project and I want to use its database in an AWS Lambda function. I've tried all the ways to connect to the DB in my MySQL instance in GCP, but the Lambda function gives me a timeout error even though I've set the timeout period long enough to run the function.
I've also zipped the package with MySQL and pymysql installed and then uploaded it to Lambda, but the issue still persists.
Here's the code that I've written for connecting to my DB:
import json
import boto3
import mysql.connector
import MySQLdb

def lambda_handler(event, context):
    mydb = MySQLdb.connect(
        host="Public Ip of MySQL Instance",
        user="Username",
        password="Password",
        db="DbName"
    )
    cur = mydb.cursor()
    cur.execute("SELECT * FROM budget")
    for row in cur.fetchall():
        print(row[0])
    mydb.close()
Here's the Error that I receive:
{
"errorMessage": "(2003, \"Can't connect to MySQL server on '36.71.43.131' (timed out)\")",
"errorType": "OperationalError",
"stackTrace": [
" File \"/var/lang/lib/python3.8/imp.py\", line 234, in load_module\n return load_source(name, filename, file)\n",
" File \"/var/lang/lib/python3.8/imp.py\", line 171, in load_source\n module = _load(spec)\n",
" File \"<frozen importlib._bootstrap>\", line 702, in _load\n",
" File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\n",
" File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\n",
" File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\n",
" File \"/var/task/lambda_function.py\", line 10, in <module>\n connection = pymysql.connect(host='36.71.43.131',\n",
" File \"/var/task/pymysql/connections.py\", line 353, in __init__\n self.connect()\n",
" File \"/var/task/pymysql/connections.py\", line 664, in connect\n raise exc\n"
]
}
Please help me to resolve this. I've tried all different ways to connect to my SQL instance but nothing works.
According to the error message, AWS Lambda tried to connect to the public IP address of the MySQL instance directly.
You have to configure your MySQL instance to have a public IPv4 address and to accept connections from specific IP addresses or a range of addresses, by adding authorized addresses to your instance.
To configure access to your MySQL instance:
1. From the client machine, use "What's my IP" to see the IP address of the client machine.
2. Copy that IP address.
3. Go to the Cloud SQL Instances page in the Google Cloud Console.
4. Click the instance to open its Overview page, and record its IP address.
5. Select the Connections tab.
6. Under Authorized networks, click Add network and enter the IP address of the machine where the client is installed.
   Note: The IP addresses must be IPv4. That is, the IP addresses of the instance, and of the client machine that you authorize, both must be IPv4.
7. Click Done. Then click Save at the bottom of the page to save your changes.
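Independent of the console configuration, a quick way to tell a firewall problem from a credentials problem is a plain TCP probe run from inside the Lambda; a sketch using only the standard library (host and port are placeholders):

```python
import socket

def can_reach(host, port=3306, timeout=5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From inside the Lambda handler:
#   can_reach("Public Ip of MySQL Instance", 3306)
# False means the network/firewall path is blocked before MySQL auth is
# ever attempted; True means the problem is credentials or grants.
```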

Error Code: 23 Out of resources when opening file

When I execute a query in MySQL, I get this error:
Error Code: 23
Out of resources when opening file '.\test\sample_table#P#p364.MYD' (Errcode: 24 - Too many open files)
MySQL version details:
VERSION 5.6.21
version_comment MySQL Community SERVER (GPL)
version_compile_machine x86_64
version_compile_os Win64
How to solve this problem?
The MySQL error Out of resources when opening file... (Errcode: 24) indicates that the number of files that MySQL is permitted to open has been exceeded.
This limit is controlled by the variable open_files_limit. You can read this in phpMyAdmin (or the MySQL command line utility) with the statement:
SHOW VARIABLES LIKE 'open%'
To set this variable to a higher number, edit the /etc/my.cnf file and add the lines:
[mysqld]
open_files_limit = 5000
This answer explains the error code 24 (which is at the end of your error message).
If you happen (like me) to be doing some hacky server maintenance by running a single insert query for 35k rows of data, upping the open_files_limit is not the answer. Try breaking up your statement into multiple bite-sized pieces instead.
Here's some python code for how I solved my problem, to illustrate:
headers = ['field_1', 'field_2', 'field_3']
data = [('some stuff', 12, 89.3), ...]  # 35k rows worth

insert_template = "\ninsert into my_schema.my_table ({}) values {};"
value_template = "('{}', {}, {})"

start = 0
chunk_size = 1000
total = len(data)
sql = ''
while start < total:
    end = start + chunk_size
    values = ", \n".join([
        value_template.format(*row)
        for row in data[start:end]
    ])
    sql += insert_template.format(", ".join(headers), values)
    start = end
Note that I do not recommend running statements like this as a rule; mine was a quick, dirty job, neglecting proper cleansing and connection management.
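The same chunking idea can also be written with parameter placeholders and cursor.executemany, which sidesteps hand-formatting of values entirely; a sketch (table and column names are made up, and the executemany call is left commented out since it needs a live connection):

```python
def chunks(rows, size):
    """Yield successive slices of at most `size` rows."""
    for start in range(0, len(rows), size):
        yield rows[start:start + size]

data = [('some stuff %d' % i, i, i * 1.5) for i in range(35000)]

for batch in chunks(data, 1000):
    # cur.executemany(
    #     "insert into my_schema.my_table (field_1, field_2, field_3) "
    #     "values (%s, %s, %s)",
    #     batch)
    pass
```

Letting the driver bind the parameters also avoids the quoting and injection pitfalls of building SQL by string formatting.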

Trac hotcopy backup fails due to MySQL login failure

I'm trying to move a trac 1.0 instance from one machine to another. I used the Trac Backup and Restore Process described here, which uses the hotcopy command.
http://trac.edgewall.org/wiki/TracBackup
I then created a fresh MySQL database with a new user for Trac, assigned appropriate permissions, and then ran the trac-admin initenv command to create the new Trac environment. I deployed this using tracd, and it seemed to work fine.
When I attempt to replace this fresh environment with the contents of the hotcopy backup, I get the following error when I try to connect to the server:
Am I missing some step? I've changed the MySQL permissions to make sure they match the username and password that I passed to Trac in the database string. Is it possible that this was overwritten when I copied the new environment over, and that Trac is using the wrong password to connect to MySQL?
Any help is much appreciated!
Traceback (most recent call last):
File "build/bdist.linux-x86_64/egg/trac/web/api.py", line 502, in send_error
data, 'text/html')
File "build/bdist.linux-x86_64/egg/trac/web/chrome.py", line 955, in render_template
message = req.session.pop('chrome.%s.%d' % (type_, i))
File "build/bdist.linux-x86_64/egg/trac/web/api.py", line 304, in __getattr__
value = self.callbacks[name](self)
File "build/bdist.linux-x86_64/egg/trac/web/main.py", line 268, in _get_session
return Session(self.env, req)
File "build/bdist.linux-x86_64/egg/trac/web/session.py", line 206, in __init__
self.get_session(sid)
File "build/bdist.linux-x86_64/egg/trac/web/session.py", line 229, in get_session
super(Session, self).get_session(sid, authenticated)
File "build/bdist.linux-x86_64/egg/trac/web/session.py", line 76, in get_session
with self.env.db_query as db:
File "build/bdist.linux-x86_64/egg/trac/db/api.py", line 165, in __enter__
db = DatabaseManager(self.env).get_connection(readonly=True)
File "build/bdist.linux-x86_64/egg/trac/db/api.py", line 250, in get_connection
db = self._cnx_pool.get_cnx(self.timeout or None)
File "build/bdist.linux-x86_64/egg/trac/db/pool.py", line 213, in get_cnx
return _backend.get_cnx(self._connector, self._kwargs, timeout)
File "build/bdist.linux-x86_64/egg/trac/db/pool.py", line 134, in get_cnx
raise TimeoutError(errmsg)
TimeoutError: Unable to get database connection within 0 seconds. (OperationalError: (1045, "Access denied for user 'trac_user'@'localhost' (using password: YES)"))
I never could get those "hotcopy" instructions to work. They seem like they were written with sqlite in mind and not MySQL. When I migrated my Trac instance to a new server, I had to do the following to get my database working:
# On old server
$ mysqldump -u admin -padmin_password trac >backup.sql
# On new server
$ mysql -u admin -padmin_password
mysql> CREATE DATABASE trac DEFAULT CHARACTER SET utf8 COLLATE utf8_bin;
mysql> GRANT ALL ON trac.* TO trac_account@localhost IDENTIFIED BY 'topsecret';
mysql> exit
$ mysql -u admin -padmin_password trac <backup.sql
Replace "trac_account" and "topsecret" with the username and password that Trac will use. After that, I was able to get Trac up and running. Essentially, forget that Trac was involved and treat the whole thing like a normal database backup and restore operation. As far as the rest of Trac goes, I just created a new Trac instance on the new server using the imported database and then copied over files from the old server as needed (config files, attachments, custom templates, etc).
Note: In case it's relevant, my old and new installations used the same MySQL database credentials for Trac. If you're changing credentials as part of the move, YMMV.

Unable to obtain JDBC connection from datasource

I was trying to run the following command in gradle and it gave me the following error :
c:\gsoc\mifosx\mifosng-provider>gradle migrateTenantListDB -PdbName=mifosplatform-tenants
Listening for transport dt_socket at address: 8005
:migrateTenantListDB FAILED
FAILURE: Build failed with an exception.
* Where:
Build file 'C:\gsoc\mifosx\mifosng-provider\build.gradle' line: 357
* What went wrong:
Execution failed for task ':flywayMigrate'.
> Unable to obtain Jdbc connection from DataSource
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug
option to get more log output.
BUILD FAILED
Total time: 13.843 secs
The script is below, and the error points to line 357, but I don't know why it is failing. Is it something about an incorrect configuration in the MySQL server? Please help me out.
script:
task migrateTenantListDB << {
    description = "Migrates a Tenant List DB. Optionally can pass dbName. Defaults to 'mifosplatform-tenants' (Example: -PdbName=someDBname)"
    def filePath = "filesystem:$projectDir" + System.properties['file.separator'] + '..' + System.properties['file.separator'] + 'mifosng-db' + System.properties['file.separator'] + 'migrations/list_db'
    def tenantsDbName = 'mifosplatform-tenants'
    if (rootProject.hasProperty("dbName")) {
        tenantsDbName = rootProject.getProperty("dbName")
    }
    flyway.url = "jdbc:mysql://localhost:3306/$tenantsDbName"
    flyway.locations = [filePath]
    flywayMigrate.execute()
}
The Gradle script for this project has the MySQL password hard-coded as "mysql". You need to set your local MySQL password to "mysql" and verify the connection before trying this command.

How do I configure pyodbc to correctly accept strings from SQL Server using freeTDS and unixODBC?

I cannot get a valid string from an MS SQL server into Python. I believe there is an encoding mismatch somewhere, between the ODBC layer and Python, because I am able to get readable results in tsql and isql.
What character encoding does pyodbc expect? What do I need to change in the chain to get this to work?
Specific Example
Here is a simplified python script as an example:
#!/usr/bin/env python
import pyodbc

dsn = 'yourdb'
user = 'import'
password = 'get0lddata'
database = 'YourDb'

def get_cursor():
    con_string = 'DSN=%s;UID=%s;PWD=%s;DATABASE=%s;' % (dsn, user, password, database)
    conn = pyodbc.connect(con_string)
    return conn.cursor()

if __name__ == '__main__':
    c = get_cursor()
    c.execute("select id, name from recipe where id = 4140567")
    row = c.fetchone()
    if row:
        print row
The output of this script is:
(Decimal('4140567'), u'\U0072006f\U006e0061\U00650067')
Alternatively, if the last line of the script is changed to:
print "{0}, '{1}'".format(row.id, row.name)
Then the result is:
Traceback (most recent call last):
File "/home/mdenson/projects/test.py", line 20, in <module>
print "{0}, '{1}'".format(row.id, row.name)
UnicodeEncodeError: 'ascii' codec can't encode characters in position 0-2: ordinal not in range(128)
A transcript using tsql to execute the same query:
root@luke:~# tsql -S cmw -U import -P get0lddata
locale is "C"
locale charset is "ANSI_X3.4-1968"
using default charset "UTF-8"
1> select id, name from recipe where id = 4140567
2> go
id name
4140567 orange2
(1 row affected)
and also in isql:
root@luke:~# isql -v yourdb import get0lddata
SQL> select id, name from recipe where id = 4140567
+----------------------+--------------------------+
| id | name |
+----------------------+--------------------------+
| 4140567 | orange2 |
+----------------------+--------------------------+
SQLRowCount returns 1
1 rows fetched
So I have worked at this for the morning and looked high and low and haven't figured out what is amiss.
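For what it's worth, the garbled repr above is exactly what you get when UTF-16-LE bytes (the wire encoding SQL Server uses for nvarchar) are reinterpreted as 4-byte-wide characters, with two letters packed into each code point; a sketch reproducing the symptom:

```python
import struct

# 'orange' encoded as UTF-16-LE: 6 characters, 12 bytes
raw = u'orange'.encode('utf-16-le')

# Misread as 4-byte-wide characters, two letters collapse into each
# code point -- matching the repr in the question
codepoints = struct.unpack('<3I', raw)
assert codepoints == (0x0072006F, 0x006E0061, 0x00650067)

# Decoding with the correct codec recovers the text
assert raw.decode('utf-16-le') == u'orange'
```

(The trailing '2' of 'orange2' was presumably dropped because 7 UTF-16 code units are 14 bytes, which doesn't divide evenly into 4-byte slots.)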
Details
Here are version details:
Client is Ubuntu 12.04
freetds v0.91
unixodbc 2.2.14
python 2.7.3
pyodbc 2.1.7-1 (from ubuntu package) & 3.0.7-beta06 (compiled from source)
Server is XP with SQL Server Express 2008 R2
Here are the contents of a few configuration files on the client.
/etc/freetds/freetds.conf
[global]
tds version = 8.0
text size = 64512
[cmw]
host = 192.168.90.104
port = 1433
tds version = 8.0
client charset = UTF-8
/etc/odbcinst.ini
[FreeTDS]
Description = TDS driver (Sybase/MS SQL)
Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
CPTimeout =
CPReuse =
FileUsage = 1
/etc/odbc.ini
[yourdb]
Driver = FreeTDS
Description = ODBC connection via FreeTDS
Trace = No
Servername = cmw
Database = YourDB
Charset = UTF-8
So after continued work I am now getting unicode characters into python. Unfortunately the solution I've stumbled upon is about as satisfying as kissing your cousin.
I solved the problem by installing the python3 and python3-dev packages and then rebuilding pyodbc with python3.
Now that I've done this, my scripts work even though I am still running them with Python 2.7.
So I don't know what was fixed by doing this, but it now works and I can move on to the project I started with.
Any chance you're having a problem with a BOM (Byte Order Mark)? If so, maybe this snippet of code will help:
import codecs

if s.startswith(codecs.BOM_UTF8):
    # The byte string s begins with the BOM: do something.
    # For example, decode the rest of the string as UTF-8.
    pass

if u[0] == unicode(codecs.BOM_UTF8, "utf8"):
    # The unicode string begins with the BOM: do something.
    # For example, remove the character.
    pass

# Strip the BOM from the beginning of the Unicode string, if it exists
u = u.lstrip(unicode(codecs.BOM_UTF8, "utf8"))
I found that snippet on this page.
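As a simpler alternative to that snippet, Python's built-in 'utf-8-sig' codec strips a leading UTF-8 BOM automatically during decoding; a minimal sketch:

```python
import codecs

raw = codecs.BOM_UTF8 + 'hello'.encode('utf-8')

# 'utf-8-sig' removes a leading BOM if one is present...
assert raw.decode('utf-8-sig') == 'hello'

# ...and leaves BOM-less input untouched
assert b'hello'.decode('utf-8-sig') == 'hello'
```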
If you upgrade pyodbc to version 3, the problem will be solved.