Error Code: 23 Out of resources when opening file - mysql

When I execute a query in MySQL, I get this error:
Error Code: 23
Out of resources when opening file '.\test\sample_table#P#p364.MYD' (Errcode: 24 - Too many open files)
MySQL version details:
version  5.6.21
version_comment  MySQL Community Server (GPL)
version_compile_machine  x86_64
version_compile_os  Win64
How can I solve this problem?

The MySQL error Out of resources when opening file... (Errcode: 24) indicates that the number of files MySQL is permitted to open simultaneously has been exceeded.
This limit is controlled by the variable open_files_limit. You can read its current value in phpMyAdmin (or the MySQL command line client) with the statement:
SHOW VARIABLES LIKE 'open%'
To raise the limit, edit the /etc/my.cnf file (on Windows, the my.ini file in your MySQL installation directory), add the lines below, and restart the server:
[mysqld]
open_files_limit = 5000
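To verify the new limit took effect, and to see how close the server is to it, you can compare the variable against the Open_files status counter. A minimal sketch, assuming mysql-connector-python and placeholder local credentials:

import mysql.connector

# Hypothetical local credentials; adjust to your server.
conn = mysql.connector.connect(user='root', password='', host='localhost')
cur = conn.cursor()

cur.execute("SHOW VARIABLES LIKE 'open_files_limit'")
print(cur.fetchone())   # e.g. ('open_files_limit', '5000')

cur.execute("SHOW GLOBAL STATUS LIKE 'Open_files'")
print(cur.fetchone())   # number of files the server currently holds open

conn.close()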

This answer explains error code 24 (the code at the end of your error message).

If you happen (like me) to be doing some hacky server maintenance by running a single insert query for 35k rows of data, upping the open_files_limit is not the answer. Try breaking up your statement into multiple bite-sized pieces instead.
Here's some Python code to illustrate how I solved my problem:
headers = ['field_1', 'field_2', 'field_3']
data = [('some stuff', 12, 89.3), ...]  # 35k rows' worth

insert_template = "\ninsert into my_schema.my_table ({}) values {};"
value_template = "('{}', {}, {})"

start = 0
chunk_size = 1000
total = len(data)
sql = ''
while start < total:
    end = start + chunk_size
    # build one multi-row VALUES clause per 1000-row chunk
    values = ", \n".join(
        value_template.format(*row)
        for row in data[start:end]
    )
    sql += insert_template.format(", ".join(headers), values)
    start = end
Note that I do not recommend building statements like this as a rule; mine was a quick and dirty job that neglected proper escaping and connection management.
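If you are going through a driver anyway, a parameterized executemany is the safer version of the same chunking idea, since the driver handles the escaping for you. A minimal sketch assuming mysql-connector-python, placeholder credentials, and the same hypothetical table:

import mysql.connector

# Hypothetical credentials and table, matching the snippet above.
conn = mysql.connector.connect(user='me', password='pw',
                               host='localhost', database='my_schema')
cur = conn.cursor()

data = [('some stuff', 12, 89.3)]  # same shape as above
sql = "INSERT INTO my_table (field_1, field_2, field_3) VALUES (%s, %s, %s)"
for start in range(0, len(data), 1000):
    cur.executemany(sql, data[start:start + 1000])  # one chunk per round trip
conn.commit()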

Related

Windows and Linux file path issues using python SQL load data infile

I am working on a MySQL (8) db which is too big for the 2TB Linux partition size, so I have moved the MySQL instance onto a 16TB NVMe RAID under Windows 10. All my other code is running on Debian 10 on a VirtualBox instance, and I have mapped a permanent shared drive between the Debian VM and the NVMe RAID array.
I can open the database from Debian and read and write as normal, so the ODBC connector is working fine.
The issue here is that I am loading very large JSON log files into one table, and doing it a row at a time was taking hours, so I opted to create a CSV file for each log file and use LOAD DATA INFILE as part of the SQL statement.
Trouble is, when I execute the SQL statement I get a 'file not found' error, even though, looking at debug output, the file path is correct and the file actually exists.
An excerpt from my Python 3 code is:
p = f"/media/sf_unpack/{filename}"
try:
    with open(p, 'r', buffering=1024 * 1024) as csvfile:
        print(csvfile.read())
        SQL = f"LOAD DATA INFILE '{p}' INTO TABLE xxx fields terminated by ',' LINES TERMINATED BY '\n' (field,field,.....etc) ;"
        try:
            mycursor.execute(SQL)
            connection_object.commit()
        except Exception as ex:
            displayerror(ex)
This code will open the file correctly and show a value for p of /media/sf_unpack/filename.csv (which is correct).
When we get to mycursor.execute(SQL), it raises an exception saying the directory or filename cannot be found. Interestingly, and I am sure this is the issue, the debugger tells me the file that cannot be found is a Windows version of the path, D:\media/sf_unpack/filename.csv - which looks as if it has something to do with the VirtualBox mapping.
I have tried using the Path method from pathlib, i.e. p = Path(f"D:\mysql\unpack\{filename}"), but that makes no difference.
I know I am doing something stupid but I am not sure what it is.
Any help would be gratefully received.
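One detail worth noting here: with plain LOAD DATA INFILE, the path is opened by the MySQL server itself (the Windows host in this setup), not by the client, which would explain the D:\ prefix in the error. LOAD DATA LOCAL INFILE makes the client read the file and stream it to the server instead. A sketch of that approach, not a confirmed fix, with hypothetical connection details (the server must also have local_infile enabled):

import mysql.connector

# Placeholder credentials; allow_local_infile lets the connector stream
# a client-side file, so the Windows server never resolves the Linux path.
conn = mysql.connector.connect(user='me', password='pw', host='winhost',
                               database='mydb', allow_local_infile=True)
cur = conn.cursor()

p = "/media/sf_unpack/somelog.csv"  # hypothetical client-side path
cur.execute(
    f"LOAD DATA LOCAL INFILE '{p}' INTO TABLE xxx "
    "FIELDS TERMINATED BY ',' LINES TERMINATED BY '\\n'"
)
conn.commit()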

Set lock timeout using JDBC for MySQL

Is there a way to set the lock timeout on SQL queries from the Connector/J JDBC driver? I'm looking for something like this SQL Server driver parameter:
connectURL = url + domain + ":1433;" + "databaseName="+databaseName+ ";lockTimeout=" + lockTimeOut;
Thanks.
Thought I'd post a solution that I found for this, which I hope helps someone: for MySQL you're looking to add a line to your config file. On Linux (systemd distributions), this usually resides inside /etc/my.cnf.
Add this line to the [mysqld] entries
innodb_lock_wait_timeout=1
That's it. Lock timeout is now set to one second.
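Keep in mind this my.cnf setting is global (and measured in seconds), so it affects every connection. If you want it scoped to your own connections, closer to what the question asked, Connector/J has a sessionVariables URL property that sets session variables at connect time; a sketch with a placeholder host and database:
jdbc:mysql://localhost:3306/mydb?sessionVariables=innodb_lock_wait_timeout=1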

DatabaseError: 1 (HY000): Can't create/write to file '2015-04-06 20:48:33.418000'.csv (Errcode: 13 - Permission denied)

I am designing an application in Python and trying to write to a CSV file, but I am getting this error:
DatabaseError: 1 (HY000): Can't create/write to file '2015-04-06 20:48:33.418000'.csv (Errcode: 13 - Permission denied)
The Code:
def generate_report(self):
    conn = mysql.connector.connect(user='root', password='', host='localhost', database='mydatabase')
    exe2 = conn.cursor()
    exe2.execute("""SELECT tbl_site.Site_name, State_Code, Country_Code, Street_Address, instrum_start_date, instrum_end_date, Comment INTO OUTFILE %s FIELDS TERMINATED BY '|' OPTIONALLY ENCLOSED BY '"' ESCAPED BY '\\\\' LINES TERMINATED BY '\\n' FROM tbl_site JOIN tbl_site_monit_invent ON site_id = tbl_Site_site_id """, (str(datetime.datetime.now()),))
I can run this code without any errors on a Mac, but I need it to work on Windows.
How can I resolve this error?
Simple really. A colon character is not a valid character in a filename on Windows. It's not allowed.
Reference: https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247%28v=vs.85%29.aspx
The colon character is in the list of "reserved characters", along with several others. (NOTE: One use of the colon character is as a separator for an Alternate Data Stream on NTFS. Ref: http://blogs.technet.com/b/askcore/archive/2013/03/24/alternate-data-streams-in-ntfs.aspx)
Followup
The question has been significantly edited since my previous answer was provided. Some notes:
I'm not very familiar with running MySQL on Windows OS. Most of my work with MySQL server is on Linux.
The SELECT ... INTO OUTFILE statement will cause the MySQL server to attempt to write a file on the server host.
The MySQL user (the user logged in to MySQL) must have the FILE privilege in order to use the SELECT ... INTO OUTFILE statement.
Also, the OS account that is running MySQL server must have OS permissions to write a file to the specified directory, and the file to be written must not already exist. Also, the filename must conform to the naming rules for filenames on OS filesystem.
Ref: https://dev.mysql.com/doc/refman/5.5/en/select-into.html
For debugging this type of issue, I strongly recommend you echo out the actual SQL text that is going to be sent to the MySQL server. And then take that SQL text and run it from a different client, like the mysql command line client.
For debugging a privileges issues, you can use a much simpler statement. Test writing a file to a directory that is known to exist, that is known the mysql server has permissions to write files to, and with a filename that does not exist and that conforms to the rules for the OS and filesystem.
For example, on a normal Linux box, we could test with something like this:
mysql> SELECT 'bar' AS foo INTO OUTFILE '/tmp/mysql_foo.csv'
Before we run that, we can easily verify that the /tmp directory exists, that it is writable by the OS account that is running the mysql server, and that the filename conforms to the rules for the filesystem, and that the filename doesn't exist, e.g.
$ su - mysql
$ ls -l /tmp/mysql_foo.csv
$ echo "foo" >/tmp/mysql_foo.csv
$ cat /tmp/mysql_foo.csv
$ rm /tmp/mysql_foo.csv
$ ls -l /tmp/mysql_foo.csv
Once we get over that hurdle, we can move on to testing writing a file to a different directory, with a more complex filename. Once we get that plumbing working, we can work on getting actual data into a usable csv format.
The original question seems to indicate that the MySQL server is running on Windows OS, and that the filename attempting to be written contains colon characters. Windows does not allow a colon as part of a filename.
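For instance, one way to build a Windows-safe filename from the timestamp is to format it without colons; a minimal sketch:

import datetime

# Dashes instead of colons produce a name Windows accepts,
# e.g. 2015-04-06_20-48-33.csv
fname = datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S') + '.csv'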
It was simply a permission error.

MySQL-proxy and basic failover (detect state)

I have just installed mysql-proxy 0.8.2 and started playing with it. I am using it together with two MySQL 5.5 servers listening on 3306; the proxy is running on 4040. Oh, and the OS is Windows 7 32-bit.
My problem is that mysql-proxy doesn't seem to check the state of the servers the way it should.
I start up the script, and it runs as it should. But when I shut down the primary server, the script doesn't seem to recognize that - it still tries to connect to it...
Version information
mysql-proxy 0.8.2
chassis: mysql-proxy 0.8.2
glib2: 2.16.6
libevent: 1.4.12-stable
LUA: Lua 5.1.2
package.path: C:\ProgramX86\dev\mysql-proxy\lib\mysql-proxy\lua\?.lua
package.cpath: C:\ProgramX86\dev\mysql-proxy\bin\lua-?.dll
-- modules
proxy: 0.8.2*
My config
[mysql-proxy]
proxy-address = :4040
proxy-backend-addresses = 10.3.0.9:3306,192.168.4.100:3306
proxy-lua-script = C:/ProgramX86/dev/mysql-proxy/failover3.lua
daemon = true
Failover Lua script
function connect_server()
    if proxy.global.backends[1].state == proxy.BACKEND_STATE_DOWN then
        proxy.connection.backend_ndx = 2
    else
        proxy.connection.backend_ndx = 1
    end
    print("s Connecting: " .. proxy.global.backends[proxy.connection.backend_ndx].dst.name)
end

function read_query(packet)
    if proxy.global.backends[1].state == proxy.BACKEND_STATE_DOWN then
        proxy.connection.backend_ndx = 2
    else
        proxy.connection.backend_ndx = 1
    end
    print("q Connecting: " .. proxy.global.backends[proxy.connection.backend_ndx].dst.name)
end
It's because proxy.global.backends[1].state is still proxy.BACKEND_STATE_UP when the primary server is shut down.
Someone said it can take about three minutes for the proxy to mark a backend as down, because it waits on the backend's response rather than actively watching the MySQL service.
I am trying to find a better way to solve the problem.

pyodbc/FreeTDS/unixODBC on Debian Linux: issues with TDS Version

I'm having a bit of trouble successfully using pyodbc on Debian Lenny (5.0.7). Specifically, I appear to be having trouble fetching NVARCHAR values (not a SQL Server expert, so go easy on me :) ).
Most traditional queries work OK. For instance, a count of rows in table1 yields
>>> cursor.execute("SELECT count(id) from table1")
<pyodbc.Cursor object at 0xb7b9b170>
>>> cursor.fetchall()
[(27, )]
As does a full dump of ids
>>> cursor.execute("SELECT id FROM table1")
<pyodbc.Cursor object at 0xb7b9b170>
>>> cursor.fetchall()
[(0.0, ), (3.0, ), (4.0, ), (5.0, ), (6.0, ), (7.0, ), (8.0, ), (11.0, ), (12.0, ), (18.0, ), (19.0, ), (20.0, ), (21.0, ), (22.0, ), (23.0, ), (24.0, ), (25.0, ), (26.0, ), (27.0, ), (28.0, ), (29.0, ), (32.0, ), (33.0, ), (34.0, ), (35.0, ), (36.0, ), (37.0, )]
But a dump of names (again, of type NVARCHAR) does not
>>> cursor.execute("SELECT name FROM table1")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL Server]Unicode data in a Unicode-only collation or ntext data cannot be sent to clients using DB-Library (such as ISQL) or ODBC version 3.7 or earlier. (4004) (SQLExecDirectW)')
... the critical error being
pyodbc.ProgrammingError: ('42000', '[42000] [FreeTDS][SQL Server]Unicode data in a Unicode-only collation or ntext data cannot be sent to clients using DB-Library (such as ISQL) or ODBC version 3.7 or earlier. (4004) (SQLExecDirectW)')
This is consistent across tables.
I've tried a variety of different versions of each, but now I'm running unixODBC 2.2.11 (from lenny repos), FreeTDS 0.91 (built from source, with ./configure --enable-msdblib --with-tdsver=8.0), and pyodbc 3.0.3 (built from source).
With a similar combination (unixODBC 2.3.0, FreeTDS 0.91, pyodbc 3.0.3), the same code works on Mac OS X 10.7.2.
I've searched high and low, investigating the solutions presented here and here and recompiling different versions of unixODBC and FreeTDS, but still no dice. Relevant configuration files provided below:
user#host:~$ cat /usr/local/etc/freetds.conf
#$Id: freetds.conf,v 1.12 2007/12/25 06:02:36 jklowden Exp $
#
# This file is installed by FreeTDS if no file by the same
# name is found in the installation directory.
#
# For information about the layout of this file and its settings,
# see the freetds.conf manpage "man freetds.conf".
# Global settings are overridden by those in a database
# server specific section
[global]
# TDS protocol version
tds version = 8.0
client charset = UTF-8
# Whether to write a TDSDUMP file for diagnostic purposes
# (setting this to /tmp is insecure on a multi-user system)
; dump file = /tmp/freetds.log
; debug flags = 0xffff
# Command and connection timeouts
; timeout = 10
; connect timeout = 10
# If you get out-of-memory errors, it may mean that your client
# is trying to allocate a huge buffer for a TEXT field.
# Try setting 'text size' to a more reasonable limit
text size = 64512
# A typical Sybase server
[egServer50]
host = symachine.domain.com
port = 5000
tds version = 5.0
# A typical Microsoft server
[egServer70]
host = ntmachine.domain.com
port = 1433
tds version = 8.0
[foo]
host = foo.bar.com
port = 1433
tds version = 8.0
user#host:~$ cat /etc/odbc.ini
[foo]
Description = Foo
Driver = foobar
Trace = No
Database = db
Server = foo.bar.com
Port = 1433
TDS_Version = 8.0
user#host:~$ cat /etc/odbcinst.ini
[foobar]
Description = Description
Driver = /usr/lib/odbc/libtdsodbc.so
Setup = /usr/lib/odbc/libtdsS.so
CPTimeout =
CPReuse =
Any advice or direction would be very much appreciated!
I encountered the same error on Ubuntu. I "solved" it with a workaround.
All you need to do is to set the environment variable TDSVER.
import os
os.environ['TDSVER'] = '8.0'
As I said it is not a real "solution" but it works.
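Put together with the rest of the stack, the workaround might look like this (a sketch assuming the [foo] DSN from the odbc.ini above and placeholder credentials; TDSVER simply overrides the protocol version FreeTDS negotiates):

import os
os.environ['TDSVER'] = '8.0'  # read by FreeTDS when the connection is opened

import pyodbc

# Hypothetical DSN and credentials matching the odbc.ini above
conn = pyodbc.connect('DSN=foo;UID=me;PWD=pwd')
cursor = conn.cursor()
cursor.execute("SELECT name FROM table1")
print(cursor.fetchall())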
Try adding
TDS_Version=8.0;ClientCharset=UTF-8
to your connection string.
For example:
DRIVER=FreeTDS;SERVER=myserver;DATABASE=mydatabase;UID=me;PWD=pwd;TDS_Version=8.0;ClientCharset=UTF-8
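In pyodbc, that string can be passed directly to connect (a sketch reusing the placeholder names above):

import pyodbc

conn = pyodbc.connect(
    'DRIVER=FreeTDS;SERVER=myserver;DATABASE=mydatabase;'
    'UID=me;PWD=pwd;TDS_Version=8.0;ClientCharset=UTF-8'
)
cursor = conn.cursor()
cursor.execute("SELECT name FROM table1")  # NVARCHAR values should now fetch cleanly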
Can't you just sidestep the issue and either CONVERT or CAST name to something it can handle?
cursor.execute("SELECT CAST(name AS TEXT) FROM table")