I have a copy of my e-shop on my local server (XAMPP). The computer is an i7-3820 with 16 GB RAM, and XAMPP is installed on an SSD.
I changed these values in the php.ini file:
memory_limit = -1
max_input_time = 9999999999
max_execution_time = 9999999999
max_file_uploads = 1024
post_max_size = 2048M
upload_max_filesize = 120M
I'm trying to import a file with roughly 9000 rows using the CSV import tool included in PrestaShop (v1.6). The file looks like this:
ID: 1
Attribute: AAA:select:1%BBB:select:2%CCC:select:3%DDD:select:4%EEE:select:5%FFF:select:6
Value: UUU:1%VVV:2%WWW:3%XXX:4%YYY:5%ZZZ:6
Price impact: 999
(I used "%" instead of "," to split rows because otherwise I have problems when the strings have "," in text)
Well, I am not able to import more than 1000 rows per file, and even that takes about 60-90 minutes per file. If I try larger files, the browser returns a timeout error.
So the question is: do you know some trick to optimize this process? Another option? More parameters to change in php.ini? I appreciate any advice that helps me improve the process.
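Since the importer copes with about 1000 rows per file, one workaround is to split the big CSV into 1000-row chunks first and import them one by one. Below is a minimal sketch in PHP; the file names and chunk size are placeholders of mine, and it assumes the first row is a header and that no field contains embedded newlines.
<?php
// Split a large PrestaShop CSV into ~1000-row files so each one stays
// under the practical limit described above.
$source    = 'combinations.csv'; // hypothetical input file name
$chunkSize = 1000;               // data rows per output file

$in = fopen($source, 'r');
if ($in === false) {
    die("Cannot open $source\n");
}

$header = fgets($in); // repeat the header line in every chunk
$part = 0;
$rows = 0;
$out  = null;

while (($line = fgets($in)) !== false) {
    if ($rows % $chunkSize === 0) {
        if ($out !== null) {
            fclose($out);
        }
        $part++;
        $out = fopen(sprintf('combinations_part_%03d.csv', $part), 'w');
        fwrite($out, $header);
    }
    fwrite($out, $line);
    $rows++;
}

if ($out !== null) {
    fclose($out);
}
fclose($in);

echo "Wrote $part files ($rows data rows)\n";
Each chunk can then be fed to the PrestaShop import tool separately (run the script with php from the folder containing the CSV).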
My Ubuntu 22.04 server is suddenly telling me that "The redo log file "./#innodb_redo/#ib_redo0 size 23289856 is not a multiple of innodb_page_size." My innodb_page_size is 16K, so the error is correct, but I can't seem to find any advice on how to fix it. I tried moving ib_redo0 out of the way but that didn't help. Any ideas?
I also encountered this issue. It appeared to be specific to using ZFS on Ubuntu; in my case it happened during an upgrade to MySQL 8.0.30-0ubuntu0.20.04.2.
Following the details in this Ubuntu issue report and this MySQL issue report, I was able to come up with a solution that worked in my environment.
The commands below are to be run as root or with sudo. You should replace 8192 in the first one with the number of bytes needed to round the file up to the next multiple of the page size, i.e. <default_page_size> - (<broken_file_size> % <default_page_size>); in this case 16384 - (23289856 % 16384) = 8192. The default page size is usually 16384 unless modified.
You may need to replace the #ib_redo0 part of the second command with the broken file reported in the error message.
These commands are intended to pad out the reportedly invalid file with zeros.
Perform a backup before running!
# Gather required zeros to append
# Will create a "zeros" file in the current directory
# Calculated as <default_page_size> - (<broken_file_size> % <default_page_size>),
# here 16384 - (23289856 % 16384) = 16384 - 8192 = 8192
dd if=/dev/zero bs=1 count=8192 of=./zeros
# Append zeroes to invalid file
cat zeros >> /var/lib/mysql/#innodb_redo/#ib_redo0
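# Optional sanity check (my addition, not part of the original recipe):
# the padded file size should now be an exact multiple of the page size
# (23289856 + 8192 = 23298048 = 1422 * 16384)
stat -c %s /var/lib/mysql/#innodb_redo/#ib_redo0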
# Restart MySQL
systemctl restart mysql.service
I'd be wary of remaining on ZFS, even if the above fixes things, given the risk of hitting the same issue again.
I had the same problem in an LXD container running on ZFS. I had to move it to a different type of storage pool, e.g. directory or BTRFS.
After that, the solution from @DanBrown worked for me too.
Thank you.
I'm having a huge issue running MySQL 5.7 on my freshly installed 16" MacBook (with macOS 10.15.1 Catalina). During certain actions I get errors like
PDO::__construct(): MySQL server has gone away.
This is caused by the following error I found in the MySQL log.
2019-11-27T13:24:04.835245Z 0 [Warning] File Descriptor 3226 exceeded FD_SETSIZE=1024
After some research, I tried stuff like sudo launchctl limit maxfiles 65536 200000
When I run launchctl limit I get the following data:
cpu unlimited unlimited
filesize unlimited unlimited
data unlimited unlimited
stack 8388608 67104768
core 0 unlimited
rss unlimited unlimited
memlock unlimited unlimited
maxproc 4096 4096
maxfiles 524288 524288
This looks fine to me. To get the max processes and max files right, I also tried
sudo sysctl -w kern.maxfilesperproc=524288
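(To read back the current kernel values, a plain sysctl query works; this check is my own addition, not from the original post.)
sysctl kern.maxfiles kern.maxfilesperproc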
my.cnf looks like this
[mysqld]
open_files_limit=999999
local_infile=ON
secure_file_priv=""
max_allowed_packet=1073741824
max_connections=100000
key_buffer_size=2G
innodb_buffer_pool_size=12G
query_cache_size=67108864
query_cache_type=1
query_cache_limit=4194304
table_open_cache=4096
innodb_buffer_pool_instances=24
innodb_sort_buffer_size=2G
sort_buffer_size=1G
innodb_flush_log_at_trx_commit=0
innodb_log_file_size=3G
interactive_timeout=3600
max_connect_errors=1000000
thread_cache_size=4096
log_error=/var/log/mysql/error.log
[mysqld_safe]
open_files_limit=999999
There is, of course, the option of lowering table_open_cache, but that hurts performance, and I have always had it set to a higher value before.
Does anybody have any clue where this FD_SETSIZE is coming from and how to change it so it is used properly?
Rebooting has no effect, by the way.
Resource explaining issue: https://expressionengine.com/blog/mysql-5.7-server-os-x-has-gone-away
Try setting the following system variables in the MySQL configuration file (my.cnf), under the [mysqld] section:
interactive_timeout = 300
wait_timeout = 300
I was having this issue on Big Sur v 11.6. The solution for me was modifying the MySQL config with:
max_allowed_packet=256M
table_open_cache=250
I am new to Amazon EC2. Recently I found that MySQL is not working properly; it keeps crashing.
I think the issue is too little memory or space, or something like that.
Here is some of the output:
free -k
                     total       used       free     shared    buffers     cached
Mem:               1020536     254744     765792          0      54028      83748
-/+ buffers/cache:              116968     903568
Swap:                82628          0      82628
swapon -s
Filename        Type        Size     Used    Priority
/swapfile       file        82628    0       -1
I am not able to start MySQL.
Can someone please suggest a solution?
Check the InnoDB buffer pool size (innodb_buffer_pool_size) in the my.cnf file. If it is larger than your system's memory, you have found your problem. Reduce it and try to start MySQL again.
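For an instance with roughly 1 GB of RAM, as in the free -k output above, that usually means keeping the buffer pool small; the 256M figure below is only an illustrative starting point, not a value from the original answer.
[mysqld]
# Keep the buffer pool well below the ~1 GB of RAM on this instance
innodb_buffer_pool_size = 256M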
The MySQL config my.ini has query_cache_type=0 by default.
I have already set sql_query_pre = SET SESSION query_cache_type=OFF in sphinx.conf. I think it is not good to have the query cache on while indexing, but Sphinx is still asking me to turn the cache on...
Error detail:
win7 x64, sphinx 2.1.7
I:\sphinx\bin>I:\sphinx\bin\indexer.exe --all --config I:\sphinx\bin\sphinx.conf
Sphinx 2.1.7-id64-release (r4638)
Copyright (c) 2001-2014, Andrew Aksyonoff
Copyright (c) 2008-2014, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'I:\sphinx\bin\sphinx.conf'...
indexing index 'test1'...
ERROR: index 'test1': sql_query_pre[1]: Query cache is disabled; restart the server with query_cache_type=1 to enable it
(DSN=mysql://root:***@localhost:3306/test).
total 0 docs, 0 bytes
total 0.018 sec, 0 bytes/sec, 0.00 docs/sec
skipping non-plain index 'rt'...
total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 0 writes, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
The 'message' you are receiving is coming from MySQL, not from Sphinx. indexer just runs the commands as provided and reports/uses the results.
Basically, MySQL is telling you the query cache is already disabled; it's not enabled globally.
So trying to turn it off for just the (indexing) session fails, because it's not on. If it's not enabled in the first place, you can't disable it!
http://www.big.info/2013/04/error-code-1651-query-cache-is-disabled.html
It's telling you that you NEED to turn it on globally first, before you are ABLE to turn it off.
Maybe MySQL could just silently fail to turn it off, rather than giving an error, but that's a different story.
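Before editing my.ini, you can confirm how the server was actually started (a quick check of my own, not from the original answer):
mysql> SHOW VARIABLES LIKE 'query_cache_type';
If it shows OFF, the server was started with query_cache_type=0, and the SET SESSION pre-query in sphinx.conf will keep failing with the error above until the cache is enabled globally.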
I had a case where I was seeing this error, and it was actually preventing the indexer --all command from generating indices. I went to the XAMPP Control Panel and clicked on the Config button for the MySQL module. This opened the file my.ini in Notepad. I added the following line to the [mysqld] section in the file:
query_cache_type = 1
Then I restarted the MySQL service. The value of query_cache_type was now displayed as ON, and the indexer --all command successfully generated indices.
When I try to load a .sql file into WAMP using phpMyAdmin, I get the fatal error below:
Fatal error: Maximum execution time of 300 seconds exceeded in C:\wamp\apps\phpmyadmin4.5.1\libraries\plugins\import\ImportSql.class.php on line 220
Location: C:\xampp\phpmyadmin\config.inc.php
$cfg['ExecTimeLimit'] = 600;
In C:\xampp\php\php.ini, look for max_execution_time and change it.
You already have an execution limit of 300 seconds, so go to that file and increase max_execution_time to however many seconds you want.
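For example, to match the 600 seconds used for ExecTimeLimit above (any larger value of your choosing works the same way):
max_execution_time = 600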
Add this line
$cfg['ExecTimeLimit'] = 6000;
to phpmyadmin/config.inc.php
and change these values in php.ini and my.ini:
post_max_size = 750M
upload_max_filesize = 750M
max_execution_time = 5000
max_input_time = 5000
memory_limit = 1000M
max_allowed_packet = 200M (in my.ini)
OR
You may also go to xampp\phpMyAdmin\libraries\config.default.php and change this line to fix the error:
$cfg['ExecTimeLimit'] = 600;
You are uploading a huge SQL file, so PHP cannot execute all the queries within the default time limit. You need to increase the PHP execution time limit.
Your question is answered in "Fatal Error while trying to export a result of a query in mysql".
Hope this helps; please refer to: http://www.nginxtips.com/configure-max_execution_time-in-php-fpm-using-nginx/