I have a Mac Pro with an i7 processor, 16GB RAM, and sufficient storage, running Windows 8.1 via Parallels on top of OS X Yosemite. I have about 23GB of MySQL data and I am wondering whether I can load that much data into MySQL on my PC. I started importing the data, but it stops after an hour and throws this error:
Error 1114 (HY000) at line 223. The table X is full.
I googled the error and found the same error discussed on Stack Overflow (though not with this much data). I tried the suggested solutions, but they failed. MySQL imports about 3GB of data and then throws the error.
Now, here are my three main questions.
Is my data bigger than what a MySQL storage engine can handle on a PC?
If that is not the case and I am good to go with that much data, is there any configuration required to run a 23GB database on my PC?
My final, concluding question: how big is too big to run on one's own machine? Is it only a matter of being able to store the data locally, or does it require something else?
Of course MySQL on Windows can handle 23GB of data. That's not even close to its limit.
Keep in mind that a database takes a lot of disk space for indexes and other things. 23GB of raw data will probably need 100GB of disk space to load, index, and get running. If you are loading it into an InnoDB table, you will also need transaction rollback space for the load.
It seems likely that your Windows 8.1 virtual machine running on Parallels is running out of disk space. You can allocate more of your Mac's disk for use by Parallels. Read this: http://kb.parallels.com/en/113972
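If disk space turns out not to be the issue, one other thing worth a quick, hedged check is whether the InnoDB system tablespace was created without autoextend, which can also produce Error 1114 even when the disk has room:
SHOW VARIABLES LIKE 'innodb_data_file_path';  -- look for ':autoextend' at the end
SHOW VARIABLES LIKE 'datadir';                -- the directory whose disk needs the free space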
Your answers can be found in the MySQL reference manual:
The effective maximum table size for MySQL databases is usually
determined by operating system constraints on file sizes, not by MySQL
internal limits. The following table lists some examples of operating
system file-size limits. This is only a rough guide and is not
intended to be definitive. For the most up-to-date information, be
sure to check the documentation specific to your operating system.
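If the table that hit the error is MyISAM, a quick way to see the ceiling it currently reports (a sketch; X stands for the table named in the error message) is:
SHOW TABLE STATUS LIKE 'X';  -- for MyISAM, the Max_data_length column shows the current size limit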
We've been using MySQL on Cloud SQL for quite some time now.
We started with MySQL 5, but after a long wait and the final release of MySQL 8 we decided to upgrade our database server.
As the title suggests, we now see strange behavior in our memory utilization.
As you can see here, it constantly fills up until the server's maximum resources are reached, then the server restarts and memory starts filling up again.
There could be an issue with one of our services, but before the upgrade our memory consumption looked like this:
As you can see, memory consumption was more or less constant.
Furthermore, we increased resources when we upgraded to MySQL 8 and switched from db-n1-standard-1 to db-n1-standard-2, to have more available resources as the data grows.
Does anyone know this behavior? Is there a change from MySQL 5 to 8? I didn't find any information about it, just some notes that it's normal for MySQL to take as much memory as it can get. But I'm still wondering why it didn't on MySQL 5.
Some more details on the configuration:
We're using a read replica for HA
Binary logs activated
Slow query log enabled with FILE output.
Everything else is the default Cloud SQL configuration.
Any help is much appreciated.
Best regards,
Chris
Indeed, it seems that MySQL 8 consumes more memory than MySQL 5. As shown in tests performed by the author of the article MySQL 8 and MySQL 5.7 Memory Consumption on Small Devices, the memory used by version 8 with the same VM settings is considerably higher than on version 5.7, for both resident and virtual memory. Even though those tests are on small VMs, it's a good indication that the same occurs in bigger configurations as well.
So, yes, as you mentioned, it's normal for MySQL to take as much memory as it can get, but MySQL 8 does indeed consume more memory than version 5.
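If you want to see where that memory is actually going on MySQL 8, a quick sketch (assuming the sys schema is available, as it is by default on MySQL 8) is to query the performance_schema memory instrumentation:
-- Top memory consumers; the view is already sorted by current allocation.
SELECT event_name, current_alloc
FROM sys.memory_global_by_current_bytes
LIMIT 10;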
I have installed a Spring MVC web application with JPA and a MySQL database.
The application displays statistics from the database (with a lot of SELECTs).
It works quite fast on Linux (MySQL 5.5.54), but it is very slow on Windows 10 (MySQL 5.6.38).
Do you know what could cause such a behaviour on Windows?
Or could you give me hints or tell me where to search?
[UPDATE]
Linux: Intel® Core™ i7-4510U CPU @ 2.00GHz × 4 / 8 GB RAM
Windows: Intel Xeon CPU E31220 @ 3.1GHz / 4 GB RAM
I know that the Windows machine is not as "powerful" as the Linux one. I wonder whether increasing the memory would be enough, or does MySQL need a lot of CPU too?
My list would be:
Check the configs are identical - not just the settings in my.ini - values not set there are set at compile time, and the two instances have definitely been compiled separately! You'll need to capture and compare the output of SHOW VARIABLES (see the sketch after this list)
Check the file deployment is similar - whether InnoDB is configured to use one file per table, and whether the files are distributed across multiple disks
Check adequate memory is available for caching on MS Windows
Disable anti-virus
Make sure MS Windows is configured as a server (prioritize background tasks)
Windows sucks, deal with it :)
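For the first point, a minimal sketch of how to capture the variables for comparison: run the statement below on both servers, save each result set to a file, and diff the two files (INFORMATION_SCHEMA.GLOBAL_VARIABLES exists on both 5.5 and 5.6).
-- A stable, sorted listing of every server variable, suitable for diffing between instances.
SELECT VARIABLE_NAME, VARIABLE_VALUE
FROM INFORMATION_SCHEMA.GLOBAL_VARIABLES
ORDER BY VARIABLE_NAME;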
I have an issue that is driving me (and my customer) up the wall. They are running MySQL on Windows - I inherited this platform, I didn't design it, and changing to MSSQL on Windows or migrating the MySQL instance to a *nix VM isn't an option at this stage.
The server is a single Windows VM, reasonably specced (4 vCores, 16 GB RAM, etc.).
Initially they had a single disk for the OS, MySQL, and the MySQL backup location, and they were getting inconsistent backups, regularly failing with the error message:
mysqldump: Got errno 22 on write
Eventually we solved this by simply moving the backup destination to a second virtual disk (even though it is on the same underlying storage network, we believed that the above error was being caused by the underlying OS).
And life was good....
For about 2-3 months
Now we have a different (but my gut is telling me related) issue:
The MySQL dump process is taking increasingly longer (over the last 4 days, the time taken for the dump has increased by about 30 minutes per backup).
The database itself is a bit large - 58 GB - however the delta in size is only about 100 MB per day (and unless I'm missing something, it shouldn't take an extra 30 minutes to dump 100 MB of data).
Initially we thought this was the underlying storage network I/O - however, as part of the backup script, once the .SQL file is created it gets zipped up (to about 8.5 GB) - and this process is very consistent in the time taken - which leads me not to suspect the disk I/O (my presumption is that the zip time would also increase if that were the case).
The script that I use to invoke the backup is this:
%mysqldumpexe% --user=%dbuser% --password=%dbpass% --single-transaction --databases %databases% --routines --force=true --max_allowed_packet=1G --triggers --log-error=%errorLogPath% > %backupfldr%FullBackup
The version of mysqldump is C:\Program Files\MySQL\MySQL Server 5.7\bin\mysqldump.exe.
Now to compound matters - I'm primarily a Windows guy (so I have limited MySQL experience), and all the Linux guys at the office won't touch it because it's on Windows.
I suspect the cause of the increased time is that something funky is happening with row locks (possibly due to the application that uses the MySQL instance) - but I'm not sure.
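One hedged way to check the lock theory is to look at what transactions are open and what they are holding while the dump runs; a sketch using the standard InnoDB transaction table available in 5.7:
-- Long-running or heavily locking transactions show up here while mysqldump is running.
SELECT trx_id, trx_state, trx_started, trx_rows_locked, trx_query
FROM information_schema.innodb_trx
ORDER BY trx_started;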
Now for my questions: Has anyone experienced anything similar, with a gradual increase in the time a MySQL dump takes?
Is there a way on Windows to natively monitor the performance of mysqldump to find where the bottlenecks/locks are?
Is there a better way of doing a regular MySQL backup on a Windows platform?
Should I just tell the customer it's unsupported and to migrate it to a supported Platform?
Should I simply utter a 4 letter word and go to the Pub and drown my sorrows?
Eventually we found that the culprit was the underlying storage network.
I'm working on a relatively small application, serving about 1,500 users and running on a MySQL database that is about 300 MB. The entire system runs on AWS with a single dedicated EC2 node running the Grails application on Tomcat 8 and a single dedicated MySQL RDS instance. The system has been running live in production for about three years with no database issues. The two largest tables contain about 40k records. The application is built using Grails and Java 1.7.
Yesterday our application began throwing the following exception, with the underlying error message of:
"Got error 28 from storage engine"
The logs available from the RDS admin web console are empty.
Googling has not revealed any promising leads that have helped us resolve the issue, other than that most messages point out that the disk is out of space. Given that most search results refer to disk space, and being software developers rather than DBAs with significant MySQL expertise, we boosted the storage space of the MySQL RDS instance. Unfortunately, today our application is still sporadically throwing the same exception. Having created our MySQL RDS instance with 15 gigs of space -- which is orders of magnitude more space than our application makes use of -- we are at a loss as to the root cause of this issue. Our guess is that there is some out-of-the-box MySQL limitation we are hitting up against, but we have no idea what it may be or how to solve it. Indeed, the whole reason we host on RDS was to avoid issues of this type.
Doing some Googling, this seems like a somewhat common MySQL error, but one that does not leave any concrete trail for us to follow. Most suggestions talk about checking the filesystem or "inode" space. Given that this is a hosted MySQL RDS instance on AWS, I am unsure whether or how to check such things. Looking at CloudWatch for the RDS instance, I can see that the CPU is idle and that the instance is dramatically under the 15 gig storage limit.
Does anyone have any suggestions for us to investigate?
Given that we are new to RDS, can you please point us to any documentation or -- even better -- suggest what settings we can tweak in the RDS console to help prevent this error from occurring? We moved to RDS thinking that if this were a MySQL sizing or scaling issue, moving to RDS would solve the problem. As a last resort, this morning we deleted about 20k rows of non-essential data. Unfortunately, the issue persists.
A few questions:
Are there any RDS settings we can adjust to avoid this issue?
Can this be solved by moving to a larger RDS instance, perhaps with more memory?
Would we experience this issue if we moved to Aurora?
Well, from your comment this is definitely a low-storage issue, because 13 GB is very little storage. You can check the free available storage in the dashboard: look at the "Storage" metric under Monitoring (see the screenshot below); once it crosses the red line you will start getting Error 28. You will have to increase the storage of your RDS instance or free up some space. I would suggest increasing the storage to avoid this issue in the future.
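To see how much of the allocated storage the databases themselves account for (versus binary logs and other server files, which also consume RDS storage), a rough sketch, assuming you can connect with the mysql client and that binary logging is enabled:
-- Per-schema data + index size in MB.
SELECT table_schema,
       ROUND(SUM(data_length + index_length) / 1024 / 1024, 1) AS size_mb
FROM information_schema.tables
GROUP BY table_schema
ORDER BY size_mb DESC;
-- Binary logs the server is currently keeping, which also count against storage.
SHOW BINARY LOGS;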
I'm seeing a few of these errors during high load times:
mysql_connect() [function.mysql-connect]: [2002] Resource temporarily unavailable (trying to connect via unix:///var/lib/mysql/mysql.sock)
From what I can tell the MySQL server isn't hitting its max connections limit, but there's something else stopping it from serving the query. What other limits would MySQL be hitting?
I'm running RHEL 6.2 64bit with MySQL 5.5.21
Let's assume your system is currently Unix-based (as given in your problem statement). If this is correct, here's the set of issues you may be running into:
You've run out of memory available to MySQL.
This is the most likely problem you're facing. Each connection in MySQL's connection pool requires memory to function, and if this resource is exhausted, no further connections can be made. Of course, the memory footprints and maximum packet sizes of various operations can be tuned in your equivalent to my.cnf if you discover this to be an issue.
Here's an additional thread that can help there, but you may also consider using simpler profiling tools like top to get a good ballpark estimate of what's going on.
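As a quick ballpark, here is the commonly used (and admittedly rough) worst-case estimate of MySQL's memory use, computed from the server's own variables; it assumes every allowed connection allocates its per-connection buffers at once:
-- Worst-case memory footprint in MB: global buffers plus per-connection buffers times max_connections.
SELECT ( @@key_buffer_size
       + @@innodb_buffer_pool_size
       + @@innodb_log_buffer_size
       + @@max_connections * ( @@read_buffer_size
                             + @@read_rnd_buffer_size
                             + @@sort_buffer_size
                             + @@join_buffer_size
                             + @@thread_stack ) ) / 1024 / 1024 AS worst_case_memory_mb;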
You've run out of file descriptors available to your MySQL user account.
Another common issue: if you're trying to service requests that require file IO above the 1,024 boundary (by default), you will run into cases where the operation simply fails. This is because most systems specify a soft and hard limit on the number of open file descriptors each user can have available at one time, and walking over this threshold can cause problems.
This will usually have a series of glaringly obvious signs expressed in your log files. Check /var/log/messages and your comparable directories (for example, /var/log/mysql) to see if you can find anything interesting.
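You can also compare what the server negotiated with the OS at startup against what it is actually using; a small sketch with standard variables and status counters:
SHOW GLOBAL VARIABLES LIKE 'open_files_limit';  -- the descriptor limit the server is running with
SHOW GLOBAL STATUS LIKE 'Open_files';           -- how many files it currently has open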
You've run into a livelock or deadlock scenario where your thread is unsatisfiable.
Corollary to memory and file descriptor exhaustion, threads can time out if you've overstepped the computational load your system is capable of handling. It won't throw this error message, but this is something to watch out for in the future.
Your system is running out of PIDs available to fork.
Another common scenario: fork only has so many PIDs available for its use at any given time. If your system is simply overforked, it will cease to be able to service requests.
The easiest check for this is to see if any other services can connect through to the machine. For example, trying to SSH into the box and discovering that you cannot is a big clue.
An upstream proxy or connection manager has run out of resources and ceased servicing requests.
If you have any service layer between your client and MySQL, it bears inspecting to see if it has crashed, hung, or otherwise become unstable. The advice above applies.
Your port mapper has exhausted itself after 65,536 connections.
Unlikely, but again, a possible exhaustion case. Checking the trivial service connection as above is, ehm, also the best port of call here.
In short: this is a resource exhaustion scenario, inclusive of the server simply being "down". You're going to have to profile your system further to see what you're blocking on. All the error message gives us in this case is the fact that the resource is unavailable to the client -- we'd need to see more information about the server to determine a more adequate remedy.
I still haven't found which limits it was hitting, but I did manage to work around the problem. There was a problem with our session table (in vBulletin), which uses the MEMORY engine. The indexes for this table were HASH, so when vBulletin purged this table once an hour it would lock the table just long enough to hold up other queries and push MySQL to the limit of its resources.
Changing the indexes to BTREE allowed MySQL to delete the rows from the session table much more quickly and avoid whatever limits were being reached previously. The errors only started when we upgraded our master DB server to MySQL 5.5, so I'm guessing MEMORY tables are handled differently in the latest release.
See http://www.mysqlperformanceblog.com/2008/02/01/performance-gotcha-of-mysql-memory-tables/ for information on the speed increase from using BTREE indexes over HASH for MEMORY tables.
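For reference, the change was of this shape (a sketch only; the real vBulletin table, column, and index names will differ):
-- Replace a HASH index on the MEMORY session table with a BTREE one,
-- which makes purging old rows by range much cheaper.
ALTER TABLE session
  DROP INDEX idx_lastactivity,
  ADD INDEX idx_lastactivity (lastactivity) USING BTREE;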
Geez, this could be so many things. It could be that the socket buffer space is exhausted. It could be that MySQL is not accepting connections as fast as they are coming in and the backlog limit is reached (though I'd expect that to give you a "Connection Refused" error; I don't know for sure that's what you'll get for a Unix domain socket). It could be any of the things @MrGomez pointed out.
Since you are running Apache and MySQL on the same server and this is a problem under high load, it could well be that Apache is starving the system of some resource and you're just not seeing (noticing?) the dropped/failed incoming connections/requests in your logs.
Are you using connection pooling? If not, I'd start there.
I'd also look for errors in the Apache logs and syslog around the same time as the mysql_connect error and see what else turns up. I'd especially recommend getting MySQL moved over to its own separate dedicated server.
In my case, I was working with JSON data types with PDO (PHP Driver).
I was using fetch to retrieve one item but forgot to add LIMIT 1 to the query. Adding it solved the problem.
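For illustration, the shape of the fixed query (table and column names here are made up):
-- Fetching a single item: cap the result at one row when only one is read.
SELECT doc FROM documents WHERE id = ? LIMIT 1;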