MUMPS doesn't permit creating a file bigger than 2 GB

We know that MUMPS does not allow creating files bigger than 2 GB.
A Volume Group can hold 16 GB, but only with 2 GB per VG file.
How can I fix this?

Please check the OS your MUMPS DB implementation is installed on, and in particular check the user setup (the user's ulimit in the case of Unix). I don't want to guess, since you did not specify any details about your implementation (M DB type and version, OS type and version).
MUMPS definitely allows creating files greater than 2 GB. MUMPS database region files are often well over 20 GB; MUMPS database journal files are typically kept at 2 GB for ease of maintenance, but could be larger; and output files the MUMPS system at hand may produce (in a batch or on demand) can grow to whatever size the file system allows.
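For instance, on a Unix-type host you can rule out OS-level limits with a quick check along these lines (the path and the 3 GB test size are just illustrative assumptions):

# Per-process file-size limit for the current user; "unlimited" is what you want
ulimit -f

# File system type and free space backing the database volumes (path is an assumption)
df -Th /path/to/mumps/volumes

# Optionally confirm the file system itself accepts a >2 GB file (creates a sparse ~3 GB test file)
dd if=/dev/zero of=/path/to/mumps/volumes/bigfile.test bs=1 count=1 seek=3G
ls -lh /path/to/mumps/volumes/bigfile.test
rm /path/to/mumps/volumes/bigfile.test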

Related

Is there an automatic gem5 configuration generator based on my server?

Is there a tool or script that would read my server properties (number of CPUs, cores, memory layout and structure) and automatically generate a full-system mode (FSE) configuration script based on that?
I know this might be a little ambitious, but maybe something like that exists somewhere....

MySQL: what is the maximum size of a database?

I have looked all over the MySQL website and found no definitive answers.
Is the size of the database determined by the operating system, or is there a 4 GB limit?
Where can I find performance statistics comparing it against other databases (SQL Server, Oracle, etc.)?
According to the MySQL Manual:
E.10.3. Limits on Table Size
The effective maximum table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits. The following table lists some examples of operating system file-size limits. This is only a rough guide and is not intended to be definitive. For the most up-to-date information, be sure to check the documentation specific to your operating system.
Operating System File-size Limit
Win32 w/ FAT/FAT32 2GB/4GB
Win32 w/ NTFS 2TB (possibly larger)
Linux 2.2-Intel 32-bit 2GB (LFS: 4GB)
Linux 2.4+ (using ext3 file system) 4TB
Solaris 9/10 16TB
MacOS X w/ HFS+ 2TB
Windows users, please note that FAT and VFAT (FAT32) are not considered suitable for production use with MySQL. Use NTFS instead.
On Linux 2.2, you can get MyISAM tables larger than 2GB in size by using the Large File Support (LFS) patch for the ext2 file system. Most current Linux distributions are based on kernel 2.4 or higher and include all the required LFS patches. On Linux 2.4, patches also exist for ReiserFS to get support for big files (up to 2TB). With JFS and XFS, petabyte and larger files are possible on Linux.
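As a practical cross-check of these limits on a running server, something like the following shows the current and maximum data length MySQL reports for a single table (the database and table names are placeholders, and /var/lib/mysql is just the common default data directory):

# Data_length and, for MyISAM, Max_data_length of one table
mysql -u root -p -e "SHOW TABLE STATUS LIKE 'my_table'\G" my_database

# On the OS side, check which file system backs the data directory
df -Th /var/lib/mysql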
As for the other part of your question, a few thoughts:
It's a broad, complex, multi-factorial question. Consider narrowing the scope of the question to MySQL and one other RDBMS (e.g. SQL Server), and probably even one particular feature.
Google is your friend.
Vendors tend to publish their own biased comparisons. Take vendor numbers with a grain of salt.
1- With respect to database size, the limit depends on the operating system's file-size limit. Please see this article.
2- The effective maximum table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits. (Source)
3- You may Google for MySQL vs SQL Server vs Oracle; also check this link.
By default, MySQL allows 256 TB for a MyISAM .MYD file with the 6-byte pointer size. I know this is a ridiculous answer, but that is what you wanted to know. In real life it all depends on the queries, indexes, column count, row count, etc., I guess.
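If you do hit that MyISAM pointer-size ceiling, the limit for an individual table can be raised by rebuilding it with larger row hints, roughly like this (the table and database names and the numbers are made up for illustration):

# Default pointer size in bytes (2-7; 6 is the usual default, giving the 256 TB figure)
mysql -u root -p -e "SHOW VARIABLES LIKE 'myisam_data_pointer_size'"

# Rebuild one MyISAM table with hints that force a larger pointer, raising its maximum size
mysql -u root -p -e "ALTER TABLE my_table MAX_ROWS=1000000000 AVG_ROW_LENGTH=200" my_database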
With today's hardware and OS, together with MySQL's preferred engine InnoDB, the table size limitation is 64 TB. With PARTITIONing, that can be stretched to over a hundred petabytes.
A database is a collection of tables, so to answer the title question literally we need to go beyond the max table size. Since there can be thousands of tables in a database, we are now into the exabyte stratosphere.
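A minimal sketch of the PARTITIONing idea, with made-up table, database, and partition names:

# Each partition is stored separately, so one logical table can exceed a single-file limit
mysql -u root -p my_database <<'SQL'
CREATE TABLE events (
  id      BIGINT NOT NULL,
  created DATE   NOT NULL
)
PARTITION BY RANGE (YEAR(created)) (
  PARTITION p2023 VALUES LESS THAN (2024),
  PARTITION p2024 VALUES LESS THAN (2025),
  PARTITION pmax  VALUES LESS THAN MAXVALUE
);
SQL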
See also Hard limits in MySQL.

mysql: Tuning or customizing the OS (Red Hat or Ubuntu) for MySQL

I would like to know whether there is anything at the OS level to look at and tune that could help the MySQL instance that will be installed in the next step.
I know a few things to consider, like:
File system: some file systems fit the behavior of MySQL and its engines better than others.
Open files: we should be able to open a sufficient number of files, as MyISAM opens 3 files for each table (a quick check for this is sketched after the list).
Architecture: 64-bit is recommended over 32-bit.
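For example, on Linux the open-file situation can be checked and raised roughly like this (the 65535 value and the mysql user name are just example assumptions):

# Per-process open-file limit for the current shell
ulimit -n

# System-wide ceiling on open file handles
cat /proc/sys/fs/file-max

# Raise the per-user limit persistently (takes effect at next login / service restart)
echo "mysql soft nofile 65535" | sudo tee -a /etc/security/limits.conf
echo "mysql hard nofile 65535" | sudo tee -a /etc/security/limits.conf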
What else can I tune or consider here?
Thanks in advance....

Is it true that database management systems typically bypass file systems?

Is my general understanding correct that typical database management systems bypass the file system? I understand that they manage their own space on disk and write the actual data and index structures, such as B-trees, directly into disk blocks, bypassing any intermediate help from the file system.
This assumes that root would give the database user permission to read and write disk blocks directly. In Linux this is even easier, as a disk can be treated as a file.
Any pointer to real case studies will be greatly appreciated.
Most rely on the underlying file system for the WAL etc.: basically they outsource it to the OS.
Some DBMSs (Oracle, MySQL) support "raw" partitions, but it isn't typical. It's too much hassle (see this chat about Postgres), because you still need the WAL etc. on your raw partition.
DBMSs do not bypass the file system. If that were the case, table names would not be case-insensitive under Windows and case-sensitive under Linux (in MySQL). What they do is allocate large amounts of space on the file system (the data is still visible as a file or set of files in the underlying operating system) and manage their own internal data structures within it. This lowers fragmentation and overall overhead. Cache systems work in a similar way: Varnish allocates all the memory it needs with a single call to the operating system, then maintains its internal data structures inside it.
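For example, with MySQL you can see those files directly from the shell (the database name and the /var/lib/mysql default path are assumptions):

# Ask the server where its data directory lives
mysql -u root -p -e "SELECT @@datadir"

# The tables of a database are ordinary files the OS can list and measure
ls -lh /var/lib/mysql/my_database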
Not completely. MySQL asks for a data directory and stores the data in a specific file format in which it tries to optimize reads and writes, and it also stores indexes there.
Moreover, this can differ from one storage engine to another.
MongoDB uses memory-mapped files for disk I/O.
Looking forward to more discussion here.

How long should a 20GB restore take in MySQL? (A.k.a. Is something broken?)

I'm trying to build a dev copy of a production MySQL database by loading one of the backups. How long should it take to do this if the uncompressed dump is ~20 GB?
This command has been running for something like 24 hours with 10% CPU load, and I'm wondering whether it's just slow or whether it (or I) is doing something wrong.
mysql -u root -p < it_mysql_dump.sql
BTW, it's on a beefy desktop dev machine with plenty of RAM, but it might be reading and writing to the same HDD. I think I'm using InnoDB.
Restoring MySQL dumps can take a long time, because the import really does rebuild the entire tables.
Exactly what you need to do to fix it depends on the engine, but in general I would say do the following:
Zeroth rule: Only use a 64-bit OS.
Make sure that you have enough physical RAM to fit the biggest single table into memory; include any overhead for the OS in this calculation (NB: on operating systems that use 4 KB pages, i.e. all of them, the page tables themselves take up a lot of memory on large-memory systems; don't forget this).
Tune the innodb_buffer_pool so that it is bigger than the largest single table, or if using MyISAM, tune the key_buffer so that it is big enough to hold the indexes of the largest table (see the sketch after this list).
Be patient.
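As a concrete illustration of the buffer tuning step above (the sizes shown are placeholders, not recommendations):

# Check the current settings
mysql -u root -p -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size'"
mysql -u root -p -e "SHOW VARIABLES LIKE 'key_buffer_size'"

# Then set larger values in my.cnf under [mysqld] and restart, e.g.:
#   innodb_buffer_pool_size = 8G
#   key_buffer_size         = 2G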
Now, if you are still finding that it is slow having done the above, it may be that your particular database has a very tricky structure to restore.
Personally I've managed to rebuild a server with ~2 TB in under 48 hours, but that was a particular case.
Be sure that your development system has production-grade hardware if you intend to load production data into it.
In particular, if you think that you can bulk-load data into tables which don't fit into memory (or at least, mostly into memory), forget it.
If this all seems like too much, remember that you can just use a filesystem or LVM snapshot online with InnoDB, and then just copy the files. With MyISAM it's a bit trickier but can still be done.
Open another terminal, run mysql, and count the rows in some of the tables in your dump (SELECT COUNT(*) FROM table). Compare to the source database. That'll tell you the progress.
I INSERTed about 80GB of data into MySQL over a network in about 14 hours. They were one-insert-per-row dumps (slow) with a good bit of overhead, inserting on a server with fast disks.
24 hours is possible if the hardware is old enough, or your import is competing with something else for disk IO and memory.
I just went through the experience of restoring a 51.8 GB database from a 36.8 GB mysqldump file to create an IMDb database. For me the restore, which was not done over the network but from a file on the local machine, took a little under 4 hours.
The machine is a quad-core server running Windows Server 2008. People have wondered if there is a way to monitor progress, and there actually is: you can watch the restore create the database files by going to the ProgramData directory, finding the MySQL subdirectory, and then finding the subdirectory with your database name.
The files are gradually built up in that directory, and you can watch them grow. That is no small comfort when you have a production issue and are wondering whether the restore job is hung or just taking a long time.
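On Linux the same trick works from a shell; something like the following (the paths are assumptions) reprints the on-disk size of the database every minute while the restore runs:

# Watch the restored database's files grow in the data directory
watch -n 60 du -sh /var/lib/mysql/my_database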