Storing large prime numbers in a database - mysql

This problem struck me as a bit odd. I'm curious how you could represent a list of prime numbers in a database. I don't know of a single datatype that can accurately and consistently store a large number of primes. My concern is that once the primes reach thousands of digits, they might become difficult to reference from the database. Is there a way to represent a large set of primes in a DB? I'm quite sure this topic has been approached before.
One of the things that makes this difficult is that prime numbers cannot be broken down into factors. If they could, this problem would be much easier.

If you really want to store primes as numbers, and one of the things stopping you is that "prime numbers cannot be broken down into factors", there is another option: split each prime into an ordered list of base-B digits, where each part is a remainder modulo the chosen base.
Small example:
2831781 == 2*100^3 + 83*100^2 + 17*100^1 + 81*100^0
List is:
81, 17, 83, 2
In a real application it is useful to split by a modulus of 2^32 (32-bit integers), especially if the processing application stores the primes as byte arrays.
Storage in DB:
create table PRIMES
(
PRIME_ID NUMBER not null,
PART_ORDER NUMBER(20) not null,
PRIME_PART_VALUE NUMBER not null
);
alter table PRIMES
add constraint PRIMES_PK primary key (PRIME_ID, PART_ORDER) using index;
Inserts for the example above (the ID 1647 is only an example):
insert into primes(PRIME_ID, PART_ORDER, PRIME_PART_VALUE) values (1647, 0, 81);
insert into primes(PRIME_ID, PART_ORDER, PRIME_PART_VALUE) values (1647, 1, 17);
insert into primes(PRIME_ID, PART_ORDER, PRIME_PART_VALUE) values (1647, 2, 83);
insert into primes(PRIME_ID, PART_ORDER, PRIME_PART_VALUE) values (1647, 3, 2);
The prime_id value can be assigned from an Oracle sequence:
create sequence seq_primes start with 1 increment by 1;
Get ID of next prime number to insert:
select seq_primes.nextval from dual;
Select the content of the prime number with a specified ID:
select PART_ORDER, PRIME_PART_VALUE
from primes where prime_id = 1647
order by part_order
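As an illustration, for small values like the example above the parts can even be reassembled in SQL (a sketch only; large primes would overflow NUMBER precision and must be reassembled in application code from the ordered parts instead):
-- Sketch: reassemble the base-100 parts of the small example above.
select sum(PRIME_PART_VALUE * power(100, PART_ORDER)) as prime_value
  from primes
 where PRIME_ID = 1647;
-- returns 2831781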

You could store them as binary data. They won't be human readable straight from the database, but that shouldn't be a problem.

Databases (depending on which) can routinely store numbers up to 38-39 digits accurately. That gets you reasonably far.
Beyond that you won't be doing arithmetic operations on them (accurately) in databases (barring arbitrary-precision modules that may exist for your particular database). But numbers can be stored as text up to several thousand digits. Beyond that you can use CLOB type fields to store millions of digits.
Also, it's worth noting that if you're storing sequences of prime numbers and your interest is in space-compression of that sequence, you could start by storing the difference between one number and the next rather than the whole number.
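For example, a window function could produce those differences for you (a sketch only; prime_list(seq, prime_value) is a hypothetical staging table holding the full values):
-- Sketch: store the gap to the previous prime instead of the full value.
select seq,
       prime_value - lag(prime_value, 1, 0) over (order by seq) as gap
  from prime_list;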

This is a bit inefficient, but you could store them as strings.

If you are not going to use database-side calculations with these numbers, just store them as bit sequences of their binary representation (BLOB, VARBINARY etc.)
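For example, a minimal MySQL sketch (the table and column names are made up; the application converts each prime to big-endian bytes and back):
-- Sketch: store the raw big-endian bytes of each prime.
create table prime_blob (
  prime_id   int unsigned not null primary key,
  prime_bits varbinary(8192) not null   -- big-endian binary form
);
-- 0x0D is the prime 13; real values would be produced by the application
insert into prime_blob (prime_id, prime_bits) values (1, unhex('0D'));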

Here's my 2 cents worth. If you want to store them as numbers in a database then you'll be constrained by the maximum size of integer that your database can handle. You'd probably want a 2-column table, with the prime number in one column and its sequence number in the other. Then you'd want some indexes to make finding the stored values quick.
But you don't really want to do that, do you; you want to store humongous (sp?) primes way beyond any integer datatype you've ever thought of yet. And you say that you are averse to strings, so it's binary data for you. (It would be for me too.) Yes, you could store them in a BLOB in a database, but what sort of facilities will the DBMS offer you for finding the n-th prime or checking the primality of a candidate integer?
How to design a suitable file structure ? This is the best I could come up with after about 5 minutes thinking:
1. Set a counter to 2.
2. Write the two bits which represent the first prime number.
3. Write them again, to mark the end of the section containing the 2-bit primes.
4. Set the counter to counter + 1.
5. Write the 3-bit primes in order. (I think there are two: 5 and 7.)
6. Write the last of the 3-bit primes again to mark the end of the section containing the 3-bit primes.
7. Go back to step 4 and carry on, mutatis mutandis.
The point about writing the last n-bit prime twice is to provide you with a means to identify the end of the part of the file with n-bit primes in it when you come to read the file.
As you write the file, you'll probably also want to make note of the offsets into the files at various points, perhaps the start of each section containing n-bit primes.
I think this would work, and it would handle primes up to 2^(the largest unsigned integer you can represent). I guess it would be easy enough to find code for translating a 325467-bit (say) value into a big integer.
Sure, you could store this file as a BLOB but I'm not sure why you'd bother.

It all depends on what kinds of operations you want to do with the numbers. If just store and lookup, then just use strings and use a check constraint / domain datatype to enforce that they are numbers. If you want more control, then PostgreSQL will let you define custom datatypes and functions. You can for instance interface with the GMP library to have correct ordering and arithmetic for arbitrary precision integers. Using such a library will even let you implement a check constraint that uses the probabilistic primality test to check if the numbers really are prime.
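For instance, the plain store-and-lookup case can be covered with a text column plus a digits-only check; a minimal PostgreSQL sketch (the domain and table names are made up, and a GMP-backed primality check would be a separate user-defined function, not shown here):
-- Sketch: arbitrary-length integers stored as text, constrained to digits.
create domain big_natural as text
  check (value ~ '^[0-9]+$');

create table primes (
  seq   bigint primary key,   -- position in the sequence
  value big_natural not null  -- the prime itself, as a decimal string
);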
The real question is actually whether a relational database is the correct tool for the job.

I think you're best off using a BLOB. How the data is stored in your BLOB depends on your intended use of the numbers. If you want to use them in calculations I think you'll need to create a class or type to store the values as some variety of ordered binary value and allow them to be treated as numbers, etc. If you just need to display them then storing them as a sequence of characters would be sufficient, and would eliminate the need to convert your calculable values to something displayable, which can be very time-consuming for large values.
Share and enjoy.

Probably not brilliant, but what if you stored them in some recursive data structure? You could store each one as an int, its exponent, and a reference to the lower-bit numbers.
Like the string idea, it probably wouldn't be very good for memory considerations. And query time would be increased due to the recursive nature of the query.

Related

store 300 digit number in sql

Which datatype can I use to store a really big integer in SQL? I am using phpMyAdmin to view data and a Java program for storing and retrieving values. I am working with bilinear maps, which use random numbers generated from Zp where p is a very large prime number, and then "raised to" operations on those numbers.
I want to store some numbers in database like public keys. What data type can I use for table columns in SQL for such values?
You could store them as strings of decimal digits using type CHARACTER. While this does waste some space, an advantage is that the database will be easier for humans to understand.
You could store them as raw binary big-endian values using type BLOB. This is the most efficient for software to access and takes up the least space. However, humans will not be able to easily query the database for these values or understand them in dumps.
Personally, I would opt for the blob unless there's a real need for the database to be understandable by humans using standard query tools. If you can't get around needing to administer the database with tools that don't understand your data format, then just use decimal values in text.
For MySQL, VARCHAR(300) CHARACTER SET ascii.
VAR, assuming the numbers won't always be exactly 300 digits long.
CHAR -- there is no big advantage to using BLOB.
ascii -- no need for utf8 involvement.
DECIMAL won't work because it has a 65-digit limit.
The space taken will be 2+length bytes (302 in your example), where the 2 is for length for VAR.
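For example (table and column names are only illustrative):
create table public_keys (
  id  int unsigned auto_increment primary key,
  val varchar(300) character set ascii not null   -- decimal digits only
);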

Anonymization of Account Numbers in 2TB of CSV's

I have ~2TB of CSVs where the first 2 columns contain two ID numbers. These need to be anonymized so the data can be used in academic research. The anonymization can be (but does not have to be) irreversible. These are NOT medical records, so I do not need the fanciest cryptographic algorithm.
The Question:
Standard hashing algorithms make really long strings, but I will have to do a bunch of ID-matching (i.e. 'for the subset of rows containing ID XXX, do...') to process the anonymized data, so this is not ideal. Is there a better way?
For example, If I know there are ~10 million unique account numbers, is there a standard way of using the set of integers [1:10million] as replacement/anonymized ID's?
The computational constraint is that data will likely be anonymized on a 32-core ~500GB server machine.
I will assume that you want to make a single pass, one CSV with ID numbers as input, another CSV with anonymized numbers as output. I will also assume the number of unique IDs is somewhere on the order of 10 million or less.
It is my thought that it would be best to use some totally arbitrary one-to-one function from the set of ID numbers (N) to the set of de-identified numbers (D). This would be more secure. If you used some sort of hash function, and an adversary learned what the hash was, the numbers in N could be recovered without too much trouble with a dictionary attack. Instead I suggest a simple lookup table: ID 1234567 maps to de-identified number 4672592, etc. The correspondence would be stored in another file, and an adversary without that file would not be able to do much.
With 10 million or fewer records, on a machine such as you describe, this is not a big problem. A sketch program in Python (file names are illustrative):
import csv, pickle, random

mapping = {}                              # original ID -> anonymized ID
unused_numbers = list(range(10_000_000))  # pool of replacement IDs
random.shuffle(unused_numbers)            # so pop() hands out random unused IDs

with open('input.csv', newline='') as src, open('anon.csv', 'w', newline='') as dst:
    reader, writer = csv.reader(src), csv.writer(dst)
    for record in reader:
        for i in (0, 1):                  # the first two columns hold the IDs
            n = record[i]
            if n in mapping:
                d = mapping[n]
            else:
                d = unused_numbers.pop()  # random number, never reused
                mapping[n] = d
            record[i] = d
        writer.writerow(record)

with open('mapping.pkl', 'wb') as f:      # keep the lookup table for later
    pickle.dump(mapping, f)
It seems you don't care about the IDs being reversible, but if it helps, you can try one of the format-preserving encryption schemes. They are pretty much designed for this use case.
Otherwise, if the hashes are too large, you can always just strip the end off. Even if you replace each digit (of the original ID) with a hex digit (from the hash), collisions are unlikely. You could first read the file and check for collisions, though.
PS. If you end up doing hashing, make sure you prepend a salt of reasonable size. Hashes of IDs in the range [1:10M] would be trivial to brute-force otherwise.
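If you do go the salted-hash route and the IDs end up in a database anyway, the idea looks roughly like this (a sketch only; the accounts table and @salt value are hypothetical, and the hash is truncated to keep the replacement IDs short):
-- Sketch: salted, truncated hash as the anonymized ID.
set @salt = 'some-long-random-secret';

select account_id,
       left(sha2(concat(@salt, account_id), 256), 16) as anon_id
  from accounts;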

What's the best data type to store 01, 02, 03, etc. in MySQL?

I would like to store numbers like 00, 01, 02, 03, 04, etc. in MySQL.
Which data type would be the best for this?
Use int to store numbers.
Do not confuse value with rendering: if you want the numbers displayed with a leading zero, take care of that in a layer between the data and the user's eyes; do not store rendering with the data.
The layer that does the formatting can be anything from the SQL used to select the data right through to javascript in a web page that displays it and anything in between.
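For example, adding the leading zero at query time (table and column names are illustrative):
-- Sketch: store an INT, render '00'..'09' style values on the way out.
select lpad(code, 2, '0') as code_display
  from my_table;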
These look like codes as opposed to numbers, meaning that you are not doing arithmetic operations. If they are codes, the best way to store them is as varchar(2) or char(2) types.
If you are treating them as numbers (that is, doing any arithmetic operations), then you should store them as integers (small integers) and add the leading zeros on output.
EDIT:
It is very important to distinguish between strings of digits that are just that -- strings that contain digits -- and actual numbers. There are many examples of such strings that you definitely want to store as character strings. Four that come to mind are telephone numbers, American zip codes, account numbers (the account number at my bank starts "057" and the "0" is really, really, really important), and (for the most part) two-part version numbers.
These are distinguished from being actual numbers because you don't do numerical things on them. That is, you never add them together. Or increment them. Or increase the value by 10%. You do sort them, search for specific values (or even ranges of values), and join on the values. These are string operations as well as numeric ones.
Your question does not explain what the codes really are. So, depending on what they represent a character string or number might be appropriate. However, based on the fact that the "0" is important enough to ask a question about, I definitely lean toward a character representation.
In short, varchar.
I don't know what you would do with these "numbers", but if you want to preserve the leading '0' you should treat them as a string type.
Better to use a numeric datatype. Specifically SMALLINT(2), as it best suits your samples.
See the MySQL documentation on Numeric Type Attributes for more.
Also use ZEROFILL to pad your values with leading zeros; see this SO thread.
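For example (a sketch; note that display widths and ZEROFILL are deprecated in recent MySQL versions, though they still work):
create table codes (
  code tinyint(2) unsigned zerofill not null
);

insert into codes (code) values (3);   -- selects back as '03'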

Using bitwise on a large number in mysql

I have a need to store a provider name and the county(ies) they are able to provide services in. There are 92 counties. Rather than store up to 92 rows per provider, I'd like to store a single bitmask value (up to 2^91), so if they provide service only in counties 1 and 2, I'd store 3.
The problem I'm having is the largest number is 2475880078570760549798248448, which is way too big for the largest BigInt.
In the past when I've had smaller numbers of options I've been able to do something like....
SELECT * FROM tblWhatever WHERE my_col & 2;
If my_col had 2, 3, 6, etc. stored (anything with bit 2) it would be found.
I guess I'm not sure of 2 things... how to store AND how to query if stored in a way other than an INT.
You could use BINARY(13) as a datatype to store up to 104 bits, which is more than enough for 92. But MySQL bitwise operators only support BIGINT, which is 64 bits.
So if you want to use bitwise operators, you'd have to store your county bitfield in two BIGINT columns (or perhaps one BIGINT and one INT), and in application code work out which of the two columns to search. That seems awkward.
However, I'll point out a performance consideration: using bitwise operators for searching isn't very efficient. You can't make use of an index to do the search, so every query is forced to perform a table scan. As your data grows, this will become more and more costly.
It's the same problem as searching for a substring with LIKE '%word%', or searching for all dates with a given day of the month.
So I'd suggest storing each county in a separate row after all. You don't have to store 92 rows for each service provider -- you only have to store as many rows as the number of counties they service. The absence of a row indicates no service in that county.
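A sketch of that row-per-county design (names are illustrative):
create table provider_county (
  provider_id int unsigned not null,
  county_id   tinyint unsigned not null,  -- 1..92
  primary key (provider_id, county_id),
  key idx_county_provider (county_id, provider_id)
);

-- providers serving county 2: uses the index instead of a table scan
select provider_id
  from provider_county
 where county_id = 2;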

MySQL PRIMARY KEYs: UUID / GUID vs BIGINT (timestamp+random)

tl;dr: Is assigning rows IDs of {unixtimestamp}{randomdigits} (such as 1308022796123456) as a BIGINT a good idea if I don't want to deal with UUIDs?
Just wondering if anyone has some insight into any performance or other technical considerations / limitations in regards to IDs / PRIMARY KEYs assigned to database records across multiple servers.
My PHP+MySQL application runs on multiple servers, and the data needs to be able to be merged. So I've outgrown the standard sequential / auto_increment integer method of identifying rows.
My research into a solution brought me to the concept of using UUIDs / GUIDs. However the need to alter my code to deal with converting UUID strings to binary values in MySQL seems like a bit of a pain/work. I don't want to store the UUIDs as VARCHAR for storage and performance reasons.
Another possible annoyance of UUIDs stored in a binary column is the fact that row IDs aren't obvious when looking at the data in PhpMyAdmin - I could be wrong about this though - but straight numbers seem a lot simpler overall anyway and are universal across any kind of database system with no conversion required.
As a middle ground I came up with the idea of making my ID columns a BIGINT, and assigning IDs using the current unix timestamp followed by 6 random digits. So let's say my random number came out to be 123456; my generated ID today would come out as: 1308022796123456
A one in 10 million chance of a conflict for rows created within the same second is fine with me. I'm not doing any sort of mass row creation quickly.
One issue I've read about with randomly generated UUIDs is that they're bad for indexes, as the values are not sequential (they're spread out all over the place). The UUID() function in MySQL addresses this by generating the first part of the UUID from the current timestamp. Therefore I've copied that idea of having the unix timestamp at the start of my BIGINT. Will my indexes be slow?
Pros of my BIGINT idea:
Gives me the multi-server/merging advantages of UUIDs
Requires very little change to my application code (everything is already programmed to handle integers for IDs)
Half the storage of a UUID (8 bytes vs 16 bytes)
Cons:
??? - Please let me know if you can think of any.
Some follow up questions to go along with this:
Should I use more or less than 6 random digits at the end? Will it make a difference to index performance?
Is one of these methods any "randomer" ?: Getting PHP to generate 6 digits and concatenating them together -VS- getting PHP to generate a number in the 1 - 999999 range and then zerofilling to ensure 6 digits.
Thanks for any tips. Sorry about the wall of text.
I have run into this very problem in my professional life. We used timestamp + random number and ran into serious issues when our applications scaled up (more clients, more servers, more requests). Granted, we (stupidly) used only 4 digits, and then changed to 6, but you would be surprised how often the errors still happened.
Over a long enough period of time, you are guaranteed to get duplicate key errors. Our application is mission critical, and therefore even the smallest chance it could fail due to inherently random behavior was unacceptable. We started using UUIDs to avoid this issue, and carefully managed their creation.
Using UUIDs, your index size will increase, and a larger index will result in poorer performance (perhaps unnoticeable, but poorer nonetheless). MySQL has no native UUID column type, but you can store a UUID compactly in a BINARY(16) column (never use varchar as a primary key!!), and it can handle indexing, searching, etc. pretty efficiently even compared to bigint. The biggest performance hit to your index is almost always the number of rows indexed, rather than the size of the item being indexed (unless you want to index on a longtext or something ridiculous like that).
To answer your question: BIGINT (with random digits attached) will be OK if you do not plan on scaling your application/service significantly. If your code can handle the change without much alteration and your application will not explode if a duplicate key error occurs, go with it. Otherwise, bite the bullet and go for the more substantial option.
You can always implement a larger change later, like switching to an entirely different backend (which we are now facing... :P)
You can manually change the autonumber starting number.
ALTER TABLE foo AUTO_INCREMENT = ####
An unsigned int can store up to 4,294,967,295; let's round it down to 4,290,000,000.
Use the first 3 digits for the server serial number, and the final 7 digits for the row id.
This gives you up to 430 servers (including 000), and up to 10 million IDs for each server.
So for server #172 you manually change the autonumber to start at 1,720,000,000, then let it assign IDs sequentially.
If you think you might have more servers, but less IDs per server, then adjust it to 4 digits per server and 6 for the ID (i.e. up to 1 million IDs).
You can also split the number using binary digits instead of decimal digits (perhaps 10 binary digits per server, and 22 for the ID. So, for example, server 76 starts at 2^22*76 = 318,767,104 and ends at 322,961,407).
For that matter you don't even need a clear split. Take 4,294,967,295 divide it by the maximum number of servers you think you will ever have, and that's your spacing.
You could use a bigint if you think you need more identifiers, but that's a seriously huge number.
Use the GUID as a unique index, but also calculate a 64-bit (BIGINT) hash of the GUID, store that in a separate NOT UNIQUE column, and index it. To retrieve, query for a match to both columns - the 64-bit index should make this efficient.
What's good about this is that the hash:
a. Doesn't have to be unique.
b. Is likely to be well-distributed.
The cost: extra 8-byte column and its index.
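A rough sketch of that layout in MySQL (names are illustrative, and deriving the 64-bit hash with MD5/CONV is just one possible choice; it could equally be computed in application code):
create table items (
  guid      binary(16) not null,
  guid_hash bigint unsigned not null,
  unique key uk_items_guid (guid),
  key ix_items_guid_hash (guid_hash)
);

set @g = unhex(replace(uuid(), '-', ''));

insert into items (guid, guid_hash)
values (@g, cast(conv(left(md5(@g), 16), 16, 10) as unsigned));

-- lookup: the narrow hash index does the work, the guid column confirms it
select *
  from items
 where guid_hash = cast(conv(left(md5(@g), 16), 16, 10) as unsigned)
   and guid = @g;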
If you want to use the timestamp method then do this:
Give each server a number; to that, append the process ID of the application that is doing the insert (or the thread ID) (in PHP it's getmypid()); then append how long that process has been alive/active for (in PHP it's getrusage()); and finally add a counter that starts at 0 at the start of each script invocation (i.e. each insert within the same script adds one to it).
Also, you don't need to store the full unix timestamp - most of those digits are for saying it's year 2011 and not year 1970. So if you can't get a number saying how long the process was alive for, then at least subtract a fixed timestamp representing today - that way you'll need far fewer digits.
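For example, you could count seconds from a fixed recent epoch instead of from 1970 (the date below is just an illustrative epoch):
select unix_timestamp() - unix_timestamp('2011-01-01 00:00:00') as short_ts;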