MySQL update bigint field - mysql

How do you update a MySql database bigint field?
Currently our database has a bigint(196605) field which is generating errors. I am pretty sure the field limit is 250, i.e. bigint(250), which explains the errors being generated.
The field itself only stores 3-digit integer values, e.g. 100, so I am not sure why it is even a bigint. In any case, I need to fix the field without any loss of data.
Any help would be most appreciated!

This is a common confusion: the BIGINT type has a fixed size and is always stored in 8 bytes, so the ONLY difference between BIGINT(1) and BIGINT(20) is the number of digits that gets displayed (1 digit and 20 digits respectively).
If you only store 3-digit numbers, and you do not think you will ever need more, you can use a SMALLINT UNSIGNED type, which takes only 2 bytes instead of 8, so you will save a lot of space and performance will improve.
I suggest you read this first.
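If you decide to shrink the column, a minimal sketch of the change might look like this (assuming the column really never needs more than 3 digits; tableName and columnName are placeholders for your actual names):
ALTER TABLE tableName MODIFY columnName SMALLINT UNSIGNED;
-- SMALLINT UNSIGNED covers 0 to 65,535, which comfortably fits 3-digit values;
-- existing values are converted in place, so nothing is lost as long as they fit the new range.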

Maybe when creating the database field, you set its length. If we do not set any length then I think it takes 11 as the default, but if we pass one, it will take the specified value as the length (display width).

Before you do anything, export (back up) your data into an SQL file (auto-generated INSERTs for your data) before you change your table, so you know that if your data is lost you can import it again.
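If you prefer to keep the backup inside the database instead of a dump file, a quick sketch (tableName is a placeholder) would be:
CREATE TABLE tableName_backup AS SELECT * FROM tableName;
-- Note: this copies the data only, not indexes or constraints.
-- Once the change is verified you can DROP TABLE tableName_backup.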
If you want to change your column, you must update your table like this:
ALTER TABLE tableName CHANGE columnName newColumnName BIGINT(250);
I think this should work.
NOT TESTED (I use MS SQL Server 2008 and can't check whether it works)

Related

Using similar MySQL tables but for different varchar size of one column to save on DB size, querying via UNION

I'm creating 2 similar tables, atestunion1 and atestunion2, with columns of id, customer_id, product_id, comment, and date. The only difference between these is the length of the varchar comment. The "why" of this structure is below.
As comments are entered, the number of characters is counted, and the entry is then saved (via an if or switch PHP statement) to the table with the smallest varchar size that the comment will fit into.
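For concreteness, a sketch of the setup described above might look roughly like this (the column types other than comment are assumptions, not something stated in the question):
CREATE TABLE atestunion1 (
  id INT NOT NULL,
  customer_id INT NOT NULL,
  product_id INT NOT NULL,
  comment VARCHAR(30),
  date DATETIME
);
CREATE TABLE atestunion2 (
  id INT NOT NULL,
  customer_id INT NOT NULL,
  product_id INT NOT NULL,
  comment VARCHAR(500),  -- the only difference: a longer comment column
  date DATETIME
);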
Then, these are accessed like a single table, using UNION, like this:
SELECT * FROM atestunion1 UNION SELECT * from atestunion2 ORDER BY date
This query seems to work without issue - the different comment field size doesn't seem to cause a problem - but I'm wondering if there are issues with this conceptually. The reason for doing this is to save on the DB size. I believe (assumption 1) that a comment field with 20 characters in a varchar(30) column takes up less space than one in a varchar(500) column. However, I would think that this sort of optimization might be built into MySQL and is thus not in need of my lowly hack. Maybe it does this already, such that my assumption 1 is simply incorrect? Or perhaps there is a setting for the varchar column that will cause this?
My waterfall of questions:
Does MySQL already do such an optimization behind the scenes, such that an entry with some number of characters takes up the same memory regardless of the varchar setting and such that I don't need to mess with it?
If not, is there a setting for the varchar that would cause it to do so?
If not, does this concept of similar tables but for the varchar size difference, then accessed like a single table via UNION, seem like a valid and non-problematic way to save on DB size?
The difference in storage size between varchar(30) and varchar(500) (for the same string) is one byte. See String Type Storage Requirements:
L represents the actual length in bytes of a given string value.
[..]
VARCHAR(M), VARBINARY(M) [..]
L + 1 bytes if column values require 0 − 255 bytes, L + 2 bytes if values may require more than 255 bytes
So no - It's not worth splitting the table and overcomplicating your code.
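If you want to convince yourself, here is a quick sketch (the table names are made up for the test):
CREATE TABLE comment_narrow (comment VARCHAR(30));
CREATE TABLE comment_wide (comment VARCHAR(500));
INSERT INTO comment_narrow VALUES ('exactly twenty chars');
INSERT INTO comment_wide VALUES ('exactly twenty chars');
-- Both report the same data length; per the quoted docs, on-disk storage is that
-- length plus 1 length byte (narrow) or up to 2 length bytes (wide).
SELECT LENGTH(comment) FROM comment_narrow;  -- 20
SELECT LENGTH(comment) FROM comment_wide;    -- 20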
The only case I know of where it might make a significant difference is when you use temporary tables with the MEMORY engine. Then the VARCHAR columns will be expanded to their maximum size (that is 2,000 bytes for VARCHAR(500) with the utf8mb4 character set).
See The MEMORY Storage Engine:
MEMORY tables use a fixed-length row-storage format. Variable-length
types such as VARCHAR are stored using a fixed length.

Mysql zerofill length different from default field length

I am using MySQL and InnoDB.
I need to store a numeric id whose length can vary but needs to be at least 10 digits. For instance, 0000000001 and 11111111111 are both correct values.
Currently, my column has the following attributes: bigint(10), unsigned zerofill. This works: if I try to insert "1" then "0000000001" is actually inserted, and if I insert a bigger number (with length > 10) it also works.
So, in the end, what is the purpose of the length attribute in the field definition? I thought it was the maximum length, but apparently that is not the case...? Or is my current implementation going to crash eventually?
The length attribute is just a hint that tells MySQL how to format SELECT query results in the command-line client. Nothing more. It has no effect on the datatype itself: an int is an int with 4 bytes, no matter what length you specify. The same goes of course for bigint, but with 8 bytes.
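A small sketch of what that looks like in practice (the table name is made up):
CREATE TABLE zerofill_demo (id BIGINT(10) UNSIGNED ZEROFILL);
INSERT INTO zerofill_demo VALUES (1), (11111111111);
SELECT id FROM zerofill_demo;
-- 0000000001   <- short values are left-padded with zeros to the display width of 10
-- 11111111111  <- longer values are shown in full; either way the column stores 8 bytes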

If I set the length of my char to 80 in MySQL, and don't validate for that, could that be exploited?

Here is sample code.
create table users (id int(11) not null, name char(80));
And while taking input, I don't validate the maximum length; I just check that the user input value is not null, then insert into or update the table.
If some user inputs a value of more than 80 characters, the query will not run and will return an SQL error. Now, is this SQL error exploitable?
My First & Last Guess is "No".
Prove me Wrong with Examples.
Thanks & Regards
You are wrong. The query will be executed and will return only a warning: the first 80 chars will be inserted and the rest ignored! Try it yourself: open phpMyAdmin, create this table of yours, then insert 81 characters or more into the name column and you will see! See http://goo.rs/rK6sWE for an example of this.
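For reference, the behaviour described here can be reproduced roughly like this, using the users table from the question (it assumes the server is not running in strict SQL mode; under strict mode the same INSERT is rejected with an error instead):
INSERT INTO users (id, name) VALUES (1, REPEAT('a', 81));  -- 81 characters into CHAR(80)
SHOW WARNINGS;
-- Warning 1265: Data truncated for column 'name' at row 1
SELECT LENGTH(name) FROM users WHERE id = 1;  -- 80: the 81st character was dropped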
No, if you try to put 100 characters of input into an 80-character field, the attacker can't exploit that. However, you'll get a run-time error, so you should still be making sure your data will fit in the columns before you try to stick it in the database.
name char(80) only ensures that MySQL will refuse to store anything longer than 80 chars. However, before this data even gets written into MySQL (say from your web form), there would be no such guarantee.
If you are really concerned about security, you should always use prepared statements and binding variables.
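As a sketch of that last point, using MySQL's server-side prepared statement syntax (application code would normally use the driver's placeholder mechanism instead; the values here are made up):
SET @user_id = 1;
SET @user_name = LEFT('whatever the user typed, possibly far too long ...', 80);  -- trim/validate first
PREPARE ins FROM 'INSERT INTO users (id, name) VALUES (?, ?)';
EXECUTE ins USING @user_id, @user_name;
DEALLOCATE PREPARE ins;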

SQL server datatype int Vs Big int

I created a table with column id as the int data type. However, I realized that the int type may not be able to hold some of the values I might put in the table. I wish to find out: if I define the column as bigint, does it take up space in the database even before I put a value in the column? I am using SQL Server 2008 R2. Thank you.
int always uses 4 bytes, bigint always uses 8 bytes. The actual value stored does not affect the size of the field.
Every time you enter any number, even 1, it will use the full 8 bytes. So the extra storage overhead is 4 bytes * number of rows. If you are worried that your numbers will grow higher than 2,147,483,647, then you should use bigint.
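Since the question is about SQL Server 2008 R2, a minimal T-SQL sketch of that decision (the table name is made up) might be:
CREATE TABLE dbo.demo_ids (id INT NOT NULL);
INSERT INTO dbo.demo_ids VALUES (2147483647);   -- the upper bound of a signed INT, stored in 4 bytes
-- INSERT INTO dbo.demo_ids VALUES (2147483648); -- would fail with an arithmetic overflow error
ALTER TABLE dbo.demo_ids ALTER COLUMN id BIGINT NOT NULL;  -- every row's id now uses 8 bytes
INSERT INTO dbo.demo_ids VALUES (2147483648);   -- now fits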

Mysql column with null values - what are the space requirements?

I have a table with quite a lot of entries.
I need an additional column with an integer value or null.
The thing is that only very few rows will have that field populated.
So I wonder whether it's better to create a separate table where I link the entries in a 1:1 relation.
I know one integer entry takes 4 bytes in mysql/myisam. If I have the column set to allow null values, and only 100 of 100 000 rows have the field populated, will the rest still consume 4 bytes for every null value?
Or is MySQL intelligent enough to store the value only where it is populated, and just regard everything else as NULL where nothing is set?
This depends on the ROW_FORMAT value you give when you create your table.
Before version 5.0.3, the default format is set to "REDUNDANT": any fixed-length field will use the same space, even if its value is NULL.
Starting with version 5.0.3, the value is set to "COMPACT" : NULL values will never use any space in your database.
You can do an ALTER TABLE to be sure to use the correct format:
ALTER TABLE ... ROW_FORMAT=COMPACT
More details here:
http://dev.mysql.com/doc/refman/5.1/en/data-size.html
As far as my understanding goes, once you declare a field as int, 4 bytes will be set aside for it. So, for 100,000 rows you are looking at ~ 400 KB of space.
If space is a constraint, then a separate table will be better. On the other hand, if performance is the criterion, then you'll have to take into account how many times that field is queried and whether it is checked for existence or non-existence. In either case, you'll need a join (sketched below). If you want to check whether the field is set, you can use an inner join, which will be slower than a single-table query. If you want to check for non-existence, you'll need a left/right outer join, which will be slower than an inner join.
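A rough sketch of that separate-table approach (all names are made up; the question does not give a schema):
CREATE TABLE main_item (
  id INT UNSIGNED NOT NULL PRIMARY KEY
  -- ... other columns ...
) ENGINE=MyISAM;
CREATE TABLE main_item_extra (
  item_id INT UNSIGNED NOT NULL PRIMARY KEY,  -- 1:1 link to main_item.id
  extra_value INT NOT NULL                    -- only the ~100 rows that actually have a value
) ENGINE=MyISAM;
-- existence check: an inner join returns only rows that have the value
SELECT m.id, e.extra_value FROM main_item m INNER JOIN main_item_extra e ON e.item_id = m.id;
-- value-or-NULL view of all rows: a left outer join fills the gaps with NULL
SELECT m.id, e.extra_value FROM main_item m LEFT JOIN main_item_extra e ON e.item_id = m.id;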
It will use bitfields to store NULLs, so a NULL may need less than one byte. But even if it didn't - who cares, unless you are using 3.5" floppies to store your backend ;-)
NULL in MySQL (Performance & Storage)