Generating a big range of numbers in MySQL

How do I generate a range of numbers in one column in MySQL? I'm looking for any solution that produces a range of numbers starting at 500000000 and ending at 889999999.

It seems you may want an AUTO_INCREMENT column. You can set its starting value to the one you desire, like this (note that you can only have one AUTO_INCREMENT column in a given table):
CREATE TABLE your_table (
column_1 INT NOT NULL AUTO_INCREMENT,
-- add other columns here
PRIMARY KEY (column_1)
) AUTO_INCREMENT = 500000000;
If you already have a table with the AUTO_INCREMENT column, just set the value to the one you want.
ALTER TABLE your_table AUTO_INCREMENT = 500000000;
If what you want is to insert rows with those numbers, use a loop.
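For example, a minimal stored-procedure sketch (the procedure name fill_range is an assumption; your_table and column_1 are taken from the example above):
DELIMITER //
CREATE PROCEDURE fill_range()
BEGIN
  DECLARE n INT DEFAULT 500000000;
  -- 390 million single-row INSERTs will be slow; batch them in practice
  WHILE n <= 889999999 DO
    INSERT INTO your_table (column_1) VALUES (n);
    SET n = n + 1;
  END WHILE;
END //
DELIMITER ;
CALL fill_range();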

Just for fun: generate the range in a text file, by any means available.
Then load that text file into your table. You don't say whether you are constrained by how long this takes. It sounds like you just want a table with a single INT column with lower and upper limits.
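For instance, once the numbers are in a file, one per line (the file path is an assumption; note that the server's secure_file_priv setting restricts which paths are readable):
LOAD DATA INFILE '/tmp/numbers.txt'
INTO TABLE your_table (column_1);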
MySQL should handle these numbers just fine; this is not really a "big" range, seriously.
Do you want to constrain the values in the column to
{500000000..889999999}?
Or do you want to know how to define a column
to hold these values?
Do you want a written procedure to generate
these numbers for you?
Do you want us to size this for you?
Do you want us to write a script or program to load these?
All of these answers are available with minimal sweat. Keywords are MySQL, Integer, Types.
We cannot see your problem because your question does not describe a problem.
Tell us what you tried, and tell us what happened...
Otherwise just add them; you are still in INT territory (-2Gi..2Gi), not BIGINT yet.

Switch to MariaDB, then JOIN to a pseudo-table called seq_500000000_to_889999999.
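A minimal sketch: MariaDB's SEQUENCE engine exposes virtual tables named seq_<start>_to_<end> with a single column called seq, so you can fill a table (names assumed from the earlier example) like this:
INSERT INTO your_table (column_1)
SELECT seq FROM seq_500000000_to_889999999;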

Related

Auto Increment Manually

There is a table with an int field - field_1.
I want to insert a new row.
The field_1 value will be Maximum value from all the entries plus one.
I've tried:
INSERT INTO table (field names, `field_1`)
VALUES (values, '(SELECT MAX(field_1) FROM table)');
I get '0' in the field_1.
I know I can do it in separate queries.
Is there a way to perform this action with one query? I mean one call from php.
I have an auto-increment field 'id' and I want to add a 'position' field. I want to be able to change positions, but a new item should always get the highest position.
Whatever it is that you are trying to do, it will not work, because it is not guaranteed to be atomic. So two instances of this query executing in parallel are guaranteed to mess each other up at some random point in time, resulting in skipped numbers and duplicate numbers.
The reason why databases offer auto-increment is precisely so as to solve this problem, by guaranteeing atomicity in the generation of these incremented values.
(Finally, 'Auto Increment Manually' is an oxymoron. It is either going to be 'Auto Increment', or it is going to be 'Manual Increment'. Just being a smart ass here.)
EDIT (after OP's edit)
One inefficient way to solve your problem would be to leave the Position field zero or NULL, and then execute UPDATE table SET Position = Id WHERE Position IS NULL. (Assuming Id is the autonumber field in your table.)
An efficient but cumbersome way would be to leave the Position field NULL when you have not modified it, and give it a value only when you decide to modify it. Then, every time you want to read the Position field, use a CASE statement: if the Position field is NULL, then use the value of Id; otherwise, use the value of Position.
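A minimal sketch of that read, assuming the table is called your_table and Id is the autonumber field:
SELECT Id,
  CASE WHEN Position IS NULL THEN Id ELSE Position END AS effective_position
FROM your_table
ORDER BY effective_position;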
EDIT2 (after considering OP's explanation in the comments)
If you only have 30 rows I do not see why you are even trying to keep the order right on the database. Just load all rows in an array, programmatically assign incrementing values to any Position fields that are found to be NULL, and when the order of the rows in your array changes, just fix the Position values and update all 30 rows in the database.
Try this:
INSERT INTO `table` (some_random_field, field_to_increment)
SELECT 'some_random_value', IF(MAX(field_to_increment) IS NULL, 1, MAX(field_to_increment) + 1)
FROM `table`;
Or this:
INSERT `table`
SET
some_random_field = 'some_random_value',
field_to_increment = (SELECT IF(MAX(field_to_increment) IS NULL, 1, MAX(field_to_increment) + 1) FROM `table` t);
P.S. I know it's 4 years late but I was looking for the same answer. :)
ALTER TABLE table_name AUTO_INCREMENT = 1 allows the database to reset the AUTO_INCREMENT to:
MAX(auto_increment_column)+1
It does not reset it to 1.
This prevents any duplication of AUTO_INCREMENT values. Since AUTO_INCREMENT columns are primary or unique anyway, duplication could never happen regardless; still, the method exists for a reason. It does not alter any database records; it simply moves the internal counter so that it points to the next available value. As stated earlier by someone, don't try to outsmart the database... just let it handle it. It handles the resetting of AUTO_INCREMENT very well. See gotphp.

MySQL Database Design Questions

I am currently working on a web service that stores and displays money currency data.
I have two MySQL tables, CurrencyTable and CurrencyValueTable.
The CurrencyTable holds the names of the currencies as well as their description and so forth, like so:
CREATE TABLE CurrencyTable ( name VARCHAR(20), description TEXT, .... );
The CurrencyValueTable holds the values of the currencies during the day - a new value is inserted every 2 minutes when the market is open. The table looks like this:
CREATE TABLE CurrencyValueTable ( currency_name VARCHAR(20), value FLOAT, `datetime` DATETIME, ....);
I have two questions regarding this design:
1) I have more than 200 currencies. Is it better to have a separate CurrencyValueTable for each currency or hold them all in one table?
2) I need to be able to show the current (latest) value of the currency. Is it better to just insert such a field to the CurrencyTable and update it every two minutes or is it better to use a statement like:
SELECT value FROM CurrencyValueTable ORDER BY `datetime` DESC LIMIT 1
The second option seems slower.. I am leaning towards the first one (which is also easier to implement).
Any input would be greatly appreciated!!
p.s. - please ignore SQL syntax / other errors, I typed it off the top of my head..
Thanks!
To your questions:
I would use one table. Especially if you need to report on or compare data from multiple currencies, everything becomes much simpler with a single table.
If you don't have a need to track the history of each currency's value, then go ahead and just update a single value -- but in that case, why even have a separate table? You can just add "latest value" as a field in the currency table and update it there. If you do need to track history, then you will need the two tables and the SQL you posted will work.
As an aside, instead of FLOAT I would use DECIMAL(10,2). As of MySQL 5.0, DECIMAL is stored exactly, which gives better results than FLOAT when rounding currency values.
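A minimal sketch of the two-table history design with DECIMAL (the index name and the 'EUR' lookup are assumptions):
CREATE TABLE CurrencyValueTable (
  currency_name VARCHAR(20) NOT NULL,
  value DECIMAL(10,2) NOT NULL,
  `datetime` DATETIME NOT NULL,
  KEY idx_currency_time (currency_name, `datetime`)
);
-- latest value for one currency; the composite index keeps this fast
SELECT value FROM CurrencyValueTable
WHERE currency_name = 'EUR'
ORDER BY `datetime` DESC
LIMIT 1;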
It is better to have one table holding all currencies
If there is need for historical prices, then the table needs to hold them. A reasonable compromise in many situations is to split the price table into a full list of historical prices and another table which only has the current prices.
Using data type float can be troublesome. Please be sure you know what you are doing. If not, use a database currency data type.
As your web service is transactional, it is better to access fewer tables at a time. Since you will be reading and writing a lot, I would suggest a single table.
It's better to add a field to the CurrencyTable and update it rather than hitting two tables for a single request.
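A sketch of that denormalized "current value" column (the column name, precision, and sample values are assumptions):
ALTER TABLE CurrencyTable ADD COLUMN latest_value DECIMAL(10,4);
-- run every two minutes when a new price arrives
UPDATE CurrencyTable SET latest_value = 1.2345 WHERE name = 'EUR';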

What is the best method to store default values in a database?

I have several tables like Buyers, Shops, Brands, Money_Collectors, e.t.c.
Each one of those has a default value, e.g. the default Buyer is David, the default Shop is Ebay, and so on.
I would like to save those default values in a database (so that user could change them).
I thought of adding an is_default column to each of the tables, but that seems inefficient because only one row in each table may be the default.
Then I thought that the best would be to have Defaults table that will contain all the default values. This table will have 1 row and N columns, where N is the number of the default values:
Defaults table:
buyer shop brand money_collector
----- ---- ----- ---------------
David Ebay Dell NULL (no default value)
But, this seems to be not the best approach because the table structure changes when a new default value is added.
What would be the best approach to store default values ?
Just to be clear:
The best way is a column on each table that the dropdowns source from.
And here's why...
"Shouldn't I worry about space when
saving data in a database?"
The short answer is no. The longer answer is what you should worry about is performance. Focusing on space will lead you to do very bad things.
Bad things that you'll do if space is a concern.
You'll bury meaning into Primary Keys. i.e. Smart Keys.
You'll try to store multiple values in one column.
You'll index too little
(No doubt we could create a list of 50 bad practices which save space)
"suppose there are 50 shops (select box with 50 possible values). In this case, to store the default shop you need 50 boolean fields,"
Well it's ONE Boolean column. It exists on each row.
Let me ask you this. If you created a table with 1 date column and inserted 1 row, how much space would you use on disk?
If you said 7 or 8 bytes, then you're off by about 1000 times.
The smallest unit of disk space is a block. Blocks are typically 8KB (they can be as small as 2KB or as large as 32KB, in general; no nitpicking here, the actual limits are unimportant).
Let's say you have 8KB blocks; then your 1-column, 1-row table takes 8KB. If you insert another 999 rows it will still take up 8KB. (Again, no nitpicking; there is overhead per block and per row - it's an example.)
So in your look up table with 50 store names, the likelihood that adding 50 bytes to the size of the table forces you to expand from 1 block to 2 is slim to none and completely irrelevant.
On the other hand, your default table will certainly take up at least one additional block.
But the worst hit to PERFORMANCE is that your call to fill a drop down will need two round trips to the database, one to get the list, one to get the default. (yes, you may be able to do this in one but go with it)
So you've saved exactly zero space and doubled your network traffic.
You see what I'm saying.
Another crucial reason to stop worrying about space is that you're giving up clarity. Think of the developer you're going to hire to run this app. When he joins the team and looks at the database, imagine the two scenarios.
There's a Boolean column named Default_value
There's a table with no relationships to anything that's named Default_Values
You ask him to build a new form with a dropdown for 'store'.
In scenario 1 he finds the store table, wires up the dropdown to a simple query of the table and uses the default_value field to select the initial value.
In scenario 2, without some training, how would he know to look for a separate table? Maybe he'd see the table but by the time you're hiring, your datamodel now has hundreds of tables.
Again, a little contrived but the point is salient. Clarity in the database is well, well worth a byte per row.
Technical stuff
I'm not a MySQL guy, but in Oracle a null column at the end of a row takes no additional space. In Oracle I would use a VARCHAR2(1) and let 'T' = default, leaving the others null. That has the effect of using only 1 additional byte total, not 1 per row. YMMV with MySQL; you can pose that question separately if you can't Google the answer.
But the time to worry about that is on millions of rows, not hundreds. Any table which feeds a dropdown will never be big enough to start worrying about extra bytes.
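A minimal sketch of the per-table flag approach (table and column names are assumptions):
CREATE TABLE shops (
  shop_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50) NOT NULL,
  is_default BOOLEAN NOT NULL DEFAULT FALSE
);
-- one round trip fills the dropdown and identifies the default
SELECT shop_id, name, is_default FROM shops ORDER BY name;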
What if you created an XML document and stored it in an XML column? The XML could have a tag per table and a subnode holding the default values.
You could instead create a table with two columns and n rows:
Defaults table:
buyer, David
shop, Ebay
brand, Dell
This way you can add new default values without having to change the table structure.
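A sketch of that key/value layout (table and column names are assumptions):
CREATE TABLE defaults (
  entity VARCHAR(30) NOT NULL PRIMARY KEY,
  default_value VARCHAR(100) NOT NULL
);
INSERT INTO defaults (entity, default_value)
VALUES ('buyer', 'David'), ('shop', 'Ebay'), ('brand', 'Dell');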
You can create a catalog table (some kind of metadata table) containing the default values as strings for the desired table columns. Then you can use the convert function for getting the appropriate value. Below is a sample table definition (Transact-SQL was used):
create table dbo.cat_default_values
(
id_column varchar(30) not null,
id_table varchar(30) not null,
datatype varchar(30) not null,
value varchar(100) not null,
f_creation datetime not null,
usr_creation char(8) null,
primary key clustered (id_column, id_table)
)
declare @defaultValueInt int,
@defaultValueVarchar varchar(30)
select @defaultValueInt = convert(int, value)
from cat_default_values where id_column = 'defColumInteger' and id_table = 'table1'
select @defaultValueVarchar = value
from cat_default_values where id_column = 'defColumVarchar' and id_table = 'table1'
What you are trying to store is not metadata, so I would not invent an external data store for it (coupled with extra code).
I assume you have PK sequence generation logic under your control. Assign a magic number x and insert a record in each table with _id = x as the default value. Then, if you want to show the user the default value, you can handle it uniformly in your queries, or handle it in application logic at insert time. The good thing about this is that you have access to the default value at all times without writing any extra logic, and the logic for maintaining a table's default value can be maintained using the same code (templating ;)
(From the lessons the W3C learned from modeling schema information of XML using DTDs.)
The only catch is that this convention should be made explicit, either through extensive documentation or by enforcing it with a trigger.
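A sketch of the magic-row convention (the reserved id -1 and the table names are assumptions):
-- assumes buyer_id is a signed key where -1 is reserved and never auto-generated
INSERT INTO buyers (buyer_id, name) VALUES (-1, 'David');
-- fetching the default is then uniform across all tables
SELECT name FROM buyers WHERE buyer_id = -1;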

MySQL keyword search across multiple columns

Various incarnations of this question have been asked here before, but I thought I'd give it another shot.
I had a terrible database layout. A single entity (widget) was split into two tables:
CREATE TABLE widgets (widget_id int(10) NOT NULL auto_increment PRIMARY KEY)
CREATE TABLE widget_data (
widget_id int(10),
field ENUM('name','size','color','brand'),
data TEXT)
This was less than ideal. If I wanted to find widgets of a specific name, color, and brand, I had to do a three-way join on the widget_data table. So I converted to the reasonable table layout:
CREATE TABLE widgets (widget_id int(10) NOT NULL auto_increment PRIMARY KEY,
name VARCHAR(32), size INT(3), color VARCHAR(16), brand VARCHAR(32))
This makes most queries much better. But it makes searching harder. It used to be that if I wanted to search widgets for, say, '%black%', I would just SELECT * FROM widget_data WHERE data LIKE '%black%'. This would give me all instances of widgets that are black in color, or are made by blackwell industries, or whatever. I would even know exactly which field matched, and could show that to my user.
How do I execute a similar search using the new table layout? I could of course do WHERE name LIKE '%black%' OR size LIKE '%black%'... but that seems clunky, and I still don't know which fields matched. I could run a separate query for each column I want to match on, which would give me all matches and how they matched, but that would be a performance hit. Any ideas?
You can include parts of the WHERE expression among the selected columns. For example:
SELECT
*,
(name LIKE '%black%') AS name_matched,
(size LIKE '%black%') AS size_matched
FROM widget_data
WHERE name LIKE '%black%' OR size LIKE '%black%'...
Then check the value of name_matched on the script side.
Not sure how it will affect performance. Feel free to test it before going to production.
You have two conflicting requirements. You want to search as if all your data is in a single field, but you also want to identify which specific field was matched.
There's nothing wrong with your WHERE name LIKE '%black%' OR size LIKE '%black%'... expression. It's a perfectly valid search on the table as you have defined it. Why not just check the results in code to see which one matched? It's a minimal overhead.
If you want a cleaner syntax for your SQL then you could create a view on the table, adding an extra field which consists of concatenating the other fields:
CREATE VIEW extra_widget_data AS
SELECT name, size, color, brand,
CONCAT(name, size, color, brand) AS all_fields
FROM widget_data;
Then you'd have to add an index on this field, which requires more space, CPU time to maintain etc. I don't think it's worth it.
You probably want to look into MySQL's full-text search capability, which enables you to match against multiple columns of varchar type.
http://dev.mysql.com/doc/refman/5.1/en/fulltext-search.html
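A minimal sketch (the index name is an assumption; note that in MySQL 5.1 FULLTEXT indexes require MyISAM tables and cover CHAR/VARCHAR/TEXT columns only, so the INT size column is left out; also, MATCH searches whole words, unlike LIKE '%black%'):
ALTER TABLE widgets ADD FULLTEXT INDEX ft_widgets (name, color, brand);
SELECT * FROM widgets
WHERE MATCH(name, color, brand) AGAINST ('black');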

How can I convert a price field which is currently varchar to a decimal so my prices will order correctly?

I am using MYSQL.
I have a varchar field which I incorrectly used for a price. Because the values sort as strings, anything over 1000 now drops to the bottom of the list.
I need to convert this price field in an existing, populated database from varchar to decimal, I guess?
Any help would be appreciated.
Simply use the ALTER TABLE statement.
If for example the table is called 'products' and the field is called 'product_price' you could simply use:
ALTER TABLE products MODIFY COLUMN product_price DOUBLE;
NB: As with anything, I'd be very tempted to make a backup of the data (via mysqldump) prior to performing this operation - it'll take seconds and it's always better to be safe rather than sorry. :-)
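Since the question asks for a decimal, an exact type is arguably the better fit for prices; a sketch (the 10,2 precision is an assumption, and any values that don't parse as numbers will be truncated, with warnings):
ALTER TABLE products MODIFY COLUMN product_price DECIMAL(10,2);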