2 auto_increment fields where 1 resets - mysql

I'm about to code a couple of MySQL tables to handle invoices.
My plan is to break this into 3 main tables:
create table invoice (
    id int unsigned not null auto_increment primary key,
    client int unsigned not null,       -- foreign key
    created date not null
    -- etc...
);
create table products (
    id int unsigned not null auto_increment primary key
    -- product info...
);
create table invoice_products (
    invoice_id int unsigned not null,   -- references invoice.id
    `row` int unsigned not null,        -- resetting auto_increment <-- THIS!!!
    product_id int unsigned not null,   -- references products.id
    product_quantity int not null,
    primary key (invoice_id, `row`)
);
The dilemma is that when a new invoice is created, invoice.id is auto-incremented, and this is as it's supposed to be. What I want is for invoice_products.row to start from 1 for every new invoice.
So the row auto_increment will start from 1 for every new invoice, but if new rows are added to an existing invoice id, the row id will continue from where it left off.
Any recommendations on how to accomplish this?
(I hope the short version of the code is enough for you to understand the dilemma)
Thanks in advance for any advice!
EDIT: Clarification: All tables in the database are InnoDB (because of heavy use of foreign key constraints)

All you have to do is change your table to the MyISAM engine. But note that MyISAM doesn't support transactions or foreign keys.
Anyway, here's an example of how it would work (quoting the manual):
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp    | id | name    |
+--------+----+---------+
| fish   |  1 | lax     |
| mammal |  1 | dog     |
| mammal |  2 | cat     |
| mammal |  3 | whale   |
| bird   |  1 | penguin |
| bird   |  2 | ostrich |
+--------+----+---------+
In this case (when the AUTO_INCREMENT column is part of a multiple-column index), AUTO_INCREMENT values are reused if you delete the row with the biggest AUTO_INCREMENT value in any group. This happens even for MyISAM tables, for which AUTO_INCREMENT values normally are not reused.
If the AUTO_INCREMENT column is part of multiple indexes, MySQL will generate sequence values using the index that begins with the AUTO_INCREMENT column, if there is one. For example, if the animals table contained indexes PRIMARY KEY (grp, id) and INDEX (id), MySQL would ignore the PRIMARY KEY for generating sequence values. As a result, the table would contain a single sequence, not a sequence per grp value.
As you can see, your primary key and auto_increment setup is right already. Just change to MyISAM.
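A minimal sketch of that change, using the invoice_products table from the question (any foreign keys on the table would have to be dropped first, since MyISAM won't enforce them):

ALTER TABLE invoice_products ENGINE=MyISAM;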
If you don't want to use MyISAM, you can also calculate it while selecting.
SELECT
    ip.*,
    @row := IF(@prev_inv = ip.invoice_id, @row + 1, 1) AS `row`,
    @prev_inv := ip.invoice_id
FROM invoice_products ip,
     (SELECT @row := 0, @prev_inv := NULL) vars
ORDER BY ip.invoice_id;
And the third possibility is, of course, to calculate it outside the database. That I'll leave to you :)

For this you can't use an auto_increment column. You should query MAX(`row`) + 1 for the invoice in question before each insert.
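A sketch of that, using the invoice_products table from the question (the literal values 42, 7 and 3 are made up; wrap it in a transaction if concurrent inserts on the same invoice are possible):

INSERT INTO invoice_products (invoice_id, `row`, product_id, product_quantity)
SELECT 42, COALESCE(MAX(`row`), 0) + 1, 7, 3
FROM invoice_products
WHERE invoice_id = 42;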

Related

getting the total consumption

If I have the following tables:
create table rar (
rar_id int(11) not null auto_increment primary key,
rar_name varchar (20));
create table data_link(
id int(11) not null auto_increment primary key,
rar_id int(11) not null,
foreign key(rar_id) references rar(rar_id));
create table consumption (
id int(11) not null,
foreign key(id) references data_link(id),
consumption int(11) not null,
total_consumption int(11) not null,
date_time datetime not null);
I want the total consumption to be all the consumption field values added up. Is there a way to accomplish this through triggers? Or do I need to read all the values plus the latest value each time, sum them up, and then update the table? Is there a better way to do this?
---------------------------------------------------
| id | consumption | total_consumption | date_time  |
|====|=============|===================|============|
|  1 | 5           | 5                 | 09/09/2013 |
|  2 | 5           | 10                | 10/09/2013 |
|  3 | 7           | 17                | 11/09/2013 |
|  4 | 3           | 20                | 11/09/2013 |
---------------------------------------------------
Just wondering if there is a cleaner, faster way of getting the total each time a new entry is added?
Or perhaps this is bad design? Would it be better to have something like:
SELECT SUM(consumption) FROM consumption WHERE date_time BETWEEN '2013-09-09' AND '2013-09-11'
in order to get this type of information? Would doing this be the best option? The only problem I see with this is that the same command would be re-run multiple times, and since the result isn't stored anywhere, it would be recomputed on every request. That could be inefficient when you are re-generating the same report several times over for viewing purposes; if the total is already calculated, all you have to do is read the data rather than computing it again and again. Thoughts?
any help would be appreciated...
If you've got an index on total_consumption, it won't noticeably slow the query down to have a nested select of MAX(total_consumption) as part of the insert, since the max value will already be stored in the index.
eg.
INSERT INTO `consumption` (consumption, total_consumption)
VALUES (8,
    -- the derived table works around MySQL's restriction on selecting
    -- from the insert target in a subquery; COALESCE handles the very
    -- first insert, when the table is empty
    8 + (SELECT COALESCE(MAX(total_consumption), 0)
         FROM (SELECT total_consumption FROM consumption) AS prev)
);
I'm not sure how you're using the id column, but you can easily add criteria to the nested select to control this.
If you do need to put a WHERE on the nested select, make sure you have an index across the fields you use followed by the total_consumption column. For example, if you make it ... WHERE id = x, you'll need an index on (id, total_consumption) for it to work efficiently, as sketched below.
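For instance, a sketch of such an index (the name id_total_idx is made up):

ALTER TABLE consumption ADD INDEX id_total_idx (id, total_consumption);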
If you MUST have a trigger, it should be like this:
DELIMITER $$
CREATE TRIGGER `chg_consumption` BEFORE INSERT ON `consumption`
FOR EACH ROW
BEGIN
    -- COALESCE covers the very first row, when MAX() returns NULL
    SET NEW.total_consumption = (SELECT COALESCE(MAX(total_consumption), 0)
                                 FROM consumption) + NEW.consumption;
END$$
DELIMITER ;
P.S. And instead of total_consumption int(11) not null, make the column nullable or give it a DEFAULT 0.
EDIT:
Improved SUM(total_consumption) to MAX(total_consumption), as @calcinai suggested.

Inserting New Items Into an Already Populated Table

I have a table with various fields including a primary key, id, which is auto-incrementing:
+-------+-------------+------+-----+---------+----------------+
| Field | Type        | Null | Key | Default | Extra          |
+-------+-------------+------+-----+---------+----------------+
| id    | tinyint(11) | NO   | PRI | NULL    | auto_increment |
+-------+-------------+------+-----+---------+----------------+
The table is already populated with 114 items:
mysql> select count(*) as cnt from beer;
+-----+
| cnt |
+-----+
| 114 |
+-----+
And I am trying to insert a group of new items into the table. I am not explicitly inserting an id key. Here's a sample query:
mysql> INSERT INTO beer (name, type, alcohol_by_volume, description, image_url)
VALUES('Test Ale', 1, '4.6', '', 'https://untappd.s3.amazonaws.com/site/assets/images/temp/badge-beer-default.png');
I get the following error when attempting to manually insert that query (the insertion is actually done with a PHP script to the same results):
ERROR 1062 (23000): Duplicate entry '127' for key 1
What's going on? I thought the id would automatically increment upon insertion. I should note that the first 13 entries are blank/null for some reason, and the last key is currently 127. (it's not my table -- I'm just writing the script).
TINYINT is not a good choice for an auto_increment primary key: its signed range is only -128 to 127 (the (11) is just a display width), so your column is already at its maximum value of 127. TINYINT is normally used as a flag; you need an unsigned INT instead.
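A sketch of the fix, widening the column in place:

ALTER TABLE beer MODIFY id INT UNSIGNED NOT NULL AUTO_INCREMENT;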
Try resetting the auto increment of primary key manually using this:
ALTER TABLE `beer` AUTO_INCREMENT = 128;

MySQL I/O bound InnoDB query optimization problem without setting innodb_buffer_pool_size to 5GB

I got myself into a MySQL design scalability issue. Any help would be greatly appreciated.
The requirements:
Storing users' SOCIAL_GRAPH and USER_INFO about each user in their social graph. Many concurrent reads and writes per second occur. Dirty reads acceptable.
Current design:
We have 2 (relevant) tables. Both InnoDB for row locking, instead of table locking.
USER_SOCIAL_GRAPH table that maps a logged in (user_id) to another (related_user_id). PRIMARY key composite user_id and related_user_id.
USER_INFO table with information about each related user. PRIMARY key is (related_user_id).
Note 1: No relationships defined.
Note 2: Each table is now about 1GB in size, with 8 million and 2 million records, respectively.
Simplified table SQL creates:
CREATE TABLE `user_social_graph` (
`user_id` int(10) unsigned NOT NULL,
`related_user_id` int(11) NOT NULL,
PRIMARY KEY (`user_id`,`related_user_id`),
KEY `user_idx` (`user_id`)
) ENGINE=InnoDB;
CREATE TABLE `user_info` (
`related_user_id` int(10) unsigned NOT NULL,
`screen_name` varchar(20) CHARACTER SET latin1 DEFAULT NULL,
[... and many other non-indexed fields irrelevant]
`last_updated` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
PRIMARY KEY (`related_user_id`),
KEY `last_updated_idx` (`last_updated`)
) ENGINE=InnoDB;
my.cnf values set:
innodb_buffer_pool_size = 256M
key_buffer_size = 320M
Note 3: Available memory is 1GB; these 2 tables total 2GB; other InnoDB tables total 3GB.
Problem:
The following example SQL statement, which needs to access all records found, takes 15 seconds to execute (!!) and num_results = 220,000:
SELECT SQL_NO_CACHE COUNT(u.related_user_id)
FROM user_info u LEFT JOIN user_social_graph u2 ON u.related_user_id = u2.related_user_id
WHERE u2.user_id = '1'
AND u.related_user_id = u2.related_user_id
AND (NOT (u.related_user_id IS NULL));
For a user_id with a count of 30,000, it takes about 3 seconds (!).
EXPLAIN EXTENDED for the 220,000 count user. It uses indices:
+----+-------------+-------+--------+------------------------+----------+---------+--------------------+--------+----------+--------------------------+
| id | select_type | table | type   | possible_keys          | key      | key_len | ref                | rows   | filtered | Extra                    |
+----+-------------+-------+--------+------------------------+----------+---------+--------------------+--------+----------+--------------------------+
|  1 | SIMPLE      | u2    | ref    | user_user_idx,user_idx | user_idx | 4       | const              | 157320 |   100.00 | Using where              |
|  1 | SIMPLE      | u     | eq_ref | PRIMARY                | PRIMARY  | 4       | u2.related_user_id |      1 |   100.00 | Using where; Using index |
+----+-------------+-------+--------+------------------------+----------+---------+--------------------+--------+----------+--------------------------+
How do we speed these up without setting innodb_buffer_pool_size to 5GB?
Thank you!
The user_social_graph table is not indexed correctly !!!
You have this:
CREATE TABLE user_social_graph
(user_id int(10) unsigned NOT NULL,
related_user_id int(11) NOT NULL,
PRIMARY KEY (user_id,related_user_id),
KEY user_idx (user_id))
ENGINE=InnoDB;
The second index is redundant, since it duplicates the first column of the PRIMARY KEY, user_id. Meanwhile, you are joining the related_user_id column over to the user_info table, and that is the column that needs to be indexed.
Change user_social_graph as follows:
CREATE TABLE user_social_graph
(user_id int(10) unsigned NOT NULL,
related_user_id int(11) NOT NULL,
PRIMARY KEY (user_id,related_user_id),
UNIQUE KEY related_user_idx (related_user_id,user_id))
ENGINE=InnoDB;
This should change the EXPLAIN plan. Keep in mind that index column order matters, depending on the way you query the columns.
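For instance (a sketch; the id 123 is made up), the new index lets lookups that start from related_user_id, which is exactly what the join in the question does, be served from the index instead of a scan:

SELECT user_id
FROM user_social_graph
WHERE related_user_id = 123;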
Give it a Try !!!
What is your MySQL version? Its manual contains important information for speeding up statements and code in general.
You could also change your paradigm to a data warehouse capable of managing terabyte-sized tables, and migrate your legacy MySQL database to the new platform with a free tool or application. This is one example: http://www.infobright.org/Downloads/What-is-ICE/ and there are many others (free and commercial).
PostgreSQL is not commercial either, and there are a lot of tools to migrate MySQL to it!

Problem with querying an array (MySQL/PHP)

I have an array of strings input by the user from a dynamic form. I want to store each value of the array in a table, along with an itemid (which is the same for all of them).
My query currently inserts the whole array into one row's text_value using implode.
Instead of looping through the array and running a query for each value, is there a way to insert each array value together with the itemId in a single query?
I was thinking of perhaps adding another dimension to the array with the itemId? Is this possible?
current query:
$query = "INSERT INTO answers_tb (item_id, text_value)VALUES('$itemid','".implode(',', $answers) . "')";
here is print_r of array:
Array ( [0] => option 1 [1] => option 2 [2] => option 3 [3] => option 4 )
here is the table structure I am inserting to (item_id is a foreign key):
**Field**  | **Type**    | **Attributes**
answer_id  | int(11)     | PRIMARY KEY
item_id    | int(11)     | FOREIGN KEY
text_value | varchar(50) |
the referenced table:
**Field**       | **Type**     | **Attributes**
item_id         | int(11)      | PRIMARY KEY
item_type       | tinyint(1)   |
user_id         | int(11)      |
unit_id         | int(11)      |
question_text   | varchar(100) |
question_text_2 | varchar(100) |
item_desc       | varchar(25)  |
item_name       | varchar(25)  |
thanks
If you structure your table as item_id, a string rather than item_id, a long concatenated string, you could do the insert like this:
$id = 2;
$valueclause = function($string) use ($id) { return "('$id','$string')"; };
// array_map applies the closure to every answer and returns the transformed array
$arr = array_map($valueclause, $arr);
$values = implode(',', $arr);
$query = "INSERT INTO answers_tb (item_id, text_value) VALUES $values";
ETA: It appears that it might be useful to have a primary key that combines an auto_increment and another column. So given your table structure of:
**Field**  | **Type**
answer_id  | int(11)
item_id    | int(11)
text_value | varchar(50)
you might consider indexing like this:
CREATE TABLE answers_tb (
    item_id INT NOT NULL,
    answer_id INT NOT NULL AUTO_INCREMENT,
    text_value CHAR(50) NOT NULL,
    PRIMARY KEY (item_id, answer_id)  -- note the 2 columns in the key
) ENGINE=MyISAM;  -- per-group AUTO_INCREMENT requires MyISAM
Then when you insert like this:
INSERT INTO answers_tb (item_id, text_value)
VALUES (1,'thing'), (1,'foo'),
(17,'blah'),
(6,'beebel'), (6,'bar');
your resulting data will look like this:
item_id, answer_id, text_value
1, 1, thing
1, 2, foo
17, 1, blah
6, 1, beebel
6, 2, bar
It sounds like you would be better served with a different table design.
Instead of answers_tb (item_id, text_value), use answers_tb (item_id, offset, value).
(The primary key would be (item_id, offset).)
Then you would find it much easier to query the table.
EDIT: You posted the following table structure:
**Field**  | **Type**    | **Attributes**
answer_id  | int(11)     | PRIMARY KEY
item_id    | int(11)     | FOREIGN KEY
text_value | varchar(50) |
If I understand the table design right, your design works like this:
Each row of the referenced table (let's call it questions) represents a single question asked by a user of your application. It has a question ID, and the ID of the user who posted it.
Each row of the table answers_tb represents the set of all answers to the question in the row of table questions referenced by item_id. Individual answers are distinguished only by the order in which they appear within the single concatenated text_value entry.
What I'm saying is that this design for answers_tb doesn't work very well, for the reason you've identified: it's difficult to query against the answers stored in the "array" column. That is why this design is problematic. A better design would be as follows:
**Field**     | **Type**
item_id       | int(11)
answer_number | int
text_value    | varchar(50)
wherein item_id is still a foreign key, but the primary key is (item_id, answer_number). In this design, each row of the table, rather than containing the set of all answers to the corresponding question, would contain just one answer to that question. The rows are distinguished from one another by the different values in answer_number, but you know which question each row corresponds to by the value in item_id. This design is much easier to query against.
It is a general rule that you ought not to try to store an array of data in a column, because it makes it problematic to search against. In some cases it makes sense to break that rule, but you have to be able to recognise when you are in such a case. In this case, you want to search by the stored values, so you should not do it.
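For instance, with the one-answer-per-row design, both directions of lookup become plain indexed queries (a sketch; the literal 42 and 'option 2' are made up):

-- all answers to one question, in order
SELECT answer_number, text_value
FROM answers_tb
WHERE item_id = 42
ORDER BY answer_number;

-- every question that received a given answer
SELECT item_id
FROM answers_tb
WHERE text_value = 'option 2';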

How do I prevent MySQL from auto-incrementing the Primary Key while using ON DUPLICATE KEY UPDATE when the duplicate is a different unique column?

Consider the following table:
+-------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------------+------+-----+---------+----------------+
| vendor_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| vendor_name | varchar(100) | NO | UNI | NULL | |
| count | int(10) unsigned | NO | | 1 | |
+-------------+------------------+------+-----+---------+----------------+
I have the following MySQL query:
INSERT INTO `table`
(`vendor_name`)
VALUES
('foobar') ON DUPLICATE KEY UPDATE `count` = `count` + 1
The intent of this query is to insert a new vendor name into the table, and in case the vendor name already exists, the count column should be incremented by 1. This works; however, the table's auto_increment counter is still bumped by every such statement, so primary key values get used up even when no new row is inserted. How can I prevent MySQL from auto-incrementing the primary key in these cases? Is there a way to do this with one query?
Thank you.
This works; however, the auto_increment counter is still bumped even when an existing row is updated. How can I prevent MySQL from auto-incrementing the primary key in these cases?
By using an UPDATE statement when the value already exists:
IF EXISTS (SELECT NULL
           FROM TABLE
           WHERE vendor_name = $vendor_name) THEN
    UPDATE TABLE
       SET count = count + 1
     WHERE vendor_name = $vendor_name;
ELSE
    INSERT INTO TABLE
        (vendor_name)
    VALUES
        ($vendor_name);
END IF;
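Outside a stored program, the same check-then-act logic can be written as two plain statements (a sketch; the table name vendors follows the REPLACE example below, 'foobar' stands in for the vendor name, and under concurrency the pair should run in one transaction):

UPDATE vendors SET count = count + 1 WHERE vendor_name = 'foobar';

-- only insert if the UPDATE above matched no row
INSERT INTO vendors (vendor_name)
SELECT 'foobar' FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM vendors WHERE vendor_name = 'foobar');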
I tried the alternative to ON DUPLICATE KEY UPDATE, REPLACE INTO:
REPLACE INTO vendors SET vendor_name = 'foobar', COUNT = COUNT + 1
It updates the count, but it also changes the vendor_id (REPLACE deletes the old row and inserts a new one), so it's worse...
The database and the data don't care whether the numbers are sequential, only that the values are unique. If you can live with that, I'd use the ON DUPLICATE KEY UPDATE syntax, though I admit the behaviour is weird (understandable considering it's using an INSERT statement).
I think this might do it. But it's very much against the principles of Daoism - you're really going against the grain.
There is probably a better solution.
INSERT INTO `table`
(`vendor_name`)
VALUES
('foobar') ON DUPLICATE KEY UPDATE `count` = `count` + 1, `vendor_id`=`vendor_id`-1