I have an array of strings input by the user from a dynamic form. I want to store each value of the array in a table along with an itemid (which is the same for all).
My query currently inserts the whole array into one row's text_value using implode.
Is there a way, instead of looping through the array and running a query for each value, to insert each array value with the itemId in a single query?
I was thinking perhaps of adding another dimension to the array with the itemId? Is this possible?
current query:
$query = "INSERT INTO answers_tb (item_id, text_value) VALUES ('$itemid','" . implode(',', $answers) . "')";
Here is a print_r of the array:
Array ( [0] => option 1 [1] => option 2 [2] => option 3 [3] => option 4 )
here is the table structure I am inserting to (item_id is a foreign key):
**Field** | **Type** | **Attributes**
answer_id | int(11) PRIMARY KEY
item_id | int(11) FOREIGN KEY
text_value | varchar(50)
the referenced table:
**Field** | **Type** | **Attributes**
item_id | int(11) | PRIMARY KEY
item_type | tinyint(1) |
user_id | int(11) |
unit_id | int(11) |
question_text | varchar(100)
question_text_2 | varchar(100)
item_desc | varchar(25)
item_name | varchar(25)
thanks
If you structure your table as (item_id, a string) rather than (item_id, a long concatenated string), you could do the insert like this:
$id=2;
// array_walk takes the array first; the callback must take $string by reference to modify it
$valueclause = function (&$string) use ($id) { $string = "('$id','$string')"; };
array_walk($arr, $valueclause);
$values = implode(',', $arr);
$query= "INSERT INTO answers_tb (item_id, text_value) VALUES $values";
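A variant of the same idea using array_map; this is only a sketch, with example values standing in for the form input, and addslashes is just a placeholder (real code should use prepared statements or mysqli_real_escape_string):

```php
// example values standing in for the real form input
$itemid  = 2;
$answers = ['option 1', 'option 2', 'option 3'];

// build one "(id,'value')" group per answer; addslashes is only a
// placeholder for proper escaping / prepared statements
$values = implode(',', array_map(
    fn($s) => sprintf("(%d,'%s')", $itemid, addslashes($s)),
    $answers
));

$query = "INSERT INTO answers_tb (item_id, text_value) VALUES $values";
// $query: INSERT INTO answers_tb (item_id, text_value)
//         VALUES (2,'option 1'),(2,'option 2'),(2,'option 3')
```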
ETA: It appears that it might be useful to have a primary key that combines an auto_increment and another column. So given your table structure of:
**Field** | **Type**
answer_id | int(11)
item_id | int(11)
text_value | varchar(50)
you might consider indexing like this:
CREATE TABLE answers_tb(
item_id INT NOT NULL,
answer_id INT NOT NULL AUTO_INCREMENT,
text_value CHAR(50) NOT NULL,
PRIMARY KEY (item_id, answer_id) -- note the 2 columns in the key
);
Then when you insert like this:
INSERT INTO answers_tb (item_id, text_value)
VALUES (1,'thing'), (1,'foo'),
(17,'blah'),
(6,'beebel'), (6,'bar');
your resulting data will look like this:
item_id, answer_id, text_value
1, 1, thing
1, 2, foo
17, 1, blah
6, 1, beebel
6, 2, bar
It sounds like you would be better served with a different table design.
Instead of answers_tb (item_id, text_value), use answers_tb (item_id, offset, value).
(The primary key would be (item_id, offset).)
Then you would find it much easier to query the table.
EDIT: You posted the following table structure:
**Field** | **Type** | **Attributes**
answer_id | int(11) PRIMARY KEY
item_id | int(11) FOREIGN KEY
text_value | varchar(50)
If I understand the table design right, your design works like this:
Each row of the referenced table (let's call it questions) represents a single question asked by a user of your application. It has a question ID, and the ID of the user who posted it.
Each row of the table answers_tb represents the set of all answers to the question in the row of table questions referenced by item_id. Answers are distinguished by the order in which they appear in the column entry.
What I'm saying is that this design for answers_tb doesn't work very well, for the reason you've identified: it's difficult to query against the answers stored in the "array" column. That is why this design is problematic. A better design would be as follows:
**Field** | **Type**
item_id | int(11)
answer_number | int
text_value | varchar(50)
wherein item_id is still a foreign key, but the primary key is (item_id, answer_number). In this design, each row of the table, rather than containing the set of all answers to the corresponding question, would contain just one answer to that question. The rows are distinguished from one another by the different values in answer_number, but you know which question each row corresponds to by the value in item_id. This design is much easier to query against.
It is a general rule that you ought not to try to store an array of data in a column, because it makes it problematic to search against. In some cases it makes sense to break that rule, but you have to be able to recognise when you are in such a case. In this case, you want to search by the stored values, so you should not do it.
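Under that (item_id, answer_number, text_value) design, the multi-row INSERT from the question's array might be built like this; a sketch with made-up values, again with addslashes only standing in for prepared statements:

```php
$itemid  = 7;                        // hypothetical item id
$answers = ['option 1', 'option 2']; // user-submitted answers

$rows = [];
foreach ($answers as $n => $text) {
    // answer_number starts at 1 for each item
    $rows[] = sprintf("(%d,%d,'%s')", $itemid, $n + 1, addslashes($text));
}
$query = "INSERT INTO answers_tb (item_id, answer_number, text_value) VALUES "
       . implode(',', $rows);
```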
I have created 3 tables: item, shop and stock, plus a stored procedure called inserting, which inserts into the shop table a given item from the item table:
CREATE TABLE item(
i_id int(11) auto_increment,
i_name varchar(255) not null,
primary key(i_id));
CREATE TABLE shop(
s_id int(11) auto_increment,
s_name varchar(255) not null,
s_item int(11) not null,
s_qty int(11) not null,
primary key(s_id),
foreign key(s_item) references item(i_id)
);
CREATE TABLE stock(
item int(11) not null,
total int(11) not null
);
CREATE PROCEDURE inserting (
IN shop_name varchar(225),
IN shop_item int(11),
IN shop_qty int(11)
)
BEGIN
INSERT INTO shop(s_name, s_item, s_qty)
VALUES
(shop_name, shop_item, shop_qty);
INSERT INTO STOCK(item, total)
SELECT s_item, SUM(s_qty) FROM shop GROUP BY s_item
ON DUPLICATE KEY UPDATE
item = VALUES(item),
total = VALUES(total);
END
The first insert works, but the second insert populates the stock table with extra rows, which I'm not expecting.
I have tried using REPLACE INTO and ON DUPLICATE KEY UPDATE to get a single row, but the results still come out as the following:
SELECT * FROM `stock`;
+------+-------+
| ITEM | TOTAL |
+------+-------+
| 1 | 5 |
| 1 | 9 |
+------+-------+
What I am trying to achieve is to group the ITEM column and sum TOTAL into a single row per item.
What am I doing wrong here, or what is missing from the query?
thanks.
For the ON DUPLICATE KEY syntax to work as expected, you need a unique or primary key constraint on the target table, so the database can identify the "duplicate" rows. The same goes for the REPLACE syntax.
But your stock table does not have a primary key. Consider the following DDL instead:
CREATE TABLE stock(
item int(11) primary key,
total int(11) not null
);
Side note: there is no need to reassign column item in the on duplicate key clause, since it's what is used to identify the conflict in the first place. This is good enough:
INSERT INTO STOCK(item, total)
SELECT s_item, SUM(s_qty) FROM shop GROUP BY s_item
ON DUPLICATE KEY UPDATE total = VALUES(total);
If you run this once, it should work as you expected. But subsequent runs may produce duplicate ITEM rows for the reason @gmb gave: the table must have a UNIQUE index or PRIMARY KEY. See more details here:
https://dev.mysql.com/doc/refman/5.7/en/insert-on-duplicate.html
I need a searchable database of messages to which I can attach tags, and for which I can see whether, when, and where each message has been used.
For example, I have the following message: "Hi! I live in Stockholm and am looking for a handyman. I saw in your profile that you own a toolbox, and since I don't own any tools myself, except for a screwdriver, I was hoping that hiring you would be best since you can bring your own tools! Please contact me ASAP!"
To this message, I want to attach the tags "Stockholm, handyman, toolbox, screwdriver and tools".
When searching the database, I wish to be able to find all messages containing the tags "Stockholm" and "Toolbox".
If I then decide to use the message above, I want to be able to record that it was used 2018-02-11 11:52 under the name "John Doe" at the site "findahandyman.site".
Now, this is all fictional, I will use completely different messages with other tags, places etc. But the scenario is real. However, I am not sure what way to do this best would be.
I am thinking like this:
tbl-tags
----------
|id | tag |
----------
tbl-messages
--------------
| id | message |
--------------
tbl-used
-------------------------
| id | date | name | site |
-------------------------
And then build a view where I can search the messages, registered with the tags #1 #2 #3 etc.
Am I thinking right? If I am, how can I relate them all and how to build the view. If I am not, how should I think? And also, how to relate them all and build the view according to your suggestion(s)?
In my opinion you would need to do this:
1.) make the parent tables like this:
create table tbl_tags
(
tagName VARCHAR(50) NOT NULL,
dateAdded datetime NULL,
primary key(tagName)
) ENGINE=InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci;
Create the tbl_messages table using an id as the primary key (in tbl_tags above, tagName is the primary key so that tag names cannot be duplicated):
create table tbl_messages
(
message_ID INT(11) NOT NULL AUTO_INCREMENT,
message text NOT NULL,
dateAdded datetime NULL,
primary key(message_ID)
) ENGINE=InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci;
For tbl_used I would make a mapping table with three columns: the message_ID (a foreign key from tbl_messages), the date and time the message was used, and an id as the primary key, to avoid errors if multiple users use the same message at the same time.
create table tbl_used
(
used_ID INT(11) NOT NULL AUTO_INCREMENT,
message_ID INT(11) NOT NULL,
timeOfUse dateTime NOT NULL,
PRIMARY KEY (`used_ID`),
FOREIGN KEY (`message_ID`) REFERENCES `tbl_messages` (`message_ID`) ON UPDATE CASCADE
) ENGINE=InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci;
2.) create another mapping table to relate the messages and tags tables to each other:
create table tbl_messages_x_tbl_tags
(
message_ID INT(11) NOT NULL,
tagName VARCHAR(50) NOT NULL,
PRIMARY KEY (`message_ID`, `tagName`),
FOREIGN KEY (`message_ID`) REFERENCES `tbl_messages` (`message_ID`) ON UPDATE CASCADE,
FOREIGN KEY (`tagName`) REFERENCES `tbl_tags` (`tagName`) ON UPDATE CASCADE
) ENGINE=InnoDB CHARACTER SET utf8 COLLATE utf8_general_ci;
You will notice that you will be unable to populate the foreign key columns in the mapping tables with arbitrary content. You can only insert valid values from their respective parent tables. That means your mapping table data is consistent.
To fill the tables, you first need to fill the parent tables (tbl_messages, tbl_tags), then you can populate the mapping tables (tbl_messages_x_tbl_tags, tbl_used).
On insertion of a new message you would simply check for new tags and insert new tags into the table tbl_tags if they are not already there. Then add the message into tbl_messages and populate the mapping table tbl_messages_x_tbl_tags with (message_ID, tagName) rows.
After that, on each use of the message you can simply write to the database:
mysqli_query($connection, "INSERT INTO tbl_used (message_ID,timeOfUse) VALUES($msgID, NOW())");
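The "check for new tags" step benefits from normalising the incoming tags first; normalizeTags below is a hypothetical helper, not part of any library:

```php
// lowercase, trim, and de-duplicate tags before comparing them against
// tbl_tags (use mb_strtolower instead if you need multibyte input)
function normalizeTags(array $tags): array
{
    $clean = array_map(fn($t) => strtolower(trim($t)), $tags);
    // drop empty strings and duplicates, then reindex
    return array_values(array_unique(array_filter($clean, fn($t) => $t !== '')));
}

$tags = normalizeTags(['Stockholm', ' stockholm ', 'Toolbox', '']);
// $tags is ['stockholm', 'toolbox']
```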
tbl-tags
----------
|id | tag |
----------
tbl-message-tags
----------------------
| id | tag_id | msg_id |
----------------------
tbl-messages
--------------
| id | message |
--------------
tbl-used
-------------------------
| id | date | name | site |
-------------------------
Creating tables (if you want, you can add constraints):
create table tbl_tags(id mediumint not null auto_increment, tag varchar(255) not null, primary key(id));
create table tbl_messages(id mediumint not null auto_increment, message text not null, primary key(id));
create table tbl_message_tags(tag_id mediumint not null, msg_id mediumint not null, primary key(tag_id, msg_id));
Insert some test data:
insert into tbl_tags(tag) values ('tag0'), ('tag1');
insert into tbl_messages(message) values ('msg1'), ('msg2'), ('msg3'), ('msg4'), ('msg5');
insert into tbl_message_tags(tag_id, msg_id) values (1, 1), (2, 1), (1, 2), (2, 3);
After this you can run a query like this:
select id, tag from tbl_tags join tbl_message_tags on id = tag_id where msg_id = 1;
The result will be:
+----+------+
| id | tag  |
+----+------+
| 1  | tag0 |
| 2  | tag1 |
+----+------+
Also, you need to add a message identifier field to tbl-used so each row is linked to its message.
Another variant (not preferable):
You only need tbl-tags if you want to reuse the same tags across many messages (after receiving a message, you can normalise the case, split it, and append only the new tags into tbl-tags). If you don't need that kind of optimisation, you can use an "array field" in the message table (in MySQL you can simulate one as described in: How can I simulate an array variable in MySQL?).
I'm about to code a couple of MySQL tables to handle invoices.
My plan is to break this into 3 main tables:
create table invoice
(
id auto_increment,
client (foreign key),
created (date),
*etc*...
)
create table products
(
id auto_increment
*product info*...
)
create table invoice_products
(
invoice_id (references invoice.id)
row (resetting auto_increment) <--THIS!!!
product_id(references products.id)
product_quantity INT
primary key (invoice_id,row)
)
The dilemma is that when a new invoice is created, invoice.id is auto-incremented, and this is as it's supposed to be. What I want is for invoice_products.row to start from 1 for every new invoice.
So the row auto_increment will start from 1 for every new invoice, but if new rows are added to an existing invoice id, the row id will continue from where it left off.
Any recommendations on how to accomplish this?
(I hope the short version of the code is enough for you to understand the dilemma)
Thanks in advance for any advice!
EDIT: Clarification: All tables in the database are InnoDB (because of heavy use of foreign key constraints)
All you have to do is change your table to the MyISAM engine. But note that MyISAM supports neither transactions nor foreign keys.
Anyway, here's an example how it would work (quoting the manual):
For MyISAM and BDB tables you can specify AUTO_INCREMENT on a secondary column in a multiple-column index. In this case, the generated value for the AUTO_INCREMENT column is calculated as MAX(auto_increment_column) + 1 WHERE prefix=given-prefix. This is useful when you want to put data into ordered groups.
CREATE TABLE animals (
grp ENUM('fish','mammal','bird') NOT NULL,
id MEDIUMINT NOT NULL AUTO_INCREMENT,
name CHAR(30) NOT NULL,
PRIMARY KEY (grp,id)
) ENGINE=MyISAM;
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
Which returns:
+--------+----+---------+
| grp | id | name |
+--------+----+---------+
| fish | 1 | lax |
| mammal | 1 | dog |
| mammal | 2 | cat |
| mammal | 3 | whale |
| bird | 1 | penguin |
| bird | 2 | ostrich |
+--------+----+---------+
In this case (when the AUTO_INCREMENT column is part of a multiple-column index), AUTO_INCREMENT values are reused if you delete the row with the biggest AUTO_INCREMENT value in any group. This happens even for MyISAM tables, for which AUTO_INCREMENT values normally are not reused.
If the AUTO_INCREMENT column is part of multiple indexes, MySQL will generate sequence values using the index that begins with the AUTO_INCREMENT column, if there is one. For example, if the animals table contained indexes PRIMARY KEY (grp, id) and INDEX (id), MySQL would ignore the PRIMARY KEY for generating sequence values. As a result, the table would contain a single sequence, not a sequence per grp value.
As you can see, your primary key and auto_increment setup is right already. Just change to MyISAM.
If you don't want to use MyISAM, you can also calculate it while selecting.
SELECT
ip.*,
@row := IF(@prev_inv != invoice_id, 1, @row + 1) AS `row`,
@prev_inv := invoice_id
FROM invoice_products ip
, (SELECT @row := 0, @prev_inv := NULL) vars -- @row starts at 0 so the first row gets 1
ORDER BY invoice_id
And third possibility is of course to calculate it outside the database. That I'll leave to you :)
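A sketch of that third, application-side option with made-up values; $startRow would come from something like SELECT COALESCE(MAX(row), 0) FROM invoice_products WHERE invoice_id = ? when appending to an existing invoice:

```php
$invoiceId = 42;   // hypothetical invoice id
$startRow  = 0;    // 0 for a new invoice, MAX(row) for an existing one
$lines = [
    ['product_id' => 3, 'qty' => 2],
    ['product_id' => 9, 'qty' => 1],
];

// number the rows in PHP, continuing from $startRow
$values = [];
foreach ($lines as $i => $line) {
    $values[] = sprintf('(%d,%d,%d,%d)',
        $invoiceId, $startRow + $i + 1, $line['product_id'], $line['qty']);
}
$query = 'INSERT INTO invoice_products (invoice_id, row, product_id, product_quantity) VALUES '
       . implode(',', $values);
```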
For this you can't use an auto_increment column. You should query MAX(row) + 1 for the invoice in question before each insert.
If I have the following tables:
create table rar (
rar_id int(11) not null auto_increment primary key,
rar_name varchar (20));
create table data_link(
id int(11) not null auto_increment primary key,
rar_id int(11) not null,
foreign key(rar_id) references rar(rar_id));
create table consumption (
id int(11) not null,
foreign key(id) references data_link(id),
consumption int(11) not null,
total_consumption int(11) not null,
date_time datetime not null);
I want total_consumption to be the running sum of all the consumption values. Is there a way to accomplish this through triggers? Or do I need to read all the previous values plus the latest one, sum them, and then update the table each time? Is there a better way to do this?
--------------------------------------------------
id | consumption | total_consumption | date_time |
==================================================|
1 | 5 | 5 | 09/09/2013 |
2 | 5 | 10 | 10/09/2013 |
3 | 7 | 17 | 11/09/2013 |
4 | 3 | 20 | 11/09/2013 |
--------------------------------------------------
Just wondering if there is a cleaner, faster way of getting the total each time a new entry is added.
Or perhaps this is bad design? Would it be better to have something like:
SELECT SUM(consumption) FROM consumption WHERE date_time BETWEEN '2013-09-09' AND '2013-09-11' in order to get this type of information... would that be the best option? The only problem I see is that the same command would be re-run multiple times, and the result would not be stored, since it is computed on request. That could be inefficient when regenerating the same report several times for viewing. If the total is already calculated, all you have to do is read the data rather than compute it again and again... thoughts?
any help would be appreciated...
If you've got an index on total_consumption, a nested SELECT of MAX(total_consumption) as part of the INSERT won't noticeably slow the query down, since the max value is read straight from the index.
eg.
INSERT INTO `consumption` (consumption, total_consumption)
VALUES (8,
    -- wrap MAX() in a derived table: MySQL does not allow a plain subquery
    -- on the table being inserted into; COALESCE covers the empty-table case
    8 + (SELECT t.max_total
         FROM (SELECT COALESCE(MAX(total_consumption), 0) AS max_total
               FROM consumption) AS t)
);
I'm not sure how you're using the id column but you can easily add criteria to the nested select to control this.
If you do need to put a WHERE on the nested select, make sure you have an index across the fields you use and then the total_consumption column. For example, if you make it ... WHERE id = x, you'll need an index on (id, total_consumption) for it to work efficiently.
If you must have a trigger, it should look like this:
DELIMITER $$
CREATE
TRIGGER `chg_consumption` BEFORE INSERT ON `consumption`
FOR EACH ROW BEGIN
SET NEW.total_consumption = (SELECT MAX(total_consumption) + NEW.consumption
                             FROM consumption);
END;
$$
DELIMITER ;
p.s. and make total_consumption nullable or default 0 (rather than int(11) not null), since MAX() returns NULL when the table is empty
EDIT:
Changed SUM(total_consumption) to MAX(total_consumption), as @calcinai suggested.
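For comparison, the running total can also be computed application-side after fetching the rows ordered by date_time, rather than stored in the table; a sketch using the sample figures from the question:

```php
// rows as they would come back from
// SELECT consumption FROM consumption ORDER BY date_time
$rows = [
    ['consumption' => 5],
    ['consumption' => 5],
    ['consumption' => 7],
    ['consumption' => 3],
];

$total = 0;
foreach ($rows as &$r) {
    $total += $r['consumption'];
    $r['total_consumption'] = $total;   // running sum: 5, 10, 17, 20
}
unset($r); // break the last reference
```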
Consider the following table:
+-------------+------------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-------------+------------------+------+-----+---------+----------------+
| vendor_id | int(10) unsigned | NO | PRI | NULL | auto_increment |
| vendor_name | varchar(100) | NO | UNI | NULL | |
| count | int(10) unsigned | NO | | 1 | |
+-------------+------------------+------+-----+---------+----------------+
I have the following MySQL query:
INSERT INTO `table`
(`vendor_name`)
VALUES
('foobar') ON DUPLICATE KEY UPDATE `count` = `count` + 1
The intent of this query is to insert a new vendor name to the table and in case the vendor name already exists, the column count should be incremented by 1. This works however the primary key of the current column will also be auto-incremented. How can I prevent MySQL from auto-incrementing the primary key in these cases? Is there a way to do this with one query?
Thank you.
This works however the primary key of the current column will also be auto-incremented. How can I prevent MySQL from auto-incrementing the primary key in these cases?
By using an UPDATE statement when the value already exists:
IF EXISTS(SELECT NULL
FROM TABLE
WHERE vendor_name = $vendor_name) THEN
UPDATE TABLE
SET count = count + 1
WHERE vendor_name = $vendor_name
ELSE
INSERT INTO TABLE
(vendor_name)
VALUES
($vendor_name)
END IF
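The IF/ELSE logic above, simulated in memory with a PHP array keyed by vendor_name (addVendor is a hypothetical helper; against the real table you would run the UPDATE first and fall back to the INSERT when no row was affected):

```php
// bump the counter for an existing vendor, otherwise create it with count 1
function addVendor(array &$vendors, string $name): void
{
    if (isset($vendors[$name])) {
        $vendors[$name]++;       // "duplicate key": only the count changes
    } else {
        $vendors[$name] = 1;     // new vendor
    }
}

$vendors = [];
addVendor($vendors, 'foobar');
addVendor($vendors, 'foobar');
// $vendors is ['foobar' => 2]
```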
I tried the alternative to ON DUPLICATE KEY UPDATE, REPLACE INTO:
REPLACE INTO vendors SET vendor_name = 'foobar', COUNT = COUNT + 1
It changes the count, but also the vendor_id, so it's worse...
The database and the data don't care whether the numbers are sequential, only that the values are unique. If you can live with that, I'd use the ON DUPLICATE KEY UPDATE syntax, though I admit the behaviour is weird (understandable considering it uses an INSERT statement).
I think this might do it. But it's very much against the principles of Daoism - you're really going against the grain.
There is probably a better solution.
INSERT INTO `table`
(`vendor_name`)
VALUES
('foobar') ON DUPLICATE KEY UPDATE `count` = `count` + 1, `vendor_id`=`vendor_id`-1