I am having a problem inserting data into a MySQL table. For simplicity, my database has two tables, foo and foo2.
Table foo2 has two records:
id=1, code="N", desc="Normal"
id=2, code="D", desc="Deviate"
I want to populate foo but I need to reference foo2 in doing so. My current code is:
$inputarray = array(
    array("ONE", "Proj 1", "N"),
    array("TWO", "Proj 2", "D"));
for ($i = 0; $i < count($inputarray); $i++) {
    $sql3 = $pdo->prepare("INSERT INTO foo (var1, var2, var3)
        VALUES ('{$inputarray[$i][0]}'
        , '{$inputarray[$i][1]}'
        , (select id from foo2 where code='($inputarray[$i][2])')
        )");
    $sql3->execute();
}
The "select id .." line generates an SQL error message but if I hard code it like
(select id from foo2 where code='N')
then the program runs without error. I have tried escape characters, using double quotes inside the single quotes etc. How can i best get around this problem?
The code to create foo was
$sql2 = $pdo->prepare('
CREATE TABLE foo(
id INT NOT NULL AUTO_INCREMENT,
var1 VARCHAR(3) NOT NULL UNIQUE,
var2 VARCHAR(32) NOT NULL,
var3 INT NOT NULL,
PRIMARY KEY (id),
FOREIGN KEY (var3) REFERENCES foo2 (id)
ON DELETE RESTRICT
ON UPDATE RESTRICT) ENGINE=INNODB');
This is not the way to use prepared statements:
$pdo->prepare("INSERT INTO foo (var1, var2, var3)
VALUES ('{$inputarray[$i][0]}'
,'{$inputarray[$i][1]}'
, (select id from foo2 where code='($inputarray[$i][3])')
)");
This is plain old string concatenation; for all practical purposes, you might as well use the mysql_* functions here. The proper way to use PDO is like this:
$pdo->prepare("INSERT INTO foo (var1, var2, var3)
VALUES (?,?, (select id from foo2 where code=?))");
$pdo->bindParam(1, $inputarray[$i][0])
$pdo->bindParam(2, $inputarray[$i][1])
$pdo->bindParam(3, $inputarray[$i][3])
Note how much more readable your code has become? (Note also that bindParam is called on the statement returned by prepare, not on the PDO object itself.) You can avoid the repetitive calls to bindParam by passing the parameters directly to execute.
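For example, a minimal sketch of the same insert driven entirely by execute(), with the statement prepared once outside the loop:

$stmt = $pdo->prepare("INSERT INTO foo (var1, var2, var3)
    VALUES (?, ?, (select id from foo2 where code = ?))");
foreach ($inputarray as $row) {
    // Each call inserts one row; the subselect resolves the foo2 id for the code.
    $stmt->execute(array($row[0], $row[1], $row[2]));
}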
PS: To make your current code work, write {$inputarray[$i][2]} (note the newly added curly brackets, which let PHP parse the nested array access inside a double-quoted string).
JSON support in SQL Server is pretty new; it was introduced in SQL Server 2016.
We would like to use this to store our translatable fields. This means in the database we would no longer have a separate table with translations but instead store them in the column itself.
Our column value would look like this:
"Name" : {
"en-US" : "Conditioning",
"nl-NL" : "Behandeling",
"fr-FR" : "Traitement",
"de-DE" : "Konditionierung"
}
Since we still have a table with all the different cultures, I would still like to apply a foreign key relation between the Name keys and the Culture table.
I couldn't find any information on how to apply foreign key constraints to the data in the JSON. Does anyone have a clue how to?
I've tried the constraint below for testing, but it does not work: JSON_VALUE does not seem to work on JSON keys, whereas JSON_QUERY returns more than just the key.
ADD CONSTRAINT
[FK_ItemCulture]
CHECK (JSON_VALUE([Name], '$.Name') IN ('en-US', 'nl-NL', 'fr-FR', 'de-DE'))
You can define a scalar function that will check a single JSON value:
CREATE FUNCTION [dbo].[svf_CheckJsonNames](@json nvarchar(max))
RETURNS bit AS
BEGIN
    declare @ok bit
    declare @validNames table([name] nvarchar(50))
    insert into @validNames values ('en-US'),('nl-NL'),('fr-FR'),('de-DE')
    if exists (
        select x.[key], vn.[name]
        from OPENJSON (@json)
        with (
            [Name] nvarchar(max) as json
        ) t
        cross apply
        (
            select [key] from openjson(t.[Name])
        ) x
        left join @validNames vn
            on x.[key] COLLATE DATABASE_DEFAULT = vn.[name] COLLATE DATABASE_DEFAULT
        where vn.[name] is null
    )
        set @ok = 0
    else set @ok = 1
    RETURN @ok
END
The function returns 1 if all names are valid and 0 if one or more names are invalid.
Now you can use this function in your constraint:
create table tmp(myId int, [Name] nvarchar(max))
alter table tmp ADD CONSTRAINT [FK_ItemCulture]
CHECK ([dbo].[svf_CheckJsonNames]([Name]) = 1)
If you run the following insert statements:
insert into tmp values(1, '{"Name" : { "en-US" : "Conditioning", "nl-NL" : "Behandeling", "fr-FR" : "Traitement", "de-DE" : "Konditionierung" }}')
insert into tmp values(2, '{"Name" : { "en-EN" : "Conditioning", "nl-NL" : "Behandeling", "fr-FR" : "Traitement", "de-DE" : "Konditionierung" }}')
the first will succeed because all names are valid, but the second will fail since the first name ("en-EN") is invalid: SQL Server rejects the row with a CHECK constraint violation on FK_ItemCulture.
I have this query that sets an incrementing value in the index column of existing rows, ordered by the value in the key column.
UPDATE `documents`, (
    SELECT @row_number := ifnull(@row_number, 0) + 1 as `new_index`, `id`
    FROM `documents`
    WHERE `path` = "/path/to/doc"
    ORDER BY `key`
) AS `table_position`,
(
    SELECT @row_number := 0
) AS `rowNumberInit`
SET `index` = `table_position`.`new_index`
WHERE `table_position`.`id` = `documents`.`id`
and I use this PHP code to execute it:
/** @var PDO $pdo */
$ret = $pdo->query($sql);
// Now every value in column `index` is set to 1
$res = $ret->execute();
// Now every value in column `index` is counted up
This doesn't look quite like the right way to do it.
I currently use PDO directly, because Zend_Db_Adapter_Pdo_Mysql seems to wreak havoc with this query.
In addition to that, I'd like to have the "/path/to/doc" string in the WHERE clause as a bind param. Replacing it with a ? and passing the value to execute() didn't work.
How would I do this correctly with Zend or PDO?
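For reference, the usual plain-PDO pattern would be to prepare the statement with a placeholder and run it exactly once via the statement's own execute(), instead of calling both query() and execute(). A minimal, untested sketch, assuming the user-variable query above otherwise works unchanged:

$sql = '
    UPDATE `documents`, (
        SELECT @row_number := ifnull(@row_number, 0) + 1 AS `new_index`, `id`
        FROM `documents`
        WHERE `path` = :path
        ORDER BY `key`
    ) AS `table_position`,
    (
        SELECT @row_number := 0
    ) AS `rowNumberInit`
    SET `index` = `table_position`.`new_index`
    WHERE `table_position`.`id` = `documents`.`id`';
/** @var PDO $pdo */
$stmt = $pdo->prepare($sql);
// A single execute() runs the statement with the bound path.
$stmt->execute(array(':path' => '/path/to/doc'));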
How can I update MySQL data in bulk? How can I define something like this:
UPDATE `table`
SET `column2` = othervalues
WHERE `column1` = somevalues
with somevalues like:
VALUES
('160009'),
('160010'),
('160011');
and othervalues:
VALUES
('val1'),
('val2'),
('val3');
Maybe it's impossible with MySQL? Or does it need a PHP script?
The easiest solution in your case is to use the ON DUPLICATE KEY UPDATE construction. It works really fast and does the job in an easy way.
INSERT into `table` (id, fruit)
VALUES (1, 'apple'), (2, 'orange'), (3, 'peach')
ON DUPLICATE KEY UPDATE fruit = VALUES(fruit);
Or use the CASE construction:
UPDATE table
SET column2 = (CASE column1 WHEN 1 THEN 'val1'
WHEN 2 THEN 'val2'
WHEN 3 THEN 'val3'
END)
WHERE column1 IN(1, 2 ,3);
If the "bulk" data you have is dynamic and is coming from PHP (you did tag it, after all), then the query would look something like this:
INSERT INTO `foo` (id, bar)
VALUES
(1, 'pineapple'),
(2, 'asian pear'),
(5, 'peach')
ON DUPLICATE KEY UPDATE bar = VALUES(bar);
and the PHP to generate this from an existing array (assuming the array is of a format like:
$array = array(
    somevalues_key => othervalues_value
);
) would look something like this (by no means the best, since it doesn't address escaping or sanitizing the values, for instance; just a quick example):
$pairs = array();
foreach ($array as $key => $value) {
$pairs[] = "($key, '$value')";
}
$query = "INSERT INTO `foo` (id, bar) VALUES " . implode(', ', $pairs) . " ON DUPLICATE KEY UPDATE bar = VALUES(bar)";
You could try an UPDATE with JOIN as below:
UPDATE table
INNER JOIN (
SELECT 1 column1, 2 column2, 10 new_v1, 20 new_v2, 30 new_v3
UNION ALL SELECT 4 column1, 5 column2, 40 new_v1, 50 new_v2, 60 new_v3
) updates
ON table.column1 = updates.column1
AND table.column2 = updates.column2
SET
table.column1 = updates.new_v1,
table.column2 = updates.new_v2,
table.column3 = updates.new_v3;
As long as you can craft the inner SELECT statements from the updates subquery you would get the benefit of running all these updates in a single statement (which should give you some performance boost on InnoDB depending on your table size).
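To give an idea, here is a sketch of building that updates subquery from a PHP array (hypothetical data; the values are cast to int here, so quote them properly if you have strings):

$rows = array(
    array('column1' => 1, 'column2' => 2, 'new_v1' => 10, 'new_v2' => 20, 'new_v3' => 30),
    array('column1' => 4, 'column2' => 5, 'new_v1' => 40, 'new_v2' => 50, 'new_v3' => 60),
);
$selects = array();
foreach ($rows as $i => $row) {
    $parts = array();
    foreach ($row as $col => $val) {
        // Only the first SELECT needs column aliases; UNION ALL reuses them.
        $parts[] = ($i === 0) ? ((int)$val . ' ' . $col) : (int)$val;
    }
    $selects[] = 'SELECT ' . implode(', ', $parts);
}
$updates = implode(' UNION ALL ', $selects); // splice into the UPDATE above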
If you are using a drag & drop tableView or collectionView to sort data in your app, like allowing users to arrange their photos by drag-and-drop functionality, send a comma-separated list of the ordered ids to the backend after the user finishes editing.
In your backend, explode the ids into an array like this:
$new_ranks = array();
$supplied_orders = explode(",", $_POST["supplied_new_order"]); //52,11,6,54,2 etc
$start_order = 99999;
foreach ($supplied_orders as $supplied_row_id) {
//all your validations... e.g. make sure supplied_row_id belongs to that user
$new_ranks[intval($supplied_row_id)] = $start_order--;
}
Now you can update all the new ranks as in @Farside's recommendation 2 (the CASE construction):
if (count($new_ranks) > 0) {
$case_sqls = array();
foreach ($new_ranks as $id => $rank) {
$case_sqls[] = "WHEN ".intval($id)." THEN ".intval($rank)."";
}
$case_sql = implode(" ", $case_sqls);
$this->db->query("
UPDATE
service_user_medias
SET
rank = (CASE id ".$case_sql." END)
WHERE
id IN(".implode(",", array_keys($new_ranks)).");
");
}
If you have the data in array format and your query is like
UPDATE `table` SET `column2` = ? WHERE `column1` = ?
then bind and execute it once per pair, like below (using mysqli prepared statements; here $mysqli stands for your connection handle):
$query = $mysqli->prepare("UPDATE `table` SET `column2` = ? WHERE `column1` = ?");
foreach ($data as $key => $value) {
    // positional placeholders: the new value first, then the key
    $query->bind_param('ss', $value, $key);
    $query->execute();
}
Hope it'll work.
From C++, I'm generating an UPDATE statement programmatically in a way that makes stripping a trailing comma difficult:
UPDATE `myTable` SET
`Field1` = "value",
`Field2` = "value",
`Field3` = "value",
WHERE `Field4` = "value";
Is there some static, no-op key/value pair I can insert after the final column value specification, which would make the trailing comma "okay"? Or will I have to complicate my C++ code to avoid writing it entirely?
Something apparently equivalent to the following invalid approach would be nice.
UPDATE `myTable` SET
`Field1` = "value",
`Field2` = "value",
`Field3` = "value",
--- 1 = 1
WHERE `Field4` = "value";
Unless you're willing to duplicate a value as Igoel suggests (which may not be ideal if the value is lengthy), the simple answer is no.
One briefly encouraging possibility was the use of the alias NEW to represent the incoming values, so that the final value may be duplicated without actually having to present it in the query again (and I'd hope that this would be taken out by the query optimiser):
UPDATE `myTable` SET
`Field1` = "value",
`Field2` = "value",
`Field3` = "value",
--- `Field3` = NEW.`Field3`
WHERE `Field4` = "value";
Alas, this is not supported in an UPDATE statement, only inside a trigger body.
You'll have to do the manipulation in your C++ before executing the statement, either through character replacement (swapping the trailing , for a space) or through character removal; the former may produce more complex code than you have now, and the latter may be inefficient (depending on the structure of your query-building code), but it's still your best bet.
One way to achieve this, given that you will have to do so in your C++ code, and assuming that you are using a std::ostream to build up the statement, is to pull back the "put" cursor one byte so that the next character replaces the trailing , rather than following it:
#include <iostream>
#include <sstream>

int main()
{
    std::stringstream ss;
    ss << "UPDATE `myTable` SET ";
    ss << "`A` = \"value\","
       << "`B` = \"value\",";
    // Pull the put cursor back one byte so the next write
    // overwrites the trailing comma instead of following it.
    ss.seekp(-1, std::ios_base::cur);
    ss << " WHERE `C` = \"value\"";
    std::cout << ss.str() << '\n';
}
// Output: UPDATE `myTable` SET `A` = "value",`B` = "value" WHERE `C` = "value"
This may be more feasible and/or efficient than playing with string lengths, depending on what exactly your code looks like.
You could duplicate one of the rows and drop the final comma, like:
UPDATE `myTable` SET
`Field1` = "value",
`Field2` = "value",
`Field3` = "value",
`Field3` = "value"
...
I don't see any elegant way to do it inside the MySQL query. In your C++ code, however, there are a number of ways to do this:
- Removing the last comma after creating the SET clause with an extra comma should be as simple as qry[lgth-1] = ' '; or qry.erase(qry.length()-1);
- I imagine you have some sort of loop to build the SET clause; checking the index i in your loop is the classical way, or you can build an array of strings and use some join(stringList, separator) function.
You can place the comma before each assignment instead, as follows:
UPDATE `myTable` SET
`Field1` = "value"
, `Field2` = "value"
, `Field3` = "value"
WHERE `Field4` = "value";
I need to design the structure of a table that is going to store a log of events/actions for a project management website.
The problem is, these logs will be worded differently depending on what the user is viewing.
Example:
On the overview, an action could say "John F. deleted the item #2881"
On the single-item page, it would say "John F. deleted this item"
If the current user IS John F., it would say "You deleted this item"
I'm not sure if I should store each different possibility in the table; this doesn't sound like the optimal approach.
For any kind of logs you can use the following table structure:
CREATE TABLE logs (
id bigint NOT NULL AUTO_INCREMENT,
auto_id bigint NOT NULL DEFAULT '0',
table_name varchar(100) NOT NULL,
updated_at datetime DEFAULT NULL,
updated_by bigint NOT NULL DEFAULT '0',
updated_by_name varchar(100) DEFAULT NULL,
PRIMARY KEY (id)
) ENGINE=InnoDB AUTO_INCREMENT=3870 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
And then create another table to record what columns were updated exactly.
CREATE TABLE logs_entries (
id bigint NOT NULL AUTO_INCREMENT,
log_id bigint NOT NULL DEFAULT '0',
field_name varchar(100) NOT NULL,
old_value text,
new_value text,
PRIMARY KEY (id),
KEY log_id (log_id),
CONSTRAINT logs_entries_ibfk_1 FOREIGN KEY (log_id) REFERENCES logs (id) ON DELETE CASCADE ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=7212 DEFAULT CHARSET=utf8mb3
Now the logs table records which row of which table was changed, when, and by whom, while logs_entries holds one row per changed field with its old and new values.
Now create a database view to fetch the data easily in a simple query:
DELIMITER $$
CREATE ALGORITHM=UNDEFINED SQL SECURITY DEFINER VIEW view_logs_entries AS
SELECT
    le.id AS id,
    le.log_id AS log_id,
    le.field_name AS field_name,
    le.old_value AS old_value,
    le.new_value AS new_value,
    l.auto_id AS auto_id,
    l.table_name AS table_name,
    l.updated_at AS updated_at,
    l.updated_by AS updated_by,
    l.updated_by_name AS updated_by_name
FROM (logs_entries le
    LEFT JOIN logs l
    ON ((le.log_id = l.id)))$$
DELIMITER ;
After creating the database view, each row combines a field-level change with its log metadata, so you can easily query any log of your project.
You must have noticed that I added updated_by and updated_by_name columns in the logs table. There are two ways to fill the updated_by_name column:
1. Write a query on every log entry to fetch the user name and store it (not recommended).
2. Use a database trigger to fill the column (recommended).
You can create the database trigger like this; it will automatically insert the user name whenever a logs entry is added to the database:
DELIMITER $$
USE YOUR_DATABASE_NAME$$
CREATE
TRIGGER logs_before_insert BEFORE INSERT ON logs
FOR EACH ROW BEGIN
SET new.updated_by_name= (SELECT fullname FROM users WHERE user_id = new.updated_by);
END;
$$
DELIMITER ;
After doing all this, you can insert entries into the logs table whenever you make changes in any database table. In my case I have a PHP CodeIgniter project, and I did it like this in my model file; here I am updating the patients table of my database:
public function update($id,$data)
{
// Log this activity
$auto_id = $id;
$table_name = 'patients';
$updated_by = @$data['updated_by'];
$new_record = $data;
$old_record = $this->db->where('id',$auto_id)->get($table_name)->row();
$data['updated_at'] = date('Y-m-d H:i:s');
$this->db->where('id', $id);
$return = $this->db->update($table_name,$data);
//my_var_dump($this->db->last_query());
if($updated_by)
{
$this->log_model->set_log($auto_id,$table_name,$updated_by,$new_record,$old_record);
}
return $return;
}
The set_log function checks which fields actually changed:
public function set_log($auto_id,$table_name,$updated_by,$new_record,$old_record)
{
$entries = [];
foreach ($new_record as $key => $value)
{
if($old_record->$key != $new_record[$key])
{
$entries[$key]['old_value'] = $old_record->$key;
$entries[$key]['new_value'] = $new_record[$key];
}
}
if(count($entries))
{
$data['auto_id'] = $auto_id;
$data['table_name'] = $table_name;
$data['updated_by'] = $updated_by;
$this->insert($data,$entries);
}
}
and the insert function looks like this:
public function insert($data,$entries)
{
$data['updated_at'] = date('Y-m-d H:i:s');
if($this->db->insert('logs', $data))
{
$id = $this->db->insert_id();
foreach ($entries as $key => $value)
{
$entry['log_id'] = $id;
$entry['field_name'] = $key;
$entry['old_value'] = $value['old_value'];
$entry['new_value'] = $value['new_value'];
$this->db->insert('logs_entries',$entry);
}
return $id;
}
return false;
}
You need to separate data from display. In your log, store the complete information (user xxx deleted item 2881). In your log viewer, you have the luxury of substituting as needed to make it more readable.
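For example, a minimal sketch (hypothetical column and function names) of rendering one stored log row differently depending on the viewer and the page:

function render_log(array $log, $viewer_id, $page_item_id = null)
{
    // The row stores the facts exactly once: actor_id, actor_name, verb, item_id.
    $actor = ($log['actor_id'] === $viewer_id) ? 'You' : $log['actor_name'];
    $object = ($log['item_id'] === $page_item_id) ? 'this item' : 'the item #' . $log['item_id'];
    return $actor . ' ' . $log['verb'] . ' ' . $object;
}

// Overview page: "John F. deleted the item #2881"
// Single-item page for #2881: "John F. deleted this item"
// Same page, viewed by John F. himself: "You deleted this item"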