I have a field that stores a numeric value from 0 to 7. It is a counter for steps to be completed in the application. Each time a step is completed, the counter is updated with the new value. The user can go back through the steps and then forward again: if they have completed step 3, they can go back to step 1 and then forward to step 3 again. What I want to avoid is the counter being updated with the values 1 and 2 when the user returns to step 3; it should remain 3. I want to find a way to do this within the update query.
The query is the following:
try {
    $pdo->query("UPDATE ruolo SET wiz_step='$step' WHERE id_user='$utente'");
    $message = "Step updated successfully";
}
catch (PDOException $e) {
    $status = '500';
    $data['step'] = $step;
    $message = 'An error occurred. We have opened a report with the technical team.';
}
Is it possible to tell MySQL to update wiz_step only if $step is greater than the current value of wiz_step?
The table structure is just three int fields: id (primary and auto-increment), id_user and wiz_step.
Note: I assume I am not open to MySQL injection since none of the values in the query come from user input; they are all set by the PHP logic.
As these are all values controlled by code, it is quite simple to do. Also, change to using prepared queries to protect your code from SQL injection attacks:
try {
    $data = [':step' => $step, ':step1' => $step, ':uid' => $utente];
    $stmt = $pdo->prepare("UPDATE ruolo
                              SET wiz_step = :step
                            WHERE id_user = :uid
                              AND :step1 > wiz_step");
    $stmt->execute($data);
}
catch (PDOException $e) {
    // handle the error as in the original code
}
(The same value is bound under two names, :step and :step1, because PDO does not allow re-using a named placeholder unless emulated prepares are enabled.)
Here's a slight variation on the answer from @RiggsFolly:
try {
    $data = ['step' => $step, 'uid' => $utente];
    $stmt = $pdo->prepare("UPDATE ruolo
                              SET wiz_step = GREATEST(:step, wiz_step)
                            WHERE id_user = :uid");
    $stmt->execute($data);
}
catch (PDOException $e) {
    // handle the error as in the original code
}
See the GREATEST() function in the MySQL manual. It returns the greater value of its arguments. So if the parameter is greater, it will be used to update the column. If the existing value is greater, then no change will be made, because wiz_step = wiz_step is a no-op.
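For example, you can run it standalone to see the behaviour (the values are arbitrary):
SELECT GREATEST(3, 5); -- returns 5
SELECT GREATEST(5, 3); -- returns 5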
P.S.: It's not necessary to use the : character in the array keys when you pass parameters to a prepared query. It was needed in an early version of PDO long ago, but not anymore.
Something just came to mind and I'd like to bounce it off you:
Say you have a user profile with 10 fields that the user can edit, not all of them required. When issuing update commands, which is more efficient:
A) Collect all of the fields, filled in or not, and issue one all encompassing update statement to the server's DB
or
B) Use client side validation to check to see which fields have been filled out or changed, and have a selection of SQL methods that only send and update these fields
or
C) Create groupings, like updateRequiredFields(...) and updateExtraFields(...), which would issue one smaller transfer if the changes belong to only one group, but two transfers if both are edited
General consensus? Clearly option B is the far more verbose approach, I'm just wondering if it's worth coding it all out or if it'll actually make a noticeable impact on the server (think "scaled for big data").
You could do something like this in your DB update function:
public function updateFields(array $fields) {
    $updateQuery = array();
    foreach ($fields as $fieldKey => $fieldValue) {
        // if $fieldValue is false, leave the column unchanged
        if ($fieldValue !== false) {
            // NOTE: make sure you escape this or use PDO
            $updateQuery[] = $fieldKey . '=' . $fieldValue;
        }
    }
    $query = 'UPDATE UserInfo SET ' . implode(",", $updateQuery) . ' WHERE ...';
}
You just need to build the $fields array based on what was modified on the client side, and then pass each field in with either its new value or false if unchanged.
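For what it's worth, here is a minimal sketch of the same idea using PDO placeholders. The method name, $this->pdo, and the UserInfo/id names are assumptions for illustration, and $fieldKey should still be checked against a whitelist of known column names, since placeholders can only bind values, not identifiers:

public function updateFieldsPdo(array $fields, $userId) {
    $assignments = array();
    $params = array();
    foreach ($fields as $fieldKey => $fieldValue) {
        // skip unchanged fields, as above
        if ($fieldValue !== false) {
            // $fieldKey must come from a whitelist of column names
            $assignments[] = $fieldKey . ' = ?';
            $params[] = $fieldValue;
        }
    }
    if (empty($assignments)) {
        return; // nothing to update
    }
    $params[] = $userId;
    $stmt = $this->pdo->prepare('UPDATE UserInfo SET ' . implode(', ', $assignments) . ' WHERE id = ?');
    $stmt->execute($params);
}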
I've been out of the MySQL and Perl game for quite a few years and can't seem to get this right. I have a table with just 3 columns; 'cnt' is one of them. All I want to do is query the table on 'name' and see if the name exists. If it does, I want to capture the value of 'cnt'. The table has a record of testName with a value of 2 that I added manually. When this script is run, it returns empty.
my $count;
my $pop = qq(SELECT cnt FROM popular WHERE name="testName");
my $sth = $dbh->prepare($pop);
$sth->execute() or die $dbh->errstr;
my @return;
while (@return = $sth->fetchrow_array()) {
    $count = $return[1];
}
print "our return count is $count";
Is it obvious to anyone what I did wrong?
You probably mean
$count = $return[0];
According to the Perl DBI documentation on fetchrow_array:
An alternative to fetchrow_arrayref. Fetches the next row of data and returns it as a list containing the field values.
Since you select only cnt, the size of @return is 1, but you misunderstood it as the number of rows that match your query condition. It is not! Please read the Perl documentation more carefully.
The normal result() method described in the documentation appears to load all records immediately. My application needs to load about 30,000 rows and, one at a time, submit them to a third-party search index API. Obviously, loading everything into memory at once doesn't work well (it errors out from using too much memory).
So my question is, how can I achieve the effect of the conventional MySQLi API method, in which you load one row at a time in a loop?
Here is something you can do.
while ($row = $result->_fetch_object()) {
    $data = array(
        'id'         => $row->id,
        'some_value' => $row->some_field_name
    );
    // send row data to whatever api
    $this->send_data_to_api($data);
}
This will get one row at a time. Check the CodeIgniter source code, and you will see that this is what they do when you execute the result() method.
For those who want to save memory on a large result set:
Since CodeIgniter 3.0.0, there is an unbuffered_row() function.
All the methods above will load the whole result into memory (prefetching). Use unbuffered_row() for processing large result sets.
This method returns a single result row without prefetching the whole result in memory as row() does. If your query has more than one row, it returns the current row and moves the internal data pointer ahead.
$query = $this->db->query("YOUR QUERY");
while ($row = $query->unbuffered_row())
{
    echo $row->title;
    echo $row->name;
    echo $row->body;
}
You can optionally pass ‘object’ (default) or ‘array’ in order to specify the returned value’s type:
$query->unbuffered_row(); // object
$query->unbuffered_row('object'); // object
$query->unbuffered_row('array'); // associative array
Official Document: https://www.codeigniter.com/userguide3/database/results.html#id2
Well, the thing is that result() returns the entire result of the query, while row() simply fetches the first row and discards the rest. However, the query still fetches all 30,000 rows regardless of which function you use.
One design that would fit your case would be:
$offset = (int)#$_GET['offset'];
$query = $this-db->query("SELECT * FROM table LIMIT ?, 1", array($offset));
$row = $query->row();
if ($row) {
/* Run api with values */
redirect(current_url().'?offset'.($offset + 1));
}
This would take one row, send it to the api, update the page, and move on to the next row. It also prevents the page from hitting a timeout. However, it would most likely take a while with 30,000 records and refreshes, so you may want to adjust your LIMIT ?, 1 to a higher number than 1 and use result() with a foreach() to make multiple API calls per page load, as in the sketch below.
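A rough sketch of that batched variant; the batch size of 100 is arbitrary, and send_to_api() stands in for whatever your actual API call is:

$offset = (int)@$_GET['offset'];
$query = $this->db->query("SELECT * FROM table LIMIT ?, 100", array($offset));
foreach ($query->result() as $row) {
    // send each row of this batch to the API
    send_to_api($row);
}
if ($query->num_rows() > 0) {
    // more rows may remain; reload the page at the next offset
    redirect(current_url().'?offset='.($offset + 100));
}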
Well, there's the row() method, which returns just one row as an object, or the row_array() method, which does the same but returns an array (of course).
So you could do something like
$sql = "SELECT * FROM yourtable";
$resultSet = $this->db->query($sql);
$total = $resultSet->num_rows();
for($i=0;$i<$total;$i++) {
$row = $resultSet->row_array($i);
}
This fetches each row of the whole result set in a loop.
Which is about the same as fetching everything and looping over the result() method calls of $this->db->query($sql), I believe.
If you want one row at a time, either you make 30,000 calls, or you select all the results and fetch them one at a time, or you fetch everything and walk over the array. I can't see any way out of that.
Trying to make my blog secure and learning prepared statements.
Although I set the variable, I still get all the entries from the database. $escapedGet holds a real value when I print it out. It's obviously a rookie mistake, but I can't seem to find an answer.
I need to get the data where postlink is $escapedGet, not all the data.
$escapedGet = mysql_real_escape_string($_GET['article']);
// Create statement object
$stmt = $con->stmt_init();
// Create a prepared statement
if ($stmt->prepare("SELECT `title`, `description`, `keywords` FROM `post` WHERE `postlink` = ?")) {
    // Bind your variable to replace the ?
    $stmt->bind_param('i', $postlink);
    // Set your variable
    $postlink = $escapedGet;
    // Execute query
    $stmt->execute();
    $stmt->bind_result($articleTitle, $articleDescription, $articleKeywords);
    while ($stmt->fetch()) {
        echo $articleTitle, $articleDescription, $articleKeywords;
    }
    // Close statement object
    $stmt->close();
}
I just tried this:
echo $escapedGet;
echo $_GET['article'];
and got "some_other" - that's the same entry that I have saved in the database as postlink.
I tried changing postlink to id, and then it worked. But why not with the postlink column?
When you bind your data using the 'i' modifier, it gets bound as an integer.
That means your string will be cast to 0 in the final statement.
And as MySQL does type casting, your strings become zeroes in this query:
SELECT title FROM post WHERE postlink = 0;
Try it and see: for textual postlinks you will have all your records returned (as well as a bunch of warnings), because every non-numeric string casts to 0 and therefore matches.
So, bind strings using the s modifier, not i.
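In this case that is a one-character fix to the code above:

// bind the postlink as a string, not an integer
$stmt->bind_param('s', $postlink);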
I was wondering if anybody knew a good way to create a unique random integer id for a primary key for a table. I'm using MySQL. The value has to be an integer.
In response to: "Because I want to use that value to Encode to Base62 and then use that for an id in a url. If i auto increment, it might be obvious to the user how the url id is generated."
If security is your aim, then using Base62, even with a "randomly" generated number, won't help.
A better option would be:
Do not re-invent the wheel -- use AUTO_INCREMENT
Then use a cryptographic hash function + a randomly generated string (hidden in the db for that particular url) to generate the final "unique id for that url", as in the sketch below
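A minimal sketch of that idea, assuming a per-row secret column to hold the random string and a PDO connection; random_bytes() requires PHP 7+:

$id = $pdo->lastInsertId();                // the AUTO_INCREMENT value
$secret = bin2hex(random_bytes(16));       // random string, hidden in the db for that url
$publicId = hash('sha256', $id . $secret); // the final "unique id for that url"
// store $secret with the row so the public id can be re-derived and verified later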
If you're open to suggestions and you can implement it, use UUIDs.
MySQL's UUID() function will return a 36-character value which can be used for the ID.
If you still want to use an integer, I think you need to create a function getRandID() that you will use in the INSERT statement. This function needs to combine a random value with a check of existing ids, to return one that has not been used before; a sketch follows.
Check the RAND() function for MySQL.
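A rough sketch of such a function on the PHP side; the users table and id column are assumed names, and random_int() requires PHP 7+:

function getRandID(PDO $pdo) {
    do {
        // pick a random candidate and retry while it already exists
        $candidate = random_int(1, PHP_INT_MAX);
        $stmt = $pdo->prepare('SELECT COUNT(*) FROM users WHERE id = ?');
        $stmt->execute(array($candidate));
    } while ($stmt->fetchColumn() > 0);
    return $candidate;
}

Note that there is still a race window between the check and the insert, so keep a UNIQUE constraint on the column and be prepared to retry the insert.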
How you generate the unique ids is a useful question - but you seem to be making a counter-productive assumption about when you generate them!
My point is that you do not need to generate these unique id's at the time of creating your rows, because they are essentially independent of the data being inserted.
What I do is pre-generate unique id's for future use, that way I can take my own sweet time and absolutely guarantee they are unique, and there's no processing to be done at the time of the insert.
For example I have an orders table with order_id in it. This id is generated on the fly when the user enters the order, incrementally 1,2,3 etc forever. The user does not need to see this internal id.
Then I have another table - unique_ids with (order_id, unique_id). I have a routine that runs every night which pre-loads this table with enough unique_id rows to more than cover the orders that might be inserted in the next 24 hours. (If I ever get 10000 orders in one day I'll have a problem - but that would be a good problem to have!)
This approach guarantees uniqueness and takes any processing load away from the insert transaction and into the batch routine, where it does not affect the user.
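A rough sketch of what that nightly batch might look like, under the assumption that unique_ids has a UNIQUE index on unique_id (so INSERT IGNORE silently skips collisions) and that order_id stays NULL until an order claims the row; the names follow the description above:

// pre-load unique_ids with candidate ids for the next day's orders
$stmt = $pdo->prepare('INSERT IGNORE INTO unique_ids (order_id, unique_id) VALUES (NULL, ?)');
for ($i = 0; $i < 10000; $i++) {
    $stmt->execute(array(random_int(1, PHP_INT_MAX)));
}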
How about this approach (PHP and MySQL):
Short:
Generate a random number for user_id (UNIQUE)
Insert a row with the generated number as user_id
If the inserted row count is equal to 0, go back to step 1
Looks heavy? Continue reading.
Long:
Table:
users (user_id int UNIQUE)
Code:
<?php
// values stored in configuration
$min = 1;
$max = 1000000;
$numberOfLoops = 0;
do {
    $randomNumber = rand($min, $max);
    // the very insert
    $insertedRows = insert_to_table(
        'INSERT INTO foo_table (user_id) VALUES (:number)',
        array(
            ':number' => $randomNumber
        ));
    $numberOfLoops++;
    // the magic
    if (!isset($reported) && $numberOfLoops / 10 > 0.5) {
        /**
         * We can assume that at least 50% of numbers
         * are already in use, so increment the values of
         * $min and $max in the configuration.
         */
        report_this_fact();
        $reported = true;
    }
} while ($insertedRows < 1);
All values ($min, $max, 0.5) are just for explanation and have no statistical meaning.
The functions insert_to_table and report_this_fact are not built into PHP. Like the numbers, they are there purely for clarity of explanation.
You can use an AUTO_INCREMENT for your table, but give the users the encrypted version:
encrypted_id: SELECT HEX(AES_ENCRYPT(id, 'my-private-key'));
id: SELECT AES_DECRYPT(UNHEX(encrypted_id), 'my-private-key');
My way, which works on both 32-bit and 64-bit platforms; the result is 64-bit:
function hexstr2decstr($hexstr){
    // requires the GMP extension for big-integer conversion
    $bigint = gmp_init($hexstr, 16);
    $bigint_string = gmp_strval($bigint);
    return $bigint_string;
}

function generate_64bitid(){
    // 16 hex characters = 64 bits
    return substr(md5(uniqid(rand(), true)), 16, 16);
}

function dbGetUniqueXXXId(){
    for($i = 0; $i < 10; $i++){
        $decstr = hexstr2decstr(generate_64bitid());
        // check mysql.tablexxx for a duplicate and set $dup accordingly
        $dup = false; // placeholder: replace with the actual duplicate lookup
        if($dup == false){
            return $decstr;
        }
    }
    return false;
}
AUTO_INCREMENT is going to be your best bet for this.
Here are some examples.
If you need to, you can adjust where the increment value starts (by default it's 1).
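For example (mytable is a placeholder name):
ALTER TABLE mytable AUTO_INCREMENT = 1000; -- the next inserted row gets id 1000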
There is an AUTO_INCREMENT feature. I would use that.
See here for more examples.