I am creating a PHP page with a small and simple database.
When I visit it online and try to pass the parameter "length" in the URL, like index.php/?length=1, it works fine and fetches the data.
If I add a single quote, like index.php/?length=1', I get no SQL error on the page...
but if I use index.php/?length=-1 I do see an SQL error on my page.
Does this mean that my page is vulnerable?
How can I further test it and fix the problem?
Edit: added the code
$shirt = $wpdb->get_results( $wpdb->prepare(
    "SELECT `title`, `website`, `material`, `color`, `width`, `height`, `group`, `category`, `numbers_positive`, `numbers_negative`, `custom`
     FROM {$wpdb->shirts}
     WHERE `id` = %d
     ORDER BY `rank` ASC, `id` ASC",
    intval($shirt_id)
) ); // prepare() substitutes %d with an integer, so no quotes are needed around it
if (!isset($shirt[0])) return false;
$shirt= $shirt[0];
$shirt->title = htmlspecialchars(stripslashes($shirt->title), ENT_QUOTES);
$shirt->custom = maybe_unserialize($shirt->custom);
$shirt->color = maybe_unserialize($shirt->color);
if ( $this->hasBridge() ) {
    global $lmBridge;
    $shirt->shirtColor = $lmBridge->getShirtColor($shirt->color);
}
$shirt = (object)array_merge((array)$shirt,(array)$shirt->custom);
unset($shirt->custom);
return $shirt;
Yes, from the URL examples you have given, it seems like you take user input and insert it directly into your MySQL statement. That is the worst thing you can do. You should always sanitize user input, because raw input from a user can break out of the quoted string and let them delete every table in your DB. This is a great example: Bobby Tables
Also, this has been a topic of great discussion. There is a great answer here
Edit: Using the WordPress framework and looking at your code, it's not as bad as it seemed.
Accepting -1, but generating an error on it, does not necessarily mean you are susceptible to an injection attack. As long as you are verifying that the input is an integer and only using the integer component, you're fairly safe.
Prepared statements make it even more secure by separating the data from the query. Doing that means someone can never 'break out' of the value you are supposed to be working with. It's absolutely the right way to use SQL.
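For instance, here is a minimal sketch in the same WordPress style your code already uses. The table name {$wpdb->prefix}shirts and the columns are only placeholders for illustration; the point is the unquoted %d placeholder, which forces the value to be treated as an integer:
global $wpdb;

// Hypothetical example: fetch one row by numeric id with $wpdb->prepare().
// A value like "1'" or "-1 UNION ..." can never change the query structure here.
$length_id = isset( $_GET['length'] ) ? intval( $_GET['length'] ) : 0;

$row = $wpdb->get_row(
    $wpdb->prepare(
        "SELECT `title`, `color` FROM {$wpdb->prefix}shirts WHERE `id` = %d",
        $length_id
    )
);

if ( null === $row ) {
    // No matching record (or a query error): handle it instead of echoing raw SQL errors.
}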
We can even take it another step further by limiting the ability of the account to do anything other than run stored queries, and storing your queries on the SQL server side rather than in your PHP. At that point, even IF they broke out (which they can't), they would only be able to run those defined queries.
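As a rough sketch of that setup (the database name, credentials and the procedure name get_shirt_by_id are hypothetical, and this uses plain PDO rather than $wpdb just to keep the example self-contained):
// The application account has only been granted EXECUTE on this one procedure, e.g.
//   GRANT EXECUTE ON PROCEDURE mydb.get_shirt_by_id TO 'app_user'@'localhost';
$pdo = new PDO( 'mysql:host=localhost;dbname=mydb;charset=utf8mb4', 'app_user', 'secret' );
$pdo->setAttribute( PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION );

// The SELECT itself lives on the server; PHP only supplies the bound integer.
$stmt = $pdo->prepare( 'CALL get_shirt_by_id(:id)' );
$stmt->execute( array( ':id' => intval( $_GET['length'] ) ) );
$shirt = $stmt->fetch( PDO::FETCH_OBJ );
Even if something did go wrong, this account could not run anything except the procedures it has been granted.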
I have the following code attempting to truncate a table. The Joomla documentation makes me believe this will work, but it does not. What am I missing?
$db = JFactory::getDbo();
$truncate_query = $db->getQuery(true);
//$truncate_query = 'TRUNCATE ' . $db->quoteName('#__mytable');
$truncate_query->truncateTable($db->quoteName('#__mytable'));
$db->setQuery($truncate_query);
echo $truncate_query;
exit();
If I use the commented-out line to build the SQL manually, it does work. The reason I still want to use the truncateTable function is that I am trying to include the truncation in a transaction. When I use the manual statement, the table is truncated even if another part of the transaction fails, which is a problem: the other statements rely on data that gets truncated, so if the table is emptied when it shouldn't be, there is no data left to run the transaction again. Very annoying!
Here's how you call/execute your truncation query:
JFactory::getDbo()->truncateTable('#__mytable');
And now some more details...
Here is the method's code block in the Joomla source code:
public function truncateTable($table)
{
$this->setQuery('TRUNCATE TABLE ' . $this->quoteName($table));
$this->execute();
}
As you can see, the truncateTable() method expects a table name as a string for its sole parameter; you are passing a backtick-wrapped string, but the method already applies the backtick wrapping itself via quoteName(). (Even if you strip your backticks off, your approach will still not work.)
The setQuery() and execute() calls are already inside the method, so you don't need to create a new query object nor execute anything manually.
There is no return in the method, so the default null is returned; ergo, your $truncate_query becomes null. When you try to execute(null), you get nothing, not even an error message.
If you want to know how many rows were removed, you will need to run a SELECT query beforehand to count the rows.
If you want to be sure that there are no remaining rows of data, you'll need to run a SELECT afterwards and check that it returns zero rows, as sketched below.
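For completeness, a minimal sketch of that check with the same Joomla API you are already using (assuming the table is #__mytable):
// Count the rows that remain after the truncate.
$db = JFactory::getDbo();

$query = $db->getQuery(true)
    ->select('COUNT(*)')
    ->from($db->quoteName('#__mytable'));

$db->setQuery($query);
$remaining = (int) $db->loadResult();

if ($remaining === 0) {
    // The table really is empty.
}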
Here is my answer (with different wording) on your JSX question.
I have the following code:
echo "<form><center><input type=submit name=subs value='Submit'></center></form>";
$val=$_POST['resulta']; //this is from a textarea name='resulta'
if (isset($_POST['subs'])) // from submit name='subs'
{
    $aa = mysql_query("select max(reservno) as 'maxr' from reservation") or die(mysql_error()); // select maximum reservno
    $bb = mysql_fetch_array($aa);
    $cc = $bb['maxr'];
    $lines = explode("\n", $val);
    foreach ($lines as $line) {
        // insert each line of the textarea as a separate location_list row
        mysql_query("insert into location_list (reservno, location) values ('$cc', '$line')")
            or die(mysql_error());
    }
}
If I input the following data on the textarea (assume that I have maximum reservno '00014' from reservation table),
Davao - Cebu
Cebu - Davao
then submit it, I'll have these data in my location_list table:
loc_id || reservno || location
00001 || 00014 || Davao - Cebu
00002 || 00014 || Cebu - Davao
Then this code:
$gg=mysql_query("SELECT GROUP_CONCAT(IF((@var_ctr := @var_ctr + 1) = @cnt,
location,
SUBSTRING_INDEX(location,' - ', 1)
)
ORDER BY loc_id ASC
SEPARATOR ' - ') AS locations
FROM location_list,
(SELECT @cnt := COUNT(1), @var_ctr := 0
FROM location_list
WHERE reservno='$cc'
) dummy
WHERE reservno='$cc'") or die(mysql_error()); //QUERY IN QUESTION
$hh=mysql_fetch_array($gg);
$ii=$hh['locations'];
mysql_query("update reservation set itinerary = '$ii' where reservno = '$cc'")
or die(mysql_error());
is supposed to update the reservation table with 'Davao - Cebu - Davao', but it's returning 'Davao - Cebu - Cebu' instead. I was previously helped by this forum to get this code working, but now I'm facing another difficulty. I just can't get it to work. Please help me. Thanks in advance!
I got it working (without ORDER BY loc_id ASC) as long as I set loc_id to ascending order under phpMyAdmin's Operations tab. But whenever I delete all the data it goes back to loc_id descending, so I have to reset it. It doesn't entirely solve the problem, but I guess this is as far as I can go. :)) I just have to make sure that the table column loc_id is always in ascending order. Thank you everyone for your help! I really appreciate it! But if you have any better answer, like how to keep the table column always in ascending order, or a better query, etc., feel free to post it here. May God bless you all!
The database server is allowed to rewrite your query to optimize its execution. This might affect the order of the individual parts, in particular the order in which the various assignments are executed. I assume that some such reordering causes the result of the query to become undefined, in such a way that it works on SQL Fiddle but not on your actual production system.
I can't put my finger on the exact location where things go wrong, but I believe the core of the problem is that SQL is intended to work on relations, while you are trying to abuse it for sequential programming. I suggest you retrieve the data from the database using portable SQL without any variable hackery, and then use PHP to perform any post-processing you might need. PHP is much better suited to express the ideas you're formulating, and no optimization or reordering of statements will get in your way there. And as your query currently only produces a single value, fetching multiple rows and combining them into a single value in the PHP code shouldn't increase complexity too much; a sketch of that approach follows.
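As a rough sketch of that idea, in the same mysql_* style as your code (the table and column names are taken from your question; treat it as illustrative rather than drop-in):
// Fetch the locations in a well-defined order and build the itinerary in PHP.
$res = mysql_query("SELECT location FROM location_list WHERE reservno = '$cc' ORDER BY loc_id ASC")
    or die(mysql_error());

$locations = array();
while ($row = mysql_fetch_array($res)) {
    $locations[] = $row['location'];
}

$parts = array();
$last  = count($locations) - 1;
foreach ($locations as $i => $location) {
    if ($i == $last) {
        $parts[] = $location;                  // keep the full "Cebu - Davao" for the last leg
    } else {
        $pieces  = explode(' - ', $location);  // only the origin ("Davao") for earlier legs
        $parts[] = $pieces[0];
    }
}

$itinerary = implode(' - ', $parts);           // e.g. "Davao - Cebu - Davao"

mysql_query("update reservation set itinerary = '" . mysql_real_escape_string($itinerary) . "' where reservno = '$cc'")
    or die(mysql_error());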
Edit:
While discussing another answer using a similar technique (by Omesh as well, just as the answer your code is based upon), I found this in the MySQL manual:
As a general rule, you should never assign a value to a user variable and read the value within the same statement. You might get the results you expect, but this is not guaranteed. The order of evaluation for expressions involving user variables is undefined and may change based on the elements contained within a given statement; in addition, this order is not guaranteed to be the same between releases of the MySQL Server.
So there are no guarantees about the order in which these variable assignments are evaluated, and therefore no guarantees that the query does what you expect. It might work, but it might fail suddenly and unexpectedly. Therefore I strongly suggest you avoid this approach unless you have some reliable mechanism to check the validity of the results, or really don't care about whether they are valid.
I'm using CI for the first time and I'm smashing my head against this seemingly simple issue: my query won't insert the record.
In an attempt to debug the problem, the insert code has been simplified, but I'm still getting no joy.
Essentially, I'm using:
$data = array('post_post' => $this->input->post('ask_question'));
$this->db->insert('posts', $data);
I'm getting no errors (although that's possibly due to disabling them in config/database.php because of another CI-related trauma :-$ )
I've used
echo $this->db->last_query();
to get the generated query, shown below:
INSERT INTO `posts` (`post_post`) VALUES ('some text')
I have pasted this query into phpMyAdmin and it inserts no problem. I've even tried using $this->db->query() to run the outputted query above 'manually', but again the record will not insert.
The schema of the DB table 'posts' is simply two columns, post_id and post_post.
Please, any pointers on what's going on here would be greatly appreciated... thanks.
OK... solved, after much messing with CI.
I got it to work by setting the persistent connection to false:
$db['default']['pconnect'] = FALSE;
sigh
Things generally look OK; everything you have said suggests that it should work. My first instinct would be to check that what you're inserting is compatible with your SQL field.
Just a cool CI feature: I'd suggest you take a look at the CI Database Transaction class. Transactions allow you to wrap your query/queries inside a transaction, which can be rolled back on failure, and they can also make error handling easier:
$this->db->trans_start();
$this->db->query('INSERT INTO posts ...etc ');
$this->db->trans_complete();
if ($this->db->trans_status() === FALSE)
{
// generate an error... or use the log_message() function to log your error
}
Alternatively, one thing you can do is put your INSERT SQL statement into $this->db->query(your_query_here) instead of calling insert(). There is a CI feature called Query Binding which will also auto-escape the data array you pass in; see the sketch below.
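A minimal sketch of query binding, using the table and column from your question (each ? is replaced with the corresponding, auto-escaped array element):
$sql  = "INSERT INTO posts (post_post) VALUES (?)";
$data = array($this->input->post('ask_question'));

$this->db->query($sql, $data);   // CI escapes and substitutes the bound value

// Optional: inspect what was actually sent to MySQL.
echo $this->db->last_query();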
Let me know how it goes, and hope this helps!
I've got to add about 25000 records to the database at once in Rails.
I have to validate them, too.
Here is what I have for now:
# controller create action
def create
  emails = params[:emails][:list].split("\r\n")
  @created_count  = 0
  @rejected_count = 0
  inserts = []
  emails.each do |email|
    @email = Email.new(:email => email)
    if @email.valid?
      @created_count += 1
      inserts.push "('#{email}', '#{Date.today}', '#{Date.today}')"
    else
      @rejected_count += 1
    end
  end
  return if emails.empty?
  sql = "INSERT INTO `emails` (`email`, `updated_at`, `created_at`) VALUES #{inserts.join(", ")}"
  Email.connection.execute(sql) unless inserts.empty?
  redirect_to new_email_path, :notice => "Successfully created #{@created_count} emails, rejected #{@rejected_count}"
end
It's VERY slow at the moment; there's no way to add that many records because of the timeout.
Any ideas? I'm using MySQL.
Three things come to mind:
You can help yourself with proper tools like zdennis/activerecord-import or jsuchal/activerecord-fast-import. The problem with your example is that you will also create 25000 objects; if you tell activerecord-import not to use validations, it will not create new objects (activerecord-import/wiki/Benchmarks).
Importing tens of thousands of rows into a relational database will never be super fast; it should be done asynchronously via a background process. And there are also tools for that, like DelayedJob and more: https://www.ruby-toolbox.com/
Move the code that belongs to the model out of the controller(TM).
And after that, you need to rethink the flow of this part of the application. If you're using background processing inside a controller action like create, you cannot simply return HTTP 201 or HTTP 200. What you need to do is return a "quick" HTTP 202 Accepted, and provide a link to another representation where the user can check the status of their request (do we already have a success response? how many emails failed?), as it is now being processed in the background.
It can sound a bit complicated, and it is, which is a sign that maybe you shouldn't do it like that. Why do you have to add 25000 records in one request? What's the background?
Why don't you create a rake task for the work? The following link explains it pretty well.
http://www.ultrasaurus.com/sarahblog/2009/12/creating-a-custom-rake-task/
In a nutshell, once you write your rake task, you can kick off the work by:
rake member:load_emails
If speed is your concern, I'd attack the problem from a different angle.
Create a table that copies the structure of your emails table; let it be emails_copy. Don't copy indexes and constraints.
Import the 25k records into it using your database's fast import tools. Consult your DB docs or see e.g. this answer for MySQL. You will have to prepare the input file, but it's way faster to do, and I suppose you already have the data in some text or tabular form.
Create indexes and constraints for emails_copy to mimic emails table. Constraint violations, if any, will surface; fix them.
Validate the data inside the table. It may take a few raw SQL statements to check for severe errors. You don't have to validate emails for anything but very simple format anyway. Maybe all your validation could be done against the text you'll use for import.
insert into emails select * from emails_copy to put the emails into the production table. Well, you might play a bit with it to get autoincrement IDs right.
Once you're positive that the process succeeded, drop table emails_copy.
I've been using the following snippet in development for years. Now, all of a sudden, I get a "DB Error: no such field" warning:
$process = "process";
$create = $connection->query
(
"INSERT INTO summery (process) VALUES($process)"
);
if (DB::isError($create)) die($create->getMessage($create));
but it's fine if I use numeric values:
$process = "12345";
$create = $connection->query
(
"INSERT INTO summery (process) VALUES($process)"
);
if (DB::isError($create)) die($create->getMessage($create));
or if I write the value directly into the expression:
$create = $connection->query
(
"INSERT INTO summery (process) VALUES('process')"
);
if (DB::isError($create)) die($create->getMessage($create));
I'm really confused ... any suggestions?
It's always better to use prepared queries and parameter placeholders. Like this in Perl DBI:
my $process=1234;
my $ins_process = $dbh->prepare("INSERT INTO summary (process) values(?)");
$ins_process->execute($process);
For best performance, prepare all your often-used queries right after opening the database connection. Many database engines will store them on the server during the session, much like small temporary stored procedures.
It's also very good for security. Writing the value into an INSERT string yourself means that you must write the correct escaping code in each SQL statement. Using a prepare-and-execute style means that only one place (execute) needs to know about escaping, if escaping is even necessary. The same idea works from PHP; see the sketch below.
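Since your snippet looks like PEAR DB (the DB::isError() call), here is roughly what the same prepare-and-execute style would look like there; treat it as a sketch, with the summery table and process column taken from your question:
$process = "process";

// Prepare once; execute() fills the ? placeholder in safely.
$sth = $connection->prepare("INSERT INTO summery (process) VALUES (?)");
$res = $connection->execute($sth, array($process));

if (DB::isError($res)) die($res->getMessage());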
Ditto what Zan Lynx said about placeholders. But you may still be wondering why your code failed.
It appears that you forgot a crucial detail from the previous code that worked for you for years: quotes.
This (tested) code works fine:
my $thing = 'abcde';
my $sth = $dbh->prepare("INSERT INTO table1 (id,field1)
VALUES (3,'$thing')");
$sth->execute;
But this next code (lacking the quotation marks in the VALUES field, just as your first example does) produces the error you report, because VALUES (3,$thing) resolves to VALUES (3,abcde), causing your SQL server to look for a field called abcde when there is no field by that name.
my $thing = 'abcde';
my $sth = $dbh->prepare("INSERT INTO table1 (id,field1)
VALUES (3,$thing)");
$sth->execute;
All of this assumes that your first example is not a direct quote of the code that failed as you describe, and is therefore not quite what you intended. It resolves to:
"INSERT INTO summery (process) VALUES(process)"
which, as mentioned above, causes your SQL server to read the item in the VALUES set as another field name. As given, this actually runs on MySQL without complaint and fills the field called 'process' with NULL, because that's what the field called 'process' contained when MySQL looked there for a value as it created the new record.
I do use this style for quick throw-away hacks involving known, secure data (e.g. a value supplied within the program itself). But for anything involving data that comes from outside the program, or that might possibly contain characters other than [0-9a-zA-Z], it will save you grief to use placeholders.