CodeIgniter Complex MySQL Query - Removing Backticks - Is it a Security Issue?

I'm trying to build a MySQL query to return appropriate search results, by examining several different database fields. For example if a user searches "plumber leeds", if a business had 'leeds' in the 'city' field and the word 'plumber' as part of their name, I would want that search result to be returned.
User searches could contain several words and are unpredictable. I'm currently achieving what I need by exploding the search term, trimming it and using it to compile a complex search query to return all relevant results.
I'm storing this complex query in a variable and using Codeigniter's Active Record to run the query.
$this->db->where($compiled_query, null, false);
What I'm concerned about is that I'm not protecting the query with backticks, and I'm unsure whether this is a security issue. I have XSS Clean enabled, but I'm still not sure if this is OK.
According to CI's user manual:
$this->db->where() accepts an optional third parameter. If you set it to FALSE, CodeIgniter will not try to protect your field or table names with backticks.
Source: http://ellislab.com/codeigniter/user-guide/database/active_record.html
There's some info about how I compile the query in a separate question. I'm aware mysql_real_escape_string is about to be deprecated and isn't a catch-all, hence part of my concern about this method.
https://stackoverflow.com/questions/13321642/codeigniter-active-record-sql-query-of-multiple-words-vs-multiple-database-fi
Any help appreciated

Backticks have nothing to do with security. They are really just a way to quote your field and table names, so that you could use a field called datatype, for example, and not have it conflict with MySQL keywords.
You are safe

I wouldn't say you're absolutely "safe", because you're never technically safe if you accept user input in a SQL query (even if you've manipulated it... when there's a will, there's a way).
Once you relinquish control over what is given to your application, you must be very careful how you deal with that data so that you don't open yourself up to an injection attack.
XSS Clean will help with POST or cookie data -- it does not run automatically on GET variables. I would manually run $data = $this->security->xss_clean($data); on the input if it's from the GET array.

Related

Any tips on improving this Postgres function that creates aliases and then returns them as JSON, to translate column headers to a different language?

Given a table, and another 'metadata' table that contains the translations, the function looks up the translation and then builds an alias statement. Also, is it injection-safe?
https://dbfiddle.uk/JQ6AnBVx
This has SQL injection in several areas - whether they are exploitable or not will depend on what information can be passed through a trust boundary (passed in by an untrusted user or program).
For instance, in your translate_column function, all the parameters (except for the language ID) can be used for SQL injection. If any of those can (or in the future may) come from untrusted sources, this is vulnerable.
Similarly, if your alias_text or translated_column can be controlled by an untrusted user, the f_get_table_with_alias function is vulnerable. This includes allowing unvalidated data to be inserted into your test tables.
There are two options to do this safely - ideally you should use both:
When your users create the names used for tables (or any DB objects), you should limit the names to a "safe" set of characters (alphanumerics, dashes, underscores, ...).
You should quote the table names / column names - adding double quotes around the objects. If you do this, you need to be careful to validate that there are no quotes in the name, and either error out or escape the quotes. You also need to be aware that adding quotes makes the name case sensitive.
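Both defenses above (an allow-list of characters, plus quoting with escaped double quotes) can be combined in one small helper. This is only a sketch; the function name and the exact character set are my own choices, not anything from the question:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// validIdent is the allow-list: identifiers must start with a letter or
// underscore and may contain alphanumerics, underscores, and dashes.
var validIdent = regexp.MustCompile(`^[A-Za-z_][A-Za-z0-9_-]*$`)

// quoteIdent validates a Postgres identifier against the allow-list, then
// double-quotes it, doubling any embedded double quotes (defense in depth;
// the allow-list already rejects them).
func quoteIdent(name string) (string, error) {
	if !validIdent.MatchString(name) {
		return "", fmt.Errorf("invalid identifier: %q", name)
	}
	return `"` + strings.ReplaceAll(name, `"`, `""`) + `"`, nil
}

func main() {
	q, err := quoteIdent("translated_column")
	fmt.Println(q, err) // "translated_column" <nil>

	_, err = quoteIdent(`x"; DROP TABLE t; --`)
	fmt.Println(err != nil) // true: rejected by the allow-list
}
```

Keep in mind the case-sensitivity caveat: once quoted, `"MyTable"` and `"mytable"` are different objects in Postgres.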

Go-MySQL-Driver: Prepared Statements with Variable Query Parameters

I'd like to use prepared statements with MySQL on my Go server, but I'm not sure how to make it work with an unknown number of parameters. One endpoint allows users to send an array of id's, and Go will SELECT the objects from the database matching the given id's. This array could contain anywhere from 1 to 20 id's, so how would I construct a prepared statement to handle that? All the examples I've seen require you to know exactly the number of query parameters.
The only (very unlikely) option I can think of is to prepare 20 different SELECT statements, and use the one that matches the number of id's the user submits - but this seems like a terrible hack. Would I even see the performance benefits of prepared statements at that point?
I'm pretty stuck here, so any help would be appreciated!
No RDBMS I'm aware of is able to bind an unknown number of parameters; it is never possible to match an array against an unknown number of parameter placeholders. That means there is no smart way to bind an array to a query such as:
SELECT xxx FROM xxx WHERE xxx IN (?, ..., ?)
This is not a limitation of the client driver, this is simply not supported by database servers.
There are various workarounds.
You can create the query with 20 ?, bind the values you have, and complete the binding with NULL values. This works fine because of the particular semantics of comparison operations involving NULL values: a condition like "field = ?" never evaluates to true when the parameter is bound to a NULL value, even for rows that would otherwise match. Supposing you have 5 values in your array, the database server has to deal with the 5 provided values plus 15 NULL values. It is usually smart enough to just ignore the NULL values.
An alternative solution is to prepare all the queries (each one with a different number of parameters). It is only interesting if the maximum number of parameters is limited. It works well on databases for which prepared statements really matter (such as Oracle).
As far as MySQL is concerned, the gain of using a prepared statement is quite limited. Keep in mind that prepared statements are only maintained per session, they are not shared across sessions. If you have a lot of sessions, they take memory. On the other hand, parsing statements with MySQL does not involve much overhead (contrary to some other database systems). Generally, generating plenty of prepared statements to cover a single query is not worth it.
Note that some MySQL drivers offer a prepared-statement interface while internally not using the prepared-statement capability of the MySQL protocol (again, because it is often not worth it).
There are also some other solutions (like relying on a temporary table), but they are only interesting if the number of parameters is significant.

Batch Set All MySQL Columns to Allow NULL

I have a large database with a bunch of tables, and the columns are mixed - some allow NULL while others don't.
I just recently decided to STANDARDIZE my methods and USE NULL for all empty fields, etc. Therefore I need to set ALL COLUMNS in ALL my tables to allow NULL (except for primary keys, of course).
I can whip up some PHP code to loop over this, but I was wondering if there's a quick way to do it via SQL?
regards
You can use metadata from the system tables to determine your tables, columns, types, etc. Then, using that, dynamically build a string that contains the ALTER SQL you need, with table and column names concatenated into it. This is then executed.
I've recently posted a solution that allowed the OP to search through columns looking for those that contain a particular value. In lieu of anyone providing a more complete answer, this should give you some clues about how to approach this (or at least, what to research). You'd need to either provide table names, or join to them, and then do something similar to this, except you'd be checking nullability, not value (and the dynamic SQL you build would build an ALTER, not a SELECT).
I will be in a position to help you with your specific scenario further in a few hours... If by then you've had no luck with this (or other answers), then I'll provide something more complete.
EDIT: Just realised you've tagged this as MySQL... My solution was for MS SQL Server. The principles should be the same (and hence I'll leave this answer up, as I think you'll find it useful), assuming MySQL allows you to query its metadata and execute dynamically generated SQL commands.
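For the MySQL side, one possible sketch of the metadata-driven approach follows. The schema name, tables, and the `buildAlter` helper are all hypothetical; the real metadata would come from a query like the one in the comment:

```go
package main

import "fmt"

// column describes one row fetched from information_schema.COLUMNS, e.g.:
//
//   SELECT TABLE_NAME, COLUMN_NAME, COLUMN_TYPE
//   FROM information_schema.COLUMNS
//   WHERE TABLE_SCHEMA = 'mydb'        -- hypothetical schema name
//     AND IS_NULLABLE = 'NO'
//     AND COLUMN_KEY <> 'PRI';         -- skip primary keys
type column struct {
	table, name, colType string
}

// buildAlter renders the dynamic ALTER statement for one column. MODIFY
// must repeat the full column type, which is why COLUMN_TYPE is selected
// from the metadata rather than reconstructed.
func buildAlter(c column) string {
	return fmt.Sprintf("ALTER TABLE `%s` MODIFY `%s` %s NULL;", c.table, c.name, c.colType)
}

func main() {
	// Hypothetical metadata rows standing in for a real query result.
	cols := []column{
		{"customers", "phone", "varchar(20)"},
		{"orders", "notes", "text"},
	}
	for _, c := range cols {
		fmt.Println(buildAlter(c))
	}
}
```

Each generated statement would then be executed against the database, exactly as the answer describes for the dynamically built string.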
SQL Server - Select columns that meet certain conditions?

Is there a way to get only the numeric elements of a string in MySQL?

I'm looking to make it easier for clients to search for things like phone/mobile/fax numbers. For that to happen, I want to strip both the search value and the relevant columns in my database of any non-numeric characters before comparing them. I'm using these functions to get only the numeric elements of the strings in MySQL, but they slow my queries down to a crawl when I use them.
Is there any way to do it without blowing my run times sky high?
The reason why your query times are exploding is that any use of such functions prevents you from using any index. Since you are not searching directly on a field, but on the output of a function, there is no way MySQL can use an index to execute the query.
This is in addition to the fact that you have to compute the function output for each record.
The best way around these runtimes, if you have the access and permission to do so, is to add a new column containing the stripped content you're filtering on. Add a write (insert/update) trigger to fill the column with the stripped values, and run a script that updates the field once for all existing records. Add an index on the new column. Then, in your application, use the new column when searching for a telephone number. The downsides are table schema alterations and added code in the business logic and/or data-abstraction layer.
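The normalization used on both sides (filling the shadow column and preparing the search term) could be as simple as the helper below. The function name is my own; the key point is that the same stripping logic runs on stored values and on user input, so an ordinary indexed equality or prefix search matches:

```go
package main

import (
	"fmt"
	"strings"
)

// stripNonDigits keeps only the characters '0'-'9', turning a formatted
// phone/fax value into the canonical digits-only form stored in the
// indexed shadow column.
func stripNonDigits(s string) string {
	var b strings.Builder
	for _, r := range s {
		if r >= '0' && r <= '9' {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(stripNonDigits("+44 (0)113 496-0123")) // 4401134960123
}
```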

Search in a field with html entities

Our customer's data (SQL Server 2005) has HTML entities in it (é -> &eacute;).
We need to search inside those fields, so a search for "équipe" will find "&eacute;quipe".
We can't change the data, because our customer's customers can edit those fields at will (with an HTML editor), so if we remove the entities, on the next edit they might reappear, and the problem will still be there.
We can't use a .net server-side function, because we need to find the rows before they are returned to the server.
I would use a function that replaces the entities with their UTF-8 counterparts, but it's kind of tiresome, and I think it would seriously hurt search performance (something about a full table scan, if I recall correctly).
Any ideas?
Thanks
You would only need to examine and encode the incoming search term.
If you convert "équipe" to "&eacute;quipe" and use that in your WHERE/FTS clause, then any index on that field can still be used, if the optimizer deems it appropriate.
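Encoding the incoming term could look like the sketch below. The entity map here is deliberately tiny and illustrative; a real implementation would have to cover whatever entity set the customer's HTML editor actually emits (named vs. numeric), since the encoded term must match the stored form byte for byte:

```go
package main

import (
	"fmt"
	"strings"
)

// entities is an illustrative subset of named HTML entities; extend it to
// match the entities the editor actually stores.
var entities = map[rune]string{
	'é': "&eacute;",
	'è': "&egrave;",
	'à': "&agrave;",
	'ç': "&ccedil;",
}

// encodeSearchTerm rewrites the user's search term into the entity-encoded
// form stored in the database, so a plain (indexable) comparison matches.
func encodeSearchTerm(s string) string {
	var b strings.Builder
	for _, r := range s {
		if e, ok := entities[r]; ok {
			b.WriteString(e)
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(encodeSearchTerm("équipe")) // &eacute;quipe
}
```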