who knows this sql injection? - mysql

Someone injects the following content using text fields in forms:
IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2600000,SHA1(0xDEADBEEF)),SLEEP(5))/*'XOR(IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2600000,SHA1(0xDEADBEEF)),SLEEP(5)))OR'|"XOR(IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2600000,SHA1(0xDEADBEEF)),SLEEP(5)))OR"*/
Do you have an idea what it does? To me, it looks like an attempt to slow things down.
You can find a lot of injected websites (using this code) on Google. I reckon there is some "super hacker script" being used for this. It seems to use "sample@email.tst" as the default email address. Does somebody know that script?

It's designed to hit the CPU hard on older versions of MySQL, and to sleep for 5 seconds (holding the connection open) on newer versions, regardless of whether the input box's value ends up unquoted, single-quoted, or double-quoted in the query.
In each case it's likely to perform a denial-of-service attack if the application is vulnerable to SQL injection, because holding a connection open for a long time is likely to result in the server running out of resources/available connections.
-- if unquoted, it sees this:
IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2600000,SHA1(0xDEADBEEF)),SLEEP(5))
-- ...and then ignores the rest, which appears commented:
/*
-- If it's single-quoted, it doesn't see the comment;
-- rather, it terminates the single quote:
'
-- ...and then sees this:
XOR(IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2600000,SHA1(0xDEADBEEF)),SLEEP(5)))OR
-- ...and then sees the next part as a single-quoted string terminated in the client
'|
-- but if it's double-quoted, it sees the end double-quote:
"
-- ...and runs this:
XOR(IF(SUBSTR(@@version,1,1)<5,BENCHMARK(2600000,SHA1(0xDEADBEEF)),SLEEP(5)))OR
-- ...and then opens a double quote to be closed in the client
"
-- This is the end of the comment opened in the case of the unquoted client string.
*/
In each case it's attempting to benchmark an execution of the SHA1 function, which is quite CPU-intensive. BENCHMARK is simply a function that executes another expression a fixed number of times; in this case it's used to perform a CPU DoS on the host.
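To get a feel for the load this creates, you can run the two branches of the IF() directly (a rough illustration; exact timings depend on hardware and MySQL version):
-- old-MySQL branch: hash 0xDEADBEEF 2,600,000 times, burning CPU for several seconds
SELECT BENCHMARK(2600000, SHA1(0xDEADBEEF));
-- newer-MySQL branch: hold the connection open for 5 seconds
SELECT SLEEP(5);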

Related

Hashed password must be sanitized?

It's just a curiosity. If you hash a password (using sha1 or another method) before inserting it into a query, does it still need to be sanitized? Or is the hash's output always safe?
Is this simple code safe?
$salt = "123xcv";
$password = $_POST['password'];
$password = sha1($password+$salt);
$query = "select * from user where password='$password'";
Unless you have validated the input somehow, you shouldn't assume that it will always return a safe output, because functions such as sha1() can return error values when given unexpected input. For example:
echo '<?php echo sha1(''); ?>' | php
Warning: sha1() expects at least 1 parameter, 0 given in - on line 1
And this output obviously violates the assumption that "it's always a hex string". Other hashing functions in other languages can present yet another behaviour.
Apart from that, the above password hashing scheme ($password = sha1($password+$salt);) is very weak (see why), and I would strongly recommend not using it even in an example, as someone is eventually guaranteed to find it on StackOverflow and use it in production.
Also, as already noted above, building SQL queries by concatenating strings is bad practice and can lead to security issues in the future: today the only parameter in the query is the password; tomorrow someone decides to add some other option, and I bet they won't rewrite the query but will just reuse the template that is already there...
This SQL injection question is asked out of a common delusion.
In fact, there is no such thing as "sanitization" at all, nor any function to perform such a non-existent task. Likewise, there is no "safe" or "unsafe" data. All data is "safe", as long as you follow a few simple rules.
Not to mention that a programmer has far more important things to keep in mind than whether some particular piece of data is "safe" in some particular context.
What you really need is to avoid raw SQL for such trivial queries altogether, using an ORM to run SQL for you. In the rare cases where you really do need to run a complex query, use placeholders to substitute every variable in it.
From the documentation:
The value is returned as a string of 40 hex digits, or NULL if the argument was NULL.
Assuming you have a large enough varchar column, you have no sanitization to do.
That being said, it's always cleaner to use prepared statements; there's no reason to just concatenate strings to build queries.
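A minimal sketch of the prepared-statement version of the question's query, assuming a $pdo connection has already been created (the weak sha1+salt scheme is kept only to mirror the example, not as a recommendation):
$salt = "123xcv";
$hash = sha1($_POST['password'] . $salt); // note: strings are concatenated with '.', not '+'
$stmt = $pdo->prepare('SELECT * FROM user WHERE password = :password');
$stmt->bindValue(':password', $hash, PDO::PARAM_STR);
$stmt->execute();
$user = $stmt->fetch(PDO::FETCH_ASSOC);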

Iterating over a string in Vimscript or Parse a JSON file

So I'm creating a Vim script that needs to load and parse a JSON file into a local object graph. I searched and couldn't find any native way to process a JSON file, and I don't want to add any dependencies to the script. So I wrote my own function to parse the JSON string (read from the file), but it's really slow. At the moment, I iterate through each character in the file like so:
let len = strlen(jsonString) - 1
let i = 0
while i < len
  let c = strpart(jsonString, i, 1)
  let i += 1
  " A lot of code to process the file....
  " Note: I've tried shortcutting the process by searching for the enclosing
  " double quote when I come across an initial double quote (also taking into
  " account the escaping '\' character). It doesn't help.
endwhile
I've also tried this method:
for c in split(jsonString, '\zs')
" Do a lot of parsing ....
endfor
For reference, a file with ~29,000 characters takes about 4 seconds to process, which is unacceptable.
Is there a better way to iterate over a string in vim script?
Or better yet, have I missed a native function to parse JSON?
Update:
I asked for no dependencies because I:
Didn't want to deal with them
Genuinely wanted some ideas on the best way to do this without relying on someone else's work.
Sometimes I just like to do things manually even though the problem has already been solved.
I'm not against plugins or dependencies at all, it's just that I'm curious. Thus the question.
I ended up creating my own function to parse the JSON file. I was creating a script that could parse the package.json file associated with node.js modules. Because of this, I could rely on a fairly consistent format and quit processing whenever I'd retrieved the information I needed. This usually cut out large chunks of the file, since most developers put the largest chunk, their "readme" section, at the end. Because the package.json file is strictly defined, I left the process somewhat fragile: it assumes a root dictionary { } and actively looks for certain entries. You can find the script here: https://github.com/ahayman/vim-nodejs-complete/blob/master/after/ftplugin/javascript.vim#L33.
Of course, this doesn't answer my own question. It's only the solution to my unique problem. I'll wait a few days for new answers and pick the best one before the bounty ends (already set an alarm on my phone).
The simplest solution with the fewest dependencies is just to use the built-in json_decode() function.
let dict = json_decode(jsonString)
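For the original use case (reading package.json from disk), a minimal sketch, assuming a Vim new enough to have json_decode():
" read the whole file into one string, then decode it into a Vim dictionary
let jsonString = join(readfile('package.json'), "\n")
let dict = json_decode(jsonString)
echo get(dict, 'name', 'no name field')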
Even though Vim's origins date back a long way, its internal string()/eval() representation is so close to JSON that it's likely to work unless you need special characters.
You can look up the implementation here, which even supports true/false/null if you want:
https://github.com/MarcWeber/vim-addon-json-encoding
Better to use that library (vim-addon-manager allows installing dependencies easily).
Now it depends on your data whether this is good enough.
Benjamin Klein posted your question to vim_use, which is why I'm replying.
The best and fastest replies happen if you subscribe to the Vim mailing list.
Go to vim.sf.net and follow the community link.
You cannot expect the Vim community to scrape Stack Overflow.
I've added the keywords "json" and "parsing" to that little code so that it can be found more easily.
If this solution does not work for you, you can try the many :h if_* bindings, or write an external script which extracts the information you're looking for, or which turns JSON into Vim's dictionary representation that can then be read by eval() (escaping the special characters you care about correctly).
If you're after a completely correct solution, omitting dependencies is one of the worst things you can do. The eval() variant mentioned by @MarcWeber is one of the fastest, but it has its disadvantages:
Using the solution for securing eval() that I mentioned in a comment makes it no longer the fastest. In fact, after you apply it, eval() becomes slower by more than an order of magnitude (0.02s vs 0.53s in my test).
It does not respect surrogate pairs.
It cannot be used to verify that you have correct JSON: it accepts some strings (e.g. "\<C-o>") that are not JSON strings, and it allows trailing commas.
It fails to give sensible error messages. It fails badly if you use the vam#VerifyIsJSON I mentioned in point 1.
It fails to load floating-point values like 1e10 (Vim requires numbers to look like 1.0e10, but numbers like 1e10 are allowed in JSON: note "and/or" in the first paragraph).
All of the above statements (except for the first) also apply to vim-addon-json-encoding mentioned by @MarcWeber, because it uses eval(). There are some other possibilities:
Fastest and most correct is using Python: pyeval('json.loads(vim.eval("varname"))'). Not faster than eval(), but the fastest among the other possibilities. (0.04s in my test: approximately two times slower than eval().)
Note that I use pyeval() here. If you want a solution for a Vim version that lacks this functionality, it will no longer be one of the fastest.
Use my json.vim plugin. It has the advantage of slightly better error reporting compared to a failed vam#VerifyIsJSON (slightly worse compared to eval()), and it correctly loads floating-point numbers. It can be used for verification of strings (it does not accept "\<C-a>"), but it loads lists with a trailing comma just fine. It does not support surrogate pairs. It is also very slow: in the test I used (a 279,702-character string) it takes 11.59s to load. json.vim tries to use Python if possible, though.
For the best error reporting you can take yaml.vim and strip the YAML support out of it, leaving only JSON (I once did the same thing for pyyaml, though in Python: see the markedjson library used in powerline: it is pyyaml minus the YAML stuff, plus classes with marks). But this variant is even slower than json.vim and should only be used if the main thing you need is error reporting: 207 seconds to load the same 279,702-character string.
Note that the only variant mentioned that satisfies both requirements, "no dependencies" and "no Python", is eval(). If you are not fine with its disadvantages, you have to throw away one or both of these requirements, or copy-paste code. Though if you take speed into account, only two candidates are left: eval() and Python: if you want to parse JSON fast you really must use C, and only these solutions spend most of their time in functions written in C.
Most other interpreters (Ruby/Perl/Tcl) do not have a pyeval() equivalent, so they will be slower even if their JSON implementation is written in C. Some others (Lua/Racket (mzscheme)) have a pyeval() equivalent, but e.g. luaeval('{}') is zero, meaning that you would have to add an additional step of explicitly and recursively converting objects into Vim dictionaries and lists (e.g. luaeval('vim.dict({})')), which will hurt performance. I cannot say anything about mzeval(), but I have never heard of anybody actually using Racket (mzscheme) with Vim.
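A minimal sketch of the Python route described above, assuming a Vim built with +python (the wrapper function name is mine; it mirrors the pyeval() call from the answer):
function! ParseJsonWithPython(str) abort
  " expose the string to the embedded Python interpreter via a global
  let g:json_input = a:str
  python import json, vim
  let parsed = pyeval('json.loads(vim.eval("g:json_input"))')
  unlet g:json_input
  return parsed
endfunction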

sanitizing data before mysql injection and xss - am i doing it right with pdo and htmlpurifier

I am still working on securing my web app. I decided to use the PDO library to prevent MySQL injection and HTML Purifier to prevent XSS attacks. Because all the data that comes from the input fields goes into the database, I perform these steps to work with the data:
get data from input field
start pdo, prepare query
bind each variable (POST variable) to query, with sanitizing it using html purifier
execute query (save to database).
In code it looks like this:
// start htmlpurifier
require_once '/path/to/htmlpurifier/library/HTMLPurifier.auto.php';
$config = HTMLPurifier_Config::createDefault();
$purifier = new HTMLPurifier($config);
// start pdo
$pdo = new PDO('mysql:host=host;dbname=dbname', 'login', 'pass');
$pdo -> setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// prepare and bind
$stmt = $pdo -> prepare('INSERT INTO `table` (`field1`) VALUES ( :field1 )');
// purify data and bind it.
$stmt -> bindValue(':field1', $purifier->purify($_POST['field1']), PDO::PARAM_INT);
// execute (save to database)
$stmt -> execute();
Here are the questions:
Is that all I have to do to prevent XSS and MySQL injection? I am aware that I can't be 100% sure, but should it work fine in most cases, and is it enough?
Should I sanitize the data again when grabbing it from the DB and sending it to the browser, or is filtering before saving enough?
I was reading on the wiki that it's smart to turn off magic_quotes. Of course, if magic quotes adds unnecessary slashes it can be annoying, but if I don't care about those slashes, isn't turning it off just losing another line of defense?
Answer:
Please note that the code I have written in this example is just an example. There are a lot of inputs, and the queries to the DB are much more complicated. Unfortunately I can't agree with you that if the PDO variable type should be int, I don't have to filter it against XSS attacks. Correct me if I am wrong:
If the input should be an integer, and it is, then it's OK - I can put it in the DB. But remember that any input can be changed, and we have to expect the worst. So if everything is alright then it is alright, but if a malicious user inputs XSS code then I have multiple lines of defense:
client-side defense - check if it is a numeric value. Easy to bypass, but it can stop total newbies.
server side - XSS injection test (with HTML Purifier or e.g. htmlspecialchars)
DB side - if somebody somehow puts in malicious code that gets past the XSS protection, then the database is going to return an error, because an integer should be there and not some other kind of value.
I guess it is not doing anything wrong, and it can do a lot of good. Of course we lose some time computing everything, but I guess we have to weigh performance against security and decide which is more important. My app is going to be used by 2-3 users at a time. Not many. And security is much more important to me than performance.
Fortunately my whole site is UTF-8, so I do not expect any problems with encoding.
While searching the net I came across a lot of opinions about addslashes(), stripslashes(), htmlspecialchars(), htmlentities()... and I've chosen HTML Purifier and PDO. Everyone is saying that they are the best solutions against XSS and MySQL injection threats. If you have any other opinion, please share.
As for SQL injection, yes, you can be 100% sure if you always use prepared statements. As for XSS, you must also make sure that all your pages are UTF-8. HTML Purifier sanitizes data with the assumption that it's encoded in UTF-8, so there may be unexpected problems if you put that data in a page with a different encoding. Every page should have a <meta> tag that specifies the encoding as UTF-8.
Nope, you don't need to sanitize the data after you grab it from the DB, provided that you already sanitized it and you're not adding any user-submitted stuff to it.
If you always use prepared statements, magic quotes is nothing but a nuisance. It does not provide any additional lines of defense because prepared statements are bulletproof.
Now, here's a question for you. PDO::PARAM_INT will turn $field1 into an integer. An integer cannot be used in an SQL injection attack. Why are you passing it through HTML Purifier if it's just an integer?
HTML Purifier slows down everything, so you should only use it on fields where you want to allow HTML. If it's an integer, just do intval($var) to destroy anything that isn't a number. If it's a string that shouldn't contain HTML anyway, just do htmlspecialchars($var, ENT_COMPAT, 'UTF-8') to destroy all HTML. Both of these are much more efficient and equally secure if you don't need to allow HTML. Every field should be sanitized, but each field should be sanitized according to what it's supposed to contain.
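A minimal sketch of that per-field approach (the field names and placeholders are invented for illustration; $pdo, $purifier, and a matching prepared $stmt are assumed to be set up as in the question):
// integer field: intval() guarantees a number, which can't carry XSS or SQL injection
$stmt->bindValue(':age', intval($_POST['age']), PDO::PARAM_INT);
// plain-text field that should never contain HTML: escape it all
$stmt->bindValue(':title', htmlspecialchars($_POST['title'], ENT_COMPAT, 'UTF-8'), PDO::PARAM_STR);
// rich-text field where HTML is allowed: the only place HTML Purifier is needed
$stmt->bindValue(':body', $purifier->purify($_POST['body']), PDO::PARAM_STR);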
Response to your additions:
I didn't mean to imply that if a variable should contain an integer, then it need not be sanitized. Sorry if my comment came across as suggesting that. What I was trying to say is that if a variable should contain an integer, it should not be sanitized with HTML Purifier. Instead, it should be validated/sanitized with a different function, such as intval() or ctype_digit(). HTML Purifier will not only use unnecessary resources in this case, but it also can't guarantee that the variable will contain an integer afterwards. intval() guarantees that the result will be an integer, and the result is equally secure because nobody can use an integer to carry out an XSS or SQL injection attack.
Similarly, if the variable should not contain any HTML in the first place, like the title of a question, you should use htmlspecialchars() or htmlentities(). HTML Purifier should only be used if you want your users to enter HTML (using a WYSIWYG editor, for example). So I didn't mean to suggest that some kinds of inputs don't need sanitization. My view is that inputs should be sanitized using different functions depending on what you want them to contain. There is no single solution that works on all types of inputs. It's perfectly possible to write a secure website without using HTML Purifier if you only ever accept plain-text comments.
"Client-side defense" is not a line of defense, it's just a convenience.
I'm also getting the nagging feeling that you're lumping XSS and SQL injection together when they are completely separate attack vectors. "XSS injection"? What's that?
You'll probably also want to add some validation to your code in addition to sanitization. Sanitization ensures that the data is safe. Validation ensures that the data is not only safe but also correct.
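For example, a minimal validation step using PHP's built-in filter functions (the field name is invented):
// FILTER_VALIDATE_INT returns false unless the value is a well-formed integer, so "12abc" is rejected
$age = filter_var($_POST['age'], FILTER_VALIDATE_INT);
if ($age === false) {
    die('Invalid age'); // reject outright rather than trying to "fix" the input
}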

Mutexes in perl & MySQL

I am trying to ensure that only one instance of a Perl script can run at a time. The script performs some kind of db_operation depending on the parameters passed in. The script does not necessarily live in one place or on one machine, and possibly runs on multiple OSs, though the file system is automounted across the various machines.
My first approach was to just create a .lock file and do the following:
use warnings;
use strict;
use Fcntl qw(:DEFAULT :flock);
...
open(FILE,">>",$lockFilePath);
flock(FILE,LOCK_EX) or die("Could not lock ");
do_something();
flock(FILE,LOCK_UN) or die("Could not unlock ");
close(FILE);
but I keep getting the following errors:
Bareword "LOCK_EX" not allowed while "strict subs" in use
Bareword "LOCK_UN" not allowed while "strict subs" in use
So I am looking for another way to approach the problem. Locking the DB itself is also not practical, since the DB could be used by other scripts (which is acceptable); I am just trying to prevent this script from running twice. And locking a table for writes is not practical either, since my script is not aware of which table the operation takes place on; it just launches another Perl script supplied as a parameter.
I am thinking of adding a table to the DB with just one value, and using that as a mutex, but I don't know how practical/reliable that is (a lot of red flags go up in my head). I have a DBI connection to a DB that this script uses.
Thanks
The Bareword error you are getting sounds like you've done something in that "..." to confuse Perl with regard to the imported Fcntl constants. There's nothing wrong with using those constants like that. You might try something like LOCK_UN() to see what error that gets you.
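For reference, a minimal flock()-based sketch of the question's approach that works once the Fcntl import is in effect (the lock-file path is hypothetical, do_something() is the placeholder from the question, and a lexical filehandle plus error checks are added; flock over automounted/NFS filesystems is a separate concern, as noted below):
use strict;
use warnings;
use Fcntl qw(:flock);

my $lock_path = '/path/to/script.lock';   # hypothetical shared location
open(my $lock_fh, '>>', $lock_path) or die "Could not open lock file: $!";
flock($lock_fh, LOCK_EX) or die "Could not lock: $!";
do_something();                           # the work that must not run concurrently
flock($lock_fh, LOCK_UN) or die "Could not unlock: $!";
close($lock_fh);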
If you are using MySQL, you can use the GET_LOCK() and RELEASE_LOCK() mechanism (GET_LOCK takes a lock name and a timeout in seconds). It works reasonably well for cases like this:
SELECT GET_LOCK("script_lock", 10);
and then when you are finished:
SELECT RELEASE_LOCK("script_lock");
See http://dev.mysql.com/doc/refman/4.1/en/miscellaneous-functions.html for details.
You may want to avoid the file locking; from what I remember, it's notoriously unreliable on non-local filesystems. Your better bet is to just use the existence of the file itself as the indicator that the script is already running (similar to a UNIX PID file). Granted, this won't be 100% reliable, but it should work reasonably well with very low overhead, provided the script isn't being invoked incessantly.
If you need better reliability than that, using the database for the mutex is a good solution.
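A minimal sketch of the GET_LOCK() route over the script's existing DBI connection (the handle name $dbh, the lock name, and do_db_operation() are illustrative):
# try to acquire a named server-side lock, waiting up to 10 seconds
my ($got_lock) = $dbh->selectrow_array(q{SELECT GET_LOCK('my_script_lock', 10)});
die "Another instance appears to be running\n" unless $got_lock;

do_db_operation();   # the critical section

# release explicitly; the lock is also released if the connection drops
$dbh->selectrow_array(q{SELECT RELEASE_LOCK('my_script_lock')});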

Query in MUMPS statement

I $P(GIH,,24)= S $P(GIH,,24)="C" S
What is the meaning of two S's in above MUMPS statement?
Let me start out by saying the original statement is NOT valid Standard MUMPS, InterSystems Caché, or GT.M code. Even broadly guessing at what was originally meant, the final S on the line isn't something you would do in MUMPS. A single S could be a SET command, but you still don't have any arguments telling which variable should be assigned, or what value should be assigned to it.
The rest of my reply is trying to figure out what it could have meant.
Your question seems to have been mangled by some software, either on Stack Overflow or in the cut-and-paste process used to put it here:
I saw:
I $P(GIH,,24)= S $P(GIH,,24)="C" S
What is the meaning of two S's in above MUMPS statement?
It is hard to figure out what you meant, since it would require hypothesizing where quotes might have been and which ones could have been deleted in the transmission of the question.
First of all, let's do something we can reasonably guess at. $P is usually an abbreviation for the built-in (intrinsic) function $PIECE, and an I standing alone is probably an IF command,
and an S standing alone is probably a SET command. This runs into a problem with your example, because the format of a line of MUMPS code is COMMAND COMMAND-ARGUMENT.
Aside: I also just tried to put the text COMMAND-ARGUMENT in "angle brackets", i.e. with a less-than character at the beginning of the word and a greater-than character at the end. The text COMMAND-ARGUMENT just disappeared, which means that Stack Overflow treats it as HTML markup. I notice there is a Code marker at the top of this edit window which may or may not help.
If we do the expansions to the code above, we get:
IF $PIECE(GIH,,24)= SET $PIECE(GIH,,24)="C" SET
When we expand the final S, it looks like a SET command, but without any set-argument.
Note: if this were in a Caché system, we might have an example of the extra spaces allowed by Caché, which are not allowed in Standard MUMPS; i.e. the S may have been the right-hand side of the equality operator in the IF command. This would only make sense if Caché also allowed the argument of the SET command to appear without an actual SET command.
i.e.:
IF $PIECE(GIH,,24)=S $PIECE(GIH,,24)="C" SET
We still have to deal with the two commas in a row in the $PIECE intrinsic function. Using two commas in a row to indicate a missing argument is only allowed in programmer-written code, not when calling built-in functions. So this might be a place where we can guess what you meant, or what you originally pasted in.
If we put in double quotes, we run into the problem that $PIECE (which separates a string based on a delimiter) would have a zero-length quoted string as its second argument, which is just as erroneous as having an empty argument.
So if we hypothesize a quoted string that has angle brackets, we would get something like this for your original line:
IF $PIECE(GIH,"<something>",24)="<something>" SET $PIECE(GIH,"<something>",24)="C" SET
Note: I just saw that the Code marker allows the use of grave accents to keep a line from being treated as HTML - which is good, since the grave accent is not a character used in MUMPS coding.
As has been mentioned in another reply, the SET-$PIECE-ARGUMENT form is used to change the data stored at a particular delimited substring location.
So this code might be a fine guess, but it has gone far afield of what you may or may not have done, so I'm stopping now until we get feedback that this is even close to what you wanted. As I said at the start, this is still not quite valid code.
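To illustrate the SET-$PIECE form mentioned above, a small stand-alone example on a local variable (the variable name X and the "^" delimiter are just for illustration):
 SET X="A^B^C"
 SET $PIECE(X,"^",2)="Z"  ; X is now "A^Z^C"
 WRITE X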
This is pretty bizarre, but what I think is going on is:
I $P(GIH,<null>,24)=<null>
Calling $PIECE with the second argument null will replace the entire string with the value you're assigning, which, in this case, is also null. It looks like a convoluted way of clearing the value of GIH and permitting control to flow into the following SET statement. I seriously doubt that $PIECE sets the $T flag, though, which means that calling this as the condition for the IF operator probably isn't working the way you want it to.
S $P(GIH,,24)="C"
The next statement looks a lot like the first -- replace the entirety of GIH with "C".
S
I don't think the last SET is valid MUMPS.
Why this isn't written as follows is beyond me:
s GIH="C"
Hope that helps!
Maybe InterSystems Caché handles this syntax differently, but that code results in a syntax error when I try it in Caché. There may be other versions of MUMPS for which it is valid, but I don't think so.
As others have pointed out, this statement is not valid; it appears pieces are missing.
But S is the SET command in MUMPS.
Here is what a statement like this might look like:
I $P(GIH,"^",24)="P" S $P(GIH,"^",24)="C" S UPDATEFLG=1
in this case GIH might look something like:
GIH=256^^^42^^^^Mike^^^^^^^^^^^^^^^^P^^^
which would make this evaluate to TRUE:
I $P(GIH,"^",24)="P"
so after:
S $P(GIH,"^",24)="C"
GIH will be:
GIH=256^^^42^^^^Mike^^^^^^^^^^^^^^^^C^^^
then it would set the variable UPDATEFLG=1
Hope this helps :-)