I'm trying to configure my proxy to de-duplicate some cached files.
Some sites add a query string to the end of the URL, so the same file gets cached multiple times. For example:
http://download.oracle.com/otn-pub/java/jdk/7u75-b13/jre-7u75-linux-x64.tar.gz?AuthParam=kjzeghfhrehbfgjernf
http://download.oracle.com/otn-pub/java/jdk/7u75-b13/jre-7u75-linux-x64.tar.gz?AuthParam=jzehrguihegeijhpijf
I would like to create a StoreID rewrite rule like this:
^http:\/\/download\.oracle\.com\/otn\-pub\/java\/([a-zA-Z0-9\/\.\-\_]+\.(tar\.gz)) http://download.oracle.com/otn-pub/java/$1
but I haven't found documentation about how to do that.
OK, so after a lot of research I found the answer to my question. I'm writing it up here in case someone else has the same question.
First of all, I installed Squid 3.4, the first version which supports StoreID rewriting.
Second, after reading the StoreID documentation:
wiki.squid-cache.org/Features/StoreID
wiki.squid-cache.org/Features/StoreID/DB
and a lot of Google searching, I found this Perl program: http://pastebin.ca/2422099. It takes a database file as its first argument; you can find example database files in the second link above. In that file I added a line like this:
^http:\/\/download\.oracle\.com\/otn\-pub\/java\/([a-zA-Z0-9\/\.\-\_]+\.(tar\.gz)) http://download.oracle.com/otn-pub/java/$1
Third, in my squid.conf, I added these lines:
store_id_program /usr/local/squid/store-id.pl /usr/local/squid/store_id_db
store_id_children 5 startup=1
store_id_program is the path to the Perl script, with the database file as its argument.
store_id_children sets the number of helper subprocesses allowed for the program: a maximum of 5, with 1 started at launch.
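To sanity-check the helper outside Squid, you can pipe a test URL through it on the command line. This is only a rough sketch: the exact input and reply format depend on the helper script and on Squid's StoreID helper protocol, so treat the expected output below as an assumption.
# hypothetical manual test of the StoreID helper; paths match the configuration above
echo 'http://download.oracle.com/otn-pub/java/jdk/7u75-b13/jre-7u75-linux-x64.tar.gz?AuthParam=test' | /usr/local/squid/store-id.pl /usr/local/squid/store_id_db
# a working helper should map the URL to the query-string-free store ID,
# e.g. a reply along the lines of: OK store-id=http://download.oracle.com/otn-pub/java/jdk/7u75-b13/jre-7u75-linux-x64.tar.gz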
In the same squid.conf I replaced this line:
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
by
refresh_pattern -i cgi-bin 0 0% 0
to allow caching of URLs with query strings.
Last, I made sure that store-id.pl has execute ('x') permission.
Hope this helps :)
PS: One gotcha: in the DB file, the two columns must be separated by a tab (not a space). To be sure, you can use this command (found in the docs):
cat dbfile | sed -r -e 's/\s+/\t/g' |sed '/^\#/d' >cleaned_db_file
I have a binary I've been trying to fuzz with AFL. The only thing is, AFL only fuzzes STDIN and file inputs, and this binary takes input through its arguments: pass_read [input1] [input2]. I was wondering if there are any methods/fuzzers that allow fuzzing in this manner?
I don't have the source code, so making a harness is not really applicable.
Michal Zalewski, the creator of AFL, states in this post:
AFL doesn't support argv fuzzing, because TBH, it's just not horribly useful in
practice. There is an example in experimental/argv_fuzzing/ showing how to do it
in a general case if you really want to.
Link to the mentioned example on GitHub: https://github.com/google/AFL/tree/master/experimental/argv_fuzzing
There are some instructions in the file argv-fuzz-inl.h (I haven't tried it myself).
Bash only Solution
As an example, let's generate 10 random strings and store them in a file:
cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 10 | head -n 10 > string-file.txt
Next, let's read 2 lines at a time from string-file.txt and pass them into our application:
exec {handle}< string-file.txt
while read -r string1 <&"$handle"; do
    read -r string2 <&"$handle"
    pass_read "$string1" "$string2" >> crash_file.txt
done
exec {handle}<&-
We then have any crashes stored within crash_file.txt for further analysis.
This may not be the most elegant solution, but perhaps it gives you an idea of other possibilities if no existing tool fulfills the requirements.
I looked at the AFLplusplus repo on GitHub. Inside AFLplusplus/utils/argv_fuzzing/, there is a Makefile. If you run it, you will get a .so file (a shared library) that you can use to do argv fuzzing, even if you only have the binary. Obviously, you must use AFL_PRELOAD. You can read more in the README.
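For illustration, an invocation might look like the following. This is a hedged sketch: the library name (argvfuzz64.so) and paths are assumptions, and the README in utils/argv_fuzzing describes how seed inputs encode the argument vector, so check it for the authoritative usage.
# build the preload library (output name is an assumption; check the Makefile)
cd AFLplusplus/utils/argv_fuzzing && make
# fuzz the binary's argv by preloading the shared library into the target
AFL_PRELOAD=/path/to/AFLplusplus/utils/argv_fuzzing/argvfuzz64.so afl-fuzz -i input_dir -o output_dir -- ./pass_read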
When searching for a string in a MySQL database without knowing in which table or column I might find it, I usually resort to
mysqldump --extended-insert=FALSE --complete-insert=TRUE dbname|grep SomeString
That dumps a lot of unneeded data, ignores indexes created for that exact purpose, and makes it rather hard to localize the results. Is there a more convenient and more performant way? (Except installing 3rd-party software.)
About duplicate questions: As a comment pointed out, I am not restricted to SQL queries, but will accept any solution, whether it be a bash script, some CLI function I might have missed, or grepping the DB files as suggested.
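A small refinement of that dump-and-grep approach, offered as a hedged sketch (it assumes a mysqldump build with the standard --no-create-info flag): skipping table definitions and letting grep print line numbers makes hits a little easier to localize.
mysqldump --no-create-info --extended-insert=FALSE --complete-insert=TRUE dbname | grep -n 'SomeString'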
I found a find command that lists the file which contains a specific string:
find [path of data folder of database]/*.ibd -type f -exec grep -i '[search string]' {} +
Output when found:
Binary file [path]/[table].ibd matches
It doesn't output anything when not found.
Meaning:
find : the Linux find command
path : the path to the database folder under mysql/data, the directory which contains the table's .ibd files
-type f : only search for files
-exec : execute the following command on each file found
grep -i : case-insensitive text search
string : the string to search for
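As a concrete illustration (the data directory and database name here are assumptions; adjust them to your own setup):
sudo find /var/lib/mysql/mydb/*.ibd -type f -exec grep -i 'SomeString' {} +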
For others, read this:
https://serverfault.com/a/343177
Same usage here:
http://www.valleyprogramming.com/blog/linux-find-grep-commands-exec-combine-search
I am currently working on a bash script where I must download files from our MySQL database, host them somewhere different, then update the database with the new location for the image. The last portion is my problem area: creating the array of filenames and iterating through them, replacing the file names in the database as we go.
For whatever reason I keep getting these kinds of errors:
not found/X2b6qZP.png: 1: /xxx/images/X2b6qZP.png: ?PNG /xxx/images/X2b6qZP.png: 2: /xxx/images/X2b6qZP.png: : not found
/xxx/images/X2b6qZP.png: 1: /xxx/images/X2b6qZP.png: Syntax error: word unexpected (expecting ")")
files=$($DOWNLOADDIRECTORY/*)
files=$(${files[@]##*/})
# Iterate through the file names in the download directory, and assign the new values to the detail table.
for file in "${files[@]}"
do
mysql -h ${HOST} -u ${USER} -p${PASSWORD} ${DBNAME} "UPDATE crm_category_detail SET detail_value = 'http://xxx.xxx.x.xxx/img/$file' WHERE detail_value LIKE '%imgur.com/$file'"
done
You are trying to execute a glob as a command. The syntax to use arrays is array=(tokens):
files=("$DOWNLOADDIRECTORY"/*)
files=("${files[#]##*/}")
You are also trying to run your script with sh instead of bash.
Do not run sh file or use #!/bin/sh. Arrays are not supported in sh.
Instead use bash file or #!/bin/bash.
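Putting those two fixes together, here is a minimal sketch of the corrected loop. The table, column, and URL come from the question; the credentials and the -e flag (so the UPDATE string is executed as a statement rather than treated as an extra argument) are assumptions to adapt to your environment.
#!/bin/bash
# build the array of paths, then strip the directory prefix to get bare file names
files=( "$DOWNLOADDIRECTORY"/* )
files=( "${files[@]##*/}" )
for file in "${files[@]}"; do
    mysql -h "$HOST" -u "$USER" -p"$PASSWORD" "$DBNAME" -e "UPDATE crm_category_detail SET detail_value = 'http://xxx.xxx.x.xxx/img/$file' WHERE detail_value LIKE '%imgur.com/$file'"
done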
What's going on right here?
files=$($DOWNLOADDIRECTORY/*)
I don't think this is doing what you think it is doing.
According to this answer, you want to omit the first $ to get an array of files.
files=($DOWNLOADDIRECTORY/*)
I just wrote a sample script
#!/bin/bash
alist=(/*)
printf '%s\n' "${alist[@]}"
Output
/bin
/boot
/data
/dev
/dist
/etc
/home
/lib
....
Your assignments are not creating arrays. You need arrayname=( values for array ) as the notation. Hence:
files=( "$DOWNLOADDIRECTORY"/* )
files=( "${files[#]##*/}" )
The first line will give you all the names in the directory specified by $DOWNLOADDIRECTORY. The second carefully removes the directory prefix.
I've used spaces after ( and before ) for clarity; the shell neither requires nor objects to them. I used double quotes around the variable name and expansions to keep things sane when names contain spaces, etc.
Although it isn't immediately obvious why you might do this, its advantage over many alternatives is that it preserves spaces and other special characters in file names.
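A quick demonstration of that claim, using made-up scratch files (all names below are hypothetical):
DOWNLOADDIRECTORY=/tmp/demo
mkdir -p "$DOWNLOADDIRECTORY"
touch "$DOWNLOADDIRECTORY/a file.png" "$DOWNLOADDIRECTORY/b.png"
files=( "$DOWNLOADDIRECTORY"/* )
files=( "${files[@]##*/}" )
printf '<%s>\n' "${files[@]}"   # prints <a file.png> then <b.png>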
You could just loop directly over the files:
for file in "$DOWNLOADDIRECTORY"/*; do
file="${file##*/}" # or file=$(basename "$file")
# MySQL stuff
done
Some quoting added in case of spaces in paths.
Since I've run into so many startup errors, I decided to analyze the MySQL startup shell script, but there is a code fragment I cannot understand clearly.
version:
mysql Ver 14.14 Distrib 5.5.43, for osx10.8 (i386) using readline 5.1
368 #
369 # First, try to find BASEDIR and ledir (where mysqld is)
370 #
372 if echo '/usr/local/mysql/share' | grep '^/usr/local/mysql' > /dev/null
373 then
374 relpkgdata=`echo '/usr/local/mysql/share' | sed -e 's,^/usr/local/mysql,,' -e 's,^/,,' -e 's,^,./,'`
375 else
376 # pkgdatadir is not relative to prefix
377 relpkgdata='/usr/local/mysql/share'
378 fi
What's the purpose of line 372? It seems a little weird.
Any help will be appreciated.
At first glance, this is very strange indeed... but here's a solution to this mystery.
372: if echo '/usr/local/mysql/share' | grep '^/usr/local/mysql' > /dev/null
373: then
grep returns true if it matches and false if it doesn't, so this is testing whether the string /usr/local/mysql/share begins with (^) /usr/local/mysql. Output goes to /dev/null because we don't need to see it; we just want the comparison's exit status.
"Well," you interject, "that's obvious enough. The question is why?" Stick with me.
If it matches:
374: relpkgdata=`echo '/usr/local/mysql/share' | sed -e 's,^/usr/local/mysql,,' -e 's,^/,,' -e 's,^,./,'`
Beginning with /usr/local/mysql/share, strip off the beginning /usr/local/mysql, then strip off the beginning / then prepend ./.
So /usr/local/mysql/share becomes ./share.
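You can reproduce that transformation directly in a shell; this just re-runs the pipeline from line 374 by hand:
echo '/usr/local/mysql/share' | sed -e 's,^/usr/local/mysql,,' -e 's,^/,,' -e 's,^,./,'
# prints: ./share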
Otherwise, use the string /usr/local/mysql/share.
375: else
376: # pkgdatadir is not relative to prefix
377: relpkgdata='/usr/local/mysql/share'
"That's all fine, too," I hear you say, "but why go through all these gyrations to (apparently) compare and massage two fixed literal strings?? We already know the answer, so what's up with all the tests and substitution?"
It's a fair question.
My first suspicion was that there was some sort of magic bash hackery going on that I didn't recognize, but no, this code is really all too simple to be something along those lines.
My second suspicion, since this is notably absent from MySQL 5.0.96 (which I am not running but keep on hand for reference), was that this was an abandoned attempt to introduce some new magical behavior into mysqld_safe which was never finished and replaced with actual variables, the testing and massaging of which would have made a lot more sense than doing the same thing to literal strings.
But, no. When the only tool you have is a hammer, everything looks like a nail. What this is, is an example of doing something simple... the hard way. At least that's what it looks like to me. There actually is a somewhat rational explanation. To find the answer, you have to look into the source code (not binary) distribution.
MySQL has a lot of "hard-coded" defaults. This turns out to be an example of these.
In the source file scripts/mysqld_safe.sh, the snippet above looks very different:
if echo '#pkgdatadir#' | grep '^#prefix#' > /dev/null
then
relpkgdata=`echo '#pkgdatadir#' | sed -e 's,^#prefix#,,' -e 's,^/,,' -e 's,^,./,'`
else
# pkgdatadir is not relative to prefix
relpkgdata='#pkgdatadir#'
fi
Ah, source munging. Pattern substitution.
When you're compiling MySQL from source, the file scripts/Makefile contains instructions that use sed to replace things like #prefix# and #pkgdatadir# with the literal values. The same thing, of course, happens when Oracle or the Linux distro maintainers compile their binary distribution from source. These paths get hard-coded into many, many, many places in the code, including this script... resulting in the otherwise incomprehensible comparison of two literal strings that somebody should already have known the answer to.
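Conceptually, the build step does something along these lines. This is only a simplified sketch: the real Makefile rule, variable names, and the full list of substituted placeholders differ.
# hypothetical illustration of the build-time placeholder substitution
prefix=/usr/local/mysql
pkgdatadir=/usr/local/mysql/share
sed -e "s,#prefix#,$prefix,g" -e "s,#pkgdatadir#,$pkgdatadir,g" scripts/mysqld_safe.sh > scripts/mysqld_safe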
Instead of testing at build time whether one path is an anchored prefix of the other, and expressing the relpkgdata value relative to the current directory and modifying this script accordingly, that logical test is deferred until runtime, comparing two literals that were substituted in for their placeholders at build time.
I've gone to this amount of detail, not because it will help you troubleshoot, because I suspect it won't. It was, however, just bizarre enough to warrant some further investigation.
If you are having difficulty getting MySQL Server running... well, you shouldn't be, because it's a well-established system and it should work. If /bin/sh on your system isn't a symlink to /bin/bash, you might want to change mysqld_safe's shebang line from #!/bin/sh to #!/bin/bash, but beyond that, I suspect you are sniffing down the wrong rabbit hole by looking at mysqld_safe to get to the bottom of your issue. As convoluted as mysqld_safe is, it can't be said that it isn't time-tested. As they say, "the problem is somewhere else."
If I may, I'll suggest that you familiarize yourself with some of our other communities where you're likely to find the answer you need, particularly Ask Ubuntu, Super User, Server Fault, and Database Administrators. Familiarize yourself with each site's community, scope, and the level of existing expertise that each community expects on the part of those who ask questions there, and search the sites for the specific problem you're encountering. It's very likely someone has seen it and we've fixed it on one of them, if not here on SO.
So the diff utility works just like I want for 2 files, but I have a project that requires comparing more than 2 files at a time, maybe up to 10. This requires having all those files displayed side by side as well. My research has not really turned up anything; vimdiff seems to be the best so far, with the ability to compare 4 at a time.
My question: Is there any utility to compare more than 2 files at a time, or a way to hack diff/vimdiff so it can do multiple comparisons? The files I will be comparing are relatively short so it should not be too slow.
Displaying 10 files side-by-side and highlighting differences can be easily done with Diffuse. Simply specify all files on the command line like this:
diffuse 1.txt 2.txt 3.txt 4.txt 5.txt 6.txt 7.txt 8.txt 9.txt 10.txt
Vim can already do this:
vim -d file1 file2 file3
But you're normally limited to 4 files. You can change that by modifying a single line in Vim's source, however. The constant DB_COUNT defines the maximum number of diffed files, and it's defined towards the top of diff.c in versions 6.x and earlier, or about two thirds of the way down structs.h in versions 7.0 and up.
diff has the built-in options --from-file and --to-file, which compare one operand to all others.
--from-file=FILE1
Compare FILE1 to all operands. FILE1 can be a directory.
--to-file=FILE2
Compare all operands to FILE2. FILE2 can be a directory.
Note: the = after the option name is optional, as the examples below show.
e.g.
# this will compare foo with bar, then foo with baz .html files
$ diff --from-file foo.html bar.html baz.html
# this will compare src/base-main.js with all .js files in git repo,
# that has 'main' in their filename or path
$ git ls-files ':/*main*.js' | xargs diff -u --from-file src/base-main.js
Checkout "Beyond Compare": http://www.scootersoftware.com/
It lets you compare entire directories of files, and it looks like it runs on Linux too.
If you're running multiple diffs based off one file, you could probably try writing a script with a for loop to run through each directory and run the diff. Although it wouldn't be side by side, you could at least compare the files quickly. Hope that helped.
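For example, something like the following loop, where the reference file and directory names are made up purely for illustration:
# diff one reference file against the same file name in several directories
ref=base/config.txt                       # hypothetical reference file
for dir in build-a build-b build-c; do    # hypothetical directories to compare against
    echo "=== $dir ==="
    diff -u "$ref" "$dir/config.txt"
done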
Not answering the main question, but here's something similar to what Benjamin Neil has suggested but diffing all files:
Store the filenames in an array, then loop over the combinations of size two and diff (or do whatever you want).
files=($(ls -d /path/of/files/some-prefix.*)) # Array of files to compare
max=${#files[@]} # Take the length of that array
for ((idxA=0; idxA<max; idxA++)); do # iterate idxA from 0 to length
for ((idxB=idxA + 1; idxB<max; idxB++)); do # iterate idxB from idxA + 1 to length
echo "A: ${files[$idxA]}; B: ${files[$idxB]}" # Do whatever you're here for.
done
done
Derived from #charles-duffy's answer: https://stackoverflow.com/a/46719215/1160428
There is a simple and good way to do this: grep.
Depending on the size of the text, you can copy and paste it, or you can redirect the file's contents into the grep command. You can run grep -ir <string> /path for a normal search, or grep -vir <string> /path for an inverted one. This is my way for certification exams.
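A quick sketch of that idea, with a placeholder string and path:
grep -ir 'SomeString' /path/to/files    # lines that contain the string (case-insensitive, recursive)
grep -vir 'SomeString' /path/to/files   # lines that do NOT contain the string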