Compile MySQL for 256-bit AES

According to the MySQL documentation:
"Encoding with a 128-bit key length is used, but you can extend it up to 256 bits by modifying the source."
But they don't seem to say where the change should be made. Does anyone have experience with this? Which source file should be changed?
Note: I used these steps to compile.

I found a little help on the MySQL mailing list: in the file include/my_aes.h, change
#define AES_KEY_LENGTH 128 /* must be 128 192 or 256 */
As I'm using openSUSE 11.1, I needed to have the following tools:
sudo zypper install gcc gcc-c++ ncurses-devel
Then just compile it following this instruction - here
Credit to LenZ and tripanel.net

It's probably going to be a more maintainable solution to carry out the encryption in the client application.
Moreover, you'll also get the benefit of having the data encrypted as it travels over the network, and of not sending the key over the network (of course, you can use SSL to connect to MySQL to mitigate this anyway).
If this does not seem like a good approach, please post your requirements.
You probably do not want to compile your own MySQL binaries; there are more useful things for developers to do than build their own MySQL binaries. MySQL/Sun's binaries are extensively tested and won't contain performance regressions (we hope).
The MySQL AES_ENCRYPT() functions are also potentially insecure, because it is not documented:
How they hash the password into the key
What cipher mode they use
If they're done in a vulnerable way, the encryption could be very weak. It depends on your use-case whether this matters.
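To illustrate the first concern: community reviews of the open-source implementation describe the password-to-key step as a simple cyclic XOR fold of the passphrase into a fixed-size key buffer, rather than a proper key-derivation function. Here is a minimal Python sketch of that folding scheme (an illustration of why such a scheme is weak, not the authoritative MySQL source):

```python
def fold_key(passphrase: bytes, key_len: int = 16) -> bytes:
    """XOR-fold a passphrase of arbitrary length into a fixed-size key.

    Mirrors the cyclic-XOR scheme attributed to MySQL's key setup;
    illustrative only -- consult the actual source before relying on this.
    """
    key = bytearray(key_len)
    for i, b in enumerate(passphrase):
        key[i % key_len] ^= b
    return bytes(key)

# Weakness demo: distinct passphrases can collapse to the same key.
k1 = fold_key(b"secret")
k2 = fold_key(b"secret" + b"\x00" * 16)  # 16 trailing NUL bytes XOR in nothing
assert k1 == k2
```

A real design would instead run the passphrase through a salted KDF such as PBKDF2 or scrypt before using it as an AES key.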

Related

How to downgrade tcl_platform(osVersion) to 6.1?

My tcl_platform(osVersion) is v6.2
% puts $tcl_platform(osVersion)
6.2
How to downgrade tcl_platform(osVersion) to v6.1?
Thank you.
I tried to find ActiveTcl v8.5 on the internet, but all the links for the old version are invalid...
That value, which describes the version of the operating system that is running the script, is read out of a platform-specific location in your OS during the initialisation of an interpreter (technically, it is copied from data determined during startup of the first Tcl interpreter in a process, where that data is held in a location invisible to you). It is then free to be read or altered by your code... with whatever consequences that may entail.
Permanently changing that value is done by changing what OS is installed. That's totally out of scope for what an ordinary user script can do!
Tcl's implementation mostly doesn't use the OS version. It cares far more about whether API capabilities are exposed to it, and those are almost always at the more granular level of general platform (or transparently adapted around).
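To see that the value is just an ordinary, overwritable element of the tcl_platform array (per interpreter, with whatever consequences that entails), here is a quick demonstration driven from Python's bundled Tcl interpreter via the standard tkinter module (assuming tkinter/Tcl is installed; no GUI is needed):

```python
from tkinter import Tcl  # uses the Tcl interpreter shipped with Python

tcl = Tcl()
print(tcl.eval('set tcl_platform(osVersion)'))   # whatever the OS actually reports
tcl.eval('set tcl_platform(osVersion) 6.1')      # overwrite it in this interpreter only
print(tcl.eval('set tcl_platform(osVersion)'))   # now prints 6.1
```

Note that this changes only what scripts in that interpreter see; it does not (and cannot) change the operating system itself.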

Good practices for app configuration storage?

We have a number of loosely coupled apps, some in PHP and some in Python.
It would be beneficial to have some centralized place where they could get both global and app-specific configuration information.
Something like, for Python:
conf=config_server.get_params(url='http://config_server/get/My_app/all', auth=my_auth_data)
and then ideally use parameters as potentially nested attributes, eg. conf.APP.URL, conf.GLOBAL.MAX_SALES
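For illustration, the nested-attribute style above can be achieved independently of where the data comes from, by wrapping whatever dict the (hypothetical) config server returns; a minimal Python sketch:

```python
from types import SimpleNamespace

def to_namespace(obj):
    """Recursively convert dicts into attribute-accessible namespaces."""
    if isinstance(obj, dict):
        return SimpleNamespace(**{k: to_namespace(v) for k, v in obj.items()})
    if isinstance(obj, list):
        return [to_namespace(v) for v in obj]
    return obj

# Example payload a config server might return for My_app (values invented):
conf = to_namespace({
    "APP": {"URL": "http://example.invalid/app"},
    "GLOBAL": {"MAX_SALES": 100},
})
print(conf.APP.URL)           # http://example.invalid/app
print(conf.GLOBAL.MAX_SALES)  # 100
```
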
I was considering writing my own config server app, but I wasn't sure what the pros and cons of such an approach would be vs., e.g., storing the config in a centralized database or some other mode accessible from multiple sites.
Also, am I perhaps missing some readily available, well-supported tool which could do this? (I had a look at Puppet and Ansible, but they seem to be very evolved tools that do so much more than this. I also looked at Software Recommendations SE for this, but they already have a number of such questions unanswered.)
I think it would be a good idea for your configuration mechanism not to be hard-coded to obtain configuration data via a particular technology (such as file, web server or database), but rather be able to obtain configuration data from any of several different technologies. I illustrate this with the following pseudo-code examples:
cfg = getConfig("file.cfg"); # from a file
cfg = getConfig("file#file.cfg"); # also from a file
cfg = getConfig("url#http://config_server/file.cfg"); # from the specified URL
cfg = getConfig("exec#getConfigFromDB.py"); # from stdout of command
The parameter passed to getConfig() might be obtained from, say, a command-line option. The "exec#..." format is a flexible mechanism, but carries the potential danger of somebody specifying a malicious command to execute, for example, "exec#rm -rf /".
This approach means you can experiment with whatever you consider to be an ideal source-of-configuration-data technology and later, if you discover that technology to be inappropriate, it will be trivial to discard it and use a different source-of-configuration-data technology instead. Indeed, the decision for which source-of-configuration-data technology to use might vary from one use case/user to another.
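A minimal Python sketch of that dispatch idea (the `scheme#location` syntax and the function name are just illustrative, and the exec# branch runs arbitrary commands, so treat it with the caution noted above):

```python
import subprocess
import urllib.request

def get_config(spec: str) -> str:
    """Return raw configuration text from a 'scheme#location' specifier."""
    scheme, _, location = spec.partition("#")
    if not location:                      # bare filename, e.g. "file.cfg"
        scheme, location = "file", spec
    if scheme == "file":
        with open(location) as f:
            return f.read()
    if scheme == "url":
        with urllib.request.urlopen(location) as resp:
            return resp.read().decode()
    if scheme == "exec":                  # dangerous: executes an arbitrary command
        return subprocess.run(location, shell=True, check=True,
                              capture_output=True, text=True).stdout
    raise ValueError(f"unknown config source: {scheme}")
```

The returned text would then be handed to whatever parser the application uses (configparser, JSON, Config4*-style syntax, etc.); the point is that the source of the bytes is decoupled from their format.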
I developed a C++ and Java configuration file parser (sorry, no Python or PHP implementations) called Config4*. If you look at chapters 2 (overview of syntax) and 3 (overview of API) of the Config4* Getting Started Guide, you will notice that it supports the kind of flexible approach I discuss in this answer (the "url#..." format is not supported, but "exec#curl -sS ..." provides the same functionality). 99 percent of the time, I end up using configuration files, but I find it comforting to know that my applications can trivially switch to a different source-of-configuration-data technology whenever the need might arise.

What's the difference between the glibc and non-glibc MariaDB tarballs?

I want to download MariaDB as a gzipped tarball, but I found that there are many files that could be downloaded, such as mariadb-10.2.6-linux-x86_64.tar.gz, mariadb-10.2.6-linux-glibc_214-x86_64.tar.gz (requires GLIBC_2.14+), and mariadb-10.2.6-linux-systemd-x86_64.tar.gz (for systems with systemd).
What is the difference between them?
First, please note that tarballs are generic, but not universal. Even though there seem to be many of them, there are still far fewer tarballs than supported systems and flavors. None of the tarballs is guaranteed to work on any particular system. The common problem is the absence of certain libraries that the MariaDB server, client programs or plugins are linked with.
Back to the actual question, the main difference is highlighted in the package names/comments.
mariadb-10.2.6-linux-glibc_214-x86_64.tar.gz (requires GLIBC_2.14+) -- binaries built on a reasonably modern system. This package most likely contains more plugins/engines, because some of them require modern compilers and libraries; but it can only be run on systems that have glibc 2.14 or higher.
mariadb-10.2.6-linux-systemd-x86_64.tar.gz (for systems with systemd) -- the package with systemd support. It's important if you actually install the service and run it this way. If you just keep the binaries locally and start them manually, it shouldn't matter.
mariadb-10.2.6-linux-x86_64.tar.gz -- the package provided mostly for legacy/compatibility purposes, for older systems which are not yet EOL-ed. Generally it has somewhat better chances of running successfully on an arbitrary system, but you need to check whether it contains everything you need, as that might not be the case.
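To decide between the generic and the glibc_214 tarball, you can check the glibc version on the target host. One way from Python's standard library (a heuristic: it may report empty strings on non-glibc systems such as musl-based distributions, in which case the generic tarball is the safer guess):

```python
import platform

libc, version = platform.libc_ver()
print(libc, version)  # e.g. "glibc 2.31" on most modern Linux distributions

def meets_glibc(required=(2, 14)):
    """Rough check whether the host glibc is at least `required`."""
    if libc != "glibc" or not version:
        return False  # unknown or non-glibc libc
    parts = tuple(int(p) for p in version.split(".")[:2])
    return parts >= required
```

Running `ldd --version` in a shell gives the same information directly.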

No Initialization Functions in MySQL Library

I have downloaded the MySQL Connector/C driver from the official website, the version that I believe is supposed to be released alongside 5.6.
I then obviously wanted to use the library, so I wrote a small application. During linking, however, I got strange linker errors saying it cannot find the functions mysql_library_init() and mysql_library_end().
When I use a command to check for the functions inside the library, nm /usr/lib64/mysql/libmysqlclient.a > ~/Desktop/symbols, I indeed cannot find the functions the linker mentioned.
The functions I do find, however, are mysql_server_init and mysql_server_end, which, according to the documentation, are marked as deprecated. (There are more functions in there too.)
What am I doing wrong? I am using version 6.1.2 of the driver.
It seems like the problem is that the documentation is ahead of the code.
I am a DBA, not a C programmer, though I dabble in server internals. If the file include/mysql.h in the MySQL Server source files is any indication, the mysql_server_* functions are the ones you're looking for.
/*
mysql_server_init/end need to be called when using libmysqld or
libmysqlclient (exactly, mysql_server_init() is called by mysql_init() so
you don't need to call it explicitely; but you need to call
mysql_server_end() to free memory). The names are a bit misleading
(mysql_SERVER* to be used when using libmysqlCLIENT). So we add more general
names which suit well whether you're using libmysqld or libmysqlclient. We
intend to promote these aliases over the mysql_server* ones.
*/
#define mysql_library_init mysql_server_init
#define mysql_library_end mysql_server_end
"We intend to promote these aliases over the mysql_server* ones."
They promoted them in the documentation, apparently.
It looks like Bug #65887 was a report of the same problem, which they never really got around to addressing.
You might also find MariaDB's implementation of the C API "for MariaDB and MySQL" to be a viable and perhaps more agreeably-licensed alternative.

How to rewrite a binary file or modify its control flow graph

Essentially, I want to rewrite a binary file so that it performs additional tasks alongside its actual tasks.
Regarding binary rewriting, the process seems to be the following:
Create a Control Flow Graph from an existing binary
Create a Code Snippet with the desired changes in an appropriate format
Create a binary file from the modified CFG
I came across a couple of tools which either won't compile on my Ubuntu 12.04, are not available for download, or for which I cannot find a decent tutorial/how-to on hot patching/rewriting a binary. Those tools are:
ParseAPI, Code-Surfer/x86, EEL, LEEL, Jakstab, DynInst, Diablo + Lancet
To be more precise I want to analyze a given binary for its most frequently used functions and change it in such a way that before executing these functions, a given set of instructions are performed.
These instructions consist of loading an array of stored bytes, reading a byte at a certain position and comparing it with a pre-defined value.
I want to make sure that the binary definitely executes these instructions on every run.
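As a sketch of the first analysis step (finding the most frequently referenced functions), one low-tech approach is to count call targets in a disassembly dump such as the output of objdump -d. A hedged Python illustration, working on an inlined sample rather than a real binary (the sample lines and addresses are invented):

```python
import re
from collections import Counter

# Sample lines in the style of `objdump -d` output; in practice you would
# feed in the real disassembly of the binary under analysis.
DISASSEMBLY = """
  4004f6: e8 c5 fe ff ff  callq  4003c0 <strcpy@plt>
  4004fb: e8 d0 fe ff ff  callq  4003d0 <memcpy@plt>
  400500: e8 bb fe ff ff  callq  4003c0 <strcpy@plt>
  400505: e8 c6 fe ff ff  callq  4003d0 <memcpy@plt>
  40050a: e8 b1 fe ff ff  callq  4003c0 <strcpy@plt>
"""

# Match "call…  <addr> <name>" and capture the symbol name (stop at '+' offsets).
CALL_RE = re.compile(r"call\w*\s+\S+\s+<([^>+]+)")

def call_frequencies(disassembly: str) -> Counter:
    """Count how often each named function appears as a static call target."""
    return Counter(CALL_RE.findall(disassembly))

freqs = call_frequencies(DISASSEMBLY)
print(freqs.most_common())  # strcpy@plt is the most frequent target here
```

Note this counts static call sites, not dynamic execution frequency; for the latter you would profile the running binary (e.g. with perf or Dyninst instrumentation).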
There are two alternative approaches I came across, which basically alter standard C functions (like memcpy(), strcpy(), printf(), etc.), since I assume these functions are, with high probability, part of the binary:
LD_PRELOAD: Define my own libraries and have them loaded before the ordinary ones
Compile the binary (if source code is given) with my own versions of the standard functions, using something like gcc -fno-builtin -o strcpy strcpy.c
The drawback of this approach is that even though I substitute the standard C functions, they do not necessarily get called, and hence my instructions will not be executed either.
Do you guys have experience regarding binary rewriting or do your have clues for accomplishing this rather exotic task?
Best regards!
BAP and Dyninst would help you. You can use BAP (http://bap.ece.cmu.edu/) to get the control flow graph of a binary; it has a very easy-to-use utility for creating control flow graphs from binaries. And you can use Dyninst to instrument binaries and perform your desired operations. BAP definitely runs on Ubuntu 12.04. Dyninst might not compile on 12.04 (there might be some linking problems); a simple workaround is to do the instrumentation on 10.04 and run the rewritten binaries on 12.04. Both tools are free.