Can MyBatis log the complete SQL so that it can be run directly?
In the general case, the answer is no.
If the query has parameters, MyBatis can't do that even in principle. In order to log the query, all parameters would have to be serialized and represented as strings. For simple data types like String or Integer this is not a problem, but for more complex ones like Timestamp or Blob, the representation may depend on the database.
When the query is executed, there's no need to convert parameters to strings, because the JDBC driver passes them to the database in a more efficient (and database-dependent) format. For logging purposes, however, MyBatis has only the Java objects, and it does not know how to render them as database-specific string literals.
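To illustrate why a single "complete SQL" string is ambiguous: the same timestamp parameter would need a different literal syntax depending on the target database. A rough sketch (the dialect renderings below are illustrative examples, not what any driver actually emits):

```python
from datetime import datetime

# One and the same parameter value...
ts = datetime(2024, 1, 15, 9, 30, 0)
fmt = ts.strftime("%Y-%m-%d %H:%M:%S")

# ...would need different SQL literal forms per database dialect:
mysql_literal = "TIMESTAMP '" + fmt + "'"
oracle_literal = "TO_TIMESTAMP('" + fmt + "', 'YYYY-MM-DD HH24:MI:SS')"

print(mysql_literal)   # TIMESTAMP '2024-01-15 09:30:00'
print(oracle_literal)  # TO_TIMESTAMP('2024-01-15 09:30:00', 'YYYY-MM-DD HH24:MI:SS')
```

Since the framework only sees the Java object, it cannot know which of these renderings (if any) the target database would accept.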
So the best you can get (and this is supported in MyBatis) is to log the query with placeholders and log the parameters used separately. Configure the DEBUG log level for the logger named after the mapper.
For log4j, the configuration looks like this:
log4j.logger.my.org.myapp.SomeMapper=DEBUG
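With that in place, the mapper's DEBUG output typically looks roughly like this (the exact layout depends on your log4j pattern and MyBatis version; the statement and values here are made up):

```
DEBUG - ==>  Preparing: SELECT * FROM users WHERE id = ?
DEBUG - ==> Parameters: 42(Integer)
DEBUG - <==      Total: 1
```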
If you are in a development environment and use IntelliJ, the MyBatis Log Plugin may help you.
And if it is in a production environment, you can copy the log and paste it locally, then use the plugin's Restore Sql from Selection or Restore Sql from Text feature (new version coming soon).
Detailed introduction:
https://github.com/kookob/mybatis-log-plugin
You can copy com.mysql.cj.jdbc.PreparedStatement into your project directory (keeping the same package path) and add a log.info(asSql()) call inside the fillSendPacket method that PreparedStatement.execute invokes (when using batch operations, use executeBatchInternal instead).
Your copy will be loaded instead of the original class, which will be ignored; you can try this with other frameworks and databases.
Related
We store representations of millions of chemical compounds as BLOBs in a MySQL database. We also keep hashes of these BLOBs and compare those hashes in queries when we need to search among the compounds.
Since we found that the standard hash functions (such as CRC) provided by MySQL collide frequently for our use case, we wrote a custom hash function specific to our data, wrapped it as a MySQL plugin, and created a User Defined Function from it as below:
CREATE FUNCTION customhash RETURNS INTEGER SONAME 'customhash.so'
Unfortunately, we need to move our MySQL installation to another managed data centre, and for security reasons and data-centre policy we are not allowed to customize MySQL by adding plugins.
We recently heard about the XXHash library; we ran a few tests on it and found that it has great performance and produces no collisions on our data. It also turns out that the standard MySQL distribution already uses it internally.
I wonder if it is possible to configure MySQL server to call XXH64_digest function in our MySQL routines without compiling it as a plugin.
I've checked MySQL source code and built-in functions and I could not find any way to run XXHash in MySQL routines. It seems XXHash is used by MySQL internally and it is not user-visible.
In case anyone needs to use the XXHash algorithm in MySQL server, I have developed a plugin that makes it available in MySQL routines.
The plugin can be found here: Github repository for xxhash_mysql_plugin.
After installing the plugin, you can call the xxhash function in your SELECT statements.
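For example (assuming the UDF is registered under the name xxhash, analogous to the CREATE FUNCTION statement above; the shared-object, table, and column names here are hypothetical):

```sql
CREATE FUNCTION xxhash RETURNS INTEGER SONAME 'xxhash_mysql_plugin.so';

SELECT xxhash(compound_blob) FROM compounds;
```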
I am new to the Mirth Connect software. Can somebody guide me on how to populate my destination database? I have successfully set up an Oracle database as the source channel and MySQL as the destination. But in the destination channel, beside providing the basic information, I failed to understand how to make Mirth do the required task.
Thanks
You're asking a very broad question. It seems like you need a tutorial about Mirth Connect rather than a specific question. I'll try to answer it here anyway.
First review the tutorials for Mirth Connect at the Mirth Connect Wiki. You will not find an exact example for your use case. You need to learn three things:
1. How to read from a DB
2. How to map variables from source messages to map variables
3. How to write to a DB
Review those examples and pick out the ones that cover the three items listed above.
You will need to create a channel that works like this:
Your source connector will be a Database Reader that queries Oracle for the data you need. This runs a SELECT statement, with an optional UPDATE statement that runs after the data is processed.
Your destination will be a Database Writer that runs INSERT or UPDATE statements against MySQL.
The hard part is writing the mappings. If you set up your source connector and look at the message view you will see the XML representation Mirth Connect uses for database read operations. Copy this message.
Paste that message into the template for the destination transformer for your MySQL step. You can now use the mapper to choose elements from that source message and map them to variables. You should almost always map them as channelMap variables.
After you have pulled the data from your source reader to map variables you can now use those variables in the database writer template to populate the destination connector with the actual data to write.
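Putting the two mapping steps together, a rough sketch of a transformer step on the MySQL destination (Mirth transformers use JavaScript; all column and variable names here are hypothetical, and the exact XML element names come from your own source message):

```javascript
// 'msg' is the XML message produced by the Database Reader.
// Copy columns into channel map variables for use in the writer.
channelMap.put('patientId', msg['PATIENT_ID'].toString());
channelMap.put('lastName', msg['LAST_NAME'].toString());
```

In the Database Writer's SQL template you can then reference those variables, e.g. as ${patientId} and ${lastName} in your INSERT statement.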
I want to compress a MySQL result (or a string) inside a MySQL query, and the result will be sent to a Delphi application. I am doing this to speed up connectivity.
The problem is how to uncompress the string from the MySQL result inside Delphi.
Here is my sample query
Select Compress(AColumn), Compress(BColumn) from ATable
Compress is for storage
The COMPRESS() function in MySQL is not meant to reduce network traffic, but rather to reduce storage requirements.
The details of the compression depend on the library the server was compiled with, so they may vary from server to server.
COMPRESS(string_to_compress)
Compresses a string and returns the result as a binary string. This function requires MySQL to have been compiled with a compression library such as zlib. Otherwise, the return value is always NULL. The compressed string can be uncompressed with UNCOMPRESS().
Note that the ability to uncompress depends on the compression library that your MySQL version was compiled with.
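If you do want to decode COMPRESS() output on the client side: the MySQL manual describes the layout as a four-byte little-endian uncompressed length followed by a zlib-compressed stream. Here's a minimal Python sketch of both directions, assuming a plain zlib stream (a Delphi implementation would follow the same byte layout):

```python
import struct
import zlib

def mysql_uncompress(data: bytes) -> bytes:
    # Decode the output of MySQL's COMPRESS(): a 4-byte little-endian
    # uncompressed length, followed by a zlib stream. MySQL may append
    # a '.' when the compressed bytes end in a space; decompressobj
    # tolerates such trailing bytes.
    if not data:  # COMPRESS('') returns the empty string
        return b""
    expected_len = struct.unpack("<I", data[:4])[0]
    out = zlib.decompressobj().decompress(data[4:])
    if len(out) != expected_len:
        raise ValueError("length header does not match decompressed size")
    return out

def mysql_compress(data: bytes) -> bytes:
    # Produce bytes in the same layout as MySQL's COMPRESS().
    if not data:
        return b""
    return struct.pack("<I", len(data)) + zlib.compress(data)
```

A quick round trip, e.g. `mysql_uncompress(mysql_compress(b"hello"))`, returns the original bytes; verify against your actual server output before relying on it, since the compression library can differ.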
Modify the connection string if you want network compression
If you want to compress the network data you specify this in the connection settings:
Add the following string to the Properties property.
UseCompression = true;
A list of all connection properties can be found here:
https://www.connectionstrings.com/mysql/
More info can be found here: http://dev.mysql.com/doc/refman/5.0/es/connector-net-examples-mysqlconnection.html#connector-net-examples-mysqlconnection-connectionstring
(note that this info is quite old, but tweaking the URL for the newer versions results in a 404 page-not-found error).
Note that in Delphi the connection string is largely filled in by the properties:
- Database
- Port
- Password
etc
In the Properties property you only supply those ConnectionString options that Delphi does not already cover in its other properties. Multiple arguments are separated by a ;.
Further complications
Different component packs use different names for the extra data you can put into the ConnectionString.
ZEOS calls it Properties.
Other packs call it other things.
I've managed to connect from the Eclipse Hibernate Tools to my MySQL database used with Grails, with classes mapped by GORM.
Now I'd like to perform HQL queries on the DB using Hibernate Tools. However, Hibernate Tools tells me for every table that it is not mapped.
My question: do I really need to write all the class mappings manually into a hibernate.cfg.xml file, or is there a way to get them from Grails? I mean, Grails/GORM must have an idea about the mappings, right? Or am I going about this the wrong way?
P.S. I know there is a script grails-create-hibernate-cfg-xml, but this only creates a dummy file for some Books class...
Grails has convention-based configuration, so there is no hibernate.cfg.xml, but you can easily execute HQL queries in the Groovy console; just call
grails console
It will open a Groovy console connected to your app, so you can write Groovy scripts that interact with your DB.
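For example, assuming a hypothetical Book domain class with a price and title property, an HQL query can be run from that console via GORM's executeQuery:

```groovy
// Book, price, and title are hypothetical; substitute your own domain class
def cheap = Book.executeQuery("from Book b where b.price < :max", [max: 10.0])
cheap.each { println it.title }
```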
I am using Kohana 3. I want to log the MySQL queries being executed by an application: the goal is to identify the INSERT, UPDATE and DELETE queries executed during a process and store them in another MySQL table with a date-time for further reference.
Can anybody tell how can I achieve this?
An alternative is to enable profiling for the database module, which will log the queries made to a file.
This will log ALL queries, not just the last one ;)
It shouldn't be too hard to parse the file, or to extend the profiling/logging/caching classes to save it to a database.
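For reference, profiling is switched on per connection group in the database config. A sketch following Kohana 3 conventions (the group name and connection settings are examples):

```php
<?php
// application/config/database.php
return array(
    'default' => array(
        'type'       => 'mysql',
        'connection' => array(/* hostname, database, username, ... */),
        'profiling'  => TRUE,  // record executed queries in the Profiler
    ),
);
```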
Sorry, because of the Kohana tag I approached the problem from the wrong angle. You want the MySQL server to log the commands directly, so you get ALL of the commands, not just the last one.
See the mysql server docs on logging:
http://dev.mysql.com/doc/refman/5.0/en/server-logs.html
I did this using the after() method of the controller. This after() method runs once each controller action has executed, and there I wrote logic to capture the last query executed and store it in my DB for further reference.
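A sketch of that approach, assuming Kohana 3's Database::instance()->last_query property and a hypothetical query_log audit table:

```php
<?php
class Controller_Base extends Controller {

    public function after()
    {
        parent::after();

        // Most recent SQL executed on the default connection.
        $last = Database::instance()->last_query;

        // Store only data-modifying statements, with a timestamp.
        if ($last AND preg_match('/^\s*(INSERT|UPDATE|DELETE)\b/i', $last))
        {
            DB::insert('query_log', array('query', 'created_at'))
                ->values(array($last, date('Y-m-d H:i:s')))
                ->execute();
        }
    }
}
```

Note the limitation already mentioned above: this only captures the last query per connection, not every query in the request.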