What does max_allowed_packet take into consideration? - mysql

If I add a ; to my MySQL query, does that count against the max_allowed_packet length? What about adding spaces? Does that count against it? Does MySQL automatically add a ; if it's not present?
i.e., are the following the same as far as max_allowed_packet length is concerned?
INSERT INTO table_name (col_name) VALUES ( 1 ) , ( 2 );
INSERT INTO table_name (col_name) VALUES (1),(2)
If they are the same, how can I programmatically know how long MySQL considers the query to be?

No, the whitespace is of no concern for max_allowed_packet, which is a server variable.
If you check, the MySQL client on the machine converts the received string to the proper data type and, while doing so, strips the extra, insignificant whitespace; your MySQL connector does the same.
This means both queries are treated the same.
So you should only be concerned with the readability of the SQL query and not worry about the whitespace.
To check, capture the MySQL packets with Wireshark.
If you are interested in a deep dive, go through the source code of the MySQL client or MySQL connector that you are using.

The max_allowed_packet specifies the maximum length of a SQL statement to be executed after it has been received and preprocessed by the server; that includes, for instance, the creation of the list of insert values (if any). So, by that time, there are no trailing spaces and that sort of thing. I'm not sure about the ";", but my best guess is that if it's not present, it's added when preprocessing your statement.
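If you just want a rough, SQL-only way to compare the two forms from the question against the configured limit, a sketch like the following should do (the statement text is passed as a string literal here, so the numbers are the byte lengths of the text itself, not of a full protocol packet):
SHOW VARIABLES LIKE 'max_allowed_packet';  -- the server-side limit, in bytes
SELECT
  OCTET_LENGTH('INSERT INTO table_name (col_name) VALUES ( 1 ) , ( 2 );') AS with_spaces,
  OCTET_LENGTH('INSERT INTO table_name (col_name) VALUES (1),(2)') AS without_spaces;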

Related

How does Hibernate get the AutoIncrement Value on Identity Insert

I am working on a high-scale application, on the order of 35,000 QPS, using Hibernate and MySQL.
A large table has an auto-increment primary key, and the generation strategy defined in Hibernate is IDENTITY. Show SQL is enabled as well.
Whenever an insert happens, I see only one query being fired in the DB, which is an INSERT statement.
A few questions follow:
1) How does Hibernate get the auto-increment value after the insert?
2) If the answer is "SELECT LAST_INSERT_ID()", why does it not show up in VividCortex or in the Show SQL logs?
3) How does "SELECT LAST_INSERT_ID()" account for multiple auto-increments in different tables?
4) If MySQL returns a value on insert, why aren't MySQL clients built so that we can see what is being returned?
Thanks in advance for all the help.
You should call SELECT LAST_INSERT_ID().
Practically, you can't do the same thing as the MySQL JDBC driver using another MySQL client. You'd have to write your own client that reads and writes the MySQL protocol.
The MySQL JDBC driver gets the last insert id by parsing packets of the MySQL protocol. The last insert id is returned in this protocol as part of the OK packet that follows a successful statement.
This is why SELECT LAST_INSERT_ID() doesn't show up in query metrics: the driver isn't issuing that SQL statement; it's picking the integer out of the packet at the protocol level.
You asked how it's done internally. A relevant line of code is https://github.com/mysql/mysql-connector-j/blob/release/8.0/src/main/protocol-impl/java/com/mysql/cj/protocol/a/result/OkPacket.java#L55
Basically, it parses an integer from a known position in the OK packet as it receives the server's response.
I'm not going to go into any more detail about parsing the protocol. I don't have experience coding a MySQL protocol client, and it's not something I wish to do.
I think it would not be a good use of your time to implement your own MySQL client.
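If you just want to see the value yourself from a plain MySQL client, the explicit-SQL equivalent of what the driver pulls out of the packet is a sketch like this (my_table and its column are made up):
INSERT INTO my_table (name) VALUES ('foo');
SELECT LAST_INSERT_ID();  -- the auto-increment id generated by the INSERT above, scoped to this connection;
                          -- the driver skips this round trip because the same value already arrived in the OK packet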
It probably uses the standard JDBC mechanism to get generated values.
It's not.
You execute it immediately after inserting into one table, and you thus get the values that were generated by that insert. But that's not what is being used, so it's irrelevant.
Not sure what you mean by that: the MySQL JDBC driver allows doing that, using the standard JDBC API.
(Too long for a comment.)
SELECT LAST_INSERT_ID() uses the value already available in the connection. (This may explain its absence from any log.)
Each table has its own auto_inc value (see the sketch after these notes).
(I don't know any details about Hibernate.)
35K qps is possible, but it won't be easy.
Please give us more details on the queries -- SELECTs? writes? 35K INSERTs?
Are you batching the inserts in any way? You will need to.
What do you then use the auto_inc value for?
Do you use BEGIN..COMMIT? What value of autocommit?
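Here is the sketch mentioned above, showing how LAST_INSERT_ID() behaves across tables on one connection (table and column names are made up):
CREATE TABLE a (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
CREATE TABLE b (id INT AUTO_INCREMENT PRIMARY KEY, v INT);
INSERT INTO a (v) VALUES (10);
SELECT LAST_INSERT_ID();  -- id generated for table a
INSERT INTO b (v) VALUES (20);
SELECT LAST_INSERT_ID();  -- now the id generated for table b; the function tracks the most recent insert on this connection, not a per-table value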

maximum character size that a mysql query can handle

I am trying to run this query, but no values are being retrieved. I tried to find out the character length up to which values are returned; it was 76 characters.
Any suggestions?
SELECT tokenid FROM tokeninfo where tokenNumber = 'tUyXl/Z2Kpua1AvIjcY5tMG+KlEhnt+V/YfnszF5m1+q8ngYvw%L3ZKrq2Kmtz5B8z7fH5BGQXTWAoqFNY8buAhTzjyLFUS64juuvVVzI7Af5UAVOj79JcjKgdNV4KncdcqaijPQAmy9fP1w9ITj7NA==%';
The problem is not the length of the characters you select, but the characters that are stored in the database field itself. Check the tokenNumber field in your database schema: is it varchar, blob, or some other type, what is its length, etc.
You can insert/select far more than 76 characters in any database, but you can also get fewer than 76, as in your case; it depends on how the field they are stored in is defined.
A quick way to see the tokeninfo table properties is to run this query:
SHOW COLUMNS FROM tokeninfo;
If the data types differ from what you expect them to be based on a CREATE TABLE statement, note that MySQL sometimes changes data types when you create or alter a table. The conditions under which this occurs are described in Section 13.1.10.3, Silent Column Specification Changes.
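If the column turns out to be shorter than your tokens (the token in the query above is well over 76 characters), a hedged sketch of inspecting and widening it would be the following (255 is just an example length; adjust it to what your tokens actually need):
SHOW COLUMNS FROM tokeninfo LIKE 'tokenNumber';         -- inspect the current definition
ALTER TABLE tokeninfo MODIFY tokenNumber VARCHAR(255);  -- widen it if it is, e.g., VARCHAR(76)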
The max size would be limited by the variable max_allowed_packet.
So, if you do a
show variables like 'max_allowed_packet'
it will show you the limit. By default, it is set to 1047552 bytes.
If you want to increase that, add a line to the server's my.cnf file, in the [mysqld] section:
max_allowed_packet=2M
(older MySQL versions used the legacy set-variable = max_allowed_packet=2M form)
and restart the MySQL server.
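On versions where max_allowed_packet is a dynamic variable, you can also raise it at runtime without a restart; the new value applies to new connections and is lost on server restart unless you also put it in my.cnf:
SET GLOBAL max_allowed_packet = 16 * 1024 * 1024;  -- 16 MB, for example
SHOW GLOBAL VARIABLES LIKE 'max_allowed_packet';   -- confirm the new global value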

Firefox add-on: Populating sqlite database with a lot of data (around 80000 rows) not working with executeSimpleSQL()

The Firefox add-on that I am trying to code needs a big database.
I was advised not to load the database itself from the 'data' directory (using the addon-sdk to develop locally on my linux box).
So I decided to get the content from a csv file and insert it into the database that I created.
The thing is that the CSV has about 80,000 rows, and I get an error when I try to pass .executeSimpleSQL() the really long INSERT statement as a string:
('insert into table
values (row1val1,row1val2,row1val3),
(row2val1,row2val2,row2val3),
...
(row80000val1,row80000val2,row80000val3)')
Should I insert asynchronously? Use prepared statements?
Should I consider another approach, loading the database as an sqlite file directly?
You may be crossing some SQLite limits.
From SQLite's Implementation Limits:
Maximum Length Of An SQL Statement
The maximum number of bytes in the text of an SQL statement is limited
to SQLITE_MAX_SQL_LENGTH which defaults to 1000000. You can redefine
this limit to be as large as the smaller of SQLITE_MAX_LENGTH and
1073741824.
If an SQL statement is limited to be a million bytes in length, then
obviously you will not be able to insert multi-million byte strings by
embedding them as literals inside of INSERT statements. But you should
not do that anyway. Use host parameters for your data. Prepare short
SQL statements like this:
INSERT INTO tab1 VALUES(?,?,?);
Then use the sqlite3_bind_XXXX()
functions to bind your large string values to the SQL statement. The
use of binding obviates the need to escape quote characters in the
string, reducing the risk of SQL injection attacks. It also runs
faster since the large string does not need to be parsed or copied as
much.
The maximum length of an SQL statement can be lowered at run-time
using the sqlite3_limit(db,SQLITE_LIMIT_SQL_LENGTH,size) interface.
You cannot use that many records in a single INSERT statement;
SQLite limits the number to its internal parameter SQLITE_LIMIT_COMPOUND_SELECT, which is 500 by default.
Just use multiple INSERT statements.
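A hedged sketch of the generated SQL, assuming a three-column table called mytable and made-up values: chunk the 80,000 rows into multi-row INSERTs of at most 500 rows each and wrap everything in one transaction, so it is a single commit rather than 80,000:
BEGIN TRANSACTION;
INSERT INTO mytable VALUES (1, 'a', 'x'), (2, 'b', 'y'), (3, 'c', 'z');  -- first chunk (up to 500 rows per statement)
INSERT INTO mytable VALUES (4, 'd', 'w'), (5, 'e', 'v');                 -- second chunk
-- ...and so on until all 80,000 rows are inserted...
COMMIT;
Prepared statements with bound parameters, as the quoted documentation suggests, remain the better option when the values come from a CSV you parse at runtime.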

How to display special characters in SQL server 2008?

I am using SQL Server 2008 and have the column in my table set to nvarchar. Data with special characters is getting stored incorrectly in this table. E.g., this is one entry:
Need to check if doesn’t comes as doesn’t itself and don’t comes asdon’t itself and ensure closure of issues.
The garbage ’ should actually be an apostrophe ('). I have checked my collation settings. At the database level it is SQL_Latin1_General_CP850_BIN2 and at the server level it is SQL_Latin1_General_CP1_CI_AS.
I know for sure the encoding set everywhere else in my application is UTF-8.
How do I store the data correctly in my table? Do I need to change my SQL queries or any settings in the database?
Please advise.
You need to make sure that you're observing two things:
Always use NVARCHAR as datatype for your columns
Always make sure to use the N'....' prefix when dealing with string literals (for example in your INSERT or UPDATE statements)
With those two things in place, SQL Server has no trouble at all storing all Unicode characters you might throw at it...
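A minimal sketch of the second point (table and column names are made up):
CREATE TABLE dbo.Issues (Description NVARCHAR(200));
-- Without N the literal is VARCHAR and is squeezed through the database's code page first,
-- which can mangle characters such as the curly apostrophe.
INSERT INTO dbo.Issues (Description) VALUES ('doesn’t');
-- With N the literal stays NVARCHAR (Unicode) end to end.
INSERT INTO dbo.Issues (Description) VALUES (N'doesn’t');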

Use SQL functions for insert/update in ActiveRecord

I want to store IP addresses (v4 and v6) in my Rails application. I have installed a MySQL extension that adds functions to convert IP strings to binary, which will allow me to query by IP range easily.
I can use unescaped SQL statements for SELECT-type queries; that's easy.
The hard part is that I also need a way to overwrite the way the field is escaped for insert/update statements.
This ActiveRecord statement
new_ip = Ip.new
new_ip.start = '1.2.3.4'
new_ip.save
Should generate the following SQL statement
INSERT INTO ips(start) VALUES(inet6_pton('1.2.3.4'));
Is there a way to do this? I tried many things, including overriding ActiveRecord::Base#arel_attributes_values, without luck: the generated SQL is always converted to binary (if that matters, my column is a MySQL VARBINARY(16)).
ActiveRecord::Base.connection.execute("INSERT INTO ips(start) VALUES(inet6_pton('1.2.3.4'))")
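For completeness, assuming the extension's inet6_pton works as shown in the question, a range query against the binary column would look roughly like this (the address range is a made-up example):
INSERT INTO ips(start) VALUES (inet6_pton('1.2.3.4'));                   -- store the address in binary form
SELECT * FROM ips
WHERE start BETWEEN inet6_pton('1.2.3.0') AND inet6_pton('1.2.3.255');   -- range query by comparing the converted binary values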