I'm implementing server-side session cookies using Flask-Session.
The implementation largely works fine, but I encounter the following error when using Chrome to access my API:
sqlalchemy.exc.DataError: (pymysql.err.DataError) (1406, "Data too long for column 'session_id' at row 1")
Here's the schema of my sessions table (flask_sessions_table), as generated by the Flask extension:
+------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+------------+--------------+------+-----+---------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| session_id | varchar(255) | YES | UNI | NULL | |
| data | blob | YES | | NULL | |
| expiry | datetime | YES | | NULL | |
+------------+--------------+------+-----+---------+----------------+
I don't experience this error on any other major browser (Firefox, Chromium, Safari, Postman, etc.). It appears the session cookies sent from Chrome to my Flask-based app are too long (~300 characters, sometimes even ~1200), whereas from other browsers they're ~50 chars at most. See the attached image:
This error is causing my app (an API server) to crash, since Flask-Session fails to save the session on every incoming request. I thought of the following workarounds, but neither panned out:
convert flask_sessions_table.session_id from VARCHAR(255) to LONGTEXT. SQLAlchemy (the ORM) doesn't seem to support Text types?
truncate the Chrome cookie and save only the first 255 chars. There doesn't seem to be a way to intercept the request before the session is saved: the extension saves the cookie before any of my route handlers run.
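On the first workaround: SQLAlchemy does in fact support Text types (sqlalchemy.Text maps to MySQL TEXT), but MySQL cannot keep a plain UNIQUE index on a TEXT column without a prefix length, so widening the generated column to a larger VARCHAR is the safer change. A minimal sketch (the connection URL is hypothetical):

```python
# MySQL can't keep the UNIQUE index on session_id if the column becomes
# TEXT (the index would need a prefix length), so widen to a larger
# VARCHAR instead.
DDL = "ALTER TABLE flask_sessions_table MODIFY session_id VARCHAR(512)"

# With SQLAlchemy this would be run roughly as (hypothetical URL):
#   from sqlalchemy import create_engine, text
#   engine = create_engine("mysql+pymysql://user:pwd@localhost/mydb")
#   with engine.begin() as conn:
#       conn.execute(text(DDL))
print(DDL)
```

VARCHAR(512) stays well under InnoDB's index key-length limit even with utf8mb4, while covering the ~300-1200 character cookies described above only partially; pick the width to match the longest cookie you actually see.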
Any other ideas on how to fix this for Chrome?
UPDATE:
I have observed that when the domain is 'localhost', the cookies are longer than 255 chars, whereas when accessing the exact same instance through ngrok, they're normal (~43 chars). What is it about Chrome on localhost that might cause the long session cookies?
I had this issue except with Firefox. I was able to resolve it by clearing cookies on my browser and restarting Flask.
I've encountered a strange problem when I was using PuTTY to query the following MySQL command: select * from gts_camera
The output seems extremely weird:
As you can see, PuTTY outputs loads of "PuTTYPuTTYPuTTY..."
Maybe it's because of the table attribute set:
mysql> describe gts_kamera;
+---------+----------+------+-----+-------------------+----------------+
| Field | Type | Null | Key | Default | Extra |
+---------+----------+------+-----+-------------------+----------------+
| id | int(11) | NO | PRI | NULL | auto_increment |
| datum | datetime | YES | | CURRENT_TIMESTAMP | |
| picture | longblob | YES | | NULL | |
+---------+----------+------+-----+-------------------+----------------+
This table stores some big pictures and their date of creation.
(The weird ASCII characters you can see on top are the picture's content.)
Does anybody know why PuTTY outputs such strange stuff, and how to solve/clean this?
Because of this I can't type any other commands afterwards; I have to reopen the session.
Sincerely,
Michael.
The reason this happens is the content of the file (you have a column defined as longblob). It may contain bytes that PuTTY interprets as terminal control codes, so the display breaks exactly as you describe.
There is a PuTTY setting that may help: the repeated "PuTTY" text is most likely PuTTY's answerback to the ^E (ENQ) control character, which defaults to the string "PuTTY"; it can be blanked out in PuTTY's Terminal settings ("Answerback to ^E").
You can also simply not select every column in that table (at least not the *blob ones):
select id, datum from gts_camera;
Or, if you still want to see it, use the MySQL function HEX():
select id, datum, HEX(picture) as pic from gts_camera;
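If you are inspecting the table from a script rather than the mysql prompt, the same idea can be applied in Python: render the blob bytes as hex before anything reaches the terminal. A small sketch (nothing here is specific to the table above):

```python
def preview_blob(data: bytes, limit: int = 16) -> str:
    """Show at most `limit` bytes of a BLOB as hex, so terminal
    control characters are never written to the screen."""
    suffix = "..." if len(data) > limit else ""
    return data[:limit].hex() + suffix

# Example: the PNG magic bytes contain CR, LF and other control bytes.
print(preview_blob(b"\x89PNG\r\n\x1a\n"))  # 89504e470d0a1a0a
```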
I'm using SQLAlchemy and MySQL, with a files table to store files. That table is defined as follows:
mysql> show full columns in files;
+---------+--------------+-----------------+------+-----+---------+-------+---------------------------------+---------+
| Field | Type | Collation | Null | Key | Default | Extra | Privileges | Comment |
+---------+--------------+-----------------+------+-----+---------+-------+---------------------------------+---------+
| id | varchar(32) | utf8_general_ci | NO | PRI | NULL | | select,insert,update,references | |
| created | datetime | NULL | YES | | NULL | | select,insert,update,references | |
| updated | datetime | NULL | YES | | NULL | | select,insert,update,references | |
| content | mediumblob | NULL | YES | | NULL | | select,insert,update,references | |
| name | varchar(500) | utf8_general_ci | YES | | NULL | | select,insert,update,references | |
+---------+--------------+-----------------+------+-----+---------+-------+---------------------------------+---------+
The content column of type MEDIUMBLOB is where the files are stored. In SQLAlchemy that column is declared as:
from sqlalchemy import Column, LargeBinary

__maxsize__ = 12582912  # 12 MiB
content = Column(LargeBinary(length=__maxsize__))
I am not quite sure about the difference between SQLAlchemy's BINARY type and LargeBinary type. Or the difference between MySQL's VARBINARY type and BLOB type. And I am not quite sure if that matters here.
Question: Whenever I store an actual binary file in that table, i.e. a Python bytes (b'...') object, I get the following warning:
.../python3.4/site-packages/sqlalchemy/engine/default.py:451: Warning: Invalid utf8 character string: 'BCB121'
cursor.execute(statement, parameters)
I don't want to just ignore the warning, even though the files seem to be intact. How do I handle this warning gracefully, and how can I fix its cause?
Side note: This question seems to be related, and it seems to be a MySQL bug that it tries to convert all incoming data to UTF-8 (this answer).
Turns out that this was a driver issue. Apparently the default MySQL driver stumbles with Py3 and utf8 support. Installing cymysql into the virtual Python environment resolved this problem and the warnings disappear.
The fix: Find out if MySQL connects through socket or port (see here), and then modify the connection string accordingly. In my case using a socket connection:
mysql+cymysql://user:pwd@localhost/database?unix_socket=/var/run/mysqld/mysqld.sock
Use the port argument otherwise.
Edit: While the above fixed the encoding issue, it gave rise to another one: blob size. Due to a bug in CyMySQL blobs larger than 8M fail to commit. Switching to PyMySQL fixed that problem, although it seems to have a similar issue with large blobs.
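For what it's worth, PyMySQL also accepts a binary_prefix connection flag that prepends _binary to bytes parameters, which is another way to silence the "Invalid utf8 character string" warning without switching drivers. A sketch (credentials are placeholders):

```python
# binary_prefix=true makes PyMySQL send bytes parameters as
# _binary'...' literals, so the server never tries to validate
# them as utf8 text. User, password and database are placeholders.
url = (
    "mysql+pymysql://user:pwd@localhost/database"
    "?charset=utf8mb4&binary_prefix=true"
)
# engine = sqlalchemy.create_engine(url)
print(url)
```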
Not sure, but your problem might have the same roots as the one I had several years ago in Python 2.7: https://stackoverflow.com/a/9535736/68998. In short, MySQL's interface does not let you be certain whether you are working with a true binary string or text in a binary collation (used because of the lack of a case-sensitive utf8 collation). Therefore, a MySQL binding has the following options:
return all string fields as binary strings, and leave the decoding to you
decode only the fields that do not have a binary flag (so much fun when some of the fields are unicode and others are str)
have an option to force decoding to unicode for all string fields, even true binary
My guess is that in your case, the third option is somewhere enabled in the underlying Mysql binding. And the first suspect is your connection string (connection params).
I'm seeing some very strange output from MySQL, and I don't know whether it's my console or my data that's causing it. Here are some screenshots:
Any ideas?
edit:
mysql> describe transformed_step_a1_sfdc_lead_history;
+-------------------+--------------+------+-----+---------+-------+
| Field | Type | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| old_value | varchar(255) | YES | | NULL | |
| new_value | varchar(255) | YES | | NULL | |
+-------------------+--------------+------+-----+---------+-------+
Max
To verify whether there are any control characters, you can use the -s option; see http://dev.mysql.com/doc/refman/5.5/en/mysql-command-options.html#option_mysql_raw
It's impossible to tell exactly what the problem is from your screenshots, but the text in your database contains control characters. The usual culprit is CR, which moves the cursor back to the beginning of the line and starts overwriting text already there.
If you have programmatic access to your database, then you will be able to dump the values with control characters rendered as printables, so you can see what is actually in there.
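For example, in Python, repr() makes control characters printable, so a quick dump like this reveals whether a CR is hiding in a value (a sketch with a made-up row):

```python
def expose_control_chars(value: str) -> str:
    """Return a printable representation: CR shows up as \\r, etc."""
    return repr(value)

# A value containing a carriage return, which a terminal would use
# to jump back and overwrite the start of the line.
row = "old_value part\rNEW"
print(expose_control_chars(row))  # 'old_value part\rNEW'
```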
I have a table which stores admin login requests.
-- desc AdminLogins
+----------------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+----------------+--------------+------+-----+---------+----------------+
| id | int(10) | NO | PRI | NULL | auto_increment |
| adminID | mediumint(9) | YES | MUL | NULL | |
| loginTimestamp | datetime | YES | | NULL | |
| browser | varchar(255) | YES | | NULL | |
+----------------+--------------+------+-----+---------+----------------+
The browser field contains the user agent. I want to produce some graphs showing each user agent's popularity over the last 6 months, but I'm stuck with my query.
So far I have:
select distinct browser, left(loginTimestamp, 7) from AdminLogins group by left(loginTimestamp, 7);
But it isn't returning what I'm after.
Ideally I'd be grouping by the first 7 characters of the timestring, and seeing the distinct user agents for each period.
select date(loginTimestamp) as logindate,
group_concat(distinct browser) as useragents
from AdminLogins
group by logindate
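If the buckets should be months (the first 7 characters of the timestamp, as the question asks) rather than individual days, DATE_FORMAT gives that directly. A sketch of the variant, shown as a query string:

```python
# Monthly variant of the answer above: DATE_FORMAT buckets rows by
# YYYY-MM, matching the "first 7 characters" the question asks for.
MONTHLY_QUERY = """\
select date_format(loginTimestamp, '%Y-%m') as loginmonth,
       group_concat(distinct browser) as useragents
from AdminLogins
group by loginmonth
"""
print(MONTHLY_QUERY)
```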
This is only a partial answer, and doesn't provide the SQL you need, but does provide information that is going to be necessary for you at some point soon.
User agents are complicated, and may contain lots of detail - you can sometimes identify not just the browser, but the OS, the browser version, and sometimes random crap you'll probably never care about like what version of the .NET framework they have installed.
If you're just dumping the complete user agent string into a MySQL table without doing any kind of processing on it beforehand, you will NOT be able to sanely extract human-meaningful information - much less information that can be reasonably used to form pretty graphs - from that table using SQL alone.
Instead, pull out all the user agents for the time period you're interested in and do your processing using a programming language, instead of SQL. If you're using PHP, you'll want to get an up-to-date browscap.ini file and use the get_browser function to parse the user agent.
You may want to consider restructuring your existing tracking code to call get_browser on user agents when you record them, and record all of the details you care about (e.g. browser, OS, major browser version number, minor browser version number) in separate columns. Then it'll be possible in future to extract useful information using just SQL.
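Even a deliberately naive parser shows why this work belongs in application code rather than SQL. A real system should use a maintained database such as browscap or a user-agent parsing library; this sketch only distinguishes a few families, and the patterns are simplifications:

```python
import re

def browser_family(ua: str) -> str:
    """Naive user-agent classifier. Order matters: Chrome UAs also
    contain 'Safari/', and Edge UAs also contain 'Chrome/'."""
    for family, pattern in [
        ("Edge", r"Edg(e|A|iOS)?/"),
        ("Chrome", r"Chrome/"),
        ("Firefox", r"Firefox/"),
        ("Safari", r"Safari/"),
    ]:
        if re.search(pattern, ua):
            return family
    return "Other"

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
print(browser_family(ua))  # Chrome
```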
I am saving a serialized object to a mysql database blob.
After inserting some test objects and then trying to view the table, I am presented with lots of garbage and "PuTTYPuTTY" several times.
I believe this has something to do with character encoding and the blob containing strange characters.
I just want to check whether this is going to cause problems with my database, or if it's just a problem with PuTTY displaying the data.
Description of the QuizTable:
+-------------+-------------+-------------------+------+-----+---------+----------------+---------------------------------+-------------------------------------------------------------------------------------------------------------------+
| Field | Type | Collation | Null | Key | Default | Extra | Privileges | Comment |
+-------------+-------------+-------------------+------+-----+---------+----------------+---------------------------------+-------------------------------------------------------------------------------------------------------------------+
| classId | varchar(20) | latin1_swedish_ci | NO | | NULL | | select,insert,update,references | FK related to the ClassTable. This way each Class in the ClassTable is associated with its quiz in the QuizTable. |
| quizId | int(11) | NULL | NO | PRI | NULL | auto_increment | select,insert,update,references | This is the quiz number associated with the quiz. |
| quizObject | blob | NULL | NO | | NULL | | select,insert,update,references | This is the actual quiz object. |
| quizEnabled | tinyint(1) | NULL | NO | | NULL | | select,insert,update,references | |
+-------------+-------------+-------------------+------+-----+---------+----------------+---------------------------------+-------------------------------------------------------------------------------------------------------------------+
What I see when I try to view the table contents:
select * from QuizTable;
questionTextq ~ xp sq ~ w
t q1a1t q1a2xt 1t q1sq ~ sq ~ w
t q2a1t q2a2t q2a3xt 2t q2xt test3 | 1 |
+-------------+--------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+
3 rows in set (0.00 sec)
I believe you can use the hex function on blobs as well as strings. You can run a query like this.
Select HEX(quizObject) From QuizTable Where....
Putty is reacting to what it thinks are terminal control character strings in your output stream. These strings allow the remote host to change something about the local terminal without redrawing the entire screen, such as setting the title, positioning the cursor, clearing the screen, etc..
It just so happens that binary data, when 'displayed' this way, often contains byte sequences that match these control strings.
You'll get this reaction catting binary files as well.
blob columns completely ignore any character encoding settings you have. They're really intended for storing binary objects like images or zip files.
If this field will only contain text, I'd suggest using a text field.