I have a website running on PyroCMS and CodeIgniter.
My config is the following:
Debian: 5.0.9
Apache: 2.2.9
MySQL: 5.0.51
PHP: 5.2.6-1
I'm facing the following problem:
I receive POST requests to insert messages in my database.
Sometimes the frequency of DB inserts reaches 100 messages/second.
Database engine used: InnoDB
After a while I get a white page when I try to reach the homepage or any module except the admin.
www.project.com returns HTTP status 200, but the page is blank.
The same for www.project.com/mycontroller: HTTP status 200, but the page is blank.
The admin is working fine.
In my log files I have the following error:
ERROR - 2011-11-18 15:04:28 --> Severity: Notice --> iconv() [function.iconv]: Detected an illegal character in input string /home/project/system/codeigniter/core/Utf8.php 89
For the moment I dump my DB every 30 minutes and roll back to the last working dump when the platform crashes.
I have run some tests on my database with mysqlcheck, but everything seems to be OK.
Thank you for your help.
PHP is probably choking on some characters when converting with iconv. Try appending "//IGNORE" to the second argument so it looks something like this:
iconv("UTF-8", "ISO-8859-1//IGNORE", $text);
Edit 2: After some sleuthing, there doesn't seem to be anything glaringly wrong. I also checked that the database was properly closing after each request (it was). Ultimately, the only thing that worked was to use Peewee's ReconnectMixin, which automatically reconnects to MySQL if the connection timed out.
It's probably not the ideal solution, but in case anyone else is stuck in this same scenario and happens to be using Flask + Peewee + Gunicorn + Docker, this is the change I've made to my code to grab the database:
from peewee import MySQLDatabase
from playhouse.shortcuts import ReconnectMixin

class ReconnectMySQLDatabase(ReconnectMixin, MySQLDatabase):
    pass

db = ReconnectMySQLDatabase(
    database=DATABASE_NAME,
    user=DATABASE_USER,
    password=DATABASE_PASSWORD,
    charset='utf8mb4'
)
Again, I've confirmed that the app is explicitly managing connections. Hopefully applying this bandaid doesn't mean dispensing with best practices.
Edit: I did some more troubleshooting and have narrowed down the issue to Gunicorn. I tried running everything from Flask's development server and the database hasn't timed me out once. I've updated the title accordingly (it previously assumed the issue was Flask- / Peewee- related).
I still haven't figured out what I can configure on Gunicorn's end to fix this, but it's good to know what the source of the problem is.
I'm working on an app that doesn't seem to want to close its connection to the database. In my code, I'm explicitly opening and closing my connections with each request as per this example in Peewee's documentation.
In addition to Flask and Peewee, I'm using Connexion. Both the app and the database run from their own Docker container.
Here's what happens.
The first request to the server always works as expected. Subsequent requests made before wait_timeout elapses are also fine. As long as one of these requests interacts with the database, the connection seems to stay alive, with the timer resetting.
Requests made after wait_timeout amount of time elapses result in errors. The first request that fails always shows this error: peewee.OperationalError: (2006, "MySQL server has gone away (ConnectionResetError(104, 'Connection reset by peer'))"). Subsequent requests show peewee.InterfaceError: (0, '').
In the course of troubleshooting, I've determined the following:
The issue is directly related to MySQL's wait_timeout setting; for the purposes of troubleshooting, I've taken this down from its default (600 seconds in MariaDB) to 12 so I can more easily get feedback on whether code changes work (hint: nothing has worked so far)
This issue applies to all requests that touch the database (read/update/insert) even for extremely trivial requests (get the value of one field in one table)
In one troubleshooting attempt, I commented out the blocks for @app.before_request and @app.teardown_request, directly adding g.db.connect() and g.db.close() before and after queries, to no avail
Passing reuse_if_open=True to g.db.connect() results in the same errors and doesn't resolve anything
I've also made sure to comment out any initialization code (to create tables, for example) and have tried to limit which parts of the code call g.db.connect() and g.db.close()
The database logs are as follows:
mysqld, Version: 10.4.11-MariaDB-1:10.4.11+maria~bionic-log (mariadb.org binary distribution). started with:
Tcp port: 3306 Unix socket: /var/run/mysqld/mysqld.sock
Time Id Command Argument
200102 2:05:30 8 Connect user@192.168.16.3 as anonymous on app
8 Query SET sql_mode='PIPES_AS_CONCAT'
8 Query SET AUTOCOMMIT = 0
9 Connect user@192.168.16.3 as anonymous on app
9 Query SET sql_mode='PIPES_AS_CONCAT'
9 Query SET AUTOCOMMIT = 0
9 Query SELECT `t1`.`id`, `t1`.`name`, `t1`.`value`, `t1`.`config_type_id`, `t1`.`validation_type`, `t1`.`description`, `t1`.`min_role`, `t1`.`omittable`, `t1`.`conditional_on`, `$
9 Query COMMIT
8 Quit
10 Connect user@192.168.16.3 as anonymous on app
10 Query SET sql_mode='PIPES_AS_CONCAT'
10 Query SET AUTOCOMMIT = 0
10 Quit
200102 2:06:13 11 Connect user@192.168.16.3 as anonymous on app
11 Query SET sql_mode='PIPES_AS_CONCAT'
11 Query SET AUTOCOMMIT = 0
11 Quit
12 Connect user@192.168.16.3 as anonymous on app
12 Query SET sql_mode='PIPES_AS_CONCAT'
12 Query SET AUTOCOMMIT = 0
12 Quit
Does anyone have any ideas for what I could try to get things to work? I'd like to avoid using a workaround (like setting the timeout to its max value and running a cron task to ping the DB every x hours).
As the title says, it works perfectly on my local WAMP test server. The same PHP version is installed; however, MySQL on the local test server is 5.6.12 and the live MySQL version is 5.6.39. All other inserts and tables on the live site work, except one I had an issue with until I signed into the site under a different user, and it worked under their profile. I'm not having such luck with this last one: I waited a day and it's not inserting into the table under any profile. I also confirmed it's pulling the data properly by using echo statements at the end of the script, so it has the data but refuses to insert it into the database table on the live site. Any help or ideas are greatly appreciated.
Here's the code:
$db->Query("INSERT INTO `ysub` (user, url, title, y_av, max_clicks, daily_clicks, cpc, country, sex) VALUES('".$data['id']."', '".$yt_url."', '".$url."', '".$yt_image."', '".$max_clicks."', '".$daily_clicks."', '".$cpc."', '".$country."', '".$gender."') ");
Try adding the error function after the query line to see if an error is being caught. Note that errorInfo() returns an array, so use print_r() to display it:
$db->query('INSERT INTO...');
print_r($db->errorInfo());
I have made sure that the columns match the 'column to export' field in the Columns tab, and that not-null columns have data. I tried both CSV and TXT, but all I get is a message saying:
import/export job created
Nothing else: no errors, no warning, no completion.
Windows 7 OS
Version: 1.4
Copyright: Copyright 2013 - 2017, The pgAdmin Development Team
Python Version: 2.7.12 (v2.7.12:d33e0cf91556, Jun 27 2016, 15:19:22) [MSC v.1500 32 bit (Intel)]
Flask Version: 0.11.1
Application Mode: Desktop
Till then, I'll try via psql.
Just a guess, but are you trying this as a superuser? I'm having the same issue and so trying to write it as a COPY statement, but get this response:
ERROR: must be superuser to COPY to or from a file
SQL state: 42501
Hint: Anyone can COPY to stdout or from stdin. psql's \copy command also works for anyone.
I'm not sure, but maybe the functionality in pgAdmin just constructs a COPY statement, then isn't properly relaying the error message back to you when it fails.
Sounds like psql is the right way to go, though.
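For reference, the \copy route from the hint runs client-side (psql itself reads or writes the file), so it works without superuser rights. A sketch of the invocation; the database, table, and file path are made-up placeholders:

```shell
# Client-side copy: psql opens the file itself, so no superuser needed.
# "mydb", "mytable", and the path are placeholders.
psql -d mydb -c "\copy mytable FROM 'C:/data/mytable.csv' WITH (FORMAT csv, HEADER)"
```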
I recently developed a Java client which allows me to query my Hive tables from a simple URL.
Unfortunately, since last Thursday the queries seem to have some issues. From time to time, a query which worked before doesn't return anything.
So I decided to take a look at my logs, and every time I run a query this occurs:
java.sql.SQLException: Query returned non-zero code: 12, cause: FAILED: Hive Internal Error: java.lang.RuntimeException(org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive-root/hive_2015-06-29_09-19-53_268_7855618362212093455. Name node is in safe mode.
Resources are low on NN. Safe mode must be turned off manually.
I think the issue comes from the node itself, because I didn't make any changes to my code or my Hive tables. Where do you think the problem comes from? And what can I do to resolve it?
Thank you for reading my question.
This was due to the cluster automatically entering safe mode because resources were low on the NameNode. We fixed it by freeing up/adding some disk space.
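For anyone hitting the same error: the NameNode's safe mode state can be checked, and cleared manually once the underlying disk-space problem is fixed, with the standard hdfs dfsadmin commands (run on the cluster):

```shell
# Check whether the NameNode is currently in safe mode.
hdfs dfsadmin -safemode get

# Leave safe mode manually. Only do this after freeing disk space,
# otherwise the NameNode may re-enter safe mode.
hdfs dfsadmin -safemode leave
```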
I have coded a Ruby IRC bot, which is on GitHub (/ninjex/rubot), and its MySQL output on a dedicated server I just purchased conflicts with what I get locally.
First, we have the connection to the database in the MySQL folder (in .gitignore), which looks similar to the following code block.
@con = Mysql.new('localhost', 'root', 'pword', 'db_name')
Then we have an actual function to query the database
def db_query
  que = get_message                  # grabs the query from the user, e.g. ./db_query SELECT * FROM words
  results = @con.query(que)          # sends the query through the connection, e.g. @con.query("SELECT * FROM words")
  results.each { |x| chan_send(x) }  # for each row returned, send it to the channel via chan_send
end
On my local machine, when running the command:
./db_query SELECT amount, user from words WHERE user = 'Bob' and word = 'hello'
I receive the output in IRC in an array-like fashion: ["17", "Bob"], where 17 is the amount and Bob is the user.
However, using this same function on my dedicated server results in output like: 17Bob. I have attempted many changes in the code, as well as trying to parse the data into its own variable; however, it seems that 17Bob is coming out as a single value, making it impossible to parse into something like an array, which I could then use to send the data correctly.
This seems odd to me on both my local machine and the dedicated server, as I was expecting the output to first send 17 to the IRC and then Bob like:
17
Bob
For all the functions and source you can check my GitHub (/Ninjex/rubot); however, you may need to install some gems.
A few notes:
Make sure you are sanitizing the query from get_message, or you are opening yourself up to some serious security problems.
Ensure you are using the same versions of the mysql gem, Ruby, and MySQL on both machines. Differences in any of these may alter the expected output.
If you are at your wits' end and unable to resolve the underlying issue, you can always send a custom delimiter and use it to split. Unfortunately, it will muck up the case that is actually working, and the delimiter will need to be stripped out there.
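The delimiter workaround in the last note might look something like this. A sketch only; the field values and delimiter choice are illustrative, not code from the bot:

```ruby
# Sketch of the custom-delimiter workaround: join the row's fields with a
# delimiter that won't appear in the data before sending, then split on it
# when parsing. Field values here are made up.
row = ["17", "Bob"]
message = row.join("\x1F")        # ASCII unit separator as an unlikely delimiter
fields  = message.split("\x1F")   # recover the original fields on the other side
```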
Here's how I would approach debugging the issue on the dedicated machine:
def db_query
  que = get_sanitized_message
  results = @con.query(que)
  require 'pry'
  binding.pry
  results.each { |x| chan_send(x) }
end
Add the pry gem to your Gemfile, or gem install pry.
Update your code to use pry: see above
This will open up a pry console when the binding.pry line is hit and you can interrogate almost everything in your running application.
I would take a look at results and see if it's an array. Just type results in the console and it will print out the value. Also type out results.class. It's possible that query is returning some special result set object that is not an array, but that has a method to access the result array.
If results is an array, then the issue is most likely in chan_send. Perhaps it needs to use something like puts instead of print to ensure there's a newline after each message. Is it possible that you have different versions of your codebase deployed? I would also add a sleep 1 within the each block to ensure this is not related to your handling of messages arriving at the same time.
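The puts-vs-print distinction is easy to demonstrate in isolation (generic Ruby, not code from the bot):

```ruby
# print writes without a trailing newline, so consecutive values run together
# exactly like the 17Bob output above; puts appends a newline after each value.
require 'stringio'

def capture
  old, $stdout = $stdout, StringIO.new
  yield
  $stdout.string
ensure
  $stdout = old
end

joined    = capture { print "17"; print "Bob" }  # "17Bob"
separated = capture { puts "17"; puts "Bob" }    # "17\nBob\n"
```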