This only happens in production: when we update some records through the browser, the change is not saved. It does not seem to be a caching problem, as we verified that the data in MySQL is still the old data. However, the controller does get hit, and the flash message is returned as if the change had been made successfully.
However, we can make the same change manually in the Rails console or in MySQL without any problem.
Any ideas why this is happening?
By the way, we recently reconfigured the site to use SSL; that might have something to do with it.
Is there anything that could've prevented the model from being saved?
One way to ensure that the attributes are actually set and the model is saved is to use the exception-raising version of update_attributes. A typical update action that can fail silently looks like this:
def update
  @model = Model.find(params[:id])
  @model.update_attributes(params[:model])
  redirect_to(model_path(@model))
end
This could be improved to a more reliable method:
def update
  @model = Model.find(params[:id])
  # Use the exception-throwing update_attributes!
  @model.update_attributes!(params[:model])
  redirect_to(model_path(@model))
rescue ActiveRecord::RecordNotFound
  render(:partial => 'not_found')
rescue ActiveRecord::RecordInvalid
  # Delegate back to the edit action; something is invalid
  edit
  render(:action => 'edit')
end
There are occasions where update_attributes does not save successfully, for example when a validation fails, and the non-bang version simply returns false instead of raising an error.
If you can perform the same update on the same data with the same methods from the console, then that is indeed peculiar.
We have a system with a master and multiple slaves.
Currently everything happens on the master, and the slaves are just there for backup.
We use CodeIgniter as our development platform.
Now we have decided to use the slaves for read queries and the master for write queries.
I have been told that this is not doable without modifying the source code, because a proxy cannot know the type of a query.
Any idea how to proceed with this without causing too much damage to a perfectly working system?
We will use MySQL Proxy: http://dev.mysql.com/downloads/mysql-proxy/
It does exactly what we want.
More info here :
http://jan.kneschke.de/2007/8/1/mysql-proxy-learns-r-w-splitting/
http://www.infoq.com/news/2007/10/mysqlproxyrwsplitting
http://archive.oreilly.com/pub/a/databases/2007/07/12/getting-started-with-mysql-proxy.html
This is something I was also looking for. A few months back I did something similar, but with three web servers and master/slave MySQL servers. The first web server has mod_proxy enabled and redirects requests to the read and write servers; all requests come to this server first. If a POST, PUT, or DELETE request comes in, it goes to the write server; all GET and other normal requests go to the read server.
Here you can find the mod_proxy settings I used:
http://pastebin.com/a30BRHFq
Here you can read about load balancing:
http://www.rackspace.com/knowledge_center/article/simple-load-balancing-with-apache
I am still looking for a better solution with less hardware involved.
I figured out another solution through CI: create two database connections in the database.php file. Keep the slave MySQL server as the default database connection and add another connection group for the write-only (master) server, roughly as sketched below.
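A rough sketch of what that could look like in application/config/database.php (only the relevant keys are shown; host names and credentials are placeholders, and 'writedb' is the group name used by the callback further down):
// application/config/database.php (sketch; hosts and credentials are placeholders)
$active_group = 'default';   // reads use the slave by default

$db['default']['hostname'] = 'slave.example.com';   // read-only slave
$db['default']['username'] = 'app_read';
$db['default']['password'] = 'secret';
$db['default']['database'] = 'app';
$db['default']['dbdriver'] = 'mysqli';

$db['writedb']['hostname'] = 'master.example.com';  // master, used for writes
$db['writedb']['username'] = 'app_write';
$db['writedb']['password'] = 'secret';
$db['writedb']['database'] = 'app';
$db['writedb']['dbdriver'] = 'mysqli';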
You can use this base model:
https://github.com/jamierumbelow/codeigniter-base-model
You need to extend your models with this base model. It has functionality for callbacks before and after insert, update, delete, and get queries; you only need to add one custom callback method, change_db_group.
// this method goes in MY_Model
function change_db_group()
{
    $this->_database = $this->load->database('writedb', TRUE);
}
Now your example model:
class Example_Model extends MY_Model {
    protected $_table = 'example_table';
    protected $before_create = array('change_db_group');
    protected $before_update = array('change_db_group');
    protected $before_delete = array('change_db_group');
}
Your database connection will be changed before insert, update, or delete queries are executed.
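For illustration, assuming the setup above, a controller might use it like this (the model name and data are placeholders): reads keep using the default slave connection, while writes fire the change_db_group callback first and therefore run against the 'writedb' master group.
// In a controller (illustrative)
$this->load->model('example_model');

// Read: runs against the default (slave) connection
$rows = $this->example_model->get_all();

// Write: before_create fires change_db_group, so this runs against 'writedb'
$this->example_model->insert(array('name' => 'test'));
Note that once the callback has fired, that model instance keeps using the write connection for its subsequent queries unless you switch it back.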
I have been facing a weird problem for some time, wherein ActiveRecord queries are not getting logged in the terminal, or in the Rails console (using ActiveRecord::Base.logger = Logger.new(STDOUT)).
This is the exception that I get:
Could not log "sql.active_record" event. NameError: undefined
local variable or method `s'
for #<ActiveSupport::Notifications::Event:0x007f9ae02a60c0>.
I tried out a few things, including reinstalling Rails, but to no avail.
Apart from wondering why this is happening, I’m unable to check the actual SQL queries fired against the database as a result.
I ran into something similar a ways back.
I was able to fix it by adding
# config/environments/development.rb
config.logger = Logger.new(STDOUT)
and
# config/environment.rb
# To prevent log buffering
$stdout.sync = true
This problem was due to a stray (and accidental) 's' that had been inserted into the instrumenter.rb file. Removing the character fixed the problem. I'm not sure why reinstalling Rails did not help, though.
I've got a CakePHP application and I'm interested in reacting to a user's attempt to upload a file that's too large for the MySQL packet size. I'd like to get the MySQL error and then provide an error message to the user based on that.
It looks like CakePHP uses PDO under the hood, but I'm not sure how to access it. I'd rather borrow CakePHP's PDO connection so that I don't have to create a new PDO connection w/ the username, password, etc, etc (also so I don't have to worry about problems from multiple connections to the same DB, etc).
It looks like there's a PDO class in CakePHP (http://api.cakephp.org/2.2/class-PDO.html), but I'm not sure how to actually get to it in order to invoke the errorCode() method.
This is probably the method you need. In your controller, after a save operation, you can use $this->SomeModel->getDataSource()->lastError() to get the error; a sketch follows.
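For illustration only (the Upload model, the request data, and the flash handling are assumptions, not taken from the question):
// Illustrative: surface the MySQL error (e.g. a too-large packet) after a failed save
if (!$this->Upload->save($this->request->data)) {
    $dbError = $this->Upload->getDataSource()->lastError();
    $this->Session->setFlash('The upload could not be saved: ' . $dbError);
}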
Or.... check this out:
if (is_a($this->SomeModel->getDataSource(), 'DboSource')) {
    $myPDO = $this->SomeModel->getDataSource()->getConnection();
    debug($myPDO->errorCode()); // or whatever...
}
Anyway, thanks a ton for your help with this; there were just a couple too many hops in the documentation for me to find it on my own.
(I'm posting this here instead of as a comment to your answer so that it'll stand out better)
Have you ever experienced BootStrap not saving anything to the database (with flush or without flush, same result)?
I'm using the Spring Security Core plugin, and I'm creating the roles and a user in the init method.
My app starts up fine, without errors, but I have nothing in my db...
I have made some changes; I'm running a MySQL database and might have made some weird changes that cause this behaviour.
Has anyone experienced this?
Are you certain that your objects pass validation?
I always use
object.save(failOnError: true)
for objects I create in BootStrap.groovy. save will throw an exception if validation fails.
An alternative would be to check that your call to save returns true.
I spent a few aggravating hours with the same problem, after which I realized that I had set
dbCreate = "create-drop"
Make sure you have
dbCreate = "update"
With create-drop, the tables are dropped when the application shuts down, so anything saved in BootStrap is gone by the time you inspect the database.
I have installed sfErrorNotifierPlugin. When both pairs of options (reportErrors/reportPHPErrors and reportWarnings/reportPHPWarnings) are set to false, everything is OK. But when I enable them to catch PHP exceptions and warnings and receive emails, all my tasks fail, including clear-cache. After a few hours of tests I'm 100% sure that the problem is with set_exception_handler/set_error_handler.
There's a similar question:
sfErrorNotifierPlugin on symfony task, but the author there is having problems with a custom task. In my case, even built-in tasks fail.
I haven't used sfErrorNotifierPlugin, but I have run into 'The “default” context does not exist.' messages before. It happens when a call is made to sfContext::getInstance() and the context simply doesn't exist. I've had this happen a lot from within custom tasks. One solution is to add sfContext::createInstance() before the call to sfContext::getInstance(). This will ensure that a context exists.
There's an interesting blog post on 'Why sfContext::getInstance() is bad' that goes into more detail - http://webmozarts.com/2009/07/01/why-sfcontextgetinstance-is-bad/
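For instance, in a custom task the workaround might look like the sketch below (illustrative only; the task name is made up, and it assumes the task is run with an --application option so that $this->configuration is an application configuration):
// lib/task/exampleTask.class.php (illustrative)
class exampleTask extends sfBaseTask
{
    protected function configure()
    {
        $this->namespace = 'project';
        $this->name      = 'example';
        $this->addOption('application', null, sfCommandOption::PARAMETER_REQUIRED, 'The application name', 'frontend');
    }

    protected function execute($arguments = array(), $options = array())
    {
        // Make sure a context exists before anything calls sfContext::getInstance()
        if (!sfContext::hasInstance()) {
            sfContext::createInstance($this->configuration);
        }

        // ... the actual task logic
    }
}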
Well, unfortunately the problem could not be solved this way. Using sfErrorNotifierPlugin, I enabled reporting of PHP warnings/errors (in addition to symfony exceptions), and this resulted in huge problems, e.g. built-in tasks such as clear-cache failed.
The solution I chose was to load the plugin only in non-task mode (in the project configuration class):
public function setup()
{
    $this->enableAllPluginsExcept('sfPropelPlugin');

    if ('cli' == php_sapi_name()) {
        $this->disablePlugins('sfErrorNotifierPlugin');
    }
}
When a task is executed, everything works normally. When the app is hit from the browser, emails are sent when an exception or warning occurs (maybe someone will find this useful).
Arms has explained the problem correctly. But usually the context does not exist when executing backend/maintenance tasks on the console, and it is easier if you handle the condition yourself.
Check whether you really need the context.
If you do, what exactly do you need it for?
Sometimes you only need a user to populate a created_by field; you can work around that by hard-coding a user ID.
If you want to do something more integrated, create a page (which will have a context) and trigger the task from there.
You can test for the existence of the instance before doing something inside a class, like:
if (sfContext::hasInstance()) {
    $this->microsite_id = sfContext::getInstance()->getUser()->getAttribute('active_microsite');
}
I've been experiencing the same problem using the plugin sfErrorNotifier.
In my specific case, I noticed a warning was raised:
Warning: ob_start(): function '' not found or invalid function name in /var/www/ncsoft_qa/lib/vendor/symfony/lib/config/sfApplicationConfiguration.class.php on line 155
Notice: ob_start(): failed to create buffer in /var/www/ncsoft_qa/lib/vendor/symfony/lib/config/sfApplicationConfiguration.class.php on line 155
So, checking the sfApplicationConfiguration.class.php file at line 155,
I replaced the '' with null, and the warnings disappeared, and so did the error!
ob_start(sfConfig::get('sf_compressed') ? 'ob_gzhandler' : '');   // bad
ob_start(sfConfig::get('sf_compressed') ? 'ob_gzhandler' : null); // good