SQL Insert committed but not found on next request? - mysql

I have placed an after_commit callback in the RequestToken model that outputs "Committed Request Token xx". You can see in the log below that the token record is committed, yet on the very next request the lookup for that token says it cannot be found. The issue occurs intermittently, and if I refresh the page the record is found and the request goes through.
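For reference, the callback looks roughly like this (a sketch; the exact body isn't important):
class RequestToken < ActiveRecord::Base
  # Fires only after the surrounding transaction has been committed
  after_commit :log_commit

  def log_commit
    Rails.logger.info "Committed Request Token #{token}"
  end
end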
Environment
AWS EC2 + RDS, Ubuntu 10.04, Rails 3.2.8, MySQL2 0.3.11 gem, apache2 2.2.14, phusion passenger 3.0.11
Has anyone seen this before? Any suggestions?
Committed Request Token S8j311QckvEjnDftNW0e7FPHsavGWTelONcsE3X1
Rendered text template (0.0ms)
Completed 200 OK in 28ms (Views: 0.6ms | ActiveRecord: 21.8ms | Sphinx: 0.0ms)
Started GET "/oauth/authorize?oauth_token=S8j311QckvEjnDftNW0e7FPHsavGWTelONcsE3X1" for 96.236.148.63 at 2012-10-15 22:07:32 +0000
Processing by OauthController#authorize as HTML
Parameters: {"oauth_token"=>"S8j311QckvEjnDftNW0e7FPHsavGWTelONcsE3X1"}
Completed 500 Internal Server Error in 5ms
ActiveRecord::RecordNotFound (Couldn't find RequestToken with token = S8j311QckvEjnDftNW0e7FPHsavGWTelONcsE3X1):

A 200 doesn't mean it saved; it probably failed a validation.
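A quick way to surface a failed validation, assuming a standard ActiveRecord create path (the variable names here are illustrative):
token = RequestToken.create(token_attributes)
unless token.persisted?
  Rails.logger.error token.errors.full_messages.inspect # lists whichever validations failed
end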

Related

OpenShift 3, 503 Error (No server is available to handle this request)

I have created a web application using JSP/Tiles/Struts/MySQL/Tomcat. I created a new project on the OpenShift 3 console (OpenShift Online) https://console.preview.openshift.com/console/ and then added Tomcat/MySQL. I was getting a 503 error sometimes; at other times the same page worked as expected. The 503 error came randomly for any page in my project. When I get the 503 error, I refresh a few times, it goes away, and my page is displayed correctly.
Error that I see is:
"503 Service Unavailable
No server is available to handle this request. "
I did some research. What I understand from this OpenShift 2 link:
https://blog.openshift.com/how-to-host-your-java-ee-application-with-auto-scaling/
is that the fix for the 503 error is:
SSH into your application gear using rhc ssh --app <app_name>
Change directory to haproxy/conf
Change option httpchk GET / to option httpchk GET /api/v1/ping in haproxy.cfg
Restart the HAProxy cartridge from your local machine using rhc cartridge-restart --cartridge haproxy
I don't know if this is also applicable to OpenShift 3. In OpenShift 3, where are haproxy.log, haproxy.cfg, and haproxy/conf, or is it slightly different? (But thanks to Warren's comments: yes, he saw 503 errors in OpenShift related to HAProxy.)
Now, a week after posting this question:
I am getting a Quota Reached error. I am able to build my project, but all deployments are failing. I wonder if the 503 error I was getting earlier was (completely or partially) related to the quota being reached. How should I proceed now?
curl -i localhost:8080/GEA
HTTP/1.1 302 Found
Server: Apache-Coyote/1.1
Location: http://localhost:8080/GEA/
Transfer-Encoding: chunked
Date: Tue, 11 Apr 2017 18:03:25 GMT
Tomcat logs do not show any application error.
Will a readiness probe and a liveness probe help me? I have not set them yet,
nor do I know how to set them (a sketch follows below).
Will scaling help me? (I don't know how to set that up either.)
Do I have to set memory/... all at the maximum allowed to ensure the project runs smoothly?
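On the probe question: in OpenShift 3 they can be attached from the CLI. A minimal sketch, assuming a deployment config named gea and an HTTP path that answers with a success status (both names are assumptions, not from the question):
oc set probe dc/gea --readiness --get-url=http://:8080/GEA/
oc set probe dc/gea --liveness --get-url=http://:8080/GEA/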
For me, I had a similar situation: sometimes getting 503s and sometimes getting my actual page. The reason was that HAProxy sits on the front end handling the requests. Depending on your setup you may even have a few HAProxy pods, and your request could be funneled to any one of them. So, as in my case, one pod was working and the other was not.
So basically:
oc get pods -n default
NAME READY STATUS RESTARTS AGE
docker-registry-7-i02rh 1/1 Running 0 75d
registry-console-12-wciib 1/1 Running 0 67d
router-1-533cg 1/1 Running 3 76d
router-1-9utld 1/1 Running 1 76d
router-1-uwf64 1/1 Running 1 76d
As you can see in my output, the default namespace is where my router (HAProxy) pods live. If I change to that namespace:
oc project default
Then run
oc logs -f router-1-533cg
on each of the pods, you will most likely find a specific pod that is behaving badly. You can simply delete it, and the replication controller will create a new one.
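For example, using the first router pod from my listing above (substitute the name of whichever pod is misbehaving):
oc delete pod router-1-533cg -n default
The router's replication controller notices the missing pod and creates a replacement automatically.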

SoundCloud embed timeout & wget returning Error 500

I have a WordPress blog on a VPS (CentOS 6.8 x86), and some of its pages have a SoundCloud embed. Whenever I try to view one of these pages, I receive a timeout error (http://prntscr.com/bpmm90).
GET
http://soundcloud.com/oembed
?maxwidth=0
&maxheight=0
&url=https%3A%2F%2Fsoundcloud.com%2F10de10%2Fsemana-dos-10-20-270616
&format=json
Operation timed out after 5000 milliseconds with 0 bytes received
OK, I thought maybe I was doing something wrong in WordPress, so I tried to wget the same URL and... ERROR 500.
wget 'http://soundcloud.com'
--2016-07-06 18:46:01-- http://soundcloud.com/
Resolving soundcloud.com... 72.21.91.127
Connecting to soundcloud.com|72.21.91.127|:80... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2016-07-06 18:46:31 ERROR 500: Internal Server Error.
However, it all works fine if I try the same things on another server. I've already wondered whether I was somehow blocked from accessing SoundCloud through my VPS, but I barely made any calls to the service.
The url in question is: http://soundcloud.com/oembed?maxwidth=0&maxheight=0&url=https%3A%2F%2Fsoundcloud.com%2F10de10%2Fsemana-dos-10-20-270616&format=json
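For what it's worth, the failing call can be reproduced outside WordPress with a longer timeout and verbose output (the 30-second limit here is an arbitrary choice):
curl -v --max-time 30 'http://soundcloud.com/oembed?maxwidth=0&maxheight=0&url=https%3A%2F%2Fsoundcloud.com%2F10de10%2Fsemana-dos-10-20-270616&format=json'
If this also hangs, the problem sits at the network level of the VPS rather than in WordPress.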

Can't verify CSRF token authenticity and email can't be blank

Context:
I'm playing with a jQuery Mobile app and a Rails backend server. I create users from the app through a POST Ajax call. The back end is, for all intents and purposes of the mobile app, just an API.
I've been reading all evening about this CSRF token authenticity issue and cannot find how to kill it. I'm on Rails 4 and Devise 3.1 (with the token_authenticatable module).
This is the console log when I try to save a user through the app:
Started POST "/users.json" for IP at 2013-09-28 00:05:17 +0200
Processing by RegistrationsController#create as JSON
Parameters: {"user"=>{"username"=>"seba", "email"=>"mail#gmail.com", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]"}}
Can't verify CSRF token authenticity
user:
(0.1ms) begin transaction
(0.1ms) rollback transaction
Completed 422 Unprocessable Entity in 9ms (Views: 0.3ms | ActiveRecord: 0.2ms)
I read on this site that if I put the following line
skip_before_filter :verify_authenticity_token
in my custom registrations controller, the verification error would be gone (see the sketch after this question). It does go away, but now there's a warning:
Started POST "/users.json" for IP at 2013-09-28 02:38:16 +0200
DEPRECATION WARNING: devise :token_authenticatable is deprecated. Please check Devise 3.1 release notes for more information on how to upgrade. (called from <class:User> at /home/seba/repos/elsapo/app/models/user.rb:4)
Processing by RegistrationsController#create as JSON
Parameters: {"user"=>{"username"=>"seba", "email"=>"mail#gmail.com", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]"}}
user:
(0.1ms) begin transaction
(0.2ms) rollback transaction
Completed 422 Unprocessable Entity in 65ms (Views: 0.4ms | ActiveRecord: 1.5ms)
The line user: is from a logger.debug call, to see if I'm storing anything.
In both cases, my app continues to show the same messages after submitting the data: "email can't be blank" and then "password can't be blank".
Any pointers or suggestions? If anyone needs more information, I'll gladly provide it.
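For context, a minimal sketch of where that skip line lives (inheriting from Devise::RegistrationsController is an assumption; the skip_before_filter line itself is from the question):
class RegistrationsController < Devise::RegistrationsController
  # Disables CSRF verification for this controller only. Acceptable for a
  # token-authenticated JSON API, but be aware of the trade-off.
  skip_before_filter :verify_authenticity_token
end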

502 error: nginx + Ruby on Rails application

Application details :
Rails 3.1.0
Ruby 1.9.2
unicorn 4.2.0
resque 1.20.0
nginx/1.0.14
redis 2.4.8
I am using the active_admin gem. All URLs respond with 200, but only one URL gives a 502 error in production.
rake routes:
admin_links GET /admin/links(.:format) {:action=>"index", :controller=>"admin/links"}
And it works locally (in development).
localhost log: response code 200
Started GET "/admin/links" for 127.0.0.1 at 2013-02-12 11:05:21 +0530
Processing by Admin::LinksController#index as */*
Parameters: {"link"=>{}}
Geokit is using the domain: localhost
AdminUser Load (0.2ms) SELECT `admin_users`.* FROM `admin_users` WHERE `admin_users`.`id` = 3 LIMIT 1
(0.1ms) SELECT 1 FROM `links` LIMIT 1 OFFSET 0
(0.1ms) SELECT COUNT(*) FROM `links`
(0.2ms) SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM `links` LIMIT 10 OFFSET 0) subquery_for_count
CACHE (0.0ms) SELECT COUNT(count_column) FROM (SELECT 1 AS count_column FROM `links` LIMIT 10 OFFSET 0) subquery_for_count
Link Load (0.6ms) SELECT `links`.* FROM `links` ORDER BY `links`.`id` desc LIMIT 10 OFFSET 0
Link Load (6677.2ms) SELECT `links`.* FROM `links`
Rendered /usr/local/rvm/gems/ruby-1.9.2-head/gems/activeadmin-0.4.2/app/views/active_admin/resource/index.html.arb (14919.0ms)
Completed 200 OK in 15663ms (Views: 8835.0ms | ActiveRecord: 6682.8ms | Solr: 0.0ms)
production log: 502 response
Started GET "/admin/links" for 103.9.12.66 at 2013-02-12 05:25:37 +0000
Processing by Admin::LinksController#index as */*
Parameters: {"link"=>{}}
Nginx error log:
2013/02/12 07:36:16 [error] 32401#0: *1948 upstream prematurely closed connection while reading response header from upstream
I don't know what's happening; could somebody help me out?
You have a timeout problem.
Tackling it:
HTTP/1.1 502 Bad Gateway
indicates that nginx had a problem talking to its configured upstream.
http://en.wikipedia.org/wiki/List_of_HTTP_status_codes#502
2013/02/12 07:36:16 [error] 32401#0: *1948 upstream prematurely closed connection while reading response header from upstream
The nginx error log tells you that nginx was actually able to connect to the configured upstream, but the upstream process closed the connection before the response was (fully) received.
Your development environment:
Completed 200 OK in 15663ms
Apparently you need around 15 seconds to generate the response on your development machine.
In contrast to proxy_connect_timeout, this timeout will catch a server
that puts you in its connection pool but does not respond to you with
anything beyond that. Be careful though not to set this too low, as
your proxy server might take a longer time to respond to requests on
purpose (e.g. when serving you a report page that takes some time to
compute). You are able though to have a different setting per
location, which enables you to have a higher proxy_read_timeout for
the report page's location.
http://wiki.nginx.org/HttpProxyModule#proxy_read_timeout
On the nginx side, proxy_read_timeout defaults to 60 seconds, so that's safe.
I have no idea how Ruby (on Rails) behaves here; check its error log, because the timeout happens in that part of your stack.
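One concrete suspect, since unicorn 4.2.0 is listed above: unicorn's worker timeout defaults to 60 seconds, and the master killing a worker mid-request looks to nginx exactly like an upstream that "prematurely closed" the connection. A sketch of raising it, assuming the usual config location (path and value are assumptions; the real cure is the 6.7-second SELECT `links`.* FROM `links` query in the log):
# config/unicorn.rb (path assumed)
# The default worker timeout is 60s; the dev log above already takes ~15s,
# and production may be slower still. Raising this is a stopgap, not a fix.
timeout 120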

How can I debug a PUT request in Rails 3.1?

I am developing a simple iOS app which uses a Rails app as a server backend. I'm using the RestKit framework to manage the server networking and requests, and on the iOS side all seems to be OK.
When I make a PUT request, I get a 200 response back, and the logs in Xcode seem to suggest all is well. The logs for the Rails app also seem to suggest all is well, with the following output:
2011-12-19T18:15:17+00:00 app[web.1]: Started PUT "/lists/3/tasks/6" for 109.156.183.65 at 2011-12-19 18:15:17 +0000
2011-12-19T18:15:17+00:00 app[web.1]: Parameters: {"created_at"=>"2011-12-12 22:37:00 +0000", "id"=>"6", "updated_at"=>"2011-12-12 22:37:00 +0000", "description"=>"Create a home page", "list_id"=>"3", "completed"=>"1"}
2011-12-19T18:15:17+00:00 app[web.1]: Task Load (4.3ms) SELECT "tasks".* FROM "tasks" WHERE "tasks"."id" = $1 LIMIT 1 [["id", "6"]]
2011-12-19T18:15:17+00:00 app[web.1]: Processing by TasksController#update as JSON
2011-12-19T18:15:17+00:00 app[web.1]: (4.7ms) BEGIN
2011-12-19T18:15:17+00:00 app[web.1]: (1.5ms) COMMIT
2011-12-19T18:15:17+00:00 app[web.1]: Completed 200 OK in 48ms (Views: 1.1ms | ActiveRecord: 16.0ms)
2011-12-19T18:15:17+00:00 heroku[nginx]: 109.156.183.65 - - [19/Dec/2011:10:15:17 -0800] "PUT /lists/3/tasks/6 HTTP/1.1" 200 154 "-" "TaskM8/1.0 CFNetwork/485.13.9 Darwin/11.2.0" taskm8.com
2011-12-19T18:15:17+00:00 heroku[router]: PUT taskm8.com/lists/3/tasks/6 dyno=web.1 queue=0 wait=0ms service=114ms status=200 bytes=154
However, when I make another GET request, or use the standard web views to look at the data, the change I was expecting from the PUT request (completed = 1, which is a BOOL field) has not been made.
I can see from the Rails log that my iOS app is passing the correct parameters, so it seems to be something on the Rails side. I've already been through the loop of overcoming the CSRF error message, so I don't think it's that.
On a local version of the Rails app, I've also run general logging against the MySQL database to monitor the queries being run, trying to see if the PUT does anything at all, or anything that would fail... in the log you don't see anything other than:
BEGIN
COMMIT
The same as the Rails log.
So, does anyone have any idea about why the PUT is not making the changes to the data, or how I can debug the PUT further?
Apologies if this is a really simple question; I'm slowly getting back into development and am somewhat rusty!
Are you using any kind of automated tests? If not, start there.
Another way (though rubbish) would be to call your controller action from a webpage and see if it works.
You can also add logger.debug in your rails code to add traces.
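For instance, a sketch of such traces in the update action (controller and model names are taken from the log above; the actual body of the action is an assumption):
class TasksController < ApplicationController
  def update
    @task = Task.find(params[:id])
    logger.debug "raw params: #{params.inspect}"         # are the attributes nested under :task or flat?
    logger.debug "task params: #{params[:task].inspect}" # nil here would explain a bare BEGIN/COMMIT
    if @task.update_attributes(params[:task])
      render json: @task
    else
      logger.debug "errors: #{@task.errors.full_messages.inspect}"
      render json: @task.errors, status: :unprocessable_entity
    end
  end
end
Note that in the log above the attributes arrive at the top level of the parameters hash rather than nested under a task key, which is worth checking first.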
If you have control over the MySQL server, I would suggest enabling the general log (aka the query log); that way you can see what really happens:
SET GLOBAL general_log_file = '/tmp/query.log';
SET GLOBAL general_log = 'ON';
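Remember to switch it off again afterwards, since the general log grows quickly:
SET GLOBAL general_log = 'OFF';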